It's been a while, I hope you're good. I'm fine thanks, busy as sin, but isn't that always the way? So where did we leave off? Reading back through my previous post, we'd scanned our little guts out and pulled a list of every open port and every service that could be interacted with. Boy, haven't we been busy!
It just so happens that now is when the real fun begins. Have a bit of a peruse through the results; not that easy to read, eh? Sure, we can quickly spot some 135s and 445s and have a quick fiddle through the lovely, lovely file shares, but where's the automation? This post should cover some basics of gathering even more data from the services we've identified, using our ever-faithful set of tools such as nikto, gnome-web-photo, curl et al, and keeping the data usable.
First things first, let's bring all of our results together in a more machine-readable way. From the previous post we've grabbed all of our nmap output in the three decent formats: plain, greppable and XML. For the purposes of this post we'll be using the XML format and parsing it with xmlstarlet. (For those of you not already using starlet, grab a copy; it's a brilliant little command line parser that I can't live without. nessus, nmap, surecheck, anything that dumps XML suddenly becomes friendly to use again!)
The little gem I've been using for a while now is:
cat port_scans/hot-targets.tcp.services | xmlstarlet sel -T -t -m "//state[@state='open']" -m ../../.. -v address/@addr -m hostnames/hostname -i @name -o ' (' -v @name -o ')' -b -b -b -o "," -m .. -v @portid -o ',' -v @protocol -o "," -m service -v @name -i "@tunnel='ssl'" -o 's' -b -o "," -v @product -o ' ' -v @version -v @extrainfo -b -n - | sed 's_^\([^\t ]*\)\( ([^)]*)\)\?\t\([^\t ]*\)_\1.\3\2_' | sort -n -t.
You get the idea.
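To give you an idea of what drops out the other end (the hosts and services below are entirely made up for illustration), the resulting CSV slices straight back up with the usual suspects. Say we want host:port pairs for anything speaking http or https:

```shell
# Hypothetical sample of the CSV the one-liner produces (addr,port,proto,service,version):
cat <<'EOF' > output.csv
10.0.0.5,445,tcp,microsoft-ds,Microsoft Windows 2003 microsoft-ds
10.0.0.5,443,tcp,https,Apache httpd 2.2.3
10.0.0.9,80,tcp,http,Microsoft IIS httpd 6.0
EOF
# Pull host:port for anything whose service column is http or https:
grep -E ',https?,' output.csv | cut -d , -f 1,2 | tr ',' ':'
# -> 10.0.0.5:443
#    10.0.0.9:80
```

Once everything's in one flat CSV, grep/cut/tr will get you whatever slice you need for the next tool in the chain.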
While we're on the subject, a nice precursor to nikto is a bit of web scouring. We've all been in the situation before: an internal test with limited time, only to discover 100 webservers spread across the network. It's always a case of best efforts, and being left wondering if the ones we missed were the ones that would've bent over. This is where webscour comes in. It's a little script from Geoff over at Cyberis (available here) that, given a list of addresses, grabs a screenshot (using gnome-web-photo) and header information from each webserver and produces a handy HTML file to view them all from. Suddenly all the default content and HP OpenViews can be found quickly, and we can move straight on to the Accounting App running Classic ASP on IIS4 that hasn't been in use for 5 years.
Incidentally, this can be run in a similar fashion to the one-liner above:
root@bt:~# cat output.csv | grep http | cut -f 1,2 -d "," | tr "," ":" | ./webscour.pl webservers.htm
As you can imagine there is a lot more we can do with webservers: dirbuster, skipfish the sitekiller, and any other forced-browse/fuzzer can usually be fed in the same way, and as always the more data the better.
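As a sketch of the feeding (target list and directory names here are hypothetical, not from a real engagement), a loop like this will queue skipfish up against every webserver on the list. I've left it echoing the commands rather than firing them, so you can sanity-check the scan time you're about to commit to before dropping the echo:

```shell
# Hypothetical target list, one host:port per line:
cat <<'EOF' > webservers.txt
10.0.0.9:80
10.0.0.12:8080
EOF
# Dry run: print each skipfish invocation instead of running it.
# Remove the echo (and fix up https targets) when you're happy with the list.
while read -r hp; do
  echo skipfish -o "skipfish_$(echo "$hp" | tr ':' '_')" "http://${hp}/"
done < webservers.txt
# -> skipfish -o skipfish_10.0.0.9_80 http://10.0.0.9:80/
#    skipfish -o skipfish_10.0.0.12_8080 http://10.0.0.12:8080/
```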
Next steps as far as web servers are concerned usually involve getting this information into Burp, that way we can play with it properly. Buby is a sensible choice and further down the line we'll look into automated spidering and active scanning from the terminal, replaying nikto/dirbuster output directly into Burp and also utilising the FuzzDB to profile out any CMSs we come across. But that unfortunately will have to wait for another post.
As usual, any thoughts, critiques or straightforward calling-out, either head down to the comments or hit me up on Twitter. Hwyl am Nawr!
It wouldn't be fair if I weren't to go off-topic at least once in a post, so here you go...
cat output.csv | grep 161,udp | cut -f 1 -d "," | /pentest/enumeration/snmp/onesixtyone/onesixtyone -c /pentest/enumeration/snmp/onesixtyone/dict.txt -o onesixtyone.out -i -
Have some snmp fun, y'all!