Friday, 27 March 2015

Penetration Testing: You’re Doing it Wrong (?) – Part One

Sexual innuendos aside, I've wanted to write an article about the unspoken thoughts of penetration testers (at least my own and those of the great testers I've been lucky enough to work with) for quite some time, but 100-hour weeks and international travel for work tend to get in the way! The main focus of this post is to describe the typical approaches of both the security assessment services industry and the organisations that consume them. I have given particular focus to new attempts to make testing more realistic and how these compare with what I term 'Traditional Penetration Testing'. I also touch on actionable OSINT and threat intelligence, as well as threat and risk modelling as a function of assessment strategy. The context of this post assumes a medium-to-large organisation with at least a moderate level of maturity, such as maintaining an accreditation framework like PCI DSS or ISO 27001.

What Do I Define as Traditional Penetration Testing?

I define a traditional penetration test in much the same way as industry and accreditation bodies do: in terms of application and infrastructure testing. Typically, a large organisation will have two main streams of testing that target the new and the old. Business-as-usual (BAU) testing covers existing infrastructure and applications that are tested on an annual (or otherwise periodic) basis, and includes compliance-driven assessments. Project-based testing typically covers wholly new applications and infrastructure that need to be assessed before going 'live'. The two most common approaches (in my experience) to servicing this requirement are a penetration testing framework involving multiple suppliers (often using a round-robin approach to avoid claims of inequality) and a single-supplier approach. It's also worth noting that SMEs will typically use one or more suppliers to service ad hoc requirements, and will often create RFPs for each requirement.

So, Why is This Wrong?

One of the key criticisms of this approach is that it's unrealistic. For example, performing an eight-day black-box web application penetration test in your staging environment is the equivalent of building a temporary six-foot wall around your brand-new house and attesting the security of the en-suite bathroom by getting a fat kid to lean against the front gate for a couple of hours and play knock-down-ginger with their bespectacled best friend. This is a criticism I agree with, as tests often lack context and realism, and their approach is based on improper calculation of the likelihood of attack and compromise. Often, this type of approach is justified by the low business impact or revenue generation of the asset, or by cries of 'industry best practice'. The truth is that baked-in 'security as a feature' - with its genesis in the SDLC, secure architecture and user education - consistently provides better return on investment (not to be mistaken for cheapness) and more robust attack surfaces than atomic assessment of network subsets and standalone applications.

Moreover, many key elements of how threat actors approach an attack are left out, as vectors such as (D)DoS attacks and social engineering are almost universally omitted from scopes. There are obviously 'good' reasons why this may be the case, such as cost, perceived ROI, perceived risk and legal implications. Fundamentally, most of these assumptions are wrong. There are ways of reducing these types of risk to acceptable levels in almost all scenarios - so why don't people do it? I think the most likely answer is that the decision and policy makers don't know how, and, more importantly, they don't want to admit they don't know how. It's a very safe option to subscribe to conventional (circa 1995) wisdom and take an additive approach to anything new, giving a hat-tip and a wink to the industry and your peers that you're at the forefront. This often involves tacking 'bleeding edge' services or appliances on to end-of-year budget surplus rather than questioning the value of what's being done and going down the rabbit hole bottom-up (ooo err), armed with the detail of new approaches.

Another key consideration is the quality of testing and the ultimate understanding that's passed on to those who design and build IT systems and infrastructure. As traditional penetration testing services are largely undifferentiated and commoditised (in the UK market), it's difficult to know what 'good' looks like, even within the scope of a concept that is arguably broken. The industry is largely underpinned by the CESG CHECK scheme as a baseline for individual skill, with CTM and CTL status being used as metrics. In reality, the quality of testing you get depends on the individual doing the testing, not so much the company you hire (although the latter affects the overall experience as a consumer). As a minimum, most pen testing service providers will quote a framework or baseline that they align to (such as the OSSTMM, PTES or NIST SP 800-115), have a defined methodology, and in many cases a checklist they follow when testing. A good example of a baseline is the OWASP Top 10, used as a measure for assessing web applications. The OWASP Top 10 is simply a list of the ten most critical web application vulnerability categories, compiled by OWASP from contributed industry data. The issue with these types of measure is that your top 10 may not be the OWASP Top 10, and you may be missing key tests due to lack of context and a one-size-fits-all approach. From my experience of testing frameworks, if a client gives a supplier 100 web applications to test over the course of a year, it's likely that most of the vulnerabilities discovered will be repeated again and again, due to reuse of badly written libraries or mistakes learned during the development process and server build / configuration. How many paid-for hours could be saved by trend analysis, code review of re-used libraries, and documentation of secure builds and hardening, rather than testing what's essentially the same application over and over again and finding the same issues? It's not uncommon to see this carry over year on year, with new testing firms being brought in to validate findings.
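To make the trend-analysis point concrete, here is a minimal sketch of what mining a year's findings for systemic issues might look like. The application names, vulnerability categories and the 50% threshold are all illustrative assumptions, not real data:

```python
from collections import Counter

# Hypothetical findings: app name -> vulnerability categories reported
# over a year of testing. All names and categories are made up.
findings = {
    "app-01": ["XSS", "SQLi", "Weak TLS config"],
    "app-02": ["XSS", "Weak TLS config"],
    "app-03": ["XSS", "SQLi", "Weak TLS config", "CSRF"],
}

# Count how many distinct applications exhibit each category.
category_counts = Counter(
    category for categories in findings.values() for category in set(categories)
)

# Categories present in more than half the estate suggest a shared root
# cause (a re-used library, a common build image) worth fixing once,
# centrally, instead of being re-discovered test after test.
threshold = len(findings) / 2
systemic = sorted(
    cat for cat, count in category_counts.items() if count > threshold
)
print(systemic)  # ['SQLi', 'Weak TLS config', 'XSS']
```

Even this crude counting over existing reports would flag where a single code review or hardening document could replace hours of repeat testing.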


To conclude, I believe that a lot of the shortfalls in basic security hygiene come from the people running the show (read CISO, CTO, InfoSec Manager). There is a simple lack of understanding of cyber threat and risk, and of how to quantify, prioritise and remediate it. It's a lot easier not to rock the boat, and to make a metaphorical pinkie swear with China and North Korea to the effect of 'don't ask, don't tell', than to admit you don't understand your attack surface or how to manage it.

And then?

I think I've carried on quite enough for a single post about the issues, so I'll continue in a new post on how I believe these issues can be remediated, with a detailed discussion of simulated attacks, CREST STAR and CBEST, the pitfalls of changing tack, and the risk of doing nothing.
