
Joe Barrett

INFOSEC Professional


This continues from part one of this series and covers another of my ground rules. The importance of these rules cannot be overstated when we're conducting offensive operations against client networks. The purpose of adversarial emulation is to model realistic attack profiles as closely as we can. When defenders set up artificial constraints, they only hurt themselves - but unfortunately they don't always see it that way. I'm trying to change that.

Rule 2: As Few Boundaries As Possible

“Hey, don’t scan that subnet - that’s where all of our legacy stuff is.”

“Don’t worry about scanning that system, it will be decommissioned next month.”

Yeah, I know. Because the bad guys are going to leave that alone, too. On one engagement, I pestered the client and got permission to scan that "legacy" subnet. I ended up finding a Solaris box that was missing an eight-year-old critical patch with a publicly available exploit. On that box was a passwordless SSH key pair that gave me access to about a thousand other Linux systems, resulting in complete enterprise compromise, including numerous "Holy Grail" compromise objectives. And in tracing back the compromise, it turned out that the patching system had been lying to administrators for eight years about whether or not that patch (and others) had been successfully applied. Whoops. Good thing we scanned it, huh?

We understand that certain types of systems react poorly to being scanned. I've worked with Industrial Control System (ICS) networks before, and they're incredibly finicky, especially if you're foolish enough to fire something like Nessus at a Modbus/TCP interface. A friend of mine crashed an entire Exchange environment about a decade ago because a certain model of Brocade fiber switch handled nmap scans very poorly - but to his credit, he'd asked the client if there were any systems sensitive to being scanned, and the client had said no. We do our very best to ensure that we don't crash systems, not only because it makes us look like amateurs, but because it's our professional reputation on the line if we take down your network. That said, we don't actually want to scan anything on your network - we'd much rather take it over without scanning! After all, your admin's network diagram is a much more accurate picture than anything we can piece together from nmap.
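When a client does identify genuinely fragile hosts, the usual compromise isn't "don't scan that subnet" - it's an explicit, written exclusion list plus conservative timing for everything else. A minimal sketch of what that looks like in practice (the subnets and filename here are purely illustrative, not from any real engagement):

```shell
#!/bin/sh
# Hypothetical client-supplied exclusion list: fragile ICS gear that must
# never be probed, even for host discovery.
cat > excluded-hosts.txt <<'EOF'
10.10.50.0/24
10.10.51.7
EOF

# -T2 ("polite") slows probe timing to reduce load on sensitive devices;
# --excludefile honors the client's list. The command is echoed rather
# than executed so the scope is reviewable before anything goes on the wire.
echo nmap -T2 -sS --excludefile excluded-hosts.txt 10.10.0.0/16
```

The point of the exclusion file is that the boundary is documented and signed off on, rather than living in a hallway conversation - which is exactly the kind of artifact you want when a "don't touch" box turns out to be the one hiding eight years of missed patches.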

Likewise, many organizations react predictably when we ask if any targets are off-limits for phishing or similar social-engineering attacks. "Well of course you can't attack the CEO, that would be bad." Really? Because I'm sure the bad guys are going to attack him (or at least his executive assistant). Thankfully, we've got a few clients that have taken this to heart and agreed with us that literally every single employee in the organization is a valid target. When your client says "If our CEO opens your phishing e-mail, well, shame on him" - that's an awesome feeling. Then again, that takes a CEO who understands risk and is willing to be humbled. Not everyone has that.

By allowing us as free a range as possible on your network, you make it more likely that we find and demonstrate attack paths an attacker might actually use, instead of being forced into the exact same contrived attack path year over year. If you're changing consulting firms every time you do an assessment and they're still generally giving you the same results, you need to take a hard look at yourself and the rules of engagement you're authorizing.

What you won't let us do is very illuminating: it shows how strong you really believe your defenses are (Rule 12).

Edit: Now that part three is published, go read it.