One of the common challenges when we’re meeting with clients is helping them understand the appropriate rules of engagement around Adversarial Emulation or Red Team consultations. Many people are used to traditional penetration testing, where you give the consultant a particular subnet to scan or ask them to obtain domain administrator permissions, but haven’t truly understood the reasons why we ask for the rules we do (hint: it’s because most people don’t understand how attackers actually work). In the next few blog posts, I’ll tackle the four major ground rules that we push our clients on when testing their networks. Here’s the first:
Rule 1: No One Can Know
I was at a client site for a one-week traditional penetration test a few years back. Mid-way through the week, we were having a conversation with one of the lead administrators and asked for clarification on what a particular server did. He said he didn’t know off the top of his head, but could get me the information the following week. Knowing that he’d been asked to stay available to assist us during the week, we asked why there would be a delay, as the information would be pointless the next week. Turns out he’d decided to power off his desktop and refused to log in anywhere the entire week we were on site to ensure that we didn’t hack him. We attempted to explain that our findings weren’t punitive in nature, but he wasn’t convinced.
Needless to say, that didn’t work for us. We got in touch with the overall client, gave him a rundown of the situation, and got it resolved. But that’s not acceptable behavior. We don’t try to hack you to brag to your boss about how bad your security is or how your work machine is running unapproved software. We do it as part of a justifiable campaign of access and exploitation.
As a result, when we’re doing attack campaigns against customer networks, we request that the absolute smallest number of people possible know about the engagement. I’ve been on engagements with two different Fortune 500 companies with tens of thousands of employees each, and (at both clients) only three people knew about the engagement. As long as key people in the escalation process for security incidents are aware, they can deconflict our activities from those of legitimate attackers. Beyond that, the goal is to have the defenders react as naturally as possible. Unless China1 takes out a full-page advertisement in the New York Times telling you that they’re going to attack your network tomorrow, you don’t know when you’re going to be attacked.
The whole point of adversarial emulation is to test your defenders’ natural reactions against threats, after all.
Edit: Now that part two is published, go read it.
By China, I obviously don’t mean the People’s Republic of China since they’ve promised not to hack commercial companies anymore. That just happens to be a randomly chosen set of five letters I’m using as a placeholder for you to substitute whomever you think might be a nation state interested in your data. ↩