To Be A CISO For A Day

A lot of the top-tier companies that I work with have fallen into the same trap of running a compliance-driven security function, where policies are created purely to achieve compliance and are not necessarily adopted or fully integrated by the organisation for security purposes. Policies and procedures are a good thing: they provide a structure for an organisation to adhere to, including compliance with explicit laws and regulations. But they need to be adopted from the top down, not just set at the top and never enforced or promoted.

This is not a slight on compliance, but one size does not fit all. Compliance, for me, has always been about demonstrating a minimum level of security so that a company can operate in a space or sector. How, then, does the same model work for a company with a turnover of one million pounds that is targeted by hacktivists or low-level cyber crime, versus a company that supports critical economic functions, has billions of revenue flowing through its accounts, and is being targeted by nation-state backed threat actors?

All companies, even in the same sector, are not equal. Each company should move to a risk-based approach, investing in and building a security programme or cyber security strategy that is commensurate with the threat it faces and with an acceptable risk appetite. Each organisation will have a different risk profile, risk appetite and budget, making each company's approach different and unique to them.

Again, this is not a slight on compliance: I would absolutely still want the business to continue to meet any regulatory requirements set out. I would just want to make sure that we vastly exceed those requirements and build a security strategy fit to defend the most sensitive parts of the business, one that is adopted by the entire business rather than set at the top and left until the next audit. This strategy should contain the rules that are lived and breathed throughout the organisation. This is where culture comes into the mix: you need to bring employees along on the journey and make them part of it, rather than demanding they abide by the rules.

In a previous red team debrief, one of the senior managers turned to me and asked, “what surprised you most about our environment?”, to which I responded, “the fact that you have a critical function sitting in the middle of your network for everyone to access, with no additional security or monitoring controls”. I said that if this were me, I would lock away what was most precious to me, protect it to the best of my ability, and for what I couldn’t protect I would heavily monitor and create use cases that can detect abnormal or anomalous activity. I think this scenario helps articulate what I mean by a risk-based approach, a view shared by frameworks like NIST.

The NIST Cybersecurity Framework sets out five functions: IDENTIFY, PROTECT, DETECT, RESPOND and RECOVER. This model helps change the culture and starts with the most important function, which is often lost or bypassed in large enterprise businesses – IDENTIFY.

While speaking to many enterprise businesses, it becomes clear that what is important or critical to one department is not always important or critical to another, and in fact may have limited impact on the overall business. The business's view must become cohesive and follow a top-down approach to identifying which assets or functions pose the biggest risk to the business, and which would have the biggest impact on the business's ability to function should they become compromised. The answers can also vary considerably depending on whether you ask the executive leadership team or the senior managers; usually a combined approach produces the best outcome.

Secondly, we must identify the WHO, WHY, WHEN, WHERE and HOW of the threats we face. We can acquire this knowledge through internal or external threat intelligence and other cross-sector resources that may exist. The end goal here is to gain insight into who is going to target us, what they want from us, and the methods and techniques they are likely to use. Once we understand this, we can overlay the attack picture against what we want to protect and the costs of protecting against those attacks, and subsequently quantify the costs of not protecting those assets. As an organisation we should also change the mindset from “we will never be breached” to “when we get attacked”, which will help change the way security is approached and budgeted across the business. Another extremely common question I get asked when reviewing an organisation's security posture is “what are our competitors doing to defend themselves and how do we compare?”. This epitomises the culture shift that has to occur: we should not hold ourselves to lower standards just because competitor companies harbour a higher risk appetite; we should all be striving to be the best we can be.

Given the new picture obtained through the “Identify” function, we can now start to plan our security strategy and what we need to do to defend our business. We can look at elongating attack paths, protecting critical assets and data, detecting what we cannot protect and, when a cyber incident occurs, responding and recovering appropriately.

When I started to write this blog post, I thought I could distil all of my thoughts about building a defensive capability into it, but a better approach is to cut it into multiple posts. Therefore, having touched on culture and the shift that has to occur, I want to focus the remainder of this post on immediate programmes of work that have hampered me as an attacker and deliver the best return on investment (ROI) in the short term.

Five things my target did last summer….

1 – Endpoint

“Out with the old, in with the new” – I should probably state that this is not new thinking, but some companies still find themselves protecting the perimeter more than the endpoint. Phishing and insider threats are such a big deal now, and have been for some time, that they make up about 80% of the ways we break into a business. I have seen companies rapidly improve their security posture with the implementation of Endpoint Detection and Response (EDR) or a good lockdown.

So what does a good endpoint configuration look like from both a detect and a protect perspective? We are going to use the following options to align many of our endpoint defences to MITRE ATT&CK and make sure we have good coverage:

Group Policy Settings – If you are running a Windows enterprise estate, there are many security features and settings that are available but disabled by default. Microsoft and the NCSC have provided a robust set of group policy configurations (ADMX files) that can be imported into Active Directory and deployed, provided you have a good test environment and roll-out strategy, as these may require tweaking for your environment.

It should be noted that certain features, such as Device Guard and AppLocker settings, will not be applied unless you are running an enterprise-ready version of Windows, e.g. Windows 10 Enterprise, and this is an absolute must-have. Most organisations are running some variant of Microsoft 365, such as E5, which often includes Defender ATP and Windows Enterprise by default. The GPO settings provided by the NCSC include basic settings that restrict access to certain areas of the operating system, which promotes good security practice and enables a number of proactive Microsoft security measures that every organisation should make use of.
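As a rough illustration of the deployment step, here is a minimal Python sketch that publishes a set of downloaded ADMX/ADML templates into a domain's Group Policy central store. The domain name and staging paths are hypothetical, and as the post advises, test in a lab with a proper roll-out strategy first.

```python
import shutil
from pathlib import Path

# Hypothetical paths: adjust the domain and the unpacked ADMX bundle location.
SRC = Path(r"C:\staging\ncsc-admx")                        # unpacked ADMX/ADML files
DST = Path(r"\\corp.example.com\SYSVOL\corp.example.com"
           r"\Policies\PolicyDefinitions")                 # GPO central store

def publish_admx(src: Path = SRC, dst: Path = DST) -> None:
    """Copy ADMX templates (and en-US ADML resources) into the central store."""
    dst.mkdir(parents=True, exist_ok=True)
    for admx in src.glob("*.admx"):
        shutil.copy2(admx, dst / admx.name)
    lang = dst / "en-US"
    lang.mkdir(exist_ok=True)
    for adml in (src / "en-US").glob("*.adml"):
        shutil.copy2(adml, lang / adml.name)

if __name__ == "__main__":
    publish_admx()
```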

Most threats or serious breaches start with some form of malware obtaining code execution on an endpoint. Whether that is via a drive-by attack, a phishing payload or even an insider trying to cause damage, they all require some level of execution. Many forms of execution can either be disabled or detected with the use of Windows GPOs and a half-decent EDR; the LOLBAS project, for one, is a great place to start. However, having built the weaponisation for many attacks, I can say there are several parts to the initial execution and weaponisation phase, e.g.:

  1. The download mechanism
  2. The script engine
  3. The execution method

We will look to tackle the problem with a live example that has been used by many threat actors looking to compromise an organisation:

  1. The download mechanism – a JS file downloaded straight from the browser
  2. The scripting engine – JScript / Windows Script Host used to pull down the core implant / malware
  3. The execution method – RUNDLL32 used to run the downloaded DLL and lay down persistence

The solution:

  1. Group Policy can be used to block common browsers from downloading files; the inclusion of tooling such as SmartScreen can also assist with downloaded files
  2. Windows Script Host can be disabled, along with blocking WSCRIPT and CSCRIPT execution with AppLocker (see the sketch after this list)
  3. AppLocker can also be used to block the use of applications not regularly executed by a user, e.g. DLLs or unsigned binaries
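For point 2, the Windows Script Host kill switch is a single registry value. Below is a minimal Python sketch, assuming local admin rights, that sets and audits it on one host; at scale you would push the same value via GPO.

```python
import winreg

# HKLM\SOFTWARE\Microsoft\Windows Script Host\Settings with Enabled = 0
# stops wscript.exe / cscript.exe from running scripts machine-wide.
KEY_PATH = r"SOFTWARE\Microsoft\Windows Script Host\Settings"

def disable_wsh() -> None:
    """Set Enabled=0 under the WSH Settings key (requires admin)."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)

def wsh_disabled() -> bool:
    """Audit helper: True if the lockdown is in place."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "Enabled")
            return value == 0
    except FileNotFoundError:
        return False  # key absent means WSH is enabled (the default)

if __name__ == "__main__":
    disable_wsh()
    print("WSH disabled:", wsh_disabled())
```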

Many companies have told me during debriefs about the difficulty of applying a robust user lockdown when they have a huge user base with many different roles and many different applications. However, even if there are 1,000 applications used across the estate, this is still a viable method of lockdown when the focus is on stopping the operating system being used against itself. Yes, multiple lockdowns may have to be written to cover multiple user roles, but the problem most face is getting the support to implement the change and a top-down approach to solving the issue.

LAPS (Local Administrator Password Solution) – There is little to debate here; this is a total freebie from Microsoft that fixes the long-burning issue of local administrator credential reuse. LAPS is extremely easy to roll out to both workstations and servers and will add another layer of security.
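If you want assurance that the roll-out actually landed, one hedged approach is to query AD for computer objects missing the LAPS password-expiry attribute, which is populated once LAPS manages a machine. A sketch using the ldap3 library follows; the server, account and base DN are hypothetical.

```python
from ldap3 import Server, Connection, NTLM, SUBTREE

# Hypothetical connection details; the legacy LAPS attribute name is real.
server = Server("dc01.corp.example.com")
conn = Connection(server, user="CORP\\auditor", password="...",
                  authentication=NTLM, auto_bind=True)

# Computers WITHOUT a LAPS expiry timestamp have never been touched by LAPS,
# i.e. they are gaps in the roll-out.
conn.search(
    search_base="DC=corp,DC=example,DC=com",
    search_filter="(&(objectCategory=computer)(!(ms-Mcs-AdmPwdExpirationTime=*)))",
    search_scope=SUBTREE,
    attributes=["cn", "operatingSystem"],
)
for entry in conn.entries:
    print("No LAPS coverage:", entry.cn, entry.operatingSystem)
```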

EDR (Endpoint Detection and Response) – These generally fall into two categories, and then subsequently into another two categories depending on your involvement in use case development. Let's deal with the first problem.

EDR software, while it runs with the highest privilege on a system, is generally designed in one of two ways: user-land hooking or kernel mode. In short, user-land hooking equals problems and kernel-only hooking equals minimal visibility, so in my experience a blend of both approaches is what you are looking for in a suitable EDR solution.

With user-land hooking, when you start a process the EDR injects its DLL into that process and redirects a number of Windows API functions (system calls) for additional monitoring. If it doesn't like what it sees, it can alert and either terminate the process or allow it to continue while silently triggering an alert in the background about the anomalous activity, e.g. CreateRemoteThread. A standard use case would be process injection: if the EDR detects anomalous or malicious actions, it can alert. However, user-land hooking has a number of deficiencies, one being that the user owns the process. This means that while the EDR product can inject into the process and redirect a number of Windows functions, a more sophisticated attacker can undo the EDR's work and disable its ability to monitor the use of those functions, and ultimately the attacker's actions. Additionally, and somewhat comically, Microsoft released the ability to block the loading of any DLLs not signed by Microsoft into a process, thus rendering such EDR solutions null and void until the vendors fix their products. This setting can be applied using “SetProcessMitigationPolicy” when starting a new process.
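To make the mechanism concrete, here is a minimal Python (ctypes) sketch of that mitigation applied to the calling process. A hardened application, or equally an attacker-controlled process, can set this policy so that only Microsoft-signed DLLs may load from that point on, which includes most EDR user-land hooking DLLs.

```python
import ctypes
from ctypes import wintypes

# PROCESS_MITIGATION_POLICY enum: ProcessSignaturePolicy = 8
ProcessSignaturePolicy = 8

class PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY(ctypes.Structure):
    # Bit 0 of Flags is MicrosoftSignedOnly: only Microsoft-signed images load.
    _fields_ = [("Flags", wintypes.DWORD)]

policy = PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY(Flags=1)  # MicrosoftSignedOnly
ok = ctypes.windll.kernel32.SetProcessMitigationPolicy(
    ProcessSignaturePolicy, ctypes.byref(policy), ctypes.sizeof(policy))
if not ok:
    raise ctypes.WinError()

# From here on, attempts to load a non-Microsoft-signed DLL into THIS process
# will fail.
```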

Tools such as Microsoft Defender, Carbon Black and CrowdStrike operate at both the kernel level and the user level and cannot be bypassed so easily, as they see all the functions that are invoked. I have the pleasure of working with some really skilled individuals who can bypass products operating at the kernel level, given the right level of privilege on a host. We will not go into the methods, but the level of sophistication required is greatly increased, which is always the play you are trying to force as a defender.

The second part we need to discuss is whether the vendor is all “secret sauce / black box” about their product, or development focused. Some EDR products make rule writing by the buying community the focus while providing some basic built-in capability, while others make it more difficult to write rules and rely on their own development, their so-called “secret sauce”, to protect a company.

The truth here is that you should buy what's right for you, based on your own requirements and the size of your security team. I have seen small teams make significant strides with the “secret sauce” model, as there is less desire to get behind the wheel but they know how to use their tooling really well. MITRE has done a really good comparison of EDR coverage against the ATT&CK framework, for anyone looking for more information, although I would recommend a proof of concept combined with some purple teaming to provide assurance as part of the review process.

Don’t forget the freebies such as Sysmon and ELK; these should be considered as a way to get access to kernel-based detections that cannot be so easily subverted and that increase the level of sophistication an attacker needs to go undetected. Some organisations have rolled out their own Sysmon configuration with great success, and SwiftOnSecurity has done some great work in this space too.
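As a starting point before any ELK pipeline exists, you can pull Sysmon events locally with the built-in wevtutil tool. A small sketch, assuming Sysmon is installed with its default log location (in production a shipper such as Winlogbeat would forward these instead):

```python
import subprocess

# Pull the 20 most recent Sysmon process-creation events (Event ID 1).
cmd = [
    "wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
    "/q:*[System[(EventID=1)]]",   # XPath filter for process creation
    "/c:20",                        # event count
    "/rd:true",                     # newest first
    "/f:text",                      # human-readable output
]
print(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
```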

2 – Segregation

Segregation is an interesting one. I was working with a company that had been completely compromised as part of a test, and the point was made that they had compliant firewalls. I think this was because they 1) had firewalls and 2) locked away PCI data. Other than that, the firewalls were operating as routers…

Everyone is so accepting of the term “principle of least privilege”, but mention “privilege of least access” and you seem to be fighting a losing battle.

Imagine a Utopian environment where a user workstation can talk only to:

  • the Exchange server for email (HTTP/SSL/SMTP)
  • the proxy server for Internet access (HTTP/S)
  • the file server or online storage utility for storage (SMB/HTTPS)
  • the SharePoint server or collaborative web services (HTTPS)
  • and finally, the domain controllers (all Microsoft recommended ports)

Now imagine that machine becomes compromised and administrator credentials are found for a development system and other local workstations. What can the attacker do with these credentials? Well, it's fair to say very little, especially without considerable time and effort.

The first argument raised against segregation is usually “endpoints have to talk to each other”. The brutal truth is they really don't: aside from the occasional messaging application, such as Skype, no endpoint needs a direct connection to another. Another argument is “how do we manage the endpoints?”. Depending on your tooling, most sysadmin software calls back to a management server on a periodic basis, and that is how you manage the host. If that is not possible, local firewalling that allows the sysadmins' subnet into the various ports required for remote administration is an option. It is essential to open only what is necessary while disallowing common lateral movement ports. Keep in mind this is just workstation-to-workstation at this point.

Servers should then be reviewed and firewalled off, limiting access only to the services required for their function. One key thing is to minimise access based on direction: a file server should not be invoking connections back into the workstation network, so this should be blocked. Remember that firewalls are mostly stateful and do not need an access rule in both directions; you would be surprised how many firewall configurations I have reviewed where rules were added both ways because someone thought that was how it should be implemented.

I have tested an environment that successfully deployed this type of setup, and it is incredibly hard to achieve lateral movement around the network, even when highly privileged credentials are obtained via some common weakness. (It should be noted that it's not impossible; it just toughens the environment and makes you a harder target.)

Solutions that should be considered:

  • Windows Firewall – virtually free but still effective (see the sketch after this list)
  • Role-Based Access – authenticate to the switch and have a firewall policy applied to your user
  • Isolation Technologies – these limit broadcast traffic and prevent workstation-to-workstation communication
  • Hardware Firewalls – cut the network into zones to make segregation easier
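For the Windows Firewall option, a minimal sketch of the “default deny inbound, allow only the admin subnet” pattern using the built-in netsh tool is below. The subnet and port list are hypothetical; stage this carefully, as a bad rule can cut off your own management access.

```python
import subprocess

ADMIN_SUBNET = "10.10.50.0/24"               # hypothetical sysadmin jump subnet
ADMIN_PORTS = "135,139,445,3389,5985,5986"   # RPC, SMB, RDP, WinRM

def netsh(args):
    subprocess.run(["netsh", "advfirewall", *args], check=True)

# With the default inbound policy set to block, anything not explicitly
# allowed (including workstation-to-workstation SMB/RDP) is dropped.
netsh(["set", "allprofiles", "firewallpolicy", "blockinbound,allowoutbound"])

# Allow remote administration ports only from the sysadmin subnet.
netsh(["firewall", "add", "rule", "name=Allow admin subnet",
       "dir=in", "action=allow", "protocol=TCP",
       f"localport={ADMIN_PORTS}", f"remoteip={ADMIN_SUBNET}"])
```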

At this point it's also worth mentioning zero trust networks, but I'm not going to go into them here as that is a topic for another blog post.

While the fight remains to create an operationally secure network, I appreciate that administrators have to be able to administer workstations and servers and carry out maintenance. Solutions such as jump boxes and Citrix allow a company to carve up its network to create access paths that can be secured and used to establish heavily monitored access funnels.

Companies should then create policies and procedures on how to administer the network; once you have a robust policy, you can create use cases to detect activity that does not follow the pattern and alert accordingly. Attackers are not typically going to know your policies and procedures, so you have the upper hand, and this is simple to implement and enforce provided you have the right buy-in at all levels.

One caveat: while I have been deliberately staying away from wording such as tactical and strategic, and avoiding timescales, segregation is usually a lengthy project. However, there are many quick wins that can be enabled to make the life of an attacker difficult while a larger programme of work is underway to properly segregate the network.

3 – Multi-Factor Authentication (MFA) & Privileged Access Management (PAM)

Oh, how an attacker hates MFA, depending on its construction and ultimately its effectiveness. Again, I've seen some good and some bad on this front. Basically, hardware-based OTP (a mobile device is acceptable) is the most effective. Solutions such as Duo, which offer push notifications as a second factor, also work well.

A large portion of the attacks that I have played out from an adversarial point of view have focused on escalating privileges and stealing credentials, ultimately following legitimate users through a number of everyday actions to compromise a function. Taking away the ability to steal and reuse credentials is paramount to squashing the attacker's opportunities inside your network.

It's almost criminal to have a critical function that makes use of, or is protected by, a single set of credentials. MFA does have some deficiencies; for example, a website that uses MFA still just authenticates a session cookie, which can then be stolen. But implementing MFA is about limiting the opportunity to replay credentials, which is so important in mitigating, lengthening or nullifying the attack path.

Another important part is privileged access management (PAM). We won't go too deep into this element, but when implemented well it can limit an attacker to a subset of computers or servers even when a vulnerability is found and abused. I have been in very well locked-down environments and in a position to compromise many servers, but once access to those servers was achieved we were limited in what credentials we could expose due to a good implementation of privileged access management. In other environments, once you gain access to server land it can be game over in minutes due to Tier 0 or Tier 1 credentials being left in memory to be abused.

Solutions such as CyberArk can take away the burden of rolling out and managing an MFA/PAM solution in some cases, but to be honest it doesn't really matter how it's executed; it's the overall effectiveness of the solution that makes all the difference.

4 – Perimeter Controls

Visibility is key for a company to be able to protect its network; any blind spots can cause significant problems for the perceived security posture of the company. A regular conversation I have with defensive teams goes: “we have TLS inspection but cannot see your C2 traffic”. This is because, as an attacker, I hide in plain sight through techniques such as domain fronting and gravitate towards websites that are categorised as Finance or Healthcare. For privacy reasons most companies exclude these two categories from inspection, and sometimes others such as cloud hosting providers or Microsoft domains, all of which really benefit the attacker.

I was going to drone on about why it's important to inspect traffic and what that looks like, but really it's easier to say: inspect all the things, all the time. It's too easy for an attacker to provision infrastructure and get it categorised as financial for you to be allowing large categories of non-inspected content. If I have to poke holes in my inspection, it will be on a site-by-site basis after some due diligence.

Inspection is important for blocking file types on the way in, as well as for creating rules for outbound connections. With full inspection enabled, the security team can create robust rules that govern what can be brought into the network (ingress) and what gets blocked outbound (egress). In addition, we can start to tackle the problem of domain fronting, where there is a mismatch between the domain in the TLS SNI and the HTTP Host header.
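A rough sketch of a fronting-detection use case, assuming your inspection platform can export records containing both the SNI and the Host header (the CSV layout here is hypothetical, so adapt the field names to your proxy):

```python
import csv

def find_fronting_candidates(path):
    """Yield log rows where the TLS SNI and HTTP Host header disagree."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sni = row["sni"].strip().lower()
            host = row["host"].strip().lower()
            if sni and host and sni != host:
                yield row

for hit in find_fronting_candidates("proxy_tls_inspection.csv"):
    print(f"possible fronting: SNI={hit['sni']} Host={hit['host']} "
          f"src={hit.get('src_ip', '?')}")
```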

While I have yet to see anyone do C2 detection well, you cannot start without the appropriate visibility; furthermore, it is important post-compromise to assist an investigation should a company be breached.

One thing I see often, which could easily fall under segregation but I will discuss here, is access to the proxy and the use cases around that access. As an attacker, one of our main techniques is to create payloads that are hard-coded with the proxy address and credentials to authenticate. This gives servers or users who do not normally have Internet access, such as the SYSTEM account, a route to the Internet. It is therefore important to build use cases around access to the Internet: the vast majority of the user population should not be creating simultaneous connections to the Internet from different source IP addresses, and servers should be explicitly denied access to the proxy in the first place.
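As a sketch of that use case, assuming proxy authentication logs parsed into (timestamp, user, source IP) tuples (the log format is hypothetical):

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def concurrent_source_ips(records, window=WINDOW):
    """Flag users seen authenticating to the proxy from more than one source
    IP within the window, a classic hard-coded-proxy-payload tell."""
    seen = defaultdict(list)  # user -> [(timestamp, ip), ...]
    for ts, user, ip in sorted(records):
        entries = seen[user] = [(t, i) for t, i in seen[user] if ts - t <= window]
        if any(i != ip for _, i in entries):
            yield user, ip, sorted({i for _, i in entries})
        entries.append((ts, ip))

records = [
    (datetime(2020, 5, 1, 9, 0), "jsmith", "10.1.4.20"),
    (datetime(2020, 5, 1, 9, 2), "jsmith", "10.3.7.55"),  # second IP, same user
]
for user, ip, others in concurrent_source_ips(records):
    print(f"ALERT: {user} from {ip} while also active on {others}")
```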

The reason for discussing this is that I have seen both good and bad setups, but the one that poses the most issues to an attacker is a transparent proxy. Technologies such as Palo Alto support this capability and identify users via agent or agent-less techniques. Either way, the attack path becomes less straightforward and deviates from the norm, which is certainly something to consider.

For quick wins on the perimeter side, sandboxing technologies and file conversion solutions should be enabled for all inbound email and links.

Sandboxing technologies should be used at the perimeter to safely detonate attachments and links before they are allowed into the environment, providing an appropriate amount of assurance as to their purpose prior to release to the user. Solutions that convert documents and file types containing dynamic content can also be used, for example converting macro-enabled documents into PDFs. Depending on the coverage offered through other solutions, these should also be reviewed.

5 – SIEM

Up to this point a large portion of the discussion has focused on the “Protect” element, with some slippage into “Detect”, but it has predominantly been about stopping bad guys doing bad things in the first place.

After an attacker has penetrated your first line of defence, i.e. the “Protect” layer, it is extremely important to detect the activity and act on it accordingly. This is why having a robust blue team and a security information and event management (SIEM) system is my fifth thing, bringing everything together. We will dive into the detail of what good looks like for a SOC in another blog post, but for this article I want to make the point that an organisation should invest significant time and effort into its SIEM, in terms of people, process and technology. That also means educating the people who are protecting your organisation from the bad guys: they should be trained and empowered to make change where necessary, with full buy-in from the top and the correct powers across the business to effect both technical and procedural change.

Although I’m not going to go into massive detail, there are a few things I would focus on when starting to think about detection. These are as follows:

MITRE Alignment – MITRE has done a great job of pulling together many of the current attacker TTPs (Tactics, Techniques and Procedures) in the ATT&CK framework, which maps many of the likely actions an adversary will take within your network. Any SOC should be aligning detections to it and gaining coverage of as much of the ATT&CK matrix as possible.

Proactive Deception Technologies & Threat Hunting – This is all about getting ahead of the attackers and preparing traps: honeypot devices, accounts, SPNs, images, SCF files and many more. Transitioning the mindset from defence, awaiting a detection, into response mode makes a significant impact on the likelihood of eradicating a threat from your network. Attackers do fall for low-hanging fruit; they only want to work as hard as they have to. If you can provision some low-hanging fruit along the attack path, they will walk into it. I can confirm this… sadly! Have a look at https://canarytokens.org/ for some example techniques.
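As one concrete example of the SCF trap mentioned above, here is a sketch that seeds a share with an SCF file whose icon points at a monitored host; when anyone browses the folder, Explorer fetches the icon over SMB, so any connection to that host becomes a high-fidelity alert. The host and share paths are hypothetical.

```python
from pathlib import Path

# Hypothetical monitored host: any SMB connection to it should raise an alert.
TRAP_ICON = r"\\10.20.30.40\share\trap.ico"

SCF_TEMPLATE = """[Shell]
Command=2
IconFile={icon}
[Taskbar]
Command=ToggleDesktop
"""

def plant_scf(directory, name="@inventory.scf"):
    """Drop a honeypot SCF file ('@' sorts it to the top of the listing)."""
    target = Path(directory) / name
    target.write_text(SCF_TEMPLATE.format(icon=TRAP_ICON))
    return target

plant_scf(r"\\fileserver\finance$")  # seed a juicy-looking share
```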

Critical Function Logging and Alerting – Many companies take the compliance approach to logging and alerting on critical applications; most actually do a better job on endpoints than on critical functions or applications. The security team needs to sit down and understand how the systems function, whether applications, databases, etc., and how they are used, then create rules and valid use cases that identify anomalous activity. As an example, on one test we accessed the back-end database of an application, using the application account from the application server, but made the connections from a workstation. This is completely against the design of the solution and not intended functionality; it is also not the norm, and with the right data it sticks out if configured correctly. Time spent understanding the critical function and writing good use cases is something that is not often done well, but when it is, it's very effective and hard to circumvent.
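A minimal sketch of that exact use case follows; the account name, server addresses and event fields are hypothetical stand-ins for whatever your SIEM normalises.

```python
# The application service account should only ever log on to the database
# from the application servers themselves.
APP_ACCOUNT = "svc_billing"
EXPECTED_SOURCES = {"10.2.0.11", "10.2.0.12"}  # the application servers

def check_db_logon(event):
    """Return an alert string if the app account connects from anywhere else."""
    if event["account"] == APP_ACCOUNT and event["src_ip"] not in EXPECTED_SOURCES:
        return (f"ALERT: {APP_ACCOUNT} connected to {event['db_host']} "
                f"from unexpected host {event['src_ip']}")
    return None

print(check_db_logon({"account": "svc_billing", "src_ip": "10.9.8.7",
                      "db_host": "sql01"}))
```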

Working Cohesively With The Business – This is another simple win. If the business has known risks, e.g. a single-factor VPN, then we should be creating rules that provide some coverage, such as identifying concurrent logons to the VPN, brute-force attacks and multiple login attempts. Even geo-fencing is an approach that can minimise the impact if you know where the majority of your user base resides. This is really about acting as one and feeding quality information between the SOC and other departments to get the correct coverage for the business.
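A quick sketch of the geo-fencing idea for the single-factor VPN example; the country allow-list and the lookup table are stand-ins (in practice you would use a GeoIP database or the VPN concentrator's own country field):

```python
ALLOWED_COUNTRIES = {"GB", "IE"}  # hypothetical user-base footprint

GEOIP = {"81.2.69.160": "GB", "175.16.199.1": "CN"}  # stand-in lookup table

def geofence_alerts(vpn_logons):
    """vpn_logons: iterable of (username, source_ip) tuples."""
    for user, ip in vpn_logons:
        country = GEOIP.get(ip, "??")
        if country not in ALLOWED_COUNTRIES:
            yield f"ALERT: VPN logon for {user} from {ip} ({country})"

for alert in geofence_alerts([("jsmith", "81.2.69.160"),
                              ("asmith", "175.16.199.1")]):
    print(alert)
```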

Testing – Use cases should be subject to regular testing and assurance that the alerts operate correctly. As one of the first lines of defence, we had better make sure that we can detect bad activity within our environment, and never assume. I think many companies have preconceived ideas of coverage, with one or two rules per tactic, but forget to apply the sophistication multiplier: there are many ways to do the same thing. A culture of continuous improvement and testing should be built into any SOC. To this end, an offensive-minded presence should make up part of the security team, supported by regular purple teaming and other assurance activities to test the “what is” against the “what should be”.

Conclusion

Being just one part of a bigger blog series, hopefully this provides good insight into the changes I would look to make if I were a CISO for a day! Keep the culture simple but empowered, carry out a risk assessment, and align the risk to business goals and objectives, resulting in a plan of action that the whole company adheres to and builds into everything they do.

If you have any questions please don’t be afraid to DM me or hit us up on Twitter.

Published by b4ggio_su

Red Teamer
