Good of them to release this, and I have a dog in the race about getting people to think higher-level about security, but ATT&CK, STRIDE and other frameworks tend to be solipsistic, self-propagating bullshit.
I would also argue that quantitative security risk models serve mainly as a corporate laundering system to obfuscate risk, do not have any meaningful predictive power, and that security compliance has become a make-work field for the unskilled, whose role is to be both an easy mark and a scapegoat for reckless corporate behaviour.
Hopefully it will mature to where designers and engineers themselves build in mitigations, the way some of them have with environmental and safety risks, but as a business, I think security is due for some scrutiny.
They can be taken by managerial types to obfuscate things in a b.s. way but they really do help organize efforts and map adversary capability in a way that lets you prioritize resources.
I've seen both sides, without this people just use intuition and whatever is trending in the news cycle these days.
>ATT&CK, STRIDE and other frameworks tend to be solipsistic, self propagating bullshit.
Not disagreeing with you, but I was wondering if you could expand on this. In some circles, ATT&CK is seen as the gold standard. Do you believe this is misguided?
I've been happy to see ATT&CK, yet in practice I've avoided it as a lot of compliance effort for limited value, when that same effort can go elsewhere.
The good:
-- Auditing & tuning defense in depth: From a testing/verification perspective, defense in depth should follow both your organizational structure (endpoint, network, cloud, virtualization layers, etc.) and, to economically optimize, double down around attack structure. ATT&CK compliance gives a go-ahead for auditing your system + vendors for holes in the latter.
-- N+1 standard: Most Elastic/Splunk/etc. SIEM implementations I've seen are half-implemented dirty data messes. The world failed to agree on past efforts like CIM, CEF, CVE, STIX, etc. ATT&CK compliance is auditing your data lake for better SIEM standardization, and an N+1 effort to get folks to agree. If ATT&CK gets you signoff for cleaning your lake, great!
-- Training: A surprisingly large portion of defenders are junior or overly specialized (endpoint vs network vs cloud vs malware vs...)
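To make the N+1 standardization point concrete, here's a rough Python sketch of the per-source field mapping that teams otherwise hand-maintain in their SIEM. All field names, source names, and the common schema are invented for illustration, not any real product's:

```python
# Minimal sketch of the "N+1 standard" normalization problem: mapping
# heterogeneous vendor log events onto one common schema. Source and
# field names below are illustrative assumptions, not a real schema.

COMMON_FIELDS = ("timestamp", "src_ip", "dst_ip", "user", "action")

# Per-source field mappings an analyst would otherwise hand-maintain.
FIELD_MAPS = {
    "firewall_a": {"ts": "timestamp", "src": "src_ip", "dst": "dst_ip",
                   "usr": "user", "verdict": "action"},
    "endpoint_b": {"event_time": "timestamp", "local_ip": "src_ip",
                   "remote_ip": "dst_ip", "account": "user",
                   "outcome": "action"},
}

def normalize(source: str, raw: dict) -> dict:
    """Translate a raw vendor event into the common schema."""
    mapping = FIELD_MAPS[source]
    event = {canon: raw[field]
             for field, canon in mapping.items() if field in raw}
    # Flag gaps instead of silently dropping them -- half-mapped events
    # are exactly the "dirty data lake" problem described above.
    event["_missing"] = [f for f in COMMON_FIELDS if f not in event]
    return event

e = normalize("firewall_a",
              {"ts": "2021-01-01T00:00:00Z", "src": "10.0.0.5",
               "dst": "203.0.113.9", "verdict": "deny"})
```

Multiply this by every log source and every vendor schema change and you get the duplication-of-effort problem: each org re-derives the same mappings.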
The bad:
- Expensive. It doesn't solve much more than the above. I'm much more inclined to spend the $ and effort on better data processing & automation & intelligence capabilities instead. Likewise, for auditing, spend time/money on manual + automated red team, or ATT&CK? For big banks & gov, it's more ok b/c the answer is "giant budget; do both."
- Dead end / stepping stone. For the ontological aspect, manual killchain taxonomization is good, but for mapping to real-world data, my sense was, and has only increased, that a lot of that effort needs to be shifted towards (a) centralized logging efforts, where cloud SIEMs like Azure Sentinel / Google Chronicle do it for you rather than an admin running Splunk/ELK, and (b) combining w/ ML techniques (BERT?) instead of both the parsing and categorization being 100% manual.
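As a toy illustration of the "combine with ML" point: instead of hand-written parsing/categorization rules, learn a mapping from raw events to coarse killchain labels. A real system would use a transformer (BERT-style) encoder; this uses bag-of-words token overlap purely to show the shape of the pipeline. The training samples and labels are invented:

```python
# Toy stand-in for ML-based event categorization: "learn" a mapping
# from command lines to coarse killchain labels via token overlap.
# Samples and labels are invented; a real system would use a trained
# language model, not this nearest-centroid sketch.

from collections import Counter

TRAINING = [
    ("schtasks /create /tn updater /tr evil.exe", "persistence"),
    ("reg add HKCU\\Software\\Run /v x /d evil.exe", "persistence"),
    ("procdump -ma lsass.exe out.dmp", "credential-access"),
    ("rundll32 comsvcs.dll MiniDump 624 out.dmp full", "credential-access"),
]

def featurize(cmdline: str) -> Counter:
    return Counter(cmdline.lower().split())

# "Training": aggregate token counts per label into a centroid.
CENTROIDS = {}
for cmd, label in TRAINING:
    CENTROIDS.setdefault(label, Counter()).update(featurize(cmd))

def classify(cmdline: str) -> str:
    feats = featurize(cmdline)
    # Score each label by token overlap with its centroid.
    return max(CENTROIDS, key=lambda l: sum((CENTROIDS[l] & feats).values()))

print(classify("schtasks /create /tn backup /tr payload.exe"))
```

The point isn't this particular classifier; it's that labels generalize past the exact strings you wrote down, where a manual rule would have required a new regex per variant.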
I may be especially attuned to the problem b/c teams come to Graphistry when they realize their SIEM is a mess and can't quickly see stuff like killchains, so are striving for next-level thinking. So we get to see a lot of the many ways manual / rule-based n+1 standards here go wrong. At the same time, I'm optimistic b/c of the automation+ML world we're increasingly in. (Semantic web vs. Google search.)
More on the positive side, we've been working on some COVID anti-misinformation work at ProjectDomino.org, and an ATT&CK-like mapping of misinformation has been helpful. Defense in that world is a lot earlier in the understanding needed for automatic detection & response, so manual taxonomization is a good early step.
Central logging efforts with Chronicle/Sentinel always result in a ton of data loss. On-prem is cheap and works well; Elastic/Graylog solve this well. Whether it's Sentinel, Chronicle, or so many of the others in this market, they simply don't and never will know your environment well enough. This will always stab you in the back with a double-edged sword: a) losing contextual detail that would have let you make better decisions; b) losing control over data transformation and detection/alerting logic, which means either they do too good a job suppressing false positives and you miss bad stuff, or they generate too many false alerts that waste precious man-hours, many of which could be tuned out (the bureaucracy of a third party, even with admin access to their rules, will always be crippling).
In short, my opinion is that you need a fast-failing and agile seceng (devsecops?) cycle where you have all the data (logs and enrichment) at your disposal. Your people who spend a lot of time juggling false positives could instead contribute to this, which will result in good alert fidelity and more free time for threat research and hunting! (The fun stuff!!)
We're talking about slightly different things: you're thinking cloud vs. on-prem, while I mean no longer giving SIEM vendors a free pass for crappy parsing & classification of most data and pushing that to users. Running the SIEM via a managed cloud happens to make this easier, but not actually 100% required (e.g., federated learning.)
Historically, Splunk, Elastic, and other SIEM vendors abdicated most responsibilities here despite marketing that suggests they're good at ingest/parsing/correlation across different products ("single pane of glass"). Requiring IT/security teams to do parsing + classification across an ever-changing stack of client/server apps + security tools is a lot of work given the ever-increasing list of other responsibilities they have, and a gross duplication of effort across organizations. The result has been a lot of incompletely configured systems, a big blocker on most data & automation efforts ("garbage in / garbage out"), and burning internal team time + professional service dollars on week-1 stuff. In contrast, it's not so hard for a vendor to amortize parsing across all its users. (Which Microsoft and Google are pushing towards, afaict.)
And agreed, teams should be able to iterate on the interesting stuff, not reinventing the wheel on parsing winlogs/apache/netflow/zeek/palo alto/etc and normalizing them to the taxonomy of the year. To the original point about ATT&CK, teams should be training & labeling & sharing models, while ATT&CK is informal ideas (very 1970s/1980s) and manual SIGMA rules (very 1990s/2000s), yet we know that is unreliable and too much work to maintain & grow. They need to be able to quickly work w/ feature engineering teams, not work in ignorance of them nor in isolation.
Sure, it's punchy, but I hope this illustrates it:
The basic problem with them is that they are bottom-up models of threat scenarios that originate with artifacts of technology implementations, which conflate vulnerabilities with attacks, and use news hits as threat actors/agents to justify their importance.
It's 90% an exposition vehicle for displaying how esoterically knowledgeable the practitioners are about hacker trivia and jargon, and from a business perspective, it's just kids playing in the sandbox who produce the compliance artifacts you want to get your project approved. Geeks get to geek, and project managers get their amber status risk, and when Equifax/OPM/LifeLabs happens, everyone says it wasn't foreseeable because they were "compliant." The frameworks externalize risk into models that are divorced from reality, which hides it, and that's why institutions buy into them. I'd say they're the collateralized debt obligations of engineering.
Real world threat scenarios are the counter cases to your business model. It's the top down, "if the C/I/A of this thing we care about is compromised, do we survive?"
People who ask questions like "how do I prevent spoofing in this tech stack?" without first asking "what if these STD test records and results end up on the dark web, and what are we doing to prevent that?" are culpable.
A threat scenario is when one of the key factors that makes your business viable goes wrong. The threat actors create the likelihood via their means and opportunity (independent of patch levels), and their motive is literally driven by the consequences of an attack.
Worrying about APT group x is meaningless when it is more plausible and serious that a grad student is going to publish a paper demolishing your elliptic curve implementation at a conference and get you laughed out of your industry.
Vulnerabilities and attacks are random, but the risks and controls are not. Threat scenarios are what happens when a business factor fails hard.
In these ways, the bottom-up focus of modern security frameworks and scoring systems serves more to enable poor-quality decision making and the project- and product-management-level anti-patterns that hide risk. That's literally their value. They let cynical people bring crap to market.
The reason they do this is because security people are stuck in the cycle of thinking they still need to explain themselves and convince others they are knowledgeable, and that there is a problem they solve. Everybody already knows, and compliance people have become just marks who hold the bag of bundled risk they think isn't theirs because they explained it all with their framework.
>...that security compliance has become a make-work field for the unskilled, whose role is to be both an easy mark and a scapegoat for reckless corporate behaviour.
I like the cut of your jib, sir.
>It's 90% an exposition vehicle for displaying how esoterically knowledgeable the practitioners are about hacker trivia and jargon, and from a business perspective, it's just kids playing in the sandbox who produce the compliance artifacts you want to get your project approved. Geeks get to geek, and project managers get their amber status risk, and when Equifax/OPM/LifeLabs happens, everyone says it wasn't foreseeable because they were "compliant." The frameworks externalize risk into models that are divorced from reality, which hides it, and that's why institutions buy into them. I'd say they're the collateralized debt obligations of engineering.
I think these risk models are part of a mutually-agreed kabuki illusion. It's hard work to assume prudent risk, to identify hazards specific to an organization's objectives, devise appropriate controls, etc. These frameworks offer a solution: if "industry groups" agree to hold them as valid, then it's like you say -- project managers get their amber risk status, and the large scale breaches are simply "Who could've known?" events, where lessons learned are drafted, reports are produced, commitments are made, and life moves on.
Building up the compliance industry - and I'd add no small part of the cybersecurity industry, tier 1 SOC personnel, etc - seems to me to be creating a class of worker ripe for having the floor yanked out from under them in a recession. It's cost-center work, but it's marketed as 'cutting edge skills for the burgeoning cybersecurity industry'. What's the revenue generated, or costs cut, by monitoring those dashboards with human eyeballs?
Been down this road before, it's much harder than it looks. MITRE techniques can be deceptive in that you think you can detect a technique, but that's true only for the specific attack scenario. Example: you can detect anomalous scheduled task creation, but is it because you are looking for specific command lines? If so, why can't attackers just use .NET? You can detect cred dumping because procdump.exe or wce.exe is seen, but you're not looking for process handles to lsass. It can lead to a false sense of security if you're not careful.
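To make the lsass point concrete, here's a rough Python sketch of the behavioral alternative: instead of matching known dumper command lines (procdump/wce), flag any process opening a handle to lsass.exe with memory-read access. The event shape mimics Sysmon Event ID 10 (ProcessAccess), but the allowlist and the specific fields used are illustrative assumptions, not a production rule:

```python
# Sketch: detect credential dumping by behavior (handle access to
# lsass) rather than by known tool names. Event fields mimic Sysmon
# Event ID 10; the allowlist is invented and environment-specific.

# PROCESS_VM_READ is the access right a memory dumper needs.
PROCESS_VM_READ = 0x0010

# Processes with a legitimate reason to touch lsass (assumed examples).
ALLOWLIST = {"csrss.exe", "wmiprvse.exe", "MsMpEng.exe"}

def suspicious_lsass_access(event: dict) -> bool:
    if event.get("TargetImage", "").lower().endswith("lsass.exe"):
        granted = int(event.get("GrantedAccess", "0x0"), 16)
        source = event.get("SourceImage", "").rsplit("\\", 1)[-1]
        return bool(granted & PROCESS_VM_READ) and source not in ALLOWLIST
    return False

evt = {"SourceImage": "C:\\temp\\notmalware.exe",
       "TargetImage": "C:\\Windows\\System32\\lsass.exe",
       "GrantedAccess": "0x1410"}
```

Note this catches renamed or custom dumpers that a command-line regex misses, but it trades that for an allowlist-tuning burden, which is exactly the nuanced-context work the comment below describes.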
From a threat hunting and detection perspective, I am so glad they are sharing this tool. It becomes very tedious very fast when you take things like this and apply them against the highly nuanced context of your environment.
"The IAD.gov library is no longer being updated as of October 1, 2018. NSA Cybersecurity (formerly "information assurance") information from October 1, 2018 onward will be available at http://www.nsa.gov/what-we-do/cybersecurity."
In the same way, your sales posture determines how well you identify, qualify and convert leads, and your product posture determines how well you design, execute and market-fit your products. Yet somehow we never felt the need to add 'posture' to those.
The end result is another obfuscation layer of corporate-speak on something that should be pretty concise and bullshit-free.