Saturday, 10 October 2015
...and fruit flies like a banana.
A lot happened in the past year. I moved to San Francisco and survived being seriously hit by a car - now facing a long recovery, maybe a year... While at the hospital, I figured out a few things about life, the universe, and everything. I won't reiterate them here - I might sound like a Buddhist nut if I do.
I will try to start blogging again, slowly. Among my plans: translating the best articles drops.wooyun.org has on Android (my current fancy and job).
PS. In case I never gave out a link to my Twitter, it's @agelastic. Still struggling with typing, so most posts are retweets :)
Tuesday, 2 September 2014
Big data analytics with grep and sed
This was written a couple of years ago, when a gigabyte of logs was reasonably "large scale". Nevertheless, the approach still works for many log monitoring tasks.
Part 1.
A few jobs ago I was on a team tasked with creating a system to monitor security events for a Swiss bank with 20k+ (all physical!) Unix hosts plus 60k Windows desktops and servers. Hadoop did not exist yet outside of Yahoo research labs, so we had to make do with MySQL, a bit of C, and Perl. By the way, the resulting system kept chugging along for at least 5 years, if not 10, before finally succumbing to the likes of ArcSight.
We already had a centralised syslog setup, which produced tons of logs. To start with, we needed to come up with:
- a list of things to ignore,
- a list of things to definitely alert on,
- and an anomaly detection algorithm.
Part 2.
Back at my current job, I'd spent a couple of hours digging into auth.log from our (then) new and shiny SaaS infrastructure manager node (~1 GB), and it summed up nicely into 38k of "interesting" events. Once you've created your own stoplist, you can run the filtered output in a terminal in the background. It produces 10-20 messages an hour:
- Manual (as opposed to scripted) ssh logins,
- Unscheduled scripts,
- Adhoc fix work,
- Auth failures,
- Various unique ("never before seen") alarms.
What I did was a dozen iterations of
cat auth.* | grep -v -f stoplist.txt | sed -f sedcommands.txt | sort | uniq -c | sort -r -n > stuff
starting with an empty stoplist and the following file sedcommands.txt:
s/^.\{15\}\ [a-z0-9.-]*\ //
s/\[[0-9]*\]://
This cuts off the variable initial part of each log line so that similar lines can be summarised. Looking at the sorted results, I add a few lines to the stoplist to get rid of the entries that are least interesting from a security/anomaly point of view (usually the top hits), rinse, repeat.
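To illustrate, here's a made-up auth.log line (the hostname and PID are hypothetical) and what the two sed expressions leave of it:
# Raw line - the first 15 characters are the syslog timestamp, followed by the hostname:
Oct  2 14:31:07 web-03 sshd[12345]: Accepted publickey for alice from 10.65.0.7 port 50022 ssh2
# After both sed expressions only the invariant part remains, so identical events group together:
sshd Accepted publickey for alice from 10.65.0.7 port 50022 ssh2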
The final stoplist (slightly censored):
last message repeated .* times
Accepted publickey for .* from 59\.167\.XX\.XX
Accepted publickey for .* from 10\.65\..*
Accepted publickey for .* from 207\.223\.XX\.XX
session opened for user root by (uid=0)
session opened for user postgres by (uid=0)
session opened for user YYYYYYYY by (uid=0)
session opened for user www-data by (uid=0)
session closed for user
Received disconnect from .*: 11: disconnected by user
Accepted publickey for YYYYYYYY from 67\.221\.XX\.XX
USER=root ; COMMAND=/usr/lib/nagios/plugins/uc_.*
USER=root ; COMMAND=/usr/share/pyshared/pycm/bin/nagios/
USER=root ; COMMAND=/usr/lib/container-manager/.*
USER=root ; COMMAND=/opt/.*
USER=root ; COMMAND=/usr/bin/ZZZZZZ
PWD=/usr/share/pyshared/pycm/bin/vz ; USER=root ;
PWD=/usr/share/pyshared/pycm/bin/vz ; USER=root ;
PWD=/opt/unicorn-repl-tools/var/queue ; USER=root ; COMMAND=/bin/rm \./.*\.net
PWD=/opt/unicorn-repl-tools/var/queue ; USER=root ; COMMAND=/bin/rm \./.*\.com
PWD=/home/hal-uc-1 ; USER=root ; COMMAND=/usr/share/pyshared/pycm/
running '/sbin/halt' with root privileges on behalf of 'root'
This is only for illustration - do not copy the result; the point of this exercise is to come up with your own stoplist that best fits your environment. If you deal with extremely complex logfiles, you could throw in some AWK :)
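For instance, the whole normalise-and-count part of the pipeline fits into one AWK sketch (assuming standard syslog field positions - adjust for your log format):
awk '{ $1=$2=$3=$4=""; sub(/\[[0-9]+\]:/, ""); count[$0]++ } END { for (l in count) print count[l], l }' auth.* | sort -rn > stuff
Blanking $1-$4 drops the timestamp and hostname fields, the sub() strips the PID, and the count array replaces sort | uniq -c.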
In all, it took about two hours from getting the logs to producing the summary, including remembering how this stuff was done.
Labels:
automation,
cheatsheet
Saturday, 16 November 2013
Implicit security and waste-cutting
I have a nagging feeling that I've read some of this somewhere a long time ago. Probably in @mjranum's papers.
Imagine you have a product or a process. It kind of works, although you can already see the ways you can improve it by cutting some stuff here and there and changing the way it operates.
A random example: an old paper-based process in a company (say, a telco) is replaced with a nice workflow done entirely on a web site. All information is available there to anyone immediately, all staff are happy, costs decrease, productivity increases 10-fold, and so on. The process was concerned with customer contracts, so the information is kind of sensitive.
Bad hackers discover the web site, dump all the data, identity theft sky-rockets, the company is massively fined and everyone is sad. Downsizing occurs, and so on.
Now what happened here? The old process was slow, cumbersome, and unproductive. At the same time it had a number of implicit security measures built in, simply by virtue of being paper-based. In order to steal a million paper contracts, one has to break and enter the company's facility, plus have a big van to haul all this stuff out. The loss would be discovered immediately (photocopying is not an option, given the time limitations of a heist).
Designers of the new process did not identify these implicit measures or implicit requirements because nobody thought about them. After all, the measures were implicit.
Some of the cost savings of that redesign came from (unintentionally) dropping these implicit requirements or measures.
Why am I writing about this? As a reminder: when you are putting together a new project or re-engineering a process, check whether you forgot to implement security that you did not even know was there. Stop for a moment and think about what weaknesses your product or process has, what or who may exploit these weaknesses, and what the results would be. The "what" could be a random event, not necessarily a malicious person. In some places they call this risk management.
The funniest example is OpenSSL in Debian - http://research.swtch.com/openssl.
A less funny example is the Vodafone AU customer database project, which is more or less the scenario described above. It did have one password for all employees: http://www.smh.com.au/technology/security/mobile-security-outrage-private-details-accessible-on-net-20110108-19j9j.html
Labels:
psychology
How I installed pandoc
Probably funny
Pandoc is a "swiss-army knife" utility to transform documents in various markup languages. I needed it for something, and this post is how I managed to get it installed. In retrospect, I could have used the installer from http://code.google.com/p/pandoc/downloads/list
$ brew install pandoc
Error: No available formula for pandoc
Searching taps...
#...Googling... Turns out it's a Haskell thing?
$ brew install haskell-platform
...
==> Installing haskell-platform dependency: apple-gcc42
# ZOMG?!
...
...A GFortran compiler is also included
# ooohkay...
...
$ cabal update
...
Note: there is a new version of cabal-install available.
To upgrade, run: cabal install cabal-install
$ cabal install cabal-install
# Those strange Haskell people ^_^...
#...downloading the Internet...
$ cabal install pandoc
# FINALLY WE ARE GETTING SOMEWHERE!
#...downloading the Internet the second time...
$ pandoc
-bash: pandoc: command not found
# WTF?!
$ export PATH=$PATH:~/.cabal/bin
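# Success at last. A hypothetical first conversion (the file names are made up):
$ pandoc -f markdown -t html notes.md -o notes.html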
Thursday, 7 November 2013
How to buy code audits
In the past couple of years I've commissioned quite a few external source code audits. It turns out this task is far from simple, despite what the friendly salesmen tell you. I've collected a few thoughts and lessons based on my experience. YMMV.
First of all, all the advice you need is in @mdowd's (et al.) interview http://www.sans.edu/research/security-laboratory/article/192 (question 3). Quote:
Word-of-mouth recommendations often convey the best real-world measure of experience. To cast a wider net though, you can use publications and industry recognition as a good measure of reputation. When approaching a company, you may also want to ask for bios on the auditors likely to perform your assessment. Next, you'll want to ask for a sample report from any auditors you're considering. The quality of this report is extremely important, because it's a large part of what you're paying for. The report should be comprehensive and include sections targeted at the developer, management, and executive levels. The technical content should be clear enough that any developer familiar with the language and platform can follow both the vulnerability details and the recommendations for addressing them. You also need to get some understanding of the audit process itself. Ask if they lean toward manual analysis or if it's more tool-driven. Ask for names and versions of any commercial tools. For proprietary tools, ask for some explanation of the capabilities, and what advantages their tools have over those on the market. You also want to be wary of any process that's overly tool driven. After all, you're paying a consultant for their expertise, not to simply click a button and hand you a report. If a good assessment was that easy, all software would be a lot more secure.
Word-of-mouth is indeed the best indicator of quality. All other criteria are substitutes and make the selection inherently more risky. As with many risks, there are some things you can do to control them.
1. Beware of "bait and switch". This is a technique, common in outsourcing, where you are promised the best and most famous people during negotiations, but after the contract is signed, you get monkeys.
When engaging any company larger than a boutique, insist on interviewing and approving every person who is going to work on your code. Ask for hard numbers on their experience in code auditing - "How many days in the past 3 years has this person worked on code audits in <language X>?" This metric is good because there is no reason for the vendor not to share it.
Try not to deal with a company that has just acquired another entity or has just been acquired itself. These people have more important issues to sort out than your code, even if they say otherwise.
2. Your boss will probably ask you for an objective metric of progress. We all know an audit is more research than assembly-line work. It is still possible to produce some useful metrics. Insist on specifying your own report format, one that works for you. I use a variation of the following:
- Description of work done during the week, 1-2 pages.
- Which sections of code or which application modules have been audited.
- A summary of any results - positive findings (issues found and filed in our bug tracker) or negative (code looked at is OK security-wise).
- Other work done, e.g. documentation.
- Details of each confirmed positive finding (security vulnerabilities found):
  - Description
  - Relevant places in code
  - Screenshot of successful exploitation where applicable
  - Recommendations on how to fix
  - Proposed severity rating
  - Issue number in our bug tracker if already filed.
This way you see not only findings but also coverage and the general direction of the review. After all, if nothing is found, you want some reassurance that the vendor looked in the right places.
3. Run some static/dynamic tools yourself before handing the contract to vendors. Veracode is cheap. Findbugs is free.
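If you've never used it, FindBugs runs happily from the command line (a sketch; the classes directory is hypothetical, and the report options may vary by version):
$ findbugs -textui -effort:max -html -output findbugs-report.html build/classes/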
A very popular "manual" code review methodology seems to be "grep and gripe": running an in-house pattern matcher or a commercial tool, tracing the results to inputs, and filing bugs. This way you get tons of issues like "insufficient entropy", "DOM XSS" and other simple cruft, yet business logic issues will never be uncovered. Do reiterate to the vendor (several times) that you value logic issues, not grep results.
A couple of posts on related topics - One way of grepping, Some IntelliJ magic, and On automated tools
4. Require the vendor to work with your developer team. In my experience, the very first thing the best auditors ask for is code access, and the very second is a meeting with developers, to verify the auditor's understanding of the code base. NB: ask the developers after the meeting what they think of the auditor. If the developers are not impressed, you'll probably be wasting your money. Oh, and if you're not on good terms with your developers, maybe look for a new job :)
Do weekly reviews of results in person or on the phone, where you and the developers review and validate each finding - sometimes things do not really make sense or, on the contrary, give rise to further exploration ideas. If you leave everything until the review is completed, more likely than not it won't be a satisfactory one.
5. All of the above should ideally be specified in a contract. As a bare minimum, get an acceptance clause in: if you do not accept the final report from the auditor, they do not get paid and have to fix it up for you for free.
Good luck :)
P.S. Someone asked for timeframes for a good audit. It's difficult to say because no "coverage" metrics really apply. At the same time there is anecdotal evidence that you can expect up to 8 kLoC to be eyeballed by an auditor a day (on average, on large code bases). Properly "read" is more like 800 LoC a day - at the same time, hopefully not more than 1/10 of your code base is relevant to security, especially if it's Java with its verbosity.
Labels:
code audit
Friday, 1 November 2013
Fixing VMWare pain in the ass performance
A tech-ish post for a change. Just for the Googles of the world.
If you are running more than one VM on the same host, and performance is hell, you need to:
$ mkdir ~/.vmware
$ echo "sched.mem.pshare.enable = FALSE" >> ~/.vmware/config
Then launch VMWare. This setting makes VMWare stop constantly comparing memory blocks in hopes of finding common ones so that it can fake more memory at the page level; it will start using real RAM instead. Massive improvement, in my experience. You will obviously need more RAM.
Other good settings are:
MemTrimRate = 0
prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
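Putting it all together, the resulting ~/.vmware/config would look something like this (a sketch based on the settings above):
sched.mem.pshare.enable = FALSE
MemTrimRate = 0
prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"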
Labels:
cheatsheet
Saturday, 12 October 2013
12 steps to saner infosec
Actually, after kicking any references to $deity out of the original list, there are about six points left.
1. Admit that you cannot be in full control of your systems and networks
There will always be the NSA to break your elliptic curves, or a new zero-day in a library inside a library that you forked, modified, and then used in your code. And if you say "defence in depth", I'll ask you to show me your "perimeter".
2. Recognise that this is not a defeat
Attackers are people too, and are driven by economic motives. If it is too hard and not worth the effort, they will not go after you. Unless they want to make a point, of course.
Make breaking into your stuff not worth the effort. That is, ensure the required effort is hard enough that "the bad guys" will give up.
3. Examine, with the help of others, your past efforts to "secure", "risk manage", "protect" everything to the level of "best practice"
"Best practice" is partly management speak for "I have no idea how to deal with specifics of my business environment" and partly vendor sales pitch. Risk management is good in theory but does not work in practice for infosec, beyond very basic qualitative judgements.
Talk to others, inside your business sector and outside it. Etsy, Facebook, Twitter, and even Salesforce are doing awesome things. Talk to me, I'll buy you a beer! :)
4. Make amends for these errors (or efforts)
Don't be a business prevention specialist. Be nice to your developers, they are generally smarter than you - learn from them. Listen to your network admins, they are often more protective of their hosts than you think.
5. Learn to live a new life
Give people what they need to do their jobs and get out of the way - figure out a "secure enough" method of doing what people need without disrupting their jobs. Set yourself specific, time-limited goals and don't fall into the trap of "best practices" again (see point 3).
Make your own informed decisions. You cannot outsource understanding to consultants, whitepapers and Google.
6. Help others who suffer from the same addiction to total control
Run an exploit or two for them... Teach them about the halting problem, just because it's fun to see people realising what it entails, at least in theory. Send them a few links:
- http://sparrow.ece.cmu.edu/group/731-s08/readings/ptacek-newsham.pdf
- http://sparrow.ece.cmu.edu/group/731-s09/readings/Axelsson.pdf
- https://community.qualys.com/servlet/JiveServlet/download/38-10829/Protocol-Level%20Evasion%20of%20Web%20Application%20Firewalls%20(Ivan%20Ristic,%20Qualys,%20Black%20Hat%20USA%202012)%20SLIDES.pdf
- https://www.nsslabs.com/system/files/public-report/files/Correlation%20Of%20Detection%20Failures.pdf
PS A vaguely related preso I gave is at http://www.slideshare.net/agelastic/security-vulnerabilities-for-grown-ups-gotocon-2012-15479294
Tuesday, 8 October 2013
What is Security Anonymous?
First of all, nothing to do with the evil Anonymous, and quite a bit to do with AA's "twelve step" program.
The awesome Spaf recently reminded everyone (excluding people who work for one of the few very awesome companies that actually have a grip on their infosec) that no-one on the "defence" side cares about security enough to seriously change the situation.
Step one in the yet-to-be-written 12-step program: admit that the "defence" side is not doing well (be honest with yourself):
- Breaking things is thought to be sexier.
- "User awareness" does not work.
- Blinkenlights on product consoles don't give much reassurance beyond the psychological, or theatrical, level.
- Companies that thought they had security programs running well find their source code dumped by attackers on a random web server, having sat there for who knows how long.
- Governments care mainly about how to break into their (or other countries') citizens' computers and about backdooring crypto standards and implementations.
What's more, there is no "higher power" (see the original 12 steps) to appeal to. It's up to humble engineers who quietly do awesome stuff. I'll be posting about how others deal with their infosec challenges. No fluffy stuff, and probably no mention of "risk management," but you're welcome to convince me it works :)
What's even better, there will be drinkups! Because:
Rules–particularly the dogmatic variety–are most useful for those who aren’t confident enough to make their own damn decisions.
For the rest of us, there’s vodka–so we can cope with the decisions we were foolishly wise enough to make.
So help us, Grey Goose.
Amen.
Sunday, 22 September 2013
Words and works
A short while ago I mentioned this blog to someone, who read through the posts and then came back saying: "Nice ideas, but did you actually implement any of this?"
Here's what we've managed to implement at work, all or most of the ideas in these topics:
Code review tools and techniques
http://www.surrendercontrol.com/2013/05/crutches-and-static-code-analysis.html
http://www.surrendercontrol.com/2012/12/focused-code-reviews-followup.html
Application security for big web apps
http://www.surrendercontrol.com/2012/11/modern-web-application-security.html
Changing security culture
http://www.surrendercontrol.com/2012/12/changing-things-when-change-is-hard.html
Saturday, 14 September 2013
Wheels inside wheels
Reblogging from http://seclists.org/dailydave/2013/q2/38
… or, the Ptolemaic model of the solar system of infosec. Required reading: https://en.wikipedia.org/wiki/Deferent_and_epicycle

In all enterprise-y security courses they will teach you that there are several components to defence processes:

10. If you can, try to prevent bad guys getting to you
20. If you cannot, try to detect an attempt to get in before it succeeds
30. If you cannot detect attempts, aim to detect whether you've been compromised
40. If you've been compromised, do incident response and clean up

(Imagine your enterprise assets are the Earth and those 4 items are other planets orbiting it.)

When reality demonstrates that the current approach to any of the components is inadequate, it gets updated with "smarter" technology. What this "smarter" technology comprises changes with time, but it always goes through stages of:

1. Add more signatures, then
2. Do some sort of local behaviour analysis, then
3. "Big data" / "data mining" or similar magical words, then
4. Whatever else the market fancies

(These are the equivalents of "wheels within wheels", or epicycles, in Ptolemy's astronomy.)

Examples:
- AV is permanently stuck on line 20, with a few epicycles, from signatures to big data, under its belt already;
- IoC (Indicators of Compromise) is line 30, only just at the beginning of its spiral.

The main take-away here is that the defending side is, unfortunately, retreating. Those "let's clean up compromises quicker" contests Spafford was lamenting recently only illustrate this tendency further.
The other take-away is that I love lists…
Oh, and if someone comes up with a true Copernican concept of security, please tell me. I have to be part of that!
Labels:
change,
psychology