Sunday 25 November 2012

Modern Web application security: Facebook, Twitter, Etsy

In the past 6 months, at least 3 big modern webapp companies have published details of how they do application security. Etsy was the first, with Twitter and Facebook close behind. The presos are at:


Despite small differences caused by frameworks and technologies these companies use, they all do the same set of things:

Code reviews

The security team does regular code reviews, and does them in a smart way. They have set up triggers for reviews, automated by unit tests or simple grep scripts that watch (D)VCS commits, e.g. in git. These scripts monitor for two different kinds of changes:
  1. Any change in "important" files. These are usually the parts of the app source code that deal with CSRF protection, encryption, login, session management, XSS encoding.
  2. Any new instances of "potentially nasty" snippets of code anywhere in the code base. These include introduction of file system operations, process execution, HTML decoding calls.
The above can be mixed and matched in a number of ways. For example, one can also monitor for any new URI endpoints (this can also be done via dynamic scanning, see below), or for people explicitly disabling automatic protections for CSRF or XSS, if you have these protections in place.
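As a minimal sketch of such a trigger (the file-name and snippet patterns below are made-up illustrations, not anyone's actual watchlist), a grep over a commit diff is all it takes:

```python
import re

# Hypothetical watchlists -- adapt the patterns to your own code base.
IMPORTANT_FILES = re.compile(r"(csrf|crypto|login|session|xss)", re.I)
NASTY_SNIPPETS = re.compile(r"Runtime\.exec|new File\(|unescapeHtml")

def review_triggers(diff_text):
    """Return the reasons why a commit diff should go to security review."""
    reasons = []
    for line in diff_text.splitlines():
        if line.startswith("+++ ") and IMPORTANT_FILES.search(line):
            reasons.append("touches important file: " + line[4:])
        elif line.startswith("+") and NASTY_SNIPPETS.search(line):
            reasons.append("introduces nasty snippet: " + line[1:].strip())
    return reasons

# In a real setup this would be fed from a post-receive hook or CI job,
# e.g. the output of `git show HEAD`.
sample = "+++ b/app/session_manager.py\n+    out = Runtime.exec(cmd)\n"
print(review_triggers(sample))
```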

Dynamic scans

The security team sets up a number of Web bots to periodically scan their app for "simple" security issues.

NB: Do not use commercial scanner monsters: they are geared to produce as many results as (inhumanly) possible and are much more willing to produce false positives in order to reduce false negatives. In other words, they would rather alert on 10 "possible" issues that turn out to be non-issues than miss one. The sad part is that they still miss a lot anyway.

What you (and everyone else, unless they are paid by the weight of the report) need is minimal false positives, even at the cost of missing a number of things. Some mathematical reasoning behind the idea can be found in a 1999 paper, "The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection" by Axelsson, who calculated that an IDS's false positive rate should be 10^-5 (yes, 1/100 000) in order for its alerts to be actionable in a high-traffic environment.
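Axelsson's argument is plain Bayes' theorem. A quick back-of-the-envelope check (the base rate and detection rate below are illustrative numbers, not from the paper):

```python
def alert_precision(base_rate, tpr, fpr):
    """P(real attack | alert), by Bayes' theorem."""
    true_alerts = tpr * base_rate          # real attacks that fire an alert
    false_alerts = fpr * (1 - base_rate)   # benign events that fire an alert
    return true_alerts / (true_alerts + false_alerts)

# Say 2 in 100 000 events are real attacks and the tool catches 70% of them.
print(alert_precision(2e-5, tpr=0.7, fpr=1e-2))  # ~0.0014: 999 of 1000 alerts are noise
print(alert_precision(2e-5, tpr=0.7, fpr=1e-5))  # ~0.58: most alerts are now real
```

With a "mere" 1% false positive rate, almost every alert is a waste of an analyst's time; only at around 10^-5 do alerts become worth acting on.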

All three companies use scanner bots to monitor for regressions ("hey, we fixed an XSS here, let's make sure it does not reappear"), to detect new URIs (if they do not detect them in source code), and for other similar tasks; check their presos for details.

Secure by default

They developed their own, or adopted existing, "secure by default" frameworks (and it is a good idea for everyone else to do the same). These frameworks are nothing grand - they achieve simple and important outcomes: provide automatic output encoding against XSS, automatically assign CSRF tokens, and so on. Remember the code monitoring scripts earlier? They trigger a security review if any of these security frameworks are disabled or opted out of on a specific page.
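The CSRF half of "secure by default" really is nothing grand; a minimal sketch (the key handling and session wiring here are my assumptions, not any of these companies' designs):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # in reality: loaded from config, shared across servers

def csrf_token(session_id):
    """Derive a per-session token; the framework embeds it into every form."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id, submitted_token):
    """Called automatically by the framework on every state-changing request."""
    expected = csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

The point is that developers never call any of this explicitly; opting out is the loud, review-triggering action.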

Security headers

Headers such as
  • X-Content-Type-Options
  • X-XSS-Protection
  • CSP headers
have gained popularity. They require care to implement, and here the approaches differ; see the original presos.

A nice touch is deploying CSP policies in monitoring mode, without blocking anything, and analysing the resulting alerts (slide 45 in the Facebook deck). Applying CSP in blocking mode to a large existing app is a huge task and is likely not to gain traction with your developers. The CSP candidate spec says:
Content Security Policy (CSP) is not intended as a first line of defense against content injection vulnerabilities. Instead, CSP is best used as defense-in-depth, to reduce the harm caused by content injection attacks.
There is often a non-trivial amount of work required to apply CSP to an existing web application. To reap the greatest benefit, authors will need to move all inline script and style out-of-line, for example into external scripts, because the user agent cannot determine whether an inline script was injected by an attacker.
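In monitoring mode the policy is sent in the Content-Security-Policy-Report-Only header, so violations get reported but nothing breaks. A sketch of what a framework might emit (the policy, hostname and report URI are made up for illustration):

```python
def csp_monitoring_headers():
    """Response headers for a report-only CSP rollout; nothing is blocked yet."""
    policy = (
        "default-src 'self'; "
        "script-src 'self' https://static.example.com; "
        "report-uri /csp-violation-report"
    )
    # Once the violation reports quiet down, rename the header to
    # Content-Security-Policy to start actually enforcing.
    return {"Content-Security-Policy-Report-Only": policy}

print(csp_monitoring_headers())
```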

Graphing stuff

There are many things that can be graphed or at least thresholded with security benefits - CSP alerts, increase in traffic containing HTML, number of failed login attempts,...
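Even without a full graphing stack, thresholding is a few lines of code. A sketch with made-up numbers:

```python
from collections import deque
import time

class ThresholdAlert:
    """Alert when more than `limit` events land inside a sliding time window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()

    def record(self, timestamp=None):
        """Record one event; return True if the threshold is now exceeded."""
        now = timestamp if timestamp is not None else time.time()
        self.events.append(now)
        # Drop events that have fallen out of the window.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) > self.limit

# e.g. alert on more than 100 failed logins in 60 seconds
failed_logins = ThresholdAlert(limit=100, window_seconds=60)
```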


All of the measures in this post help the security team deliver the most bang for the buck. My next post will be on how to use similar tools for security "evangelism" (I try to avoid this word, it is misleading), or "getting those developers to not release vulnerable software".

Monday 19 November 2012

Auditing Java code, or IntelliJ IDEA as poor man's Fortify

This Twitter exchange got me to look at IDEA. Below are a few tips and config files that can get you started with using IDEA for manual audits. IMHO the results will be close to what you get from a commercial code scanner, albeit slightly slower :)

IntelliJ features useful for code auditing

  1. "Analyze" -> "Data flow to here" or "Data flow from here"
  2. "Analyze" -> "Inspect code" (next section below)
  3. All kinds of source navigation - to definition, to usages, etc. Shortcuts below.

Custom code inspections

One of the good starting points for the basic concepts of code audits is Chapter 19 of the "Web Application Hacker's Handbook", 2nd edition. You obviously have to have a clue what this security thing is all about; this post is not an audit tutorial.

I've created some custom inspections for IDEA, as its original "Security" set is quite arbitrary and, if it targets anything, it is applets, not web apps. My inspection policies are based on the WAHH2E book and are geared towards identifying user input, dodgy file operations and so on.

You can get the policies from

Installing the inspection policies

Open Options / Inspections. Import the policy you want to use (the full policy may produce a lot of results on very large projects, e.g. the full Confluence codebase produces about 2000 results, so I made partial policies as well), and check "Share Profile", or you won't be able to use this policy.

Configuring custom code inspections in IDEA
Each of my inspection configs contains a single enabled item - "General"/"Structural Search Inspection" with a number of templates.

TIP: For some reason IntelliJ sometimes ignores the imported policy, and as a result you will have no findings. What seems to work is to scan the code with any built-in policy, for example "Default", and then run the security one. If you do not see the "Inspecting code..." progress window, the analysis did not happen.

The templates will find points in code that are interesting for a security auditor - HTTP parameters, file access, sessions, SQL queries, etc. Then you can use data flow analysis (point 1 in the list above) or simply navigate through the source (below).
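Outside the IDE, a rough approximation of what those templates look for is a handful of regexes over the Java sources (the pattern list below is my illustration, not the actual inspection templates):

```python
import re

# Rough grep equivalents of the inspection templates.
AUDIT_PATTERNS = {
    "http parameter": re.compile(r"\.getParameter\s*\("),
    "session access": re.compile(r"\.getSession\s*\("),
    "file access":    re.compile(r"new\s+File(?:InputStream|OutputStream)?\s*\("),
    "sql query":      re.compile(r"(?:createStatement|executeQuery|executeUpdate)\s*\("),
}

def audit_points(java_source):
    """Yield (line number, category, line) for auditor-interesting calls."""
    for lineno, line in enumerate(java_source.splitlines(), start=1):
        for category, pattern in AUDIT_PATTERNS.items():
            if pattern.search(line):
                yield lineno, category, line.strip()
```

Unlike the IDE, this gives you no data flow, only the entry points to start reading from.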

Running an analysis

Open "Analyze" / "Inspect code", select the policy, scope, etc., run it, and check the results. There are various ways of marking things as reviewed - see the "suppress" options in the results pane. They are the same as for any other alerts produced by the code inspection engine.

It may be useful (depending on the kind of finding) to investigate data flow to and from the found input/output point.

IntelliJ's docs for this feature, which in turn uses the very powerful Structural Search & Replace engine, are at

Code navigation shortcuts

These are collected from Stack Overflow posts. They are for Mac OS; Windows shortcuts usually have Ctrl instead of Cmd.

Cmd + Shift + A - opens a window where you can search for GUI commands
Cmd + Alt + Left - Go back, Cmd + Alt + Right - Go forward
Cmd + B - Go to declaration
Cmd + Alt + B - Go to implementation
Cmd + U - Go to super-method/super-class
Cmd + Alt + F7 - Show usages
Cmd + N - Go to class
Cmd + P - Parameter info
Cmd + F - Find, F3/Shift + F3 - Find next/previous, Ctrl + Shift + F - Find in path
F2/Shift + F2 - Next/previous highlighted error
Cmd + E - recently used files
Ctrl + Shift + Q - Context info
Ctrl + J - Quick documentation

Learning shortcuts

There is a great plugin that will teach you shortcuts fast - Key Promoter. Any time you do something with the menus, it shows you what shortcut you could have used to achieve the same effect.

Bonus: Code Reviewing Web App Framework Based Applications

Tuesday 13 November 2012

Thought distortions, or why some of my infosec friends are alcoholics

@dinodaizovi recently quipped that the infosec industry is a hybrid of "Mensa and a mental hospital"; these are related thoughts.

You all know one or, more likely, many "security consultants" who tell others that in order to improve the security of $system they must do A and B, otherwise imminent failure will occur. Then these consultants go around being upset that their advice is not followed, perceive the situation as a personal failure, and end up "burning out"...

Below is a list of cognitive distortions that, according to some theories in psychology, lead to the perpetuation of a number of psychological conditions, including depression and alcoholism. I think I got it from an iPhone app called "MoodKit" (by the way, try it). Have a think - aren't most of these associated with "security consultants", especially internal consultants, in the eyes of their customers?

Common Thought Distortions

All-or-Nothing Thinking
Seeing people or events in absolute (black-or-white) terms, without recognizing the middle ground (e.g., success/failure; perfect/worthless).
"Without perfect security there is no security"

Blaming
Blaming yourself or others too much. Focusing on who is to blame for problems rather than what you can do about them.
"These people just do not want to understand the importance of security!"
Catastrophizing
Blowing things out of proportion, telling yourself that you won’t be able to handle something, or viewing tough situations as if they will never end.
"Ehrmergerd, these people just hate me, I will never be able to do anything to improve security here"
Downplaying Positives 
Minimizing or dismissing positive qualities, achievements, or behaviors by telling yourself that they are unimportant or do not count.
"Well, we got these vulns fixed, but there are soooo many more, probably!"
Emotional Reasoning 
Believing something is true because it “feels” true. Relying too much on your feelings to guide decisions.
"I have a gut feeling the attackers are out to get us!"
Fortune Telling 
Making negative predictions about the future, such as how people will behave or how events will play out.
"The company data will be breached in the most harmful way"
Intolerance of Uncertainty 
Struggling to accept or tolerate things being uncertain or unknown (e.g., repeatedly wondering “what if?” something bad happens).
"What if a firewall is misconfigured? What if there is a new RCE in Struts tomorrow?..."
Labeling
Describing yourself or others using global, negative labels (e.g., making judgments about one’s character or name calling).
"These lazy developers just do not care!"
Mind Reading 
Jumping to conclusions about another person’s thoughts, feelings, or intentions without checking them out.
"I know they are not interested in fixing this stuff"
Negative Filtering 
Focusing only on the negatives and ignoring the positives in a situation, such that you fail to see the “big picture.”
Ok I give up with examples - the list is getting somewhat repetitive, but you get the drift...
Not Accepting 
Dwelling on an unpleasant situation or wishing things were different, instead of accepting what has happened and finding ways to move forward.

Overgeneralizing
Drawing sweeping conclusions on the basis of a single incident, such as when we say people or things are “always” or “never” a certain way.

Personalizing
Telling yourself that events relate to you when they may not.

“Should” and “Must” Statements 
Focusing on how things or people “should” or “must” be. Treating your own standards or preferences as rules that everyone must live by.
Who hasn't done that??? :)
One additional point to ponder: the above mindset is occasionally perpetuated by infosec vendors. Send them your therapist's invoice...

Monday 5 November 2012

Paradigms of failure in bridge design, or learning from failing

Some time ago I read a book - Design Paradigms: Case Histories of Error and Judgment in Engineering by Henry Petroski. It is mostly about bridge engineering (did you know there is a major bridge failure in the US/UK every 30 years?)

The "paradigms" Petroski talks about turn out to be very much applicable to software engineering - to topics of security, availability and so on. Here is a summary of the paradigms, so that you do not have to read the book (unless you're into bridge architecture):
  1. Conceptual errors. Fundamental errors made at the conceptual design stage are the most elusive. They manifest only when the prototype is tested (=too late), often with disastrous results. They are invariably human errors and by definition cannot be prevented.
  2. Overlooking effects of scale. Every design can be scaled only to a certain limit, after which the initial assumptions are no longer valid and a failure occurs. If you are scaling a successful design or a model, be mindful that this limit exists. You probably will not know what this limit is exactly.
  3. Design change for the worse. This paradigm involves improvements over existing safe designs without re-evaluating the original design constraints. Any such change can introduce a new failure mode. Any change, no matter how seemingly benign or beneficial, needs to be analysed with the objectives of the original design in mind. An "improved" or enlarged design could hold unpleasant surprises over the original.
  4. Blind spots are preconceived ideas about failure modes that drive analysis or design of the system, while other failure modes are ignored. The point here is that no hypothesis can ever be proved incontrovertibly, yet it takes only one failure (in analysis or reality) to provide a counterexample.
  5. False confirmations. An incorrect design formula, arrived at from wrong assumptions, is nonetheless "confirmed" by subsequent designs thanks to a large initial safety factor. Following this "success", the safety factor is gradually reduced to the point where it no longer compensates for the wrong results, and failure occurs.
  6. Tunnel vision in design. Not considering failure outside of the narrow confines of the principal design challenge to the same degree as inside it. The designer needs a special effort to step back from each design and consider more mundane, less challenging aspects of the problem, those appearing to lie on the periphery of the central focus.
  7. Not considering failure seriously. Document expected failure: what failure modes were anticipated in the design, what failure criteria were employed, what failure avoidance strategies were incorporated. Do not ignore case histories of failure and do not misuse them either - often such histories are used only to justify extrapolations to larger and lighter structures.

TL;DR failure is the only real learning tool in engineering - without failing there is no learning, but blind luck or "confirmation" of wrong theories.