Wednesday 12 December 2012

Focused code reviews - a followup

I promised something more technical than book reviews, so here goes.

Earlier I posted about how to limit the amount of code covered in day-to-day security reviews when the code base is huge. I will take Confluence (I work for Atlassian) as an example. The application uses WebWork 2, among other frameworks. The source code is not entirely free or public, but you can get it with almost any kind of Confluence license. I will keep some details out of this example.

Here are some things to trigger security reviews on this codebase.

Java generalities

Monitor for these being added; there is no urgent need to review code if developers remove any of them. The list in this section is generic Java (and incomplete) and can be used for other apps; the other sections are more Confluence-specific. You might not need to trigger on all of these strings. You can also try the structures from the IntelliJ searches in another blog entry.


Monitor for the disappearance of any sanitisers from your code. There can be legitimate reasons for this - for example, a sanitiser disappears from a view but the corresponding model starts escaping or filtering the data.
...others skipped...


Being a WebWork 2 webapp, Confluence utilises a number of filters and interceptors. You can get a list of the filters your application uses with something like
grep -Rh --include='*.xml' '<filter-name' . | sed -e 's/<filter-name>//' -e 's/<\/filter-name>//' -e 's/^[ \t]*//' | sort | uniq
Review the list and decide which ones serve an important security function. Monitor any change mentioning these filters or interceptors (both in web.xml files and in their source).
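The trigger itself can be a small diff-scanning script. Below is a sketch of my own (not Atlassian tooling) that greps a unified diff for added or removed lines touching filters or interceptors; the diff hunk in the demo is fabricated.

```shell
# Sketch of a review trigger: scan a unified diff for added/removed lines
# mentioning filters or interceptors. In practice, feed it real commits:
#   git log -p --since=yesterday -- '*.xml' | scan_diff
scan_diff() {
  # ^[+-][^+-] matches changed lines but skips the ---/+++ file headers
  grep -E '^[+-][^+-].*(<interceptor-ref|<interceptor-stack|<filter-name)'
}

# Demo on a fabricated diff hunk:
hits=$(scan_diff <<'EOF'
--- a/xwork.xml
+++ b/xwork.xml
+        <interceptor-ref name="params"/>
-        <interceptor-ref name="xsrfToken"/>
         <result name="success">page.vm</result>
EOF
)
printf '%s\n' "$hits"
```

Both additions and removals are flagged here, since a disappearing interceptor is at least as interesting as a new one.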


Some of these annotations are generic, some are Confluence-specific. One way of getting a list of all annotations in use is
grep -Rh --include='*.java' '^\s\+@' . | sed -e 's/^[ \t]*//' | sort | uniq

Example of what to monitor for:
any change

XML config files (new endpoints)

Action mappings and the like introduce new URL endpoints. Monitor for additions, not removals.
"<action name" 

Other XML

Any change mentioning your filters or interceptors in web.xml, for example
<interceptor-ref name="params"/>
<interceptor-ref name="permissions"/>
<interceptor-ref name="xsrfToken"/>
<interceptor-stack name 

Files and paths

Look for any change in the files that implement crucial security features - login, session management, authorisation, sanitisers, CSRF protection and so on. 
Monitoring for any web.xml change is probably overkill; you will catch the interesting stuff with the items from the sections above.

Monday 10 December 2012

Everyone says do what you love, but what is it?

Hmm, this may be turning into a book blog... Stay tuned, I'll be posting less fluffy stuff as well.

It is a familiar phrase - "do what you love" - and it has been repeated over and over again at several hacker/security cons all over the world. I do not know about you, but it took me some time to sit down and figure out what I love. Being a book nerd, I picked up Business Model You for some inspiration. It is a strange book, something of an offshoot of the apparently very successful Business Model Generation, applying the same framework to individuals instead of businesses.

What I really liked about this book is not the "business model". Instead, have a look at Chapter 4, "Who are you?" It has a lot of great advice on figuring out what it is that you really love, if you do not know it yet (many people do not).
A thought experiment. Think back to any time before you were 20 years old:
  • What did you love to do? (I do not think the authors include sex under this rubric, hehe)
  • What activities - games, hobbies, sports, extracurricular events, school subjects - did you enjoy? Recall your natural, uncoerced proclivities.
  • What kept you absorbed for hours and made you happily oblivious to the rest of the world? What tasks made time fly?
The authors include a bunch of other thinking prompts - e.g. which events in your life relate to which feelings, what kind of environment you like to be in, and so on - yet this "inner teenager" exercise is the most unusual and most powerful. Obviously these memories need to be re-interpreted in the world you are living in, abstracted, re-applied - but the core idea stays.

So, people who love what they do are following their inner teenager...

P.S. If you are wondering, I love solving complex puzzle-like problems (preferably computer-related), working alone or in a small group of peers who share goals and learn from each other. The rest is, erm, syntactic sugar.

Monday 3 December 2012

Changing things when change is hard

NB: If the post below makes you think that I have succumbed to managementese and become some kind of consultant, this is a false impression. I am simply reflecting on an unexpected connection between security improvements in code produced by Twitter developers and a management book.


A recent read of mine, recommended by one of the Atlassian owners, is Switch: How to Change Things When Change Is Hard. I am not a huge fan of management books - many of them turn out to be self-help books in disguise, and others spend 200 pages chewing through an idea that can be explained in a paragraph. "Switch" initially looked like it belonged to the latter category, but to be honest it is worth reading from cover to cover.

The book is about exactly what its title says - changing things when change is hard (Hello there, "security evangelists"!). The premise is simple (and borrowed from another book):

"Jonathan Haidt in "The Happiness Hypothesis" says that our emotional side is an Elephant and our rational side is its Rider. Perched atop the Elephant, the Rider holds the reins and seems to be the leader. But the Rider's control is precarious because the Rider is so small relative to the Elephant. Any time the six-ton Elephant and the Rider disagree about which direction to go, the Rider is going to lose. He's completely overmatched."
They draw lessons about change efforts:
The Elephant looks for the quick payoff over the long-term payoff. When change efforts fail, it is usually the Elephant's fault, since the kinds of change we want typically involve short-term sacrifices for long-term payoffs. Yet it is the Elephant who gets things done in change situations. You need to appeal to both: the Rider provides the planning and direction, and the Elephant provides the energy. Understanding without motivation vs. passion without direction.
They also make another simple but non-obvious observation: change is hard because people wear themselves out. The one-paragraph summary of the book is that there are three components to a successful difficult change:

  1. Direct the Rider - provide crystal-clear direction. What looks like resistance is often a lack of clarity.
  2. Motivate the Elephant - engage people's emotional side. The Rider cannot get his way by force for very long. What looks like laziness is often exhaustion.
  3. Shape the Path - shape the situation in a way that facilitates your change. What looks like a people problem is often a situation problem.
There are other interesting simple thoughts sprinkled throughout the text. For example:
  • Build habits if you want the change to stick
  • Shrink change - give simple actions
  • Create a destination postcard (pretty vision of the final state) to motivate

Twitter, SADB and elephants

Now, why am I going on about a management book?

In my previous post I included a slideshare link to a talk about security automation at Twitter. There is also a video of the talk. Prominently featured is Twitter's central security dashboard, SADB ("sad-bee", funny) - the Security Automation Dashboard.

One of its main functions is checking newly pushed code for known vulnerable patterns with Brakeman (see slides 46+ in the slideshare and the quick demo video) and immediately bugging the responsible developer with specific recommendations on what has to be fixed and how.

This strikes me as a perfect implementation of the "Direct the Rider" principle and the "Shrink the change" approach.

I am going to try a similar approach at work; we will see how sticky the resulting improvement is going to be :)

Some links:

Extracts from the book:

A related behaviour change framework from Stanford:

Monday 26 November 2012

Modern Web application security: Facebook, Twitter, Etsy

In the past six months, at least three big modern webapp companies have published details of how they do application security. Etsy was the first, with Twitter and Facebook close behind. The presos are at:


Despite small differences caused by frameworks and technologies these companies use, they all do the same set of things:

Code reviews

The security teams do regular code reviews, and they do them in a smart way. They have set up triggers for reviews, automated by unit tests or simple grep scripts looking at (D)VCS commits, e.g. in git. These scripts monitor for two different kinds of changes:
  1. Any change in "important" files. These usually are the parts of the app source code that deal with CSRF protection, encryption, login, session management, XSS encoding.
  2. Any new instances of "potentially nasty" snippets of code anywhere in the code base. These include the introduction of file system operations, process execution, or HTML decoding calls.
The above can be mixed and matched in a number of ways. For example, one can also monitor for any new URI endpoints (this can also be done via dynamic scanning, see below), or for people explicitly disabling automatic protections for CSRF or XSS, if you have these protections in place.
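A minimal sketch of the second kind of trigger - the pattern list here is a small illustration of my own and is far from complete:

```shell
# Sketch: flag newly introduced "potentially nasty" calls in a diff.
# The pattern list is illustrative, not a complete set.
NASTY='Runtime\.getRuntime|ProcessBuilder|new FileInputStream|unescapeHtml'
risky=$(grep -E "^\+[^+].*(${NASTY})" <<'EOF'
+        Process p = Runtime.getRuntime().exec(cmd);
+        log.debug("done");
-        String s = StringEscapeUtils.unescapeHtml(input);
EOF
)
printf '%s\n' "$risky"
```

Note that only added lines are flagged; the removal of a dodgy call needs no reviewer attention.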

Dynamic scans

The security teams have set up a number of web bots to periodically scan their apps for "simple" security issues.

NB: Do not use the commercial scanner monsters; they are geared to produce as many results as (inhumanly) possible and are much more willing to produce false positives in order to reduce false negatives. In other words, they would rather alert on 10 "possible" issues that turn out to be non-issues than miss one. The sad part is that they still miss a lot anyway.

What you (and everyone, unless they are paid by the weight of the report) need is minimal false positives, even at the cost of missing a number of things. Some mathematical reasoning behind the idea can be gathered from a 1999 paper, "The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection" by Axelsson, who calculated that an IDS's false positive rate should be 10^-5 (yes, 1 in 100,000) in order for its alerts to be actionable in a high-traffic environment.
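The base-rate arithmetic is easy to reproduce back-of-the-envelope; all numbers below are invented for illustration, not taken from the paper:

```shell
# Base-rate arithmetic: even a tiny false positive rate buries real alerts
# when benign events vastly outnumber attacks. All numbers are made up.
out=$(awk 'BEGIN {
  benign  = 1000000   # benign events per day
  attacks = 10        # real attacks per day
  fpr     = 0.001     # 0.1% false positive rate
  false_alerts = benign * fpr   # 1000 bogus alerts per day
  precision = attacks / (attacks + false_alerts)
  printf "alerts per day: %d, fraction real: %.4f\n", attacks + false_alerts, precision
}')
printf '%s\n' "$out"
```

Even with perfect detection and a false positive rate most tools would envy, barely 1% of the alerts an analyst sees are real - hence Axelsson's much stricter 10^-5 requirement.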

All three companies use scanner bots to monitor for regressions ("hey, we fixed an XSS here, let's make sure it does not reappear"), to detect new URIs (if they do not detect them in source code), and for other similar tasks; check their presos for details.

Secure by default

They developed their own, or adopted existing, "secure by default" frameworks (and it is a good idea for everyone else to do the same). These frameworks are nothing grand - they achieve simple and important outcomes: provide automatic output encoding against XSS, automatically assign CSRF tokens and so on. Remember the code monitoring scripts earlier? They trigger a security review if any of these security frameworks are disabled or opted out of on a specific page.
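That opt-out trigger can be the same kind of diff grep as before. The annotation and attribute names below are hypothetical placeholders, not real framework identifiers:

```shell
# Sketch: flag diffs where a page opts out of automatic protections.
# @NoXsrfCheck and "autoescape off" are hypothetical placeholder names -
# substitute whatever your secure-by-default framework actually uses.
optouts=$(grep -iE '^\+[^+].*(@NoXsrfCheck|autoescape[[:space:]]+off|csrf_exempt)' <<'EOF'
+@NoXsrfCheck
+public String doDelete() {
EOF
)
printf '%s\n' "$optouts"
```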

Security headers

Headers such as
  • X-Content-Type-Options
  • X-XSS-Protection
  • CSP headers
have gained popularity. They require care to implement, and here the approaches differ; see the original presos.
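A quick way to see where you stand is to check responses for these headers. The sketch below runs against a canned sample response; for a real check you could substitute the output of `curl -sI https://your-app/`:

```shell
# Sketch: report which security headers a response carries. The sample
# response is canned; swap in `curl -sI URL` output for a real check.
check_headers() {
  for h in X-Content-Type-Options X-XSS-Protection Content-Security-Policy; do
    if printf '%s\n' "$1" | grep -qi "^$h:"; then
      echo "present: $h"
    else
      echo "MISSING: $h"
    fi
  done
}

sample='HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block'
report=$(check_headers "$sample")
printf '%s\n' "$report"
```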

A nice touch is deploying CSP policies in monitoring (report-only) mode, without blocking anything, and analysing the resulting alerts (slide 45 in the Facebook deck). Applying CSP in blocking mode to a large existing app is a huge task and is likely not to gain traction with your developers. The CSP candidate spec says:
Content Security Policy (CSP) is not intended as a first line of defense against content injection vulnerabilities. Instead, CSP is best used as defense-in-depth, to reduce the harm caused by content injection attacks.
There is often a non-trivial amount of work required to apply CSP to an existing web application. To reap the greatest benefit, authors will need to move all inline script and style out-of-line, for example into external scripts, because the user agent cannot determine whether an inline script was injected by an attacker.

Graphing stuff

There are many things that can be graphed, or at least thresholded, with security benefits - CSP alerts, increases in traffic containing HTML, the number of failed login attempts, ...
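For instance, one thresholdable number - failed logins per day - can be pulled from an access log with a one-liner. The log format below is invented; adapt the pattern to whatever your app actually writes:

```shell
# Sketch: count failed login attempts in a log. The log format is made up;
# adjust the pattern for your application's real logs.
failed=$(grep -c 'POST /login.*status=401' <<'EOF'
10.0.0.1 POST /login status=401
10.0.0.2 POST /login status=200
10.0.0.3 POST /login status=401
10.0.0.3 GET /page status=200
EOF
)
echo "failed logins today: $failed"
```

Feed the daily count into whatever graphing/alerting you already run and flag sudden jumps.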


All of the measures in this post help the security team deliver the most bang for the buck. My next post will be on how to use similar tools for security "evangelism" (I try to avoid this word, it is misleading), or "getting those developers to not release vulnerable software".

Tuesday 20 November 2012

Auditing Java code, or IntelliJ IDEA as poor man's Fortify

This Twitter exchange got me to look at IDEA. Below are a few tips and config files that can get you started with using IDEA for manual audits. IMHO the results will be close to what you get from a commercial code scanner, albeit slightly slower :)

IntelliJ features useful for code auditing

  1. "Analyze" ->"Data flow to here" or "Data flow from here"
  2. "Analyze" -> "Inspect code" (next section below)
  3. All kinds of source navigation - to definition, to usages etc. Shortcuts below.

Custom code inspections

One of the good starting points for the basic concepts of code audits is Chapter 19 of "The Web Application Hacker's Handbook", 2nd edition. You obviously have to have a clue what this security thing is all about; this post is not an audit tutorial.

I've created some custom inspections for IDEA, as its original "Security" set is quite arbitrary and, if it targets anything, it is applets, not web apps. My inspection policies are based on the WAHH2E book and are geared towards identifying user input, dodgy file operations and so on.

You can get the policies from

Installing the inspection policies

Open Options / Inspections and import the policy you want to use (the full policy may produce a lot of results on very large projects - e.g. full Confluence produces about 2000 results - so I made partial policies as well). Check "Share Profile", or you won't be able to use the policy.

Configuring custom code inspections in IDEA
Each of my inspection configs contains a single enabled item - "General"/"Structural Search Inspection" with a number of templates.

TIP: For some reason IntelliJ sometimes ignores the imported policy, and as a result you will have no findings. What seems to work is to scan the code with any built-in policy, for example "Default", and then run the security one. If you do not see the "Inspecting code..." progress window, the analysis did not happen.

The templates will find points in the code that are interesting to a security auditor - HTTP parameters, file access, sessions, SQL queries, etc. Then you can use data flow analysis (point 1 in the list above) or simply navigate through the source (see below).

Running an analysis

Open "Analyze" / "Inspect code", select the policy, scope etc., run it and check the results. There are various ways of marking things as reviewed - see the "suppress" options in the results pane. They are the same as for any other alerts produced by the code inspection engine.

It may be useful (depending on the kind of finding) to investigate data flow to and from the found input/output point.

IntelliJ's docs for this feature, which in turn uses the very powerful Structural Search & Replace engine, are at

Code navigation shortcuts

These are collected from Stack Overflow posts. They are for Mac OS; Windows shortcuts usually have Ctrl instead of Cmd.

Cmd + Shift + A - opens a window where you can search for GUI commands
Cmd + Alt + Left / Cmd + Alt + Right - go back / go forward
Cmd + B - Go to declaration
Cmd + Alt + B - Go to implementation
Cmd + U - Go to super-method/super-class
Cmd + Alt + F7 - Show usages
Cmd + N - Go to class
Cmd + P - Parameter info
Cmd + F - Find, F3/Shift + F3 - Find next/previous, Ctrl + Shift + F - Find in path
F2/Shift + F2 - Next/previous highlighted error
Cmd + E - Recently used files
Ctrl + Shift + Q - Context info
Ctrl + J - Quick documentation

Learning shortcuts

There is a great plugin that will teach you shortcuts fast - Key Promoter. Any time you do something with menus, it will show you what shortcut you could use to achieve the same effect.

Bonus: Code Reviewing Web App Framework Based Applications

Tuesday 13 November 2012

Thought distortions, or why some of my infosec friends are alcoholics

@dinodaizovi recently quipped that the infosec industry is a hybrid of "Mensa and a mental hospital"; these are some related thoughts.

You all know one or, more likely, many "security consultants" who tell others that in order to improve the security of $system they must do A and B, otherwise imminent failure will occur. Then these consultants go around upset that their advice is not being followed, perceive the situation as a personal failure, and end up "burning out"...

Below is a list of cognitive distortions that, according to some theories in psychology, lead to the perpetuation of a number of psychological conditions, including depression and alcoholism. I think I got it from an iPhone app called MoodKit (by the way, try it). Have a think - aren't most of these associated with "security consultants", especially internal consultants, in the eyes of their customers?

Common Thought Distortions

All-or-Nothing Thinking
Seeing people or events in absolute (black-or-white) terms, without recognizing the middle ground (e.g., success/failure; perfect/worthless).
"Without perfect security there is no security"

Blaming
Blaming yourself or others too much. Focusing on who is to blame for problems rather than what you can do about them.
"These people just do not want to understand the importance of security!"
Catastrophizing
Blowing things out of proportion, telling yourself that you won't be able to handle something, or viewing tough situations as if they will never end.
"Ehrmergerd, these people just hate me, I will never be able to do anything to improve security here"
Downplaying Positives 
Minimizing or dismissing positive qualities, achievements, or behaviors by telling yourself that they are unimportant or do not count.
"Well, we got these vulns fixed, but there are soooo many more, probably!"
Emotional Reasoning 
Believing something is true because it “feels” true. Relying too much on your feelings to guide decisions.
"I have a gut feeling the attackers are out to get us!"
Fortune Telling 
Making negative predictions about the future, such as how people will behave or how events will play out.
"The company data will be breached in the most harmful way"
Intolerance of Uncertainty 
Struggling to accept or tolerate things being uncertain or unknown (e.g., repeatedly wondering “what if?” something bad happens).
"What if a firewall is misconfigured? What if there is a new RCE in Struts tomorrow?..."
Labeling
Describing yourself or others using global, negative labels (e.g., making judgments about one's character or name calling).
"These lazy developers just do not care!"
Mind Reading 
Jumping to conclusions about another person’s thoughts, feelings, or intentions without checking them out.
"I know they are not interested in fixing this stuff"
Negative Filtering 
Focusing only on the negatives and ignoring the positives in a situation, such that you fail to see the “big picture.”
OK, I give up with the examples - the list is getting somewhat repetitive, but you get the drift...
Not Accepting 
Dwelling on an unpleasant situation or wishing things were different, instead of accepting what has happened and finding ways to move forward.

Overgeneralizing
Drawing sweeping conclusions on the basis of a single incident, such as when we say people or things are "always" or "never" a certain way.

Personalizing
Telling yourself that events relate to you when they may not.

“Should” and “Must” Statements 
Focusing on how things or people “should” or “must” be. Treating your own standards or preferences as rules that everyone must live by.
Who hasn't done that??? :)
One additional thought: the above mindset is occasionally perpetuated by infosec vendors. Send them your therapist's invoice...

Monday 5 November 2012

Paradigms of failure in bridge design, or learning from failing

I read a book some time ago - Design Paradigms: Case Histories of Error and Judgment in Engineering by Henry Petroski. It is mostly about bridge engineering (did you know there is a major bridge failure in the US/UK every 30 years?).

The "paradigms" Petroski talks about turn out to be very much applicable to software engineering - to security, availability and so on. Here is a summary of the paradigms, so that you do not have to read the book (unless you're into bridge architecture):
  1. Conceptual errors. Fundamental errors made at the conceptual design stage are the most elusive. They manifest only when the prototype is tested (=too late), often with disastrous results. They are invariably human errors and by definition cannot be prevented.
  2. Overlooking effects of scale. Every design can be scaled only to a certain limit, after which the initial assumptions are no longer valid and a failure occurs. If you are scaling a successful design or a model, be mindful that this limit exists. You probably will not know what this limit is exactly.
  3. Design change for the worse. This paradigm involves improvements over existing safe designs without re-evaluating the original design constraints. Any such change can introduce a new failure mode. Any change, no matter how seemingly benign or beneficial, needs to be analysed with the objectives of the original design in mind. An "improved" or enlarged design could hold unpleasant surprises over the original.
  4. Blind spots are preconceived ideas about failure modes that drive analysis or design of the system, while other failure modes are ignored. The point here is that no hypothesis can ever be proved incontrovertibly, yet it takes only one failure (in analysis or reality) to provide a counterexample.
  5. False confirmations. An incorrect design formula is arrived at from wrong assumptions, yet thanks to a large initial safety factor it is "confirmed" by subsequent designs. Following this "success", the safety factor is gradually reduced to the point where it no longer compensates for the wrong results, and failure occurs.
  6. Tunnel vision in design. Not considering failure outside of the narrow confines of the principal design challenge to the same degree as inside it. The designer needs to make a special effort to step back from each design and consider the more mundane, less challenging aspects of the problem - those appearing to lie on the periphery of the central focus.
  7. Not considering failure seriously. Document expected failure: what failure modes were anticipated in the design, what failure criteria were employed, what failure avoidance strategies were incorporated. Do not ignore case histories of failure and do not misuse them either - often such histories are used only to justify extrapolations to larger and lighter structures.

TL;DR: failure is the only real learning tool in engineering - without failing there is no learning, only blind luck or "confirmation" of wrong theories.