Friday 15 November 2013

Implicit security and waste-cutting

I have a nagging feeling that I've read some of this somewhere a long time ago. Probably in @mjranum's papers.

Imagine you have a product or a process. It kind of works, although you can already see the ways you can improve it by cutting some stuff here and there and changing the way it operates.

A random example: an old paper-based process in a company (say, a telco) is replaced with a nice workflow all done on a web site. All information is available there to anyone immediately, all staff are happy, costs decrease, productivity increases 10-fold and so on. The process was concerned with customer contracts, so the information is kind of sensitive.

Bad hackers discover the web site, dump all the data, identity theft skyrockets, the company is massively fined and everyone is sad. Downsizing occurs, and so on.

Now what happened here? The old process was slow, cumbersome and unproductive. At the same time it had a number of implicit security measures built in simply by virtue of being paper-based. In order to steal a million paper contracts, one has to break into the company's facility, plus have a big van to haul all this stuff out. The loss would be discovered immediately (photocopying is not an option due to the time constraints of the heist).

Designers of the new process did not identify these implicit measures or implicit requirements because nobody thought about them. After all, the measures were implicit.
Some of the cost savings of the redesign came from unintentionally dropping these implicit requirements and measures.

Why am I writing about this? As a reminder: when you are putting together a new project or re-engineering a process, check whether you have forgotten to implement security that you did not even know was there. Stop for a moment and think about what weaknesses your product or process has, what or who may exploit these weaknesses, and what the results would be. The "what" could be a random event, not necessarily a malicious person. In some places they call this risk management.

The funniest example is OpenSSL in Debian - http://research.swtch.com/openssl.

A less funny example is the Vodafone AU customer database project, which is more or less the story described above. It had one password for all employees: http://www.smh.com.au/technology/security/mobile-security-outrage-private-details-accessible-on-net-20110108-19j9j.html

How I installed pandoc

Probably funny

Pandoc is a "swiss-army knife" utility for converting documents between various markup languages. I needed it for something, and this post describes how I managed to get it installed. In retrospect, I could have used the installer from http://code.google.com/p/pandoc/downloads/list
$ brew install pandoc
Error: No available formula for pandoc
Searching taps...
#...Googling... Turns out it's a Haskell thing?

$ brew install haskell-platform
...
==> Installing haskell-platform dependency: apple-gcc42
# ZOMG?!
...
...A GFortran compiler is also included
# ooohkay...
...

$ cabal update
...
Note: there is a new version of cabal-install available.
To upgrade, run: cabal install cabal-install

$ cabal install cabal-install
# Those strange Haskell people ^_^...
#...downloading the Internet...

$ cabal install pandoc
# FINALLY WE ARE GETTING SOMEWHERE!
#...downloading the Internet the second time...

$ pandoc
-bash: pandoc: command not found
# WTF?!

$ export PATH=$PATH:~/.cabal/bin
# DONE...
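
To make that last fix stick across shell sessions, and to check the install actually works, something like this should do (a minimal sketch, assuming bash and the default cabal binary location ~/.cabal/bin):

$ echo 'export PATH=$PATH:~/.cabal/bin' >> ~/.bash_profile
$ source ~/.bash_profile
# sanity check: convert a Markdown file to HTML
$ echo '# Hello' > test.md
$ pandoc -f markdown -t html test.md -o test.html
# test.html should now contain an <h1> heading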

Wednesday 6 November 2013

How to buy code audits

In the past couple of years I've commissioned quite a few external source code audits. It turns out this task is far from simple, despite what the friendly salesmen tell you. I've collected a few thoughts and lessons learned based on my experience. YMMV.

First of all, all the advice you need is in @mdowd's (et al.) interview http://www.sans.edu/research/security-laboratory/article/192 (question 3). Quote:
Word-of-mouth recommendations often convey the best real-world measure of experience. To cast a wider net though, you can use publications and industry recognition as a good measure of reputation. When approaching a company, you may also want to ask for bios on the auditors likely to perform your assessment. Next, you'll want to ask for a sample report from any auditors you're considering. The quality of this report is extremely important, because it's a large part of what you're paying for. The report should be comprehensive and include sections targeted at the developer, management, and executive levels. The technical content should be clear enough that any developer familiar with the language and platform can follow both the vulnerability details and the recommendations for addressing them. You also need to get some understanding of the audit process itself. Ask if they lean toward manual analysis or if it's more tool-driven. Ask for names and versions of any commercial tools. For proprietary tools, ask for some explanation of the capabilities, and what advantages their tools have over those on the market. You also want to be wary of any process that's overly tool driven. After all, you're paying a consultant for their expertise, not to simply click a button and hand you a report. If a good assessment was that easy, all software would be a lot more secure.
Word-of-mouth is indeed the best indicator of quality. All other criteria are substitutes and make the selection inherently more risky. As with many risks, there are some things you can do to control them.

1. Beware of "bait and switch". This is a technique common in outsourcing where you are promised the best and most famous resources during negotiation, but after the contract is signed, you get monkeys.

When engaging any company larger than a boutique, insist on interviewing and approving every person who is going to work on your code. Ask for hard numbers on their experience in code auditing - "How many days in the past 3 years has this person worked on code audits in <language X>?" This metric is good because there is no reason for the vendor not to share it.

Try not to deal with a company that has just acquired another entity or has just been acquired itself. These people have more important issues to sort out than your code, even if they say otherwise.

2. Your boss will probably ask you for an objective metric of progress. We all know an audit is more research than assembly-line work, but it is still possible to produce some useful metrics. Insist on specifying your own format of reports, the one that works for you. I use a variation of the following:
  • Description of work done during the week, 1-2 pages.
    • Which sections of code or which application modules have been audited.
    • A summary of any results - positive findings (issues found and filed in our bug tracker) or negative (code looked at is ok security-wise).
    • Other work done, e.g. documentation
  • Details of each confirmed positive finding (security vulnerabilities found)
    • Description
    • Relevant places in code
    • Screenshot of successful exploitation where applicable
    • Recommendations on how to fix
    • Proposed severity rating
    • Issue number in our bug tracker if already filed.
This way you see not only findings but also coverage and the general direction of the review. After all, if nothing is found, you want some reassurance that the vendor looked in the right places.

3. Run some static/dynamic analysis tools yourself before handing the contract to vendors. Veracode is cheap. FindBugs is free.
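
For example, a basic FindBugs run from the command line looks roughly like this (the class-file path is an assumption for illustration; adjust it to your build layout):

$ findbugs -textui -effort:max -html -output findbugs-report.html build/classes
# skim the report so you know what the cheap tools already catch

Anything that shows up here is exactly the kind of result you should not be paying consultant rates for.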

A very popular "manual" code review methodology seems to be "grep and gripe": running an in-house pattern matcher or a commercial tool, tracing the results to inputs, and filing bugs. This way you get tons of issues like "insufficient entropy", "DOM XSS" and other simple cruft, while business logic issues are never uncovered. Do reiterate to the vendor (several times) that you value logic issues, not grep results.
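
To make it concrete, "grep and gripe" often boils down to something like this (the pattern is just an illustrative example):

$ grep -rn 'new Random(' src/main/java
# every hit becomes an "insufficient entropy" ticket,
# whether or not the value is ever used for anything security-relevant

A logic flaw like "any authenticated user can fetch any other customer's contract by changing an ID" will never match a pattern like that.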

A couple of posts on related topics: One way of grepping, Some IntelliJ magic, and On automated tools.

4. Require the vendor to work with your developer team. In my experience, the very first thing the best auditors ask for is code access, and the very second - a meeting with developers to verify the auditor's understanding of the code base. NB: ask developers after the meeting what they think of the auditor. If developers are not impressed, you'll probably be wasting your money. Oh, and if you're not on good terms with your developers, maybe look for a new job :)

Do weekly reviews of results in person or on the phone, where you and the developers review and validate each finding - sometimes things do not really make sense or, on the contrary, give rise to ideas for further exploration. If you leave everything until the review is completed, more likely than not it won't be a satisfactory one.

5. All of the above should ideally be specified in a contract. As a bare minimum, get an acceptance clause in: if you do not accept the final report from the auditor, they do not get paid and have to fix it up for you for free.

Good luck :)

P.S. Someone asked for timeframes for a good audit. It's difficult to say because no "coverage" metrics really apply. At the same time, there is anecdotal evidence that you can expect an auditor to eyeball up to 8 kLoC a day (on average, on large code bases). Properly "reading" code is more like 800 LoC a day - then again, hopefully not more than 1/10 of your code base is relevant to security, especially if it's Java with its verbosity.
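
As a back-of-the-envelope illustration (all numbers assumed for the sake of the example): on a 400 kLoC Java code base, 1/10 being security-relevant leaves about 40 kLoC to read properly; at 800 LoC a day that is roughly 50 auditor-days, i.e. around 10 working weeks for a single auditor.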