Friday 15 November 2013

Implicit security and waste-cutting

I have a nagging feeling that I've read some of this somewhere a long time ago. Probably in @mjranum's papers.

Imagine you have a product or a process. It works, more or less, although you can already see ways to improve it by cutting some steps here and there and changing the way it operates.

A random example: an old paper-based process in a company (say, a telco) is replaced with a nice workflow done entirely on a web site. All information is available to everyone immediately, all staff are happy, costs decrease, productivity increases 10-fold and so on. The process deals with customer contracts, so the information is rather sensitive.

Bad hackers discover the web site, dump all the data, identity theft skyrockets, the company is massively fined and everyone is sad. Downsizing occurs, and so on.

Now what happened here? The old process was slow, cumbersome and low-productivity. At the same time it had a number of implicit security measures built in simply by the nature of it being paper-based. In order to steal a million paper contracts, one has to break into the company's facility, plus have a big van to haul all this stuff out. The loss would be discovered immediately (photocopying is not an option due to the time limitations of the heist).

Designers of the new process did not identify these implicit measures or implicit requirements because nobody thought about them. After all, the measures were implicit.
Some of the cost savings of that redesign came from (unintentionally) dropping these implicit requirements or measures.

Why am I writing about this? As a reminder: when you are putting together a new project or re-engineering a process, check whether you forgot to implement security that you did not even know was there. Stop for a moment and think about what weaknesses your product or process has, what or who may exploit these weaknesses and what the results will be. The "what" could be a random event, not necessarily a malicious person. In some places they call this risk management.
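One lightweight way to do this is to write the old process's implicit protections down as explicit requirements and check the new design against them. Here is a minimal sketch of that idea for the contract-database example above; the control names and mappings are made up for illustration, not taken from any real project.

    # Minimal sketch: list the controls the old paper process gave us "for free",
    # then check whether the new web-based design covers each one explicitly.
    # All entries are hypothetical, for illustration only.

    implicit_controls = {
        "physical access needed to read a contract": "per-user authentication and authorization",
        "bulk theft is slow and conspicuous": "rate limiting and bulk-export alerting",
        "copies leave a visible trace": "audit logging of every record view",
    }

    new_design_controls = {
        "per-user authentication and authorization",
        "audit logging of every record view",
        # note: nothing here about bulk export yet
    }

    for old_control, replacement in implicit_controls.items():
        status = "covered" if replacement in new_design_controls else "GAP"
        print(f"{status:7}  old: {old_control!r} -> needs: {replacement!r}")

Running it prints which implicit protections survive the redesign and which ones quietly disappear, which is the whole point of the exercise.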

The funniest example is OpenSSL in Debian - http://research.swtch.com/openssl.

A less funny example is the Vodafone AU customer database project, which is more or less the scenario described above. It had one password for all employees: http://www.smh.com.au/technology/security/mobile-security-outrage-private-details-accessible-on-net-20110108-19j9j.html