Sunday, 12 May 2013

Crutches and static code analysis


First this was going to be a blog post, then a DD post, then a blog post again...

A while ago I read an article that has absolutely nothing to do with security; it is about how great it is to work in small, friendly teams: http://pragprog.com/magazines/2012-12/agile-in-the-small

It contains an awesome quote:
"...most best practices are just crutches for having a heterogeneous skill mix in one’s team."
Please keep that quote in mind while I turn to the figures recently released by WhiteHat Security.
They say that 39% of their clients use some sort of source code analysis on their web apps. These customers experience (probably meaning 'discover') more vulnerabilities, resolve them *more slowly*, and have a *worse* remediation rate.

Why is this? If you have ever been a customer of an SCA salesman, then you know. Their pitch goes like this:

"All you need to do is to run our magic tool with this 'best practice' configuration and fix all results. The tool does not require a person who understands app security to be involved. It's like a tester in a box. Even better, just use "OWASP top 20" (I call it "fake silver bullet") configuration, this is what everyone else does."

Typical outcomes: the tool finds a large amount of rather unimportant noise and rates the issues overly high, just in case. Developers get tired of fixing these often nonsensical results. You'd be amazed how many people run SCA tools (or web scanners) with the default config and then forward the results to development teams, their own or a third party's. Eventually, the person running the magical scanner starts being treated as the boy who cried wolf once too often.

Now, this is *not* a post against static analysis. Static analysis can be an awesome tool for vulnerability research, especially in C/C++ code (although everyone seems to be 'fuzzing kernels' instead) and maybe even in web apps. That is, if the tool you've got is capable of being used as a research helper rather than a checkbox filler.

Unfortunately, the reaction of SCA salesmen (not all, but many) to such a request is usually: "You want what? Write your own rules? And drill down into results? And specify sanitisers and stuff? Crazy! Let me find someone back at headquarters who knows what you're talking about…"

Very often, a few simple scripts involving minimal lexing/parsing, written for *your* specific web app (even without getting into ASTs, solvers and taint analysis), can be far more useful for finding old issues and preventing new ones, especially if they are run as commit hooks in your git repo.
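To make that concrete, here is a minimal sketch of such a hook in Python. It is not any vendor's product and not a prescription, just an illustration under assumptions: a Python web app, and rules, messages and file filters that you would replace with the patterns that actually matter in your codebase.

#!/usr/bin/env python3
# A minimal sketch of an app-specific pre-commit check (assumes a Python
# web app; the patterns, messages and file filter are all illustrative).
# Install by saving as .git/hooks/pre-commit and making it executable.
import re
import subprocess
import sys

# App-specific rules: regex -> why it is flagged. Tune these to your code.
RULES = {
    re.compile(r"\beval\s*\("): "eval() on user-influenced data is code injection",
    re.compile(r"execute\s*\(\s*[\"'].*%"): "string-formatted SQL; use parameterised queries",
    re.compile(r"\|\s*safe\b"): "template auto-escaping disabled; is the value trusted?",
}

def staged_files():
    # List files staged for this commit (added/copied/modified only).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f.endswith((".py", ".html"))]

def main():
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="replace") as fh:
                lines = fh.read().splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, 1):
            findings += [f"{path}:{lineno}: {why}"
                         for rx, why in RULES.items() if rx.search(line)]
    if findings:
        print("Commit blocked by app-specific checks:")
        print("\n".join(findings))
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())

A handful of rules like these will never match a commercial engine's coverage, but they encode *your* context, and the non-zero exit stops a known-bad pattern before it ever reaches the repo.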

Back to the 'best practices' quote - if you are a software vendor and you want to get real benefits from commercial SCA tools (I do not count compliance among the benefits), do two things: hire someone with a clue (about app sec, about development, about SCA) and get a tool which has configurable rules.

Otherwise don't even bother. It will be about as effective as, and much more expensive than, running an IDS or AV.

Saturday, 30 March 2013

Scams in security testing

Dedicated to people who submit Web scanner results to their software vendors.

A while ago I stumbled upon a book on software testing. Not security, mind you, just plain normal software testing. It is by my favourite "techie" author, Gerald Weinberg: Perfect Software and Other Illusions About Software Testing. It's a great read for app security folks, as long as you are capable of making basic domain substitutions.

My favourite chapter in the book is "Testing scams", where the author follows up his earlier discussion of fallacies in testing with a list of outright scams by vendors promising to sell magic testing tools. He says:
"Here's the secret about tools: Good tools amplify effectiveness. If your testing effectiveness is negative, adding tools will only amplify the negativity. Any other claim a tool vendor makes is, most likely, some kind of scam."
I made a short summary of this chapter, with examples from the security testing domain (mostly web, i.e. "dynamic", scanners and source code, i.e. "static", scanners). Text in quote marks is from the book, apart from the obvious phrases.

1. "Tool demonstration is a scam" - where you are shown a perfect demo, of a scanner running on WebGoat. And when you try it on your own code, the scanner explodes with junk. Of course the demo was carefully designed and tuned to produce the best impression.

Subtype 1a: You are not allowed to do your own PoC without the vendor's helpful supervision. At the very least, they will give you a spreadsheet of criteria for comparing their product against others.

Note: If you are not capable of conducting your own PoC without asking vendors for help, you should not be buying any of those tools in the first place.

2. "With all these testimonials, it must be good" - where there isn't even a demo, but you get a pile of whitepapers with pseudo-test results (comparisons, certifications, endorsements). These docs usually "appear to contain information, but ... only identify charlatans who took a fee or some payment [in kind] for the used of their names".

As a test, try requesting personal testimonials from the customers involved; ideally, see how they use the tool. If the vendor cannot produce a single customer excited enough about the product to want to show you how wonderful it is, it's a crap tool.

3. "We scam you with our pricing" - where the vendor creates cognitive dissonance among previously scammed people. As a result, despite the expensive purchase being a failure on all levels, from purchasers to end users, they keep this fact to themselves.

Subtype 3a: "[is] discrediting competitive tools with disingenuousness, suggesting, 'With a price so low, how could those tools be any good?'"

4. "Our tool can read minds" - where the tool is presented as a complete replacement of a security specialist - testing even better than a human, and not requiring any human post-processing. In personal experience, this is one of the most common scams in security testing market, with scam #1 being the next most popular, and #3 and #4 reserved for very pricey tools (you know who you are).

The belief that a magic app security silver bullet exists runs so deep that when the tool (or the "cloud" service) quickly fails to deliver on its promises, the customer concludes it was his or her own mistake and that another magic mind-reading service must exist elsewhere. Rinse and repeat.

Note: There is no silver bullet. Artificial Intelligence is hard. Turing was right.

5. "We promise that you don't have to do a thing" - where a "cloud" service promises that all the customer has to do is to point it to the web app, and the service will spit out actionable results with no false positives and no false negatives. Less experienced security teams or managers fall for this one quite often, since many of the services come with a promise of manual postprocessing of results by the most experienced analysts in the world (or close to that). Where this fails is lack of context for the testing. The vendor does not know your code, they do not know your context in terms of exploitability, mitigating controls, or impacts. They do not know what technologies and methodologies your developers use. What usually comes out of such services is slightly processed results of a scanner with generic settings.

Subtype 5a: where the service vendor pulls the old "bait and switch" between the personnel involved in the sales process (gurus) and the people you get once you pay (button pushers with little to no experience in a cheap outsourced location).

Still with me? Here's the summary:

If someone promises you something for nothing, it is a scam (or they are Mother Teresa, that is, not a business person). Even if you are promised a magic tool in exchange for a lot of money (scam 3 above), it is still a promise of something for nothing.

It is impossible to do good security testing other than by employing (in one way or another) people who know the context of your environment and code or who are willing to learn it.