Wednesday, 22 May 2013

Vim for code reading

GUI-based IDEs, when properly set up, are nice tools for reading code. SourceInsight is probably the best in terms of the combination of efficiency, quality and price.

What if you find yourself with only a (colour) text console? I've put together a small .vimrc to make reading code a nicer experience in those situations.

If you are new to Vim, check out the README in https://github.com/agelastic/vim-reading for basic commands. Those are all you need to read and navigate any code in Vim.

.vimrc in that repo does the following:
  • Ensures we are running Vim (and not vi) in a colour console
  • Turns on search highlighting and 'search as you type'
  • Sets detailed status lines
  • Turns on code folding and makes initial state 'unfold all', which I like more than the default 'fold all on opening file'
  • Turns on saving some info when exiting a file
  • Lets you use Tab for vim command completion
  • Maps Shift-Up and Shift-Down to move between Vim windows
  • Maps Ctrl-X to "exit vim"
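For reference, here is a minimal sketch of what settings along those lines look like. This is illustrative only - the .vimrc in the repo is the real thing, and the option values here are my guesses at a sensible baseline:

```vim
" Illustrative sketch only - see the repo for the actual .vimrc
set nocompatible        " make sure this is Vim, not vi
syntax on
set hlsearch            " highlight search matches
set incsearch           " search as you type
set laststatus=2        " always show a status line
set statusline=%F\ %y\ %l/%L
set foldmethod=syntax   " turn on code folding...
set foldlevelstart=99   " ...but start with everything unfolded
set viminfo='20,<50     " save some info on exit
set wildmenu            " Tab completion for : commands
" Shift-Up/Down move between windows, Ctrl-X exits
nnoremap <S-Up> <C-w>k
nnoremap <S-Down> <C-w>j
nnoremap <C-x> :qa<CR>
```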
There are several plugins that look useful for this task, but I'm not adding them to keep the config lightweight.

If you want to explore those plugins, here's a bunch of links:

Sunday, 12 May 2013

Crutches and static code analysis


First this was going to be a blog, then a DD post, then a blog again...

A while ago I read an article that is absolutely not about security, but about how great it is to work in small friendly teams - http://pragprog.com/magazines/2012-12/agile-in-the-small

It contains an awesome quote:
"...most best practices are just crutches for having a heterogeneous skill mix in one’s team."
Please hold that quote in mind while I turn to the figures recently released by WhiteHat Security.
They say that 39% of their clients use some sort of source code analysis on their webapps. These customers experience (probably meaning 'discover') more vulnerabilities, resolve them *slower*, and have a *worse* remediation rate.

Why is this? If you have ever been a customer of an SCA salesman, then you know. Their pitch goes like this:

"All you need to do is to run our magic tool with this 'best practice' configuration and fix all results. The tool does not require a person who understands app security to be involved. It's like a tester in a box. Even better, just use "OWASP top 20" (I call it "fake silver bullet") configuration, this is what everyone else does."

Typical outcomes: the tool finds a large amount of rather unimportant noise and rates the issues overly high, just in case. Developers tire of fixing these often nonsensical results. You'd be amazed how many people run SCA (or web scanners) with the default config and then forward the results to developer teams, their own or a third party's. Eventually, the person running the magical scanner starts being treated as the boy who cried wolf too often.

Now, this is *not* a post against static analysis. Static analysis can be an awesome tool for vulnerability research, especially for C/C++ code (although everyone seems to be 'fuzzing kernels' instead) and maybe even in web apps. That is, if the tool you've got is capable of being used as a research helper, not a checkbox filler.

Unfortunately the reaction of SCA salesmen to such a request (not of all, but many) is usually "You want what? Write your own rules? And drill down on results? And specify sanitisers and stuff? Crazy! Let me find someone back at headquarters who knows what you're talking about…"

Very often, a few simple scripts involving minimal lexing/parsing, written for *your* specific web app (even without getting into ASTs, solvers and data taint analysis) can be way more useful in finding old and preventing new issues. Especially if they are run as commit hooks in your git repo.
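As an illustration, a toy pre-commit hook along these lines. The pattern list and messages are made up; a real hook would use patterns tuned to *your* app:

```shell
#!/bin/sh
# Toy pre-commit hook: refuse commits whose added lines introduce
# obviously dangerous calls. The pattern list is illustrative only.
patterns='Runtime\.exec|Class\.forName|executeQuery'
# Keep only added lines of the staged diff, dropping the +++ file headers.
if git diff --cached -U0 | grep '^+' | grep -v '^+++' | grep -Eq "$patterns"; then
    echo "This commit adds security-sensitive calls ($patterns)." >&2
    echo "Get it reviewed, or commit with --no-verify." >&2
    exit 1
fi
```

Drop it into .git/hooks/pre-commit (executable) and every commit touching those calls gets flagged before it even lands.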

Back to the 'best practices' quote - if you are a software vendor and you want to get real benefits from commercial SCA tools (I do not count compliance among the benefits), do two things: hire someone with a clue (about app sec, about development, about SCA) and get a tool which has configurable rules.

Otherwise don't even bother. It will be about as effective as, and much more expensive than, running an IDS or AV.

Saturday, 30 March 2013

Scams in security testing

Dedicated to people who submit Web scanner results to their software vendors.

A while ago I stumbled upon a book on software testing. Not security, mind you - just plain normal software testing. It's by my favourite "techie" author Gerald Weinberg: Perfect Software and Other Illusions About Software Testing. It's a great read for app security folks, as long as you are capable of making basic domain substitutions.

My favourite chapter in the book is "Testing scams", where the author follows up his earlier discussion of fallacies in testing with a list of outright scams by vendors promising to sell a magic testing tool. He says:
"Here's the secret about tools: Good tools amplify effectiveness. If your testing effectiveness is negative, adding tools will only amplify the negativity. Any other claim a tool vendor makes is, most likely, some kind of scam."
I made a short summary of this chapter, with examples from security testing domain (mostly web, "dynamic" and source code, "static" scanners). Text in quote marks is from the book, apart from the obvious phrases.

1. "Tool demonstration is a scam" - where you are shown a perfect demo, of a scanner running on WebGoat. And when you try it on your own code, the scanner explodes with junk. Of course the demo was carefully designed and tuned to produce the best impression.

Subtype 1a: You are not allowed to do your own PoC without the vendor's helpful supervision. At the least, they will give you a spreadsheet of criteria to compare their product against others.

Note: If you are not capable of conducting your own PoC without asking vendors for help, you should not be buying any of those tools in the first place.

2. "With all these testimonials, it must be good" - where there isn't even a demo, but you get a pile of whitepapers with pseudo-test results (comparisons, certifications, endorsements). These docs usually "appear to contain information, but ... only identify charlatans who took a fee or some payment [in kind] for the use of their names".

As a test, try requesting personal testimonials from the customers involved; ideally, see how they use the tool. If the vendor cannot produce a single customer who is so excited about the product that they want to show you how wonderful it is, it's a crap tool.

3. "We scam you with our pricing" - where the vendor creates cognitive dissonance among previously scammed people. As a result, despite the expensive purchase being a failure on all levels, from purchasers to end users, they keep this fact to themselves.

Subtype 3a: "[is] discrediting competitive tools with disingenuousness, suggesting, 'With a price so low, how could those tools be any good?'"

4. "Our tool can read minds" - where the tool is presented as a complete replacement for a security specialist - testing even better than a human, and requiring no human post-processing. In my personal experience, this is one of the most common scams in the security testing market, with scam #1 the next most popular, and #3 and #4 reserved for very pricey tools (you know who you are).

The belief that a magic app security silver bullet exists runs so deep that when the tool (or the "cloud" service) quickly fails to deliver on its promises, the customer concludes that this was their own mistake and that another magic mind-reading service exists elsewhere. Rinse and repeat.

Note: There is no silver bullet. Artificial Intelligence is hard. Turing was right.

5. "We promise that you don't have to do a thing" - where a "cloud" service promises that all the customer has to do is point it at the web app, and the service will spit out actionable results with no false positives and no false negatives. Less experienced security teams or managers fall for this one quite often, since many of these services come with a promise of manual postprocessing of the results by the most experienced analysts in the world (or close to that). Where this fails is the lack of context for the testing. The vendor does not know your code; they do not know your context in terms of exploitability, mitigating controls, or impacts. They do not know what technologies and methodologies your developers use. What usually comes out of such services is the slightly processed output of a scanner run with generic settings.

Subtype 5a: when the service vendor does the old "bait and switch" between the personnel involved in the sales process (gurus) and who you get once you pay (button pushers with little to no experience, in a cheap outsourced location).

Still with me? Here's the summary:

If someone promises you something for nothing, it is a scam (or they are Mother Teresa - that is, not a business person). Even if you are promised a magic tool in exchange for a lot of money (scam 3 above), it is still a promise of something for nothing.

It is impossible to do good security testing other than by employing (in one way or another) people who know the context of your environment and code or who are willing to learn it.

Thursday, 14 March 2013

Medievalism in infosec

Dedicated to the last pope.

In my quest to understand the elusive American puritan psyche, I've been reading up on the origins and history of Christianity recently.

As a side note - the original biblical languages are so much fun. Not only is nobody quite sure which tense in Biblical Hebrew is past and which is future, but even where the meaning is obvious, translations do a lot of moralising, sweeping all the blood, sex and genocide of the Old Testament under the carpet.

Example: did you notice how many times a woman fiddles with (uncovers, kisses, touches, etc.) a man's feet in the OT? But never the other way around or, God forbid, a man with a man's? It turns out "feet" is a euphemism :)

Anyhow, this post is about striking parallels between some old religious metaphors and the modern "cybersecurity" ones.

Infosec thinking as Judaism of 1st century BCE

A quote from a very respected Biblical scholar:

"Apocalyptic eschatology" is ... centering in the belief that  
(1) the present world order, regarded as both evil and oppressive, is under the temporary control of Satan and his human accomplices, and
(2) that this present evil world order will shortly be destroyed by God and replaced by a new and perfect order corresponding to Eden before the fall.  
During the present evil age, the people of God are an oppressed minority who fervently expect God, or his specially chosen agent the Messiah, to rescue them. The transition between the old and the new ages will be introduced with a final series of battles fought by the people of God against the human allies of Satan. The outcome is never in question, however, for the enemies of God are predestined for defeat and destruction. The inauguration of the new age will begin with the arrival of God or his accredited agent to judge the wicked and reward the righteous, and will be concluded by the re-creation or transformation of the earth and the heavens. This theological narrative characterized segments of early Judaism from ca. 200 BCE to ca. 200 CE"
Let's change some words:

  • 'World order' => information systems
  • 'Satan and his human accomplices' => evil hackers, APT!!1!
  • 'God' => well, I guess, it stays?
  • 'The people of God' => various infosec consultants, from Mandiant to the forthcoming 13 tribes of the US cyberdefense.

You get my drift. There is also an awesome document called The War Scroll, describing the final fight of Sons of Light and Sons of Darkness... then there are Gnostics... There is a PhD in this somewhere.

Infosec industry as pre-reformation Catholic church

No need for extensive quotes here - Catholicism is a popular topic this week. Just one example: exorcisms.

A typical exorcism: a person is apparently possessed by an invisible evil spirit who causes all kinds of trouble (The Exorcist is a fine movie, watch it); a licensed exorcist priest is called in, does strange things, expels the immaterial possessor from the victim's body, collects his fee, and tells the victim to install a fountain of holy water, pray, and sin no more.

People sinned, confessed, sinned again...

The priests themselves were idealistic young men or hypocritical old farts who did not practice what they preached.

Substitution table:

  • 'Satan', as before => 'bad hackers'
  • 'possessed human' => 'infiltrated company'
  • 'exorcists' => security consultants, especially DFIR type
  • 'sins' => "bad" security practices
  • 'confession' => audit or pentest, perhaps.

I could go on, but it's time to wrap up, since we ought to celebrate: according to Iranian sources, Habemus Papam has been elected the new pope.

Wednesday, 12 December 2012

Focused code reviews - a followup


I promised something more technical than book reviews, so here goes.

Earlier I posted about how to limit the amount of code for day-to-day security reviews when the code base is huge. I took Confluence (I work for Atlassian) as an example. The application uses WebWork 2, among other frameworks. The source code is not entirely free or public, but you can get it with almost any kind of Confluence license. I will keep some details out of this example.

Here are some things to trigger security reviews on this codebase.

Java generalities

Monitor for these being added; there is no urgent need to review code when any of them are removed by developers. The list in this section is generic Java (and incomplete) and can be used for other apps; the other sections are more Confluence-specific. You might not need to trigger on all of these strings. You can also reuse the structural patterns from the IntelliJ searches in another blog entry.
Class.forName
ZipFile
Statement
Math.random
sendRedirect
"SELECT "
java.sql.Statement
java.sql.Connection
executeQuery
Runtime.
java.lang.Runtime
getRequestURI
java.sql
BeanUtils.setProp
java.lang.reflect
...
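A crude way to implement this kind of trigger with nothing but git itself - the pattern list is abbreviated here; in practice you would read the full list from a file:

```shell
# List recent commits whose diffs add or remove any of the watched
# strings, together with the files touched. Patterns abbreviated.
for p in 'Class\.forName' 'Math\.random' 'executeQuery'; do
    git log --since=1.week -G"$p" --oneline --name-only
done
```

git's -G flag matches against added/removed lines in each commit's patch, which is exactly the "monitor for these being added" semantics; pipe the output into whatever alerting you have.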

Sanitizers

Monitor for the disappearance of any sanitisers from your code. There are legitimate reasons for this - for example, a sanitiser in a view disappears but the corresponding model starts escaping or filtering the data.
htmlEncode
...others skipped...

Filters

Being a WebWork 2 webapp, Confluence utilises a number of filters and interceptors. You can get a list of the filters your application uses with something like
grep -Rh --include=*.xml "<filter-name" . |sed -e 's/<filter-name>//'|sed -e 's/<\/filter-name>//'|sed -e 's/^[ \t]*//' |sort |uniq
Review the list and decide which ones have an important security function. Monitor any change mentioning the interceptors (both in web.xml files and any change to their source):
HeaderSanitisingFilter
SecurityFilter
...
SafeParametersInterceptor
PermissionCheckInterceptor
...

Annotations

Some of these are generic, some are Confluence-specific. One way of getting a list of all annotations is
grep -Rh --include=*.java "^\s\+@" . |sed -e 's/^[ \t]*//'  |sort |uniq

Examples of what to monitor for:
@AnonymousAllowed - adding
@GET - adding
@POST - adding
@HttpMethodRequired - any change
@ParameterSafe - removal
@Path - adding
@RequireSecurityToken - removal
...

XML config files (new endpoints)

Action mapping etc - they introduce new URL endpoints. Monitor for adding, not removal.
"<action name" 
...

Other XML

Any change mentioning your filters or interceptors in web.xml, for example
<filter-name>header-sanitiser
<filter-name>request-param-cleaner
<filter-name>login
<interceptor-ref name="params"/>
<interceptor-ref name="permissions"/>
<interceptor-ref name="xsrfToken"/>
<interceptor-stack name 
...

Files and path

Look for any change in files used to implement crucial security features - login, session management, authorisation, sanitizers, CSRF protection and so on. 
confluence-core/confluence-webapp/src/main/webapp/WEB-INF/web.xml
confluence-core/confluence/src/etc/standalone/tomcat/web.xml
confluence-core/confluence/src/java/com/atlassian/confluence/security/login/*
confluence-core/confluence/src/java/com/atlassian/confluence/rpc/auth/*
confluence-core/confluence/src/java/com/atlassian/confluence/security/*
...
Monitoring for any web.xml change is probably overkill; you will catch the interesting stuff with the items from the other sections above.

Monday, 10 December 2012

Everyone says do what you love, but what is it?


Hmm, this may be turning into a book blog... Stay tuned, I'll be posting less fluffy stuff as well.

It is a familiar phrase - "do what you love" - and it has been repeated over and over at hacker/security cons all over the world. I do not know about you, but it took me some time to sit down and figure out what I love. Being a book nerd, I picked up Business Model You for inspiration. It is a strange book, an offshoot of the (apparently) very successful Business Model Generation, applying the same framework to individuals instead of businesses.

What I really liked about this book is not the "business model". Instead, have a look at Chapter 4 "Who are you?" It has a lot of great advice on figuring out what it is that you really love, if you do not know it yet (many people do not).
A thought experiment. Think back to any time before you were 20 years old:
What did you love to do?
(I do not think the authors include sex under this rubric, hehe)
What activities - games, hobbies, sports, extracurricular events, school subjects did you enjoy? Recall your natural, uncoerced proclivities.
Think about what kept you absorbed for hours and made you happily oblivious to the rest of the world. What tasks made time fly?
They include a bunch of other thinking prompts - e.g. thinking over what events in your life related to what feelings, what kind of environment you like to be in, and so on - yet this "inner teenager" exercise is the most unusual and most powerful. Obviously these memories need to be re-interpreted for the world you are living in, abstracted, re-applied - but the core idea stays.

So, people who love what they do are following their inner teenager.

P.S. If you are wondering, I love solving complex puzzle-like problems (preferably computer-related), working alone or in a small group of peers who share goals and learn from each other. The rest is, erm, syntactic sugar.

Monday, 3 December 2012

Changing things when change is hard

NB: If the post below makes you think that I have succumbed to managementese and become some kind of consultant, that is a false impression. I am simply reflecting on an unexpected connection between security improvements in code produced by Twitter developers and a management book.

"Switch"

A recent read of mine, recommended by one of the Atlassian owners - Switch: How to Change Things When Change Is Hard. I am not a huge fan of management books - many turn out to be self-help books in disguise, others spend 200 pages chewing through an idea that can be explained in a paragraph. "Switch" initially looked like it belonged to the latter category, but honestly it is worth reading from cover to cover.

The book is about exactly what its title says - changing things when change is hard (Hello there, "security evangelists"!). The premise is simple (and borrowed from another book):

"Jonathan Haidt in "The Happiness Hypothesis" says that our emotional side is an Elephant and our rational side is its Rider. Perched atop the Elephant, the Rider holds the reins and seems to be the leader. But the Rider's control is precarious because the Rider is so small relative to the Elephant. Anytime the six-ton Elephant and the Rider disagree about which direction to go, the Rider is going to lose. He's completely over-matched."
They draw lessons about change efforts:
The Elephant looks for the quick payoff over the long-term payoff. When change efforts fail, it is usually the Elephant's fault, since the kinds of change we want typically involve short-term sacrifices for long-term payoffs. Yet it is the Elephant who gets things done in change situations. You need to appeal to both: the Rider provides the planning and direction, and the Elephant provides the energy. Understanding without motivation vs. passion without direction.
...And they make another simple but non-obvious observation: change is hard because people wear themselves out. The "one paragraph" summary of the book is that there are three components to a successful difficult change:

  1. Direct the Rider - provide crystal clear direction. What looks like resistance is often a lack of clarity.
  2. Motivate the Elephant - Engage the people's emotional side. The Rider cannot get his way by force for very long. What looks like laziness is often exhaustion.
  3. Shape the path - Shape the situation in a way that facilitates your change. What looks like a people problem is often a situation problem.
There are other interesting simple thoughts sprinkled throughout the text. For example:
  • Build habits if you want the change to stick
  • Shrink change - give simple actions
  • Create a destination postcard (pretty vision of the final state) to motivate

Twitter, SADB and elephants

Now, why am I going on about a management book?

In my previous post I included a slideshare link to a talk about security automation from Twitter. There is also a video at http://videos.2012.appsecusa.org/video/54250716. Prominently featured is Twitter's central security dashboard, SADB ("sad-bee", funny) - Security Automation Dashboard.

One of its main functions is checking newly pushed code for known vulnerable patterns with Brakeman (see slides 46+ in the slideshare and the quick demo video at https://www.youtube.com/watch?feature=player_embedded&v=0ZZKCyBR8cA) and immediately bugging the responsible developer with specific recommendations on what has to be fixed and how.

This strikes me as a perfect implementation of the "Direct the Rider" principle and the "Shrink the change" approach.

I am going to try a similar approach at work; we will see how sticky the resulting improvement turns out to be :)

Some links:

Extracts from the book:

http://www.heathbrothers.com/resources/download/switch-framework.pdf
http://www.heathbrothers.com/resources/download/switch-for-organizations.pdf

A related behaviour change framework:

http://www.behaviorwizard.org/wp/ - from Stanford

Monday, 26 November 2012

Modern Web application security: Facebook, Twitter, Etsy

In the past 6 months at least 3 big modern webapp companies have published details of how they do application security. Etsy was the first, with Twitter and Facebook close behind. The presos are at:

Etsy: http://www.slideshare.net/zanelackey/effective-approaches-to-web-application-security
Twitter: http://www.slideshare.net/xplodersuv/putting-your-robots-to-work-14901538
Facebook: http://www.slideshare.net/mimeframe/ruxcon-2012-15195589

Despite small differences caused by frameworks and technologies these companies use, they all do the same set of things:

Code reviews

The security team does regular code reviews, and does them in a smart way. They have set up triggers for reviews, automated by unit tests or simple grep scripts watching (D)VCS commits, e.g. in git. These scripts monitor for two different kinds of changes:
  1. Any change in "important" files. These usually are the parts of the app source code that deal with CSRF protection, encryption, login, session mgmt, XSS encoding.
  2. Any new instances of "potentially nasty" snippets of code anywhere in the code base. These include introduction of file system operations, process execution, HTML decoding calls.
The above can be mixed and matched in a number of ways. For example, one can also monitor for any new URI endpoints (this can also be done via dynamic scanning, see below), or for people explicitly disabling automatic protections for CSRF or XSS, if you have such protections in place.
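The first kind of trigger - watching "important" files - can be sketched in a few lines of shell. The watchlist paths below are hypothetical examples, not anyone's real layout:

```shell
# Alert on any commit in the last day touching watched paths.
# The paths in the watchlist are hypothetical examples.
for path in src/auth src/session src/csrf; do
    hits=$(git log --since=1.day --oneline -- "$path")
    if [ -n "$hits" ]; then
        printf 'Review needed in %s:\n%s\n' "$path" "$hits"
    fi
done
```

Run it from cron (or a CI job) and route the output to the security team's queue.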

Dynamic scans

The security teams have set up a number of Web bots to periodically scan their apps for "simple" security issues.

NB: Do not use commercial scanner monsters; they are geared to produce as many results as (inhumanly) possible and are much more willing to produce false positives than to risk false negatives. In other words, they would rather alert on 10 "possible" issues that turn out to be non-issues than miss one. The sad part is that they still miss a lot anyway.

What you (and everyone, unless they are paid by the weight of the report) need is minimal false positives, even at the cost of missing a number of things. Some mathematical reasoning behind the idea can be found in a 1999 paper, The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection by Axelsson, who calculated that an IDS's false positive rate should be 10^-5 (yes, 1/100,000) for its alerts to be actionable in a high-traffic environment.
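To see why, plug some hypothetical numbers into the base-rate arithmetic: even a detector with a mere 1% false positive rate drowns 20 real attacks a day in noise once it watches a million events.

```shell
# Hypothetical numbers: 1,000,000 events/day, 20 of them real attacks,
# a detector with a 1% false positive rate and (generously) 100% detection.
awk 'BEGIN {
    total = 1000000; attacks = 20; fpr = 0.01
    false_alerts = (total - attacks) * fpr
    true_alerts  = attacks
    printf "alerts per day: %.0f\n", true_alerts + false_alerts
    printf "share of alerts that are real: %.2f%%\n", \
           100 * true_alerts / (true_alerts + false_alerts)
}'
```

Roughly 10,000 alerts a day, of which about 0.2% are real - nobody will read past the first hundred.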

All three companies use scanner bots to monitor for regressions ("hey, we fixed an XSS here, let's make sure it does not reappear"), to detect new URIs (if they do not detect them in source code), and for other similar tasks; check their presos for details.

Secure by default

They have developed their own "secure by default" frameworks or adopted existing ones (and it is a good idea for everyone else to do the same). These frameworks are nothing grand - they achieve simple and important outcomes: provide automatic output encoding against XSS, automatically assign CSRF tokens, and so on. Remember the code monitoring scripts earlier? They trigger a security review if any of these security frameworks are disabled or opted out of on a specific page.

Security headers

Headers such as
  • X-Content-Type-Options
  • X-Xss-Protection
  • CSP headers 
have gained popularity. They require care to implement, and here the approaches differ; see the original presos.

A nice touch is deploying CSP policies in monitoring mode, without blocking anything, and analysing the resulting alerts (slide 45 in the Facebook deck). Applying CSP in blocking mode to a large existing app is a huge task and is unlikely to gain traction with your developers. The CSP candidate spec says:
Content Security Policy (CSP) is not intended as a first line of defense against content injection vulnerabilities. Instead, CSP is best used as defense-in-depth, to reduce the harm caused by content injection attacks.
There is often a non-trivial amount of work required to apply CSP to an existing web application. To reap the greatest benefit, authors will need to move all inline script and style out-of-line, for example into external scripts, because the user agent cannot determine whether an inline script was injected by an attacker.
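A report-only deployment is just a different header name. A minimal, purely illustrative policy looks like this (the report-uri path is made up - point it at whatever endpoint collects your violation reports):

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-violation-report
```

The browser applies nothing and blocks nothing; it only POSTs a JSON report to the given URI for every would-be violation, which is exactly the data you want to graph before flipping to enforcing mode.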

Graphing stuff

There are many things that can be graphed, or at least thresholded, with security benefits - CSP alerts, increases in traffic containing HTML, the number of failed login attempts, ...

Summary

All of the measures in this post help the security team and let it deliver the most bang for the buck. My next post will be on how to use similar tools for security "evangelism" (I try to avoid this word, it is misleading), or "getting those developers to not release vulnerable software".
   
   

Tuesday, 20 November 2012

Auditing Java code, or IntelliJ IDEA as poor man's Fortify

This Twitter exchange got me to look at IDEA. Below are a few tips and config files that can get you started using IDEA for manual audits. IMHO the results will be close to what you get from a commercial code scanner, albeit slightly slower :)

IntelliJ features useful for code auditing

  1. "Analyze" ->"Data flow to here" or "Data flow from here" https://www.jetbrains.com/idea/webhelp/analyzing-data-flow.html
  2. "Analyze" -> "Inspect code"(next section below)
  3. All kinds of source navigation - to definition, to usages etc. Shortcuts below.

Custom code inspections

A good starting point for the basic concepts of code audits is Chapter 19 of "The Web Application Hacker's Handbook", 2nd edition. You obviously have to have a clue what this security thing is all about; this post is not an audit tutorial.

I've created some custom inspections for IDEA, as its original "Security" set is quite arbitrary and, if it targets anything, it is applets, not web apps. My inspection policies are based on the WAHH2E book and are geared towards identifying user input, dodgy file operations and so on.

You can get the policies from https://github.com/agelastic/intellij-code-audit

Installing the inspection policies

Open Options / Inspections. Import the policy you want to use (the full policy may produce a lot of results on very large projects - e.g. full Confluence produces about 2000 results - so I made partial policies as well) and check "Share Profile", or you won't be able to use the policy.

Configuring custom code inspections in IDEA
Each of my inspection configs contains a single enabled item - "General"/"Structural Search Inspection" with a number of templates.

TIP: For some reason IntelliJ sometimes ignores the imported policy, and as a result you will have no findings. What seems to work is to scan the code with any built-in policy first, for example "Default", and then run the security one. If you do not see the "Inspecting code..." progress window, the analysis did not happen.

The templates will find points in the code that are interesting to a security auditor - HTTP parameters, file access, sessions, SQL queries, etc. Then you can use data flow analysis (point 1 in the list above) or simply navigate through the source (below).

Running an analysis

Open "Analyze" / "Inspect code", select the policy, scope, etc., run it, and check the results. There are various ways of marking things as reviewed - see the "suppress" options in the results pane. They are the same as for any other alerts produced by the code inspection engine.

It may be useful (depending on the kind of finding) to investigate the data flow to and from the found input/output point.

IntelliJ's docs for this feature, which in turn uses the very powerful Structural Search & Replace engine, are at https://www.jetbrains.com/idea/webhelp/creating-own-inspections.html

Code navigation shortcuts

These are collected from Stack Overflow posts. They are for Mac OS; Windows shortcuts usually have Ctrl instead of Cmd.

Cmd + Shift + A - opens a window where you can search for GUI commands
Cmd + Alt + Left to get back, Cmd + Alt + Right  to “go forward”
Cmd + B - Go to declaration
Cmd + Alt + B - Go to implementation
Cmd + U - Go to super-method/super-class
Cmd + Alt + F7 - Show usages
Cmd + N - Go to class
Cmd + P - Parameter info
Cmd + F - Find, F3/Shift + F3 - Find next/previous, Ctrl + Shift + F - Find in path
F2/Shift + F2 - Next/previous highlighted error
Cmd + E - recently used files
Ctrl + Shift + Q - Context info
Ctrl + J - Quick documentation

Learning shortcuts

There is a great plugin that will teach you shortcuts fast - Key Promoter. Any time you do something with menus, it will show you what shortcut you could use to achieve the same effect.

Bonus: Code Reviewing Web App Framework Based Applications

Tuesday, 13 November 2012

Thought distortions, or why some of my infosec friends are alcoholics

@dinodaizovi recently quipped that the infosec industry is a hybrid of "Mensa and a mental hospital"; these are related thoughts.

You all know one or, more likely, many "security consultants" who tell others that in order to improve the security of $system they must do A and B, otherwise imminent failure will occur. Then these consultants go around upset that their advice is not followed; they perceive the situation as a personal failure, and end up "burning out"...

Below is a list of cognitive distortions that, according to some theories in psychology, lead to the perpetuation of a number of psychological conditions, including depression and alcoholism. I think I got it from an iPhone app called "MoodKit" (by the way, try it). Have a think - aren't most of these associated with "security consultants", especially internal consultants, in the eyes of their customers?

Common Thought Distortions

All-or-Nothing Thinking
Seeing people or events in absolute (black-or-white) terms, without recognizing the middle ground (e.g., success/failure; perfect/worthless).
"Without perfect security there is no security"

Blaming 
Blaming yourself or others too much. Focusing on who is to blame for problems rather than what you can do about them.
"These people just do not want to understand the importance of security!"
Catastrophizing 
Blowing things out of proportion, telling yourself that you won’t be able to handle something, or viewing tough situations as if they will never end.
"Ehrmergerd, these people just hate me, I will never be able to do anything to improve security here"
Downplaying Positives 
Minimizing or dismissing positive qualities, achievements, or behaviors by telling yourself that they are unimportant or do not count.
"Well, we got these vulns fixed, but there are soooo many more, probably!"
Emotional Reasoning 
Believing something is true because it “feels” true. Relying too much on your feelings to guide decisions.
"I have a gut feeling the attackers are out to get us!"
Fortune Telling 
Making negative predictions about the future, such as how people will behave or how events will play out.
"The company data will be breached in the most harmful way"
Intolerance of Uncertainty 
Struggling to accept or tolerate things being uncertain or unknown (e.g., repeatedly wondering “what if?” something bad happens).
"What if a firewall is misconfigured? What if there is a new RCE in Struts tomorrow?..."
Labeling 
Describing yourself or others using global, negative labels (e.g., making judgments about one’s character or name calling).
"These lazy developers just do not care!"
Mind Reading 
Jumping to conclusions about another person’s thoughts, feelings, or intentions without checking them out.
"I know they are not interested in fixing this stuff"
Negative Filtering 
Focusing only on the negatives and ignoring the positives in a situation, such that you fail to see the “big picture.”
Ok I give up with examples - the list is getting somewhat repetitive, but you get the drift...
Not Accepting 
Dwelling on an unpleasant situation or wishing things were different, instead of accepting what has happened and finding ways to move forward.

Overgeneralizing 
Drawing sweeping conclusions on the basis of a single incident, such as when we say people or things are “always” or “never” a certain way.

Personalizing 
Telling yourself that events relate to you when they may not.

“Should” and “Must” Statements 
Focusing on how things or people “should” or “must” be. Treating your own standards or preferences as rules that everyone must live by.
Who hasn't done that??? :)
One additional thought: the above mindset is occasionally perpetuated by infosec vendors. Send them your therapist's invoice...