Monday 1 August 2016

TAOSSA Chapter 1-4 summary

I've recently dug up these old notes of mine - they are probably from around 2008. I was jotting down points from The Art of Software Security Assessment as I was going through the book. 
Don't expect this to be a perfect summary, rather a snapshot of things I thought were worth remembering at the time. So, minimal formatting and no fancy sentences :) 

Oh and I asked @mdowd if he would be terribly upset about me posting this, he said no.

Ch 1 - Software vulnerability fundamentals

Augment source audit with black-box testing for best results
Design / implementation / operational vulnerabilities

Common threads

  • Input and data flow. Tracing input is one of the main tasks
  • Trust relationships (between components on an interface). Often the transitivity of trust plays an important role
  • Assumptions and misplaced trust. Example - developers' assumptions about the structure of input (see the sketch after this list)
  • Interfaces. Misplaced trust e.g. in cases when the interface is more exposed to external access than developers think
  • Environmental attacks. OS, hardware, networks, file system etc
  • Exceptional conditions
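
A minimal C sketch of the input-structure assumption above (the header format and parse_header() are made up for illustration): the parser assumes a ':' is always present, so input without one makes strchr() return NULL and the next line misbehaves.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical header parser: assumes input always looks like "Name: value". */
    static void parse_header(const char *line)
    {
        const char *sep = strchr(line, ':');        /* assumption: ':' is always there */
        printf("name length: %td\n", sep - line);   /* undefined behaviour if it isn't */
    }

    int main(void)
    {
        parse_header("Host: example.com");  /* matches the developer's assumption */
        parse_header("no-colon-here");      /* violates it - likely crash */
        return 0;
    }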

Ch 2 - Design review

Something about software design fundamentals
Issues in algorithms. Errors in business logic or in the key algorithms
Abstraction and decomposition

Trust relationships

Trust boundaries
Simple and complex trust relationships. Chain of trust
Strong coupling: look out for any strong intermodule coupling across trust boundaries. Usually data validation issues and too much trust b/w modules
Strong cohesion: pay special attention to designs that address multiple trust domains within a single module. Modules should be decomposed along trust boundaries.

Design flaws examples

Shatter attacks in Windows - strong coupling
Automountd + rpc.statd - transitive trust

Authentication vulns

“Forgetting” to authenticate
Untrustworthy credentials, e.g. verified only on the client (sketch below)
Insufficient validation, e.g. in programmatic authN between systems, only the client or only the server is authenticated
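
A minimal sketch of the untrustworthy-credential case (protocol and names are hypothetical): the password is checked in the client, and the server only ever sees the client's claim, so a modified client can always claim success.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical client-side check: the server never sees the password,
     * only this string - a patched client can always return "AUTH OK". */
    static const char *client_login(const char *password)
    {
        if (strcmp(password, "secret") == 0)
            return "AUTH OK";
        return "AUTH FAIL";
    }

    int main(void)
    {
        /* The fix is for the server to verify the credential itself,
         * not to trust whatever the client sends here. */
        printf("client sends: %s\n", client_login("secret"));
        return 0;
    }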

Authorisation vulns

Web apps often do authZ checks “at the front door”, but the actual handler pages omit them
Often it is possible to authN as a low-priv user but access info belonging to other, higher-privilege users (toy sketch below)
Insecure authorities - authorisation logic that is inconsistent or leaves room for abuse
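
A toy C sketch of the missing-handler-check pattern (session/record structures are invented): the "front door" verifies authentication, but the record handler never checks ownership, so a low-priv user can read any record.

    #include <stdio.h>

    struct session { int user_id; int authenticated; };
    struct record  { int owner_id; const char *data; };

    static struct record records[] = {
        { 1, "user 1's invoice" },
        { 2, "user 2's invoice" },
    };

    /* "Front door": authentication only - no check that the record
     * being requested actually belongs to s->user_id. */
    static const char *get_record(const struct session *s, int record_id)
    {
        if (!s->authenticated)
            return "access denied";
        if (record_id < 0 || record_id >= 2)
            return "no such record";
        return records[record_id].data;   /* missing: records[record_id].owner_id == s->user_id */
    }

    int main(void)
    {
        struct session low_priv = { 1, 1 };         /* user 1, logged in */
        printf("%s\n", get_record(&low_priv, 1));   /* reads user 2's record */
        return 0;
    }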

Other vulns

Accountability - log injection breaks nonrepudiation

Confidentiality

All sorts of crypto problems
CBC (cipher block chaining) is the only good mode for block ciphers
CTR (counter) mode is the best for stream ciphers. IV (init vector) reuse can lead to trouble (sketch below)
Key exchange can be subject to MITM
Storing more sensitive data than needed, or for longer than needed. Lack of encryption, obsolete algorithms, data obfuscation instead of encryption. Hash issues
Salt values
“Bait-and-Switch” attacks, or hash collisions
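
A tiny C illustration of the IV/nonce reuse point: if two messages are encrypted with the same keystream (e.g. CTR mode with a repeated IV), XORing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts. The keystream below is a hard-coded stand-in, not output of a real cipher.

    #include <stdio.h>

    int main(void)
    {
        /* Stand-in for a keystream produced under a reused IV/nonce. */
        const unsigned char ks[8] = { 0x13, 0x37, 0xc0, 0xde, 0xba, 0xbe, 0xfa, 0xce };
        const unsigned char p1[8] = "ATTACK01";
        const unsigned char p2[8] = "RETREAT!";
        unsigned char c1[8], c2[8], leak[8];

        for (int i = 0; i < 8; i++) {
            c1[i]   = p1[i] ^ ks[i];    /* message 1 */
            c2[i]   = p2[i] ^ ks[i];    /* message 2, same keystream */
            leak[i] = c1[i] ^ c2[i];    /* equals p1[i] ^ p2[i]: keystream cancelled */
        }

        /* An attacker seeing only c1 and c2 now holds p1 XOR p2,
         * which crib-dragging can often turn into both plaintexts. */
        for (int i = 0; i < 8; i++)
            printf("%02x ", leak[i]);
        printf("\n");
        return 0;
    }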

Threat modelling

Microsoft stuff. DFDs (data flow diagrams)
Attack trees
DREAD ratings
Prioritising implementation review based on threat modelling

Ch 3 - Operational review

Insecure defaults - of the application and of the base platform or OS
Access control issues
Unnecessary services
Secure channels
Spoofing
Network profile exposure (unnecessary services)

Web-specific issues

  • HTTP request methods.
  • Directory indexing.
  • File handlers (server-side) leading to source disclosure. Uploads.
  • Misconfiguration of external auth.
  • Default site installed.
  • Verbose error messages.
  • Admin interface is public facing.

Development protective measures

  • Non-executable stack
  • Stack protection (canaries) - see the overflow sketch after this list
  • Heap protection
  • ASLR
  • Registration of function pointers (wrapping in a check for unauthorised modification)
  • Virtual machines (only prevent low-level vulnerabilities)
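
The bug class these measures target, as a hedged C sketch (a made-up helper, not from the book): strcpy() into a fixed stack buffer. A canary aims to catch the overwrite before the function returns, a non-executable stack stops the copied bytes from running as code, and ASLR makes useful addresses hard to guess - none of them remove the underlying bug.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical vulnerable helper: no bounds check on the copy. */
    static void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);              /* overflows if name is longer than 15 chars */
        printf("hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        /* With a long argv[1], data beyond buf (canary, saved registers,
         * return address) gets overwritten; the protections above change
         * how exploitable that is, not whether it happens. */
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }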

Host-based protective measures

  • Object (e.g. memory) and file system permissions. Permissions are difficult, can be screwed up
  • Restricted accounts. Details of access are important
  • Chroot jails (more effective when combined with a restricted account). Does not limit network access (see the sketch after this list)
  • System virtualisation
  • Kernel protections. System call gateway as a natural trust boundary. Additional checks by kernel modules, e.g. SELinux
  • Host based firewalls
  • Anti-malware applications
  • File and object change monitors. Reactive in nature
  • Host-based IDS/IPS
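
A minimal sketch of the chroot-plus-restricted-account combination (jail path and uid/gid are placeholders): chroot()/chdir() first, then drop the group and user IDs - those calls need root, and giving up root last means the process can no longer break out of the jail. As the note says, none of this restricts network access.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *jail = "/var/empty";    /* placeholder jail directory */
        uid_t uid = 65534;                  /* placeholder unprivileged uid/gid, e.g. nobody */
        gid_t gid = 65534;

        /* Still root here - required for chroot() and the ID changes. */
        if (chroot(jail) != 0 || chdir("/") != 0) {
            perror("chroot");
            return EXIT_FAILURE;
        }
        if (setgid(gid) != 0 || setuid(uid) != 0) {   /* group first, then user */
            perror("drop privileges");
            return EXIT_FAILURE;
        }

        /* Restricted account inside the jail from here on,
         * but sockets can still be opened - no network confinement. */
        printf("running as uid %d inside %s\n", (int)getuid(), jail);
        return 0;
    }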

Network-based protective measures

  • Network segmentation, at all layers of the OSI model
  • NAT
  • VPNs
  • Network IDS/IPS

Ch 4 - Application review process

Code review is a fundamentally creative process; it is also a skill, not strictly a knowledge problem
Some businesses focus on easy-to-detect issues (even low-risk ones), out of fear that someone else will find and publish them, rather than on subtle, complex ones
A good reviewer can do 100 to 1000 LoC an hour, depending on code. 100 kLoC takes less than double the time of 50 kLoC
Information gathering: dev interviews, dev docs, standards, source profiling, system profiling

Application review

Do not do a waterfall-like review. The time you’re best qualified to find more abstract design and logic vulnerabilities is toward the end of the review
Design review is not always the best place to begin - e.g. when there are no docs
Initial phase, then iterate the process 2-3 times a day: plan-work-reflect
Initial preparation: top-down (good only when the design docs are good, which is rare); bottom-up (could end up reviewing a lot of irrelevant code) or hybrid

Hybrid approach questions

  • General application purpose
  • Assets and entry points
  • Components and modules
  • Intermodule relationships
  • Fundamental security expectations
  • Major trust boundaries
Planning: consider goals (depending on the stage and level of understanding); pick the right strategy; create a master ideas list; pick a target/goal; coordinate
Work: keep good notes; don’t fall down rabbit holes (can be a waste of time); take breaks
Reflect: status check; re-evaluate; peer review

Code Navigation

Code flow navigation - control-flow sensitive or data-flow sensitive. Surprisingly, not used much.
It is more effective to review functions in isolation and trace the code flow only when absolutely necessary (!)
Forward- and back-tracing. Back-tracing usually starts from candidate points
Back-tracing examines fewer code flows - easier to do; but misses logic problems or anything not covered by candidate points
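
A small C example of back-tracing from a candidate point (packet format and names are invented): scanning for memcpy() flags the call below as a candidate point; tracing the length argument backwards shows it comes straight off the wire, with no check against the destination size.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical wire format: [1-byte length][payload ...] */
    struct msg { unsigned char name[32]; };

    static void parse_packet(struct msg *m, const unsigned char *pkt, size_t pkt_len)
    {
        if (pkt_len < 1)
            return;
        size_t name_len = pkt[0];       /* back-trace ends here: attacker-controlled */
        /* Candidate point: length never compared against sizeof(m->name)
         * or against pkt_len - 1. */
        memcpy(m->name, pkt + 1, name_len);
    }

    int main(void)
    {
        struct msg m;
        const unsigned char pkt[] = { 4, 'a', 'b', 'c', 'd' };
        parse_packet(&m, pkt, sizeof(pkt));     /* benign input for the demo */
        printf("parsed: %.4s\n", (const char *)m.name);
        return 0;
    }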

Code-auditing strategies

Code comprehension, candidate point, design generalisation

Code comprehension strategies

  • Trace malicious input - difficult in OO code, especially poorly designed. In these cases, do some module or class review to understand the design. “5 or 6 code files before the system manages to do anything with the input”
  • Analyse a module - reading code file line by line, not tracing or drilling-down. Very popular among experienced reviewers. Especially good for framework and glue code. Easy to go off-track
  • Analyse an algorithm. Less likely to go off-track. Focus on pervasive and security critical algorithms
  • Analyse a class or object. Study interface and implementation of an important object. Good for OO code (obviously). Less likely to go off-track than analysing a module
  • Trace black box hits. Fuzz then investigate crashes. Check Shellcoder’s Handbook - Fault Injection chapter

Candidate point strategies

  • General approach. Trace from potential vulnerabilities to user input.
  • Automated source code analysis tools. Similar to the general approach, but limited to a set of potentially vulnerable idioms
  • Simple lexical candidate points
  • Simple binary candidate points
  • Black-box generated candidate points. Mostly crash analysis. Microsoft’s gflags is useful for heap overflows - it enables page heap in the debugged process. LD_PRELOAD on Linux is useful for interposing instrumented library functions (see the sketch after this list). Corruption can happen in a buffer or an array, or on the heap
  • Application-specific candidate points. Similarities to previously found vulnerable patterns
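
As a rough illustration of the LD_PRELOAD point above, a library-interposition sketch using the standard dlsym(RTLD_NEXT) pattern (file and target names are placeholders; compilers often inline strcpy via builtins, so this will miss some calls): every strcpy() the target makes gets logged, giving a crude list of black-box candidate points to chase in the source.

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Build: gcc -shared -fPIC -o log_strcpy.so log_strcpy.c -ldl
     * Run:   LD_PRELOAD=./log_strcpy.so ./target */
    char *strcpy(char *dst, const char *src)
    {
        static char *(*real_strcpy)(char *, const char *);
        char msg[64];
        int n;

        if (!real_strcpy)
            real_strcpy = (char *(*)(char *, const char *))dlsym(RTLD_NEXT, "strcpy");

        /* write(2) rather than printf, to reduce the risk of re-entering libc */
        n = snprintf(msg, sizeof(msg), "strcpy of %zu bytes\n", strlen(src) + 1);
        if (n > 0)
            write(2, msg, (size_t)n);

        return real_strcpy(dst, src);
    }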

Design generalisation patterns

  • Model the system. Detailed modelling for security critical components
  • Hypothesis testing. Guess an abstraction and then test the validity of this guess
  • Deriving purpose and function. Somewhat similar; pick key programmatic elements and summarise them
  • Design conformity check. Look at the “grey areas” and common code paths. Look for discrepancies b/w spec and implementation

Code-auditing tactics

Internal flow analysis. Intra-procedural and intra-module analysis. Especially error-checking branches and pathological code paths
  • Error-checking branches - code paths that are followed when validity checks result in an error. Do not dismiss them
  • Pathological code paths - functions with many small and nonterminating branches - branches that don’t result in abrupt termination of the current function. Exponential explosion of similar code paths
Subsystem and dependency analysis
Re-reading code. At least 2 passes
Desk-checking (symbolic execution) - example below
Test cases. Be wary of input data from other modules. Don’t assume the same level of danger as external input, but be a bit suspicious about it. Boundary cases
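
A desk-checking example in C (the helper is hypothetical): stepping through the loop on paper with dst_len = 4 and src = "abcd" shows the terminator landing one byte past the buffer - exactly the kind of boundary case worth a test.

    #include <stdio.h>

    /* Hypothetical copy helper to desk-check.
     * With dst_len = 4, src = "abcd":
     *   i = 0..3 -> dst[0..3] written (i < dst_len still holds at i = 3)
     *   loop exits with i == 4
     *   dst[4] = '\0' -> one byte past a 4-byte buffer: off-by-one. */
    static void copy_str(char *dst, size_t dst_len, const char *src)
    {
        size_t i;
        for (i = 0; i < dst_len && src[i] != '\0'; i++)
            dst[i] = src[i];
        dst[i] = '\0';
    }

    int main(void)
    {
        char buf[8];
        copy_str(buf, sizeof(buf) - 1, "hi");   /* benign input for the demo */
        printf("%s\n", buf);
        return 0;
    }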

Code auditor’s toolbox

Code navigators: cscope, ctags, Source Navigator, CodeSurfer (slicing!), Understand (scripting)
Debuggers: gdb, OllyDbg, SoftICE (yeah right), (Immunity Debugger)
Binary navigation: IDA Pro, BinNavi
Fuzzing: SPIKE, (Sulley)