Don't expect this to be a perfect summary; it's more a snapshot of things I thought were worth remembering at the time. So, minimal formatting and no fancy sentences :)
Oh and I asked @mdowd if he would be terribly upset about me posting this, he said no.
Ch 1 - Software vulnerability fundamentals
Augment source audit with black-box testing for best results
Design / implementation / operational vulnerabilities
- Input and data flow. Tracing input is one of the main tasks
- Trust relationships (between components on an interface). Often the transitivity of trust plays an important role
- Assumptions and misplaced trust. Example - assumptions on the structure of input by developers
- Interfaces. Misplaced trust e.g. in cases when the interface is more exposed to external access than developers think
- Environmental attacks. OS, hardware, networks, file system etc
- Exceptional conditions
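A minimal sketch of the "assumptions on the structure of input" point above: a parser that reads a length-prefixed message. The protocol framing, `MAX_PAYLOAD` limit, and function name are all invented for illustration; the point is that the classic bug is trusting the attacker-controlled length field.

```python
import struct

MAX_PAYLOAD = 4096  # hypothetical protocol limit

def parse_message(data: bytes) -> bytes:
    """Parse a [4-byte BE length][payload] message without trusting the header."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (claimed_len,) = struct.unpack(">I", data[:4])
    # The classic mistake is to trust claimed_len blindly (huge allocations,
    # out-of-bounds reads in C). Validate it against what actually arrived.
    if claimed_len > MAX_PAYLOAD:
        raise ValueError("length exceeds protocol limit")
    if claimed_len != len(data) - 4:
        raise ValueError("length field disagrees with actual payload size")
    return data[4:4 + claimed_len]
```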
Issues in algorithms. Errors in business logic or in the key algorithms
Abstraction and decomposition
Simple and complex trust relationships. Chain of trust
Strong coupling: look out for any strong intermodule coupling across trust boundaries. Usually data validation issues and too much trust b/w modules
Strong cohesion: pay special attention to designs that address multiple trust domains within a single module. Modules should be decomposed along trust boundaries.
Automountd + rpc.statd - transitive trust
Untrustworthy credentials, e.g verified on a client
Insufficient validation, e.g. programmatic authN between systems where only the client or only the server is authenticated
Often it is possible to authN as a low-priv user but access higher-privilege info belonging to other users
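A toy sketch of that last point (horizontal privilege escalation): authentication succeeds, but nothing ties the session to the record being requested. The data store and names are hypothetical.

```python
# Hypothetical per-user data store
RECORDS = {"alice": "alice's payroll", "bob": "bob's payroll"}

def get_record_vulnerable(session_user: str, requested_user: str) -> str:
    # Caller is authenticated, but there is no authorisation check:
    # any logged-in user can read any other user's record.
    return RECORDS[requested_user]

def get_record_fixed(session_user: str, requested_user: str) -> str:
    # Bind the authenticated identity to the resource being accessed.
    if session_user != requested_user:
        raise PermissionError("not authorised for this record")
    return RECORDS[requested_user]
```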
Insecure authorities - inconsistent logic or leaves room for abuse
CBC (cipher block chaining) is the only good mode for block ciphers
CTR (counter) mode is the best for stream ciphers. IV (init vector) reuse can lead to trouble
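To see why IV/nonce reuse in CTR mode is trouble: with the same key and nonce, two ciphertexts XOR together into the XOR of the plaintexts, with the key cancelled out. The hash-based keystream below is a toy stand-in for a real cipher (not AES-CTR), just to demonstrate the reuse property.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy CTR-style keystream: hash(key || nonce || counter) blocks.
    # A stand-in for a real cipher, only to show the nonce-reuse property.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"same-nonce"
p1, p2 = b"attack at dawn", b"defend at dusk"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))
# With a reused nonce, c1 XOR c2 == p1 XOR p2: the keystream drops out entirely.
assert xor(c1, c2) == xor(p1, p2)
```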
Key exchange can be subject to MITM
Storing more sensitive data than needed, or for longer period. Lack of encryption, obsolete algorithms, data obfuscation instead of encryption. Hash issues
“Bait-and-Switch” attacks, or hash collision.
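Hash collisions in miniature: with a hypothetical "weak" digest (SHA-256 truncated to 16 bits, purely for demonstration), a birthday-style brute force quickly finds two distinct documents with the same digest, which is exactly what a bait-and-switch swap relies on.

```python
import hashlib

def weak_digest(data: bytes) -> bytes:
    # Hypothetical weak hash: SHA-256 truncated to 16 bits, so collisions
    # are findable by brute force in a few hundred tries on average.
    return hashlib.sha256(data).digest()[:2]

def find_collision() -> tuple:
    """Birthday search: remember digests until two inputs collide."""
    seen = {}
    i = 0
    while True:
        msg = b"doc-%d" % i
        d = weak_digest(msg)
        if d in seen and seen[d] != msg:
            return seen[d], msg
        seen[d] = msg
        i += 1
```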
Prioritising implementation review based on threat modelling
Access control issues
Network profile exposure (unnecessary services)
- HTTP request methods.
- Directory indexing.
- File handlers (server-side) leading to source disclosure. Uploads.
- Misconfiguration of external auth.
- Default site installed.
- Verbose error messages.
- Admin interface is public facing.
- Non-executable stack
- Stack protection (canaries)
- Heap protection
- Registration of function pointers (wrapping in a check for unauthorised modification)
- Virtual machines (only prevent low-level vulnerabilities)
- Object (e.g. memory) and file system permissions. Permissions are difficult, can be screwed up
- Restricted accounts. Details of access are important
- Chroot jails (more effective when combined with a restricted account). Does not limit network access
- System virtualisation
- Kernel protections. System call gateway as a natural trust boundary. Additional checks by kernel modules, e.g. SELinux
- Host based firewalls
- Anti-malware applications
- File and object change monitors. Reactive in nature
- Host-based IDS/IPS
- Network segmentation, on all levels of OSI
- Network IDS/IPS
Some businesses focus on easy-to-detect issues (even low-risk ones) rather than subtle complex ones, for fear of someone else finding and publishing them
A good reviewer can do 100 to 1000 LoC an hour, depending on code. 100 kLoC takes less than double the time of 50 kLoC
Information gathering: dev interviews, dev docs, standards, source profiling, system profiling
Design review is not always the best place to begin - e.g. when there are no docs
Initial phase, then iterate the process 2-3 times a day: plan-work-reflect
Initial preparation: top-down (good only when the design docs are good, which is rare); bottom-up (could end up reviewing a lot of irrelevant code) or hybrid
- General application purpose
- Assets and entry points
- Components and modules
- Intermodule relationships
- Fundamental security expectations
- Major trust boundaries
Work: keep good notes; don’t fall down rabbit holes (can be waste of time); take breaks
Reflect: status check; re-evaluate; peer review
It is more effective to review functions in isolation and trace the code flow only when absolutely necessary (!)
Forward- and back-tracing. Back-tracing usually starts from candidate points
Back-tracing examines fewer code flows - easier to do; but misses logic problems or anything not covered by candidate points
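A crude sketch of generating lexical candidate points to back-trace from. The `RISKY_CALLS` list is an arbitrary sample, not a complete audit list, and a real tool would be far smarter than a line regex:

```python
import re

# Hypothetical sample of APIs worth back-tracing to user input;
# a real candidate-point list would be much longer and target-specific.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets|system)\s*\(")

def lexical_candidate_points(source: str) -> list:
    """Return (line number, line text) pairs flagged as candidate points."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if RISKY_CALLS.search(line):
            hits.append((lineno, line.strip()))
    return hits

code = """
int f(char *in) {
    char buf[16];
    strcpy(buf, in);   /* candidate point */
    return 0;
}
"""
```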
- Trace malicious input - difficult in OO code, especially poorly designed. In these cases, do some module or class review to understand the design. “5 or 6 code files before the system manages to do anything with the input”
- Analyse a module - reading code file line by line, not tracing or drilling-down. Very popular among experienced reviewers. Especially good for framework and glue code. Easy to go off-track
- Analyse an algorithm. Less likely to go off-track. Focus on pervasive and security critical algorithms
- Analyse a class or object. Study interface and implementation of an important object. Good for OO code (obviously). Less likely to go off-track than analysing a module
- Trace black box hits. Fuzz then investigate crashes. Check Shellcoder’s Handbook - Fault Injection chapter
- General approach. Trace from potential vulnerabilities to user input.
- Automated source code analysis tool. Similar. Limited to a set of potentially vulnerable idioms
- Simple lexical candidate points
- Simple binary candidate points
- Black-box generated candidate points. Mostly crash analysis. Microsoft’s gflags is useful for heap overflows - enables “page heap” checks in the debugged process. LD_PRELOAD in Linux is useful. Corruption can happen in a buffer or an array (or heap)
- Application-specific candidate points. Similarities to previously found vulnerable patterns
- Model the system. Detailed modelling for security critical components
- Hypothesis testing. Guess an abstraction and then test the validity of this guess
- Deriving purpose and function. Somewhat similar; pick key programmatic elements and summarise them
- Design conformity check. Look at the “grey areas” and common code paths. Look for discrepancies b/w spec and implementation
- Error-checking branches - code paths that are followed when validity checks result in an error. Do not dismiss them
- Pathological code paths - functions with many small and nonterminating branches - branches that don’t result in abrupt termination of the current function. Exponential explosion of similar code paths
Re-reading code. At least 2 passes
Desk-checking (symbolic execution)
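A small desk-checking aid: simulate C-style signed 16-bit arithmetic in Python to walk a boundary case by hand. The `short total = len1 + len2` scenario is invented for illustration, not from the book.

```python
def to_int16(x: int) -> int:
    """Wrap a Python int to C-style signed 16-bit, for desk-checking."""
    x &= 0xFFFF
    return x - 0x10000 if x >= 0x8000 else x

# Desk-check: suppose C code computes `short total = len1 + len2;`
# then allocates `total` bytes. Walk the boundary case on paper:
len1, len2 = 0x7000, 0x7000          # both individually "reasonable"
total = to_int16(len1 + len2)
assert total == -8192                # the sum wrapped negative: a bug candidate
```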
Test cases. Be wary of input data from other modules. Don’t assume the same level of danger as external input, but be a bit suspicious about it. Boundary cases
Debuggers: gdb, OllyDbg, SoftICE (yeah right), (Immunity Debugger)
Binary navigation: IDA Pro, BinNavi
Fuzzing: SPIKE, (Sulley)