Reference monitors
When a subject requests access to an object, the reference monitor checks whether to permit the access. The reference monitor is an abstraction:
- Not necessarily a single piece of code/module
- In practice, a collection of access controls for different types of objects (memory, files...)
The reference monitor must be
- tamperproof - an attacker should not be able to change it
- invoked at all accesses - complete mediation
- small - in order to allow thorough analysis and testing
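As an illustration, the reference monitor can be thought of as an interface like the following minimal C sketch (all names here are hypothetical, not any particular OS API); in practice it is a collection of such checks, one per object type, rather than a single function.

    #include <stdbool.h>

    /* Minimal sketch of the reference monitor as an interface: every
       access request is reduced to (subject, object, operation) and
       answered with allow/deny. Names are hypothetical. */
    typedef int subject_t;                       /* e.g. user/process id */
    typedef int object_t;                        /* e.g. file/segment id */
    typedef enum { OP_READ, OP_WRITE, OP_EXEC } op_t;

    bool ref_monitor(subject_t s, object_t o, op_t op) {
        /* look up (s, o, op) in the policy; here a placeholder */
        (void)s; (void)o; (void)op;
        return false;         /* fail closed: deny unless policy allows */
    }

Note that the default answer is deny: failing closed is what makes complete mediation meaningful.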
Not the only mechanism necessary for secure systems: also need e.g.
- identification/authentication,
- auditing
- configuration
As usual (Design decision 3)
- less complex systems can be more thoroughly analysed
and thus
- generic security at low levels can give higher assurance than specific/complex security at higher levels
Putting security mechanisms in the core of the system may also reduce the performance overheads caused by security.
Where to put the monitor
- Hardware (microprocessor)
- OS kernel (e.g. a hypervisor: emulates the host computer, making it possible to run several OSs, each in its own virtual machine, on one physical machine)
- OS (e.g. file access control - Unix/Windows later)
- Services (cf. Java/.NET/databases/distributed systems - later)
- Applications (later)
Where we can put the monitor depends (partly) on what information the reference monitor (RM) needs in order to make a decision (versus what information is available in a given layer), and on the "protection granularity" of the layer (cf. memory pages vs the data stored in a page).
But also (Design decision 5):
- protect from the "layer below"
which is another argument for putting it as low as feasible.
An execution monitor checks each execution step (at some level of abstraction), possibly taking the history of execution steps into account, while static analysis (e.g. type checking) can also analyse future steps - what may or will happen.
Examples:
- at each memory reference (execution step), check the permissions on the page being referenced
- at each file opening (execution step), check the file permissions (see the sketch after this list)
- at each program step, check whether a Chinese Wall should be built
- at each network connection made, check whether the program is executing within code trusted to make a connection
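The file-opening example could look roughly like the following C sketch; monitored_open and policy_allows are hypothetical names and the policy decision is a placeholder, but it shows the shape of an execution monitor: one mediation point that every access is forced through.

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Placeholder policy; a real monitor would consult the access
       control database for (subject, file, operation). */
    static bool policy_allows(const char *path, int flags) {
        (void)flags;
        return path[0] != '\0';        /* placeholder decision */
    }

    int monitored_open(const char *path, int flags) {
        if (!policy_allows(path, flags)) {
            fprintf(stderr, "monitor: denied open of %s\n", path);
            return -1;                 /* execution step refused */
        }
        return open(path, flags);      /* execution step permitted */
    }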
Integrity of monitor: controlled invocation
Users (untrusted) must be able to use protected resources, e.g. access memory and perform I/O, but only in a controlled way.
Rules:
- Users must not be able to modify the (operating) system.
- Users must be able to use the (operating) system.
- Users must not be able to misuse the (operating) system.
How to control the use of the OS?
Examples of control mechanisms:
- context switch: (typically) by
- traps/interrupts, using trap/interrupt vector (which needs protection)
- privileged instruction (e.g. RTE on MC68k)
- memory:
- MMU, page table, segment table, base/limit/bounds registers (need protection)
- need privilege (security level) to modify these
- Motorola 68k: 1 bit (supervisor/user mode)
- x86: 2 bits, ring protection (totally ordered levels), e.g.
- OS kernel
- rest of OS
- I/O drivers
- applications
- used directly in descriptors of memory segments
- code at a less privileged level cannot access more privileged levels except through specific entry points (gates) - see the sketch after this list
- Multics (GE-645):
- segmentation (also files are memory segments)
- security by ring protection (BLP implementation)
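Seen from user code, controlled invocation means that the only way into the kernel is a trap to an entry point fixed by the kernel. A minimal Linux sketch, using the standard syscall(2) wrapper to make the trap explicit:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* User code cannot jump to an arbitrary kernel address; the
           only way in is a trap instruction that transfers control to
           an entry point fixed by the trap/interrupt vector - a gate. */
        long pid = syscall(SYS_getpid);
        printf("entered the kernel through a gate: getpid() = %ld\n", pid);
        return 0;
    }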
(Logical) memory is an abstraction, no more "physical" than files are.
The MMU, page table, segment table, base/limit/bounds registers etc. must be protected from the user. They provide permission checking (r/w/x) and address separation (a process cannot access the memory of other processes, or write "outside" a segment).
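A toy model of these two checks (hypothetical C types; real hardware does this in the MMU, per page or segment descriptor):

    #include <stdbool.h>
    #include <stdint.h>

    enum { PERM_R = 1, PERM_W = 2, PERM_X = 4 };

    struct segment {
        uint32_t base;    /* start of the segment */
        uint32_t limit;   /* length of the segment */
        int      perms;   /* r/w/x bits */
    };

    /* Address separation: reject anything outside [base, base+limit).
       Permission checking: reject any access the r/w/x bits forbid. */
    bool mmu_check(const struct segment *seg, uint32_t addr, int access) {
        if (addr < seg->base || addr - seg->base >= seg->limit)
            return false;                        /* outside the segment */
        return (seg->perms & access) == access;  /* r/w/x check */
    }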
Still, problems remain:
Covert channels
Page faults can be used as covert channels. Place the password to be checked across a page boundary, with the second page unmapped, and observe whether the check rejects the guess before or after the page fault occurs: a fault occurring first means every byte on the first page matched. Protection: copy the whole password before checking it.
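A C sketch of the vulnerable check and the fix (hypothetical function names; the point is the order of fault versus comparison):

    #include <stdbool.h>
    #include <string.h>

    /* Vulnerable: compares byte by byte directly in the caller's
       buffer. If the guess straddles a page boundary onto an unmapped
       page, "fault before rejection" vs "rejection before fault"
       tells the attacker whether the first page's bytes matched. */
    bool check_password_leaky(const char *guess, const char *secret,
                              size_t n) {
        for (size_t i = 0; i < n; i++)
            if (guess[i] != secret[i])    /* may fault mid-comparison */
                return false;
        return true;
    }

    /* Protection from the notes: copy the whole guess first, so any
       page fault happens before any comparison and leaks nothing. */
    bool check_password_safe(const char *guess, const char *secret,
                             size_t n) {
        char copy[256];
        if (n > sizeof copy)
            return false;
        memcpy(copy, guess, n);      /* fault (if any) happens here */
        return memcmp(copy, secret, n) == 0;
    }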
Typing/tagging
A popular attack writes executable code onto the (data) stack and overwrites the return address so that the stack contents get executed. A popular protection is to enforce that the stack is non-executable.
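On a POSIX system the same "non-executable" tagging can be requested explicitly; a minimal sketch using mmap(2):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        /* Map a page that is readable and writable but NOT executable,
           like a non-executable stack. */
        unsigned char *buf = mmap(NULL, (size_t)page,
                                  PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        buf[0] = 0xC3;  /* an attacker can still write code bytes here
                           (x86 'ret')... */
        /* ...but jumping into the page would trap, since the MMU checks
           the page's x bit on every instruction fetch:
           ((void (*)(void))buf)();   would die with SIGSEGV */
        printf("page is rw but not x: executing it would trap\n");
        munmap(buf, (size_t)page);
        return 0;
    }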
Generalisation: tag each address with its type and allow only proper use, e.g. with respect to security policies.
Security kernel
The part of the OS kernel which implements the reference monitor:
- enforces security policy
- provides the security interfaces/mechanisms
Why "kernel"? [ring pictures: HW/OS kernel/OS/Appl, security spread out VS HW/SKernel/OS/Appl]
- Coverage (complete mediation): with a kernel, the rest of the OS can use the reference monitor
- Separation: easier to protect the security mechanisms from users and OS
- Unity: easier to track down errors/problems
- Modifiability: easier to make modifications, test them, and see/understand effects
- Compactness: with only security functions, smaller
- Verifiability: (thus) easier to analyse and verify
Note:
- complete mediation may degrade system performance (cf. ACL checking at every write)
- kernel may/will not contain all security functions
- e.g. user authentication:
- user interface
- user id lookup
- authentication data comparison
- authentication data updates
Trusted Computing Base
The TCB is "all parts which are necessary to enforce security", the parts of the system we depend on for security.
Intuition:
- the parts of the system which are not in the TCB can be implemented by evil crackers, and the system will still be secure
- our trust in the security of the system depends (only) on the TCB.
(Gollmann: "Trusted: the parts that can hurt you.")
Reformulate: which parts of the system does security enforcement depend on?
Examples:
- hardware: processor, memory, devices...
- processes: need to separate security-critical processes
- files: for access control database, identification/authentication, audit
- protected memory: necessary to protect reference monitor/TCB
- interprocess communication: for parts of TCB to communicate securely
Note: the TCB needs only "simple/primitive" files and IPC, not necessarily the fully-fledged versions.
Four basic interactions:
- process activation: context switch - hardware (registers, MMU etc) and software (userid etc)
- domain switching: e.g. login, "suid" programs (see the sketch after this list), system calls, "trusted code" in Java
- memory protection: monitor memory references
- I/O operations: protect I/O devices (and thus files etc)
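For domain switching, a "suid" program is the classic example. A minimal sketch, assuming the binary is installed setuid (not shown here): the process starts in the file owner's domain and later drops back to the caller's.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* In a setuid program, real uid = caller, effective uid =
           file owner: two domains in one process. */
        printf("real uid %d, effective uid %d\n",
               (int)getuid(), (int)geteuid());

        /* ... privileged work happens here, in the owner's domain ... */

        if (setuid(getuid()) != 0) {   /* switch back: give up the domain */
            perror("setuid");
            return 1;
        }
        printf("now running as uid %d\n", (int)getuid());
        return 0;
    }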
Separation
Often important to separate processes from each other
- physical (run on different hardware/locations)
- temporal (run secure jobs at different times)
- logical (separate address spaces or use virtual machines)
Cf. the design principle of Least common mechanism
- minimise mechanisms shared between users to hinder information flow