Security Models (cont)
Info flow analysis: Dorothy Denning model
In BLP
- covert channels possible: HL "spy" creates object, LL tries reading
- *-property too strong: reading from HL while writing LL may not cause information to actually flow
Improvement: analyse the actual and indirect information flows.
- <= is reflexive, antisymmetric, transitive (partial order)
- all pairs (x,y) have Greatest Lower Bound (glb) and Least Upper Bound (lub):
- GLB: exists z such that z<=x and z<=y, and for all k s.t. k<=x and k<=y, k<=z (i.e. z is the greatest lower bound)
- GLB(x,y) = z
- LUB: exists z such that x<=z and y<=z, and for all k s.t. x<=k and y<=k, z<=k (i.e. z is the least upper bound)
- LUB(x,y) = z
(Note: level systems can be combined by combining their orderings (cf. Gollmann 4.7.2). Non-lattice structures can be "expanded" into lattices.)
(Note: below, "GLB","LUB","<=" always uses security labels of objects, not objects themselves!)
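As a concrete example, sets of categories ordered by subset inclusion form such a lattice, with union as LUB and intersection as GLB. A minimal sketch (the category names are made up):

```python
# Sketch of a lattice of security labels, assuming labels are sets of
# categories (a powerset lattice); the category names are invented.
from itertools import combinations

def leq(a, b):          # a <= b : information may flow from a to b
    return a <= b       # subset order on category sets

def lub(a, b):          # least upper bound: union of categories
    return a | b

def glb(a, b):          # greatest lower bound: intersection
    return a & b

labels = [frozenset(s) for n in range(3)
          for s in combinations(["crypto", "nuclear"], n)]

# every pair has a GLB/LUB inside the lattice that bounds it correctly
for a in labels:
    for b in labels:
        assert leq(glb(a, b), a) and leq(glb(a, b), b)
        assert leq(a, lub(a, b)) and leq(b, lub(a, b))
```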
An information flow from x to y is authorised (permitted) only if lvl(x) <= lvl(y)
- (low-level info may flow to same or higher level, but not to lower or unrelated classes)
- Explicit flow: (xi --> y)
- y := f(x1, ..., xi, ..., xn)
- permitted if LUB(x1,...,xn) <= y
- intuition: the "max level" of xi is <= y
- Implicit flow: (x --> yi)
- if p(x) then y1 := ...; yn := ...; end
- permitted if x <= GLB(y1,...,yn)
- intuition: the "min level" of yi is >= x
An explicit flow is caused by an assignment (or similar, e.g. function params --> values)
an implicit flow is caused by a conditional (or similar, e.g. boolean expressions).
Lattice (LUB/GLB existence) makes verification of flows easier: instead of comparing each xi with y (or each yi with x), pick GLB/LUB (at compile- or runtime) and compare.
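The two checks can be sketched for the simplest lattice, a totally ordered chain of levels (so LUB = max and GLB = min); the level names are ours:

```python
# Minimal sketch of the explicit/implicit flow checks, assuming a
# totally ordered chain of levels LOW < MID < HIGH (so lub = max,
# glb = min); level names are invented for illustration.
LOW, MID, HIGH = 0, 1, 2

def explicit_ok(rhs_levels, y_level):
    # y := f(x1,...,xn) is permitted iff LUB(x1,...,xn) <= y
    return max(rhs_levels) <= y_level

def implicit_ok(guard_level, assigned_levels):
    # if p(x) then y1 := ...; yn := ... is permitted iff x <= GLB(y1,...,yn)
    return guard_level <= min(assigned_levels)

# y := x1 + x2 with x1 LOW, x2 MID, y HIGH: permitted
assert explicit_ok([LOW, MID], HIGH)
# if x then y := 0 with x HIGH, y LOW: forbidden implicit flow
assert not implicit_ok(HIGH, [LOW])
```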
1.2.1. Transitivity helps
A sequence of flows from x to y
- x = z0 --> z1 --> ... --> zn = y
is permitted
- if each individual flow z(i-1) --> zi is permitted,
- since this means z(i-1) <= zi
- and <= is transitive, and thus
- x <= y
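The transitivity argument can be checked mechanically; a sketch with integer levels:

```python
# Sketch: a chain of individually permitted flows implies a permitted
# end-to-end flow (levels are plain integers here for illustration).
def chain_ok(levels):
    # levels of z0 ... zn along the sequence x = z0 --> z1 --> ... --> zn = y
    return all(a <= b for a, b in zip(levels, levels[1:]))

lvls = [0, 1, 1, 2]           # x at level 0, y at level 2
assert chain_ok(lvls)         # each step z(i-1) <= zi holds
assert lvls[0] <= lvls[-1]    # hence x <= y, by transitivity of <=
```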
1.2.2. Examples
z := x; y := z
The flow x --> y is allowed if individual flows x --> z and z --> y are allowed.
z := 0; if (x == 1) then z := 1; y := z
where x is initially 0 or 1. Which flows exist?
- implicit: x --> z
- explicit: z --> y
- transitivity of flow: indirect flow x --> y
y := x1 + x2 * x3 (explicit flows xi --> y; permitted if LUB(x1,x2,x3) <= y)
if x then y1 := 0; y2 := 17; y3 := 42 end (implicit flows x --> yi; permitted if x <= GLB(y1,y2,y3))
Constants typically have the bottom element of the lattice as their level (again by policy). Thus
x := 1; x := x + 1; x := "Yow!"
are all allowed.
How much information flows?
Information flows from x to y by program p
- if after executing p, the value of y gives more information about the value of x
Example:
z := x; y := z
- Indirect flow x --> y,
- direct flow x --> z,
- but no ("global") flow z --> y, since y doesn't tell anything about the initial value of z.
Mathematically: the flow from x to y is calculated using the uncertainty of the value of a variable x given y (its equivocation or conditional entropy).
Example:
y := 0; if odd(x) then y := 1
Implicit flow, but how much information flows from x to y? 1 bit.
Example:
y := 0; if x == 1 then y := x - 1
No information flows from x to y (y ends up 0 whether or not x == 1).
Example:
z := x + y
where x and y are 4-bit values 0..15, with equal probability. The value of z after assignment doesn't necessarily tell the initial values of x and y, but gives some information: information flow < 4 bits.
Example
z := x xor y
the value of z tells nothing about x or y (if they have equal "size" and all values of y are equally probable).
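The amount of flow in these examples can be measured as the mutual information between x and the program's output. A rough sketch, assuming uniform independent 4-bit inputs (the helper function is ours, not from the text):

```python
# Rough sketch: measure how much information flows from x to the output
# as the mutual information I(x; output), assuming all inputs are
# uniform and independent; mutual_information is an invented helper.
from collections import Counter
from math import log2

def mutual_information(pairs):
    # pairs: equiprobable (x, output) samples; I = sum p(x,z) log p(x,z)/(p(x)p(z))
    n = len(pairs)
    px, pz, pxz = Counter(), Counter(), Counter(pairs)
    for x, z in pairs:
        px[x] += 1
        pz[z] += 1
    return sum((c / n) * log2(c * n / (px[x] * pz[z]))
               for (x, z), c in pxz.items())

sum_flow = mutual_information([(x, x + y)
                               for x in range(16) for y in range(16)])
xor_flow = mutual_information([(x, x ^ y)
                               for x in range(16) for y in range(16)])
odd_flow = mutual_information([(x, int(x % 2 == 1)) for x in range(16)])
# z := x + y leaks some but fewer than 4 bits; z := x xor y leaks
# nothing; y := odd(x) leaks exactly 1 bit
```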
Cf. unconditionally secure crypto algorithms such as one-time-pad: no additional information about the plaintext is gained by having the ciphertext. For one-time-pad, XOR the plaintext with a fresh random bitstring with same length (problem: key transfer). If same key is used twice, not unconditionally secure since XOR of plaintexts can be obtained, and they are not random.
Requires program analysis! (e.g. partial evaluation, type systems).
Chinese Wall Model
Intended for consultancy business handling many clients, but also court cases, stock exchanges, etc.
Basic policy: there must be no information flow that causes a conflict of interest.
Extend BLP:
- S: subjects (consultants)
- O: objects (information)
- C: set of companies (clients)
- Information belongs to client companies: company dataset y(o) is the company of o
- define conflict-of-interest classes: x(o) is the set of companies that should not learn the contents of o
- (which companies are in competition with (the owner of) o)
- security label: pair (x(o), y(o))
- sanitised info contains no sensitive details: x(o) is empty
- history of accessed company datasets also generates conflicts
- => access rights change with accesses (diff. from BLP)
- N(s,o) = true iff subject s has had access to object o
Initially N(s,o) is false everywhere.
Control direct information flow, only allow s to access o if
- for all objects o1 such that N(s,o1)=true (all previously accessed objects),
- y(o)=y(o1)
- (o belongs to a company dataset which s has already accessed)
- or y(o) not in x(o1)
- o is not in conflict of interest with an object previously accessed
Prevent indirect information flows: only allow write access to an object if no other object can be read which is in a different company dataset and contains unsanitized information.
- A subject s is granted write access to an object o only if s has no read access to an object o1 such that
- y(o) != y(o1)
- (different company)
- and x(o1) is not empty
- (o1 is not sanitised)
ss-property: You can't access information which has a conflict of interest with something you have read from a different company's information
- *-property: if you can read unsanitized objects, you cannot write things to a different company
As soon as you read unsanitized objects, a (Chinese) wall is built around you: you cannot write to other companies (*-prop) and you cannot read conflicting objects (ss-prop).
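The two rules can be sketched for a single subject; the company names, conflict classes, and class structure below are invented:

```python
# Toy sketch of the Chinese Wall rules for one subject. Each object is
# identified here by its company y(o) and conflict class x(o) (the set
# of competing companies; empty = sanitised). Names are invented.
class Consultant:
    def __init__(self):
        self.accessed = []                    # history: N(s, o) = true

    def can_read(self, company, conflict):
        # ss-property: same dataset, or no conflict with anything read before
        return all(company == c1 or company not in x1
                   for c1, x1 in self.accessed)

    def read(self, company, conflict):
        assert self.can_read(company, conflict)
        self.accessed.append((company, frozenset(conflict)))

    def can_write(self, company):
        # *-property: forbidden if s has read unsanitised data
        # belonging to any other company
        return all(c1 == company or not x1 for c1, x1 in self.accessed)

banks = {"BankA", "BankB"}                    # a conflict-of-interest class
s = Consultant()
s.read("BankA", banks)                        # first access: always allowed
assert not s.can_read("BankB", banks)         # competitor: wall blocks read
assert s.can_read("OilCo", {"OilCo2"})        # unrelated company is fine
assert not s.can_write("OilCo")               # BankA data could leak
```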
(Note that Gollmann's example is a little misleading, as the first write is not necessarily prevented.)
Clark-Wilson
"Real-world" model/framework: handles more of real systems, much more complex.
Higher-level framework/guidelines/"model" dealing with data integrity:
- internal consistency (system can enforce)
- external consistency (real-world related, need humans/audit logs/validation)
Access operations are programs (complex, application-specific).
- Objects can only be accessed/manipulated by a restricted set of programs (using well-formed transactions), not directly
- cf. abstract data types/object oriented programming: only accessors/methods etc.
- Separation of duties: more than one user (may be) needed to perform sensitive operations
- e.g. develop, test, certify, operate the system; open accounts and transfer money between accounts
- Subjects must be authenticated, audit log must be maintained, the system must be certified
Data items:
- unconstrained data items (UDI): input to system
- constrained data items (CDI): consistent, well-formed
- transformation procedures (TP): only way to manipulate CDI, can convert UDI to CDI
- integrity verification procedures (IVP): checks integrity of a state
Certification rules regulate IVPs and TPs.
Enforcement rules regulate mechanisms.
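A very reduced sketch of the CDI/TP/IVP machinery (the class layout, names, and the toy integrity condition are ours; real Clark-Wilson systems are far more elaborate):

```python
# Very reduced Clark-Wilson sketch: CDIs may only be changed through
# certified TPs, and an IVP checks state integrity. All names and the
# integrity condition (non-negative balances) are invented.
class System:
    def __init__(self):
        self.cdis = {}                 # name -> value (constrained data items)
        self.certified = {}            # TP name -> function

    def certify_tp(self, name, fn):    # certification rule (done out-of-band)
        self.certified[name] = fn

    def run_tp(self, name, cdi):       # enforcement: only certified TPs run
        if name not in self.certified:
            raise PermissionError("uncertified procedure")
        self.cdis[cdi] = self.certified[name](self.cdis.get(cdi))

    def ivp(self):                     # integrity verification procedure
        return all(v is not None and v >= 0 for v in self.cdis.values())

bank = System()
bank.certify_tp("deposit", lambda bal: (bal or 0) + 100)
bank.run_tp("deposit", "account1")     # well-formed transaction on a CDI
assert bank.ivp() and bank.cdis["account1"] == 100
```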
Design principles for mechanisms
Examples of standard/good principles:
- Least privilege (need-to-know)
- each subject should have the least rights necessary. Use small protection domains, switch when necessary. No implicit rights.
- Economy of mechanism
- small and simple design gives easier verification and implementation (at low level)
- Complete mediation
- every access should be checked, efficiently. Otherwise circumvention will be tried.
- Open design
- no security by obscurity (cf. crypto algorithms)
- Separation of privilege
- when possible, access should depend on more than one condition being satisfied.
- E.g. more than one subject (user) needed to perform access (modification)
- Least common mechanism
- minimise mechanisms shared between users to hinder information flow (e.g. file protection)
- Psychological acceptability
- mechanisms should be easy to use: not much more trouble than to leave unrestricted access