Department of Information Technology

"New" Access Control Paradigms

Closely related to Web applications, but not only to them. The situation is "new" compared to the traditional client/server situation, and compared to the local system situation:

  • local system: users are authenticated; the system protects itself against the users, and the users from each other
  • client/server: the system protects itself against client systems (which are often authenticated)

New situation:

  • scripting languages and applets embedded in data (web pages, email, etc.) blur the distinction between data and code
  • computation moves from the server to the client (along with the data): "mobile code"
  • so the client needs to protect itself against the server and against received data/code, and
  • the user application (browser, email program) needs to be trusted by the client system (and is thus part of the TCB)

Some of this is not so new: long before the web, programs and data (and viruses) were also received from "the outside world", but on diskettes and magnetic tape (much more slowly and less efficiently, and thus harder to exploit).

1. Access control changes

OS access control is typically identity based (IBAC). Users are the main subjects with respect to access control, and issue operations on local objects. The system is run by an organization (or individual) which has authority over the users, can locate them, and can use audit logs to see who did what.

With mobile code, the subject changes to the code itself. The user cannot (or may not) be held responsible for the actions of the code, and should not be the (only) authenticated subject. The organization/individual running the client system does not have authority over the code producers, etc.

Authorization now needs to collect "evidence" about the access request, including properties of the code being executed, current time, session, and other facts. When enough evidence has been found and verified, it is matched against the policies, and the access can be granted.

2. Code-based access control

Evidence, or basis of access control, may include

  • code origin: local system, local organization, remote site, may use different policies
  • code identity: specific (remote) code identified by a cryptographic hash
  • code signature: digital signature by trusted author/distributor
  • code proof: possibly anonymous code which includes a (formal) proof that it will not break the security policies (and possibly other policies, such as resource requirements).
    • note the difference between proving these properties (difficult and time-consuming) and checking a given proof (faster, easier). This is a research area ("proof-carrying code") which will soon appear on your desktop.

"Code" may be a library/application/applet/script or combinations.
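As an illustration of code identity, here is a minimal sketch of identifying received code by its hash and matching it against a policy entry. The class and method names are invented for this example, and SHA-256 is an assumption: the notes above only say "hash code".

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Toy illustration of code identity: a piece of code (here just a byte
// array) is identified by its cryptographic hash, and a policy can then
// grant permissions to that identity.
public class CodeIdentity {
    static String sha256Hex(byte[] code) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(code);
        // Format the digest as a fixed-width hexadecimal string.
        return String.format("%064x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] received = "pretend this is byte code".getBytes();
        // In a real system this hash would come from the policy store.
        String trustedHash = sha256Hex(received);
        // Access is granted only if the received code hashes to the
        // identity listed in the policy.
        System.out.println(sha256Hex(received).equals(trustedHash));
    }
}
```

Note that, unlike a signature, a hash identifies exactly one version of the code: any update to the code requires updating the policy.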

2.1. Trusted code

Basis: the principle of least privilege. Each component of the "code" should only be given the permissions it needs and is trusted to have. E.g., a library with a proof of security may be trusted to read/write the local file system, but code from a remote site should not be - unless it uses the trusted library.

Thus the permissions of the running program may change as it executes. How can we check which permissions to give a program at each request?

Stack inspection: when a request (e.g. to open a file) is made, a "stack walk" is performed, looking at the call stack to see what code/function is executing and who called it. The walk collects the intersection of the permissions of each function (typically a decreasing set), continuing as long as the required permissions are still present, until a point is found where (trusted) code has asserted the desired permission. Checking the permissions at the time of the request (instead of at every call) is called "lazy" evaluation (by Gollmann).
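The stack walk can be sketched as a toy model. All names here (Frame, Perm, checkPermission) are invented for illustration; the real JVM works on protection domains attached to classes, not on a list like this.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

// Toy model of stack inspection: each stack frame carries the
// permissions granted to its code, and a frame may "assert" a
// permission that it is trusted to vouch for.
public class StackInspection {
    enum Perm { READ_FILE, OPEN_SOCKET }

    record Frame(String name, Set<Perm> granted, Set<Perm> asserted) {}

    // Walk from the most recent frame towards its callers.  The request
    // succeeds if every frame on the way is granted the permission,
    // stopping early at a frame that has asserted it.
    static boolean checkPermission(List<Frame> stack, Perm p) {
        for (int i = stack.size() - 1; i >= 0; i--) {
            Frame f = stack.get(i);
            if (!f.granted().contains(p)) return false; // untrusted caller found
            if (f.asserted().contains(p)) return true;  // trusted code vouches
        }
        return true; // every frame on the stack is granted the permission
    }

    public static void main(String[] args) {
        Set<Perm> all = EnumSet.allOf(Perm.class);
        Set<Perm> none = EnumSet.noneOf(Perm.class);

        // An untrusted applet calls a trusted file library that asserts
        // READ_FILE: the walk stops at the library's assertion.
        List<Frame> viaLibrary = List.of(
            new Frame("applet", none, none),
            new Frame("fileLib", all, EnumSet.of(Perm.READ_FILE)));
        System.out.println(checkPermission(viaLibrary, Perm.READ_FILE)); // true

        // The applet tries to read the file directly: denied.
        List<Frame> direct = List.of(new Frame("applet", none, none));
        System.out.println(checkPermission(direct, Perm.READ_FILE));     // false
    }
}
```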

Problem: the stack is reused for a new purpose, which makes some established uses of it impossible, e.g. tail call optimization, where a stack frame is reused for the call in tail position. This is a very important optimization.

function g(z) { x = f(z); return x+1; }

cannot be tail call optimized: the value of the call to f is used before being returned

function g(z) { return f(z+1); }

can be tail call optimized: the value of the call to f is returned directly, so the stack frame of g can be reused.

If an untrusted g tail-calls a trusted f, there is nothing on the stack showing that g was called (which may have destroyed the trust); the trust of f replaces the non-trust of g when g's stack frame is reused. Conversely, a trusted g which has asserted a permission may lose the assertion when its stack frame is replaced.

Eager calculation of permissions is instead done proactively, at each function call: the current permission set is updated at every call, and simply checked when/if an access request is made.
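The eager scheme can be sketched as follows. This models only the intersection-on-call idea (assertion of permissions is omitted), and all names are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.EnumSet;
import java.util.Set;

// Toy sketch of eager evaluation: instead of walking the stack on
// demand, the effective permission set is intersected on every call
// and restored on every return.  A request then only consults the
// precomputed set at the top of the stack.
public class EagerPermissions {
    enum Perm { READ_FILE, OPEN_SOCKET }

    private final Deque<Set<Perm>> effective = new ArrayDeque<>();

    EagerPermissions(Set<Perm> initial) {
        effective.push(EnumSet.copyOf(initial));
    }

    // On each call, the callee's effective set is the intersection of
    // the caller's effective set and the callee's own grants.
    void enter(Set<Perm> calleeGrants) {
        Set<Perm> next = EnumSet.copyOf(effective.peek());
        next.retainAll(calleeGrants);
        effective.push(next);
    }

    // On return, the caller's effective set is restored.
    void exit() {
        effective.pop();
    }

    // An access request only needs to look at the current set.
    boolean check(Perm p) {
        return effective.peek().contains(p);
    }

    public static void main(String[] args) {
        EagerPermissions ep = new EagerPermissions(EnumSet.allOf(Perm.class));
        ep.enter(EnumSet.of(Perm.READ_FILE)); // callee only granted READ_FILE
        System.out.println(ep.check(Perm.READ_FILE));   // true
        System.out.println(ep.check(Perm.OPEN_SOCKET)); // false
        ep.exit();
    }
}
```

Because the effective set is maintained call by call, it survives even if a stack frame is later reused by a tail call, at the price of doing the intersection work on every call rather than only on access requests.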

3. Implementation: Java security

Java is used both for local applications (e.g. in Open Office) and in web applets. The security policies differ: applets are executed in a sandbox:

  • applets cannot access the client's file system
  • applets cannot obtain the local user name, email address, machine configuration, etc.
  • applets may only make network connections back to their site of origin
  • applet windows are (in various ways) marked as untrusted
  • applets cannot change the Java configuration

while a local application may do all these things.

For full information see Sun documentation.

Java source code is compiled to byte code (a simple, portable machine code for the Java Virtual Machine, JVM); applications, libraries and applets all consist of byte code. Thus restrictions enforced only by the compiler (e.g. compile-time type checking) can be circumvented by using another compiler or by writing the byte code manually.

3.1. Byte code verifier

Before executing, the byte code is verified to check properties such as

  • proper packaging (e.g. of class files)
  • no operand stack overflows or underflows
  • correct types of arguments and values of methods
  • no bad type casting (changing types)
  • legal references to other classes
  • object abstraction (e.g. private fields are not accessed)

3.2. Class loader

Class definitions can be loaded at runtime. When a class is loaded its permissions are checked against the current policy, type safety is checked, and the byte code verifier is called.

Untrusted sources (such as applets) may not override existing classes: each applet has its own name space for its own classes. Untrusted sources may also be prevented from loading certain classes (e.g. UI/window classes which would look like standard Java windows).

3.3. Policies

Security policies can be defined based on e.g.

  • code origin (URL)
  • signatures
  • principals (e.g. certificate owner)

Simple example:

grant signedBy "FooBar Inc", codeBase "http://foo.bar.com/-" {
    permission java.awt.AWTPermission "accessClipboard";
    permission java.util.PropertyPermission "*","read";
    permission java.lang.RuntimePermission "queuePrintJob";
    permission javax.sound.sampled.AudioPermission "play";
    permission java.lang.RuntimePermission "accessClassInPackage.sun.audio";
};

For full information, see Sun documentation.

3.4. Security Manager

This is the reference monitor of the Java Virtual Machine: it performs the stack walk and checks access requests against the policy.

4. .NET

.NET is a framework similar to Java, but it handles more source-level languages (C#, Basic, Lisp, ML, ...). It takes the same basic approach to security as Java.

Updated  2006-10-10 22:13:59 by Björn Victor.