Lambda Capabilities

"Is this software safe?" is a question software engineers should be able to answer, but doing so can be difficult. Capabilities offer an elegant solution, but seem to be little known among functional programmers. This post is an introduction to capabilities in the context of ordinary programming (using plain functions, in the style of the lambda calculus).

Even if you're not interested in security, capabilities provide a useful way to understand programs: when tracking down buggy behaviour, it helps enormously to know that some component couldn't have been the problem.

(This post also appeared on Reddit, Hacker News and Lobsters.)

The Problem

We have some application (for example, a web-server) that we want to run. The application is many thousands of lines long and depends on dozens of third-party libraries, which get updated on a regular basis. I would like to be able to check, quickly and easily, that the application cannot do any of these things:

  • Delete my files.
  • Append a line to my ~/.ssh/authorized_keys file.
  • Act as a relay, allowing remote machines to attack other computers on my local network.
  • Send telemetry to a third-party.
  • Anything else bad that I forget to think about.

For example, here are some of the OCaml packages I use just to generate this blog:

[Figure: Dependency graph for this blog]

Having to read every line of every version of each of these packages in order to decide whether it's safe to generate the blog clearly isn't practical.

I'll start by looking at traditional solutions to this problem, using e.g. containers or VMs, and then show how to do better using capabilities.

Option 1: Security as a separate concern

A common approach to access control treats securing software as a separate activity to writing it. Programmers write (insecure) software, and a security team writes a policy saying what it can do. Examples include firewalls, containers, virtual machines, seccomp policies, SELinux and AppArmor.

The great advantage of these schemes is that security can be applied after the software is written, treating it as a black box. However, it comes with many problems:

Confused deputy problem

Some actions are OK for one use but not for another.

For example, if the client of a web-server requests https://example.com/../../etc/httpd/server-key.pem then we don't want the server to read this file and send it to them. But the server does need to read this file for other reasons, so the policy must allow it.

Coarse-grained controls

All the modules making up the program are treated the same way, even though you probably trust some more than others.

For example, we might trust the TLS implementation with the server's private key, but not the templating engine, and I know the modules I wrote myself are not malicious.

Even well-typed programs go wrong

Programming in a language with static types is supposed to ensure that if the program compiles then it won't fail with certain classes of runtime error. But an external security policy can make the program fail even though it passed the compiler's checks.

For example, the server might occasionally need to send an email notification. If it never happened to do that while the security policy was being written, then sending email will be blocked. Or perhaps the web-server didn't even have a notification system when the policy was written, but has since been updated.

Policy language limitations

The security configuration is written in a new language, which must be learned. It's usually not worth learning this just for one program, so the people who write the program struggle to write the policy. Also, the policy language often cannot express the desired policy, since it may depend on concepts unique to the program (e.g. controlling access based on a web-app user's ID, rather than local Unix user ID).

All of the above problems stem from trying to separate security from the code. If the code were fully correct, we wouldn't need the security layer. Checking that code is fully correct is hard, but maybe there are easy ways to check automatically that it does at least satisfy our security requirements...

Option 2: Purity

One way to prevent programs from performing unwanted actions is to prevent all actions. In pure functional languages, such as Haskell, the only way to interact with the outside world is to return the action you want to perform from main. For example:

f :: Int -> String
f x = ...

main :: IO ()
main = putStr (f 42)

Even if we don't look at the code of f, we can be sure it only returns a String and performs no other actions (assuming Safe Haskell is being used). Assuming we trust putStr, we can be sure this program will only output a string to stdout and not perform any other actions.

However, writing only pure code is quite limiting. Also, we still need to audit all IO code.

Option 3: Capabilities

Consider this code (written in a small OCaml-like functional language, where ref n allocates a new memory location initially containing n, and !x reads the current value of x):

let f a = ...

let () =
  let x = ref 5 in
  let y = ref 10 in
  f x;
  assert (!y = 10)

Can we be sure that the assert won't fail, without knowing the definition of f? Assuming the language doesn't provide unsafe backdoors (such as OCaml's Obj.magic), we can. f x cannot change y, because f x does not have access to y.

So here is an access control system, built into the lambda calculus itself! At first glance this might not look very promising. For example, while f doesn't have access to y, it does have access to any global variables defined before f. It also, typically, has access to the file-system and network, which are effectively globals too.

To make this useful, we ban global variables. Then any top-level function like f can only access things passed to it explicitly as arguments. Avoiding global variables is usually considered good practice, and some systems ban them for other reasons anyway (for example, safe Rust doesn't allow unsynchronised global mutable state, as it couldn't then prevent races when multiple threads access it).
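
To make the contrast concrete, here's a minimal sketch (the names are invented for illustration):

(* With a global, any function can reach the counter without being given it: *)
let hits = ref 0
let log_hit () = incr hits

(* Capability style: the counter must be passed in explicitly, so a caller
   can see exactly which state log_hit may touch: *)
let log_hit hits = incr hits

let () =
  let hits = ref 0 in
  log_hit hits;
  assert (!hits = 1)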

Returning to the Haskell example above (but now in OCaml syntax), it looks like this in our capability system:

let f x = ...

let main ch = output_string ch (f 42)

Since f is a top-level function, we know it does not close over any mutable state, and our 42 argument is pure data. Therefore, the call f 42 does not have access to, and therefore cannot affect, any pre-existing state (including the filesystem). Internally, it can use mutation (creating arrays, etc), but it has nowhere to store any mutable values and so they will get GC'd after it returns. f therefore appears as a pure function, and calling it multiple times will always give the same result, just as in the Haskell version.

output_string is also a top-level function, closing over no mutable state. However, the function resulting from evaluating output_string ch is not top-level, and without knowing anything more about it we should assume it has full access to the output channel ch.

If main is invoked with standard output as its argument, it may output a message to it, but cannot affect other pre-existing state.

In this way, we can reason about the pure parts of our code as easily as with Haskell, but we can also reason about the parts with side-effects. Haskell's purity is just a special case of a more general rule: the effects of a (top-level) function are bounded by its arguments.

Attenuation

So far, we've been thinking about what values are reachable through other values. For example, the set of ref-cells that can be modified by f x is bounded by the union of the set of ref cells reachable from the closure f with the set of ref cells reachable from x.

One powerful aspect of capabilities is that we can use functions to implement whatever access controls we want. For example, let's say we only want f to be able to set the ref-cell, but not read it. We can just pass it a suitable function:

let x = ref 0 in
let set v =
  x := v
in
f set

Or perhaps we only want to allow inserting positive integers:

let set v =
  if v > 0 then set v
  else invalid_arg "Positive values only!"

Or we can allow access to be revoked:

let r = ref (Some set) in
let set v =
  match !r with
  | Some fn -> fn v
  | None -> invalid_arg "Access revoked!"
in
...
r := None		(* Revoke *)

Or we could limit the number of times it can be used:

let used = ref 0 in
let set v =
  if !used < 3 then (incr used; set v)
  else invalid_arg "Quota exceeded"

Or log each time it is used, tagged with a label that's meaningful to us (e.g. the function to which we granted access):

let log = ref [] in
let set name v =
  let msg = sprintf "%S set it to %d" name v in
  log := msg :: !log;
  set v
in
f (set "f");
g (set "g")

Or all of the above.
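
Since each wrapper is just an ordinary function that takes a setter and returns a more limited one, they compose by plain function application. Here's a sketch, restating two of the examples above as combinators:

let positive_only set v =
  if v > 0 then set v
  else invalid_arg "Positive values only!"
in
let with_quota n set =
  let used = ref 0 in
  fun v ->
    if !used < n then (incr used; set v)
    else invalid_arg "Quota exceeded"
in
let x = ref 0 in
(* f gets a setter that accepts at most three positive values: *)
f (with_quota 3 (positive_only (fun v -> x := v)))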

In these examples, our function f never got direct access (permission) to x, yet was still able to affect it. Therefore, in capability systems people often talk about "authority" rather than permission. Roughly speaking, the authority of a subject is the set of actions that the subject could cause to happen, now or in the future, on currently-existing resources. Since it's only things that might happen, and we don't want to read all the code to find out exactly what it might do, we're usually only interested in getting an upper-bound on a subject's authority, to show that it can't do something.

The examples here all used a single function. We may want to allow multiple operations on a single value (e.g. getting and setting a ref-cell), and the usual techniques are available for doing that (e.g. having the function take the operation as its first argument, or collecting separate functions together in a record, module or object).
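
For example, here's a small sketch (invented for illustration) that bundles a getter and a setter in a record; attenuation then becomes replacing a field:

type 'a cell = {
  get : unit -> 'a;
  set : 'a -> unit;
}

let cell_of_ref r = {
  get = (fun () -> !r);
  set = (fun v -> r := v);
}

(* Attenuate by replacing the set operation: *)
let read_only c = { c with set = (fun _ -> invalid_arg "Read-only!") }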

Web-server example

Let's look at a more realistic example. Here's a simple web-server (we are defining the main function, which takes two arguments):

let main net htdocs =
  ...

To use it, we pass it access to some network (net) and a directory tree with the content (htdocs). Immediately we can see that this server does not access any part of the file-system outside of htdocs, but that it may use the network. Here's a picture of the situation:

[Figure: Initial reference graph]

Notes on reading the diagram:

  • The diagram shows a model of the reference graph, where each node represents some value (function, record, tuple, etc) or aggregated group of values.
  • An arrow from A to B indicates the possibility that some value in the group A holds a reference to some value in the group B.
  • The model is typically an over-approximation, so the lack of an arrow from A to B means that no such reference exists, while the presence of an arrow just means we haven't ruled it out.
  • Orange nodes here represent OCaml values.
  • White boxes are directories. They include all contained files and subdirectories, except those shown separately. I've pulled out htdocs so we can see that app doesn't have access to the rest of home. Just for emphasis, I also show .ssh separately. I'm assuming here that a directory doesn't give access to its parent, so htdocs can only be used to read files within that sub-tree.
  • net represents the network and everything else connected to it.
  • In most operating systems, directories exist in the kernel's address space, and so you cannot have a direct reference to them. That's not a problem, but for now you may find it easier to imagine a system where the kernel and applications are all a single program, in a single programming language.
  • This diagram represents the state at a particular moment in time (when starting the application). We could also calculate and show all the references that might ever come to exist, given what we know about the behaviour of app and net. Since we don't yet know anything about either, we would have to assume that app might give net access to htdocs and to itself.

So, the diagram above shows the application app has been given references to net and to htdocs as arguments.

Looking at our checklist from the start:

  • It can't delete all my files, but it might delete the ones in htdocs.
  • It can't edit ~/.ssh/authorized_keys.
  • It might act as a relay, allowing remote machines to attack other computers on my local network.
  • It might send telemetry to a third-party.

We can read the body of the function to learn more:

let main net htdocs =
  let socket = Net.listen net (`Tcp 8080) in
  let handler = static_files htdocs in
  Http.serve socket handler

Note: Net.listen net is typical OCaml style for performing the listen operation on net. We could also have used a record and written net.listen instead, which may look more familiar to some readers.

Here's an updated diagram, showing the moment when Http.serve is called. The app group has been opened to show socket and handler separately:

[Figure: After reading the code of main]

We can see that the code in the HTTP library can only access the network via socket, and can only access htdocs by using handler. Assuming Net.listen is trustworthy (we'll normally trust the platform's networking layer), it's clear that the application doesn't make outbound connections, since net is used only to create a listening socket.

To know what the application might do to htdocs, we only have to read the definition of static_files:

let static_files dir request =
  Path.load (dir / request.path)

Now we can see that the application doesn't change any files; it only uses htdocs to read them.

Finally, expanding Http.serve:

let serve socket handle_request =
  while true do
    let conn = Net.accept socket in
    handle_connection conn handle_request
  done

We see that handle_connection has no way to share telemetry information between connections, given that handle_request never stores anything.

We can tell these things after only looking at the code for a few seconds, even though dozens of libraries are being used. In particular, we didn't have to read handle_connection or any of the HTTP parsing logic.

Now let's enable TLS. For this, we will require a configuration directory containing the server's key:

let main ~tls_config net htdocs =
  let socket = Net.listen net (`Tcp 8443) in
  let tls_socket = Tls.wrap ~tls_config socket in
  let handler = static_files htdocs in
  Http.serve tls_socket handler

OCaml syntax note: I used ~ to make tls_config a named argument; we wouldn't want to get this directory confused with htdocs!

We can see that only the TLS library gets access to the key. The HTTP library interacts only with the TLS socket, which presumably does not reveal it.

[Figure: Updated graph showing TLS]

Notice too how this fixes the problem we had with our original policy enforcement system. There, an attacker could request https://example.com/../tls_config/server.key and the HTTP server might send the key. But here, the handler cannot do that even if it wants to. When handler loads a file, it does so via htdocs, which does not have access to tls_config.

The above server has pretty good security properties, even though we didn't make any special effort to write secure code. Security-conscious programmers will try to wrap powerful capabilities (like net) with less powerful ones (like socket) as early as possible, making the code easier to understand. A programmer uninterested in readability is likely to mix in more irrelevant code you have to skip through, but even so it shouldn't take too long to track down where things like net and htdocs end up. And even if they spread them throughout their entire application, at least you avoid having to read all the libraries too!

By contrast, consider a more traditional (non-capability) style. We start with:

let main htdocs = ...

Here, htdocs would be a plain string rather than a reference to a directory, and the network would be reached through a global. We can't tell anything about what this server could do from looking at this one line, and even if we expand it, we won't be able to tell what all the functions it calls do, either. We will end up having to follow every function call recursively through all of the server's dependencies, and our analysis will be out of date as soon as any of them changes.

Use at different scales

We've seen that we can create an over-approximation of the reference graph by looking at just a small part of the code, and then get a closer bound on the possible effects as needed by expanding groups of values until we can prove the desired property. For example, to prove that the application didn't modify htdocs, we followed htdocs by expanding main and then static_files.

Within a single process, a capability is a reference (pointer) to another value in the process's memory. However, the diagrams also included arrows (capabilities) to things outside of the process, such as directories. We can regard these as references to privileged proxy functions in the process that make calls to the OS kernel, or (at a higher level of abstraction) we can consider them to be capabilities to the external resources themselves.
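
For example, a directory capability inside a process could be modelled as a record of proxy functions (a sketch, with invented names):

type dir = {
  load : string -> string;           (* read a file beneath this directory *)
  save : string -> string -> unit;   (* write a file beneath it *)
}

Holding such a record grants exactly the operations its fields perform, whether those are implemented by system calls or by ordinary code.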

It is possible to build capability operating systems (in fact, this was the first use of capabilities). Just as we needed to ban global variables to make a safe programming language, we need to ban global namespaces to make a capability operating system. For example, on FreeBSD this is done (on a per-process basis) by invoking the cap_enter system call.

We can zoom out even further, and consider a network of computers. Here, an arrow between machines represents some kind of (unforgeable) network address or connection. At the IP level, any process can connect to any address, but a capability system can be implemented on top. CapTP (the Capability Transport Protocol) was an early system for this, but Cap'n Proto (Capabilities and Protocols) is the modern way to do it.

So, thinking in terms of capabilities, we can zoom out to look at the security properties of the whole network, yet still be able to expand groups as needed right down to the level of individual closures in a process.

Key points

  • Library code can be imported and called without it getting access to any pre-existing state, except that given to it explicitly. There is no "ambient authority" available to the library.

  • A function's side-effects are bounded by its arguments. We can understand (get a bound on) the behaviour of a function call just by looking at it.

  • If a has access to b and to c, then a can introduce them (e.g. by performing the function call b c). Note that there is no capability equivalent to making something "world readable"; to perform an introduction, you need access to both the resource being granted and to the recipient ("only connectivity begets connectivity").

  • Instead of passing the name of a resource, we pass a capability reference (pointer) to it, thereby proving that we have access to it and sharing that access ("no designation without authority").

  • The caller of a function decides what it should access, and can provide restricted access by wrapping another capability, or substituting something else entirely.

    I am sometimes unable to install a messaging app on my phone because it requires me to grant it access to my address book. A capability system should never say "This application requires access to the address book. Continue?"; it should say "This application requires access to an address book; which would you like to use?".

  • A capability must behave the same way regardless of who uses it. When we do f x, f can perform exactly the same operations on x that we can.

    It is tempting to add a traditional policy language alongside capabilities for "extra security", saying e.g. "f cannot write to x, even if it has a reference to it". However, apart from being complicated and annoying, this creates an incentive for f to smuggle x to another context with more powers. This is the root cause of many real-world attacks, such as click-jacking or cross-site request forgery, where a URL permits an attack if a victim visits it, but not if the attacker does. One of the great benefits of capability systems is that you don't need to worry that someone is trying to trick you into doing something that you can do but they can't, because your ability to access the resource they give you comes entirely from them in the first place.

All of the above follow naturally from using functions in the usual way, while avoiding global variables.

Practical considerations

The above discussion argues that capabilities would have been a good way to build systems in an ideal world. But given that most current operating systems and programming languages have not been designed this way, how useful is this approach? I'm currently working on Eio, an IO library for OCaml, and using these principles to guide the design. Here are a few thoughts about applying capabilities to a real system.

Plumbing capabilities everywhere

A lot of people worry about cluttering up their code by having to pass things explicitly everywhere. This is actually not much of a problem, for a couple of reasons:

  1. We already do this with most things anyway. If your program uses a database, you probably establish a connection to it at the start and pass the connection around as needed. You probably also pass around open file handles, configuration settings, HTTP connection pools, arrays, queues, ref-cells, etc. Handling "the file-system" and "the network" the same way as everything else isn't a big deal.

  2. You can often bundle up a capability with something else. For example, a web-server will likely let the user decide which directory to serve, so you're already passing around a pathname argument. Passing a path capability instead is no extra work.

Consider a request handler that takes the address of a Redis server:

Http.serve socket (handle_request redis_url)

It might seem that by using capabilities we'd need to pass the network in here too:

Http.serve socket (handle_request net redis_url)

This is both messy and unnecessary. Instead, handle_request can take a function for connecting to Redis:

Http.serve socket (handle_request redis)

Then there is only one argument to pass around again. Instead of writing the connection logic in handle_request, we write the same logic outside and just pass in the function. And now someone looking at the code can see "the handler can connect to Redis", rather than the less precise "the handler accesses the network". Of course, if Redis required more than one configuration setting then you'd probably already be doing it this way.
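
For example, the connecting function might be built like this (a sketch, assuming a Net.connect operation alongside the Net.listen we saw earlier):

(* Built where net is in scope; only redis is passed to the handler. *)
let redis () = Net.connect net redis_url in
Http.serve socket (handle_request redis)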

The main problematic case is providing defaults. For example, a TLS library might allow us to specify the location of the system's certificate store, but it would like to provide a default (e.g. /etc/ssl/certs/). This is particularly important if the default location varies by platform. If the TLS library decides the location, then we must give it (at least read-only) access to the whole file-system! We may just decide to trust the library, or we might separate out the default paths into a trusted package.
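
The trusted-package option might look like this (a sketch; the Default_paths module is invented for illustration):

(* A small trusted module holding platform defaults. Only this module needs
   auditing; the TLS library itself never sees the whole file-system. *)
module Default_paths = struct
  let certs fs = fs / "/etc/ssl/certs"
end

The TLS library then receives the certificate store as an ordinary capability argument, just like htdocs in the web-server example.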

Levels of support

Ideally, our programming language would provide a secure implementation of capabilities that we could depend on. That would allow running untrusted code safely and protect us from compromised packages. However, converting a non-capability language to a capability-secure one isn't easy, and isn't likely to happen any time soon for OCaml (but see Emily for an old proof-of-concept).

Even without that, though, capabilities help to protect non-malicious code from malicious inputs. For example, the request handler above forgot to sanitise the URL path from the remote client, but it still can't access anything outside of htdocs.

And even if we don't care about security at all, capabilities make it easy to see what a program does; they make it easy to test programs by replacing OS resources with mocks; and preventing access to globals helps to avoid race conditions, since two functions that access the same resource must be explicitly introduced.

Running on a traditional OS

A capability OS would let us run a program's main function and provide the capabilities it wanted directly, but most systems don't work like that. Instead, each program requires a small trusted entrypoint that has the full privileges of the process. In Eio, an application will typically start something like this:

Eio_main.run @@ fun env ->
let net = Eio.Stdenv.net env in
let fs = Eio.Stdenv.fs env in
Eio.Path.with_open_dir (fs / "/srv/www") @@ fun htdocs ->
main net htdocs

Eio_main.run starts the Eio event loop and then runs the callback. The env argument gives full access to the process's environment. Here, the callback extracts network and filesystem access from this, gets access to just "/srv/www" from fs, and then calls the main function as before.

Note that Eio_main.run itself is not a capability-safe function (it magics up env from nothing). A capability-enforcing compiler would flag this bit up as needing to be audited manually.

Use with existing security mechanisms

Maybe you're not convinced by all this capability stuff. Traditional security systems are more widely available, better tested, and approved by your employer, and you want to use them instead. Still, to write the policy, you're going to need a list of resources the program might access. Looking at the above code, we can immediately see that the policy only needs to allow access to the "/srv/www" directory, and so we could call e.g. unveil here. And if main later changes to use TLS, the type-checker will remind us to update this code to provide the TLS configuration, and we'll know to update the policy at the same time.

If you want to drop privileges, such a program also makes it easy to see when it's safe to do that. For example, looking at main we can see that net is never used after creating the socket, so we don't need the bind system call after that, and we never need connect. We know, for instance, that this program isn't hiding an XML parser that needs to download schema files to validate documents.

Thread-local storage

In addition to global and local variables, systems often allow us to attach data to threads as a sort of middle ground. This could allow unexpected interactions. For example:

let x = ref 0 in
f x;
g ()

Here, we'd expect that g doesn't have access to x, but f could pass it using thread-local storage. To prevent that, Eio instead provides Fiber.with_binding, which runs a function with a binding but then puts things back how they were before returning, so f can't make changes that are still active when g runs.
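
For example (a sketch using the same Fiber API as the example below):

let stash = Fiber.create_key ()

let f x =
  Fiber.with_binding stash x (fun () ->
    (* stash is bound to x only while this function runs... *)
    ())
  (* ...and is restored to its previous (unbound) state before f returns,
     so a later call to g cannot find x here. *)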

This also allows people who don't want capabilities to disable the whole system easily:

let everything = Fiber.create_key ()

let f () =
  let env = Option.get (Fiber.get everything) in
  ...

let main env =
  Fiber.with_binding everything env f

It looks like f () doesn't have access to anything, but in fact it can recover env and get access to everything! However, anyone trying to understand the code will start following env from the main entrypoint and will then see that it got put in fiber-local storage. They then at least know that they must read all the code to understand anything about what it can do.

More usefully, this mechanism allows us to make just a few things ambiently available. For example, we don't want to have to plumb stderr through to a function every time we want to do some printf debugging, so it makes sense to provide a tracing function this way (and Eio does this by default). Tracing allows all components to write debug messages, but it doesn't let them read them. Therefore, it doesn't provide a way for components to communicate with each other.
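
For example, a write-only tracing function could be provided like this (a sketch; trace_key and trace are invented names):

let trace_key = Fiber.create_key ()

let trace msg =
  match Fiber.get trace_key with
  | Some out -> out msg        (* write-only: callers can't read the log *)
  | None -> ()                 (* no tracer bound: drop the message *)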

It might be tempting to use Fiber.with_binding to restrict access to part of a program (e.g. giving an HTTP server network access this way), but note that this is a non-capability way to do things, and suffers the same problems as traditional security systems, separating designation from authority. In particular, supposedly sandboxed code in other parts of the application can try to escape by tricking the HTTP server part into running a callback function for them. But fiber local storage is fine for things to which you don't care to restrict access.

Symlinks

Symlinks are a bit of a pain! If I have a capability reference to a directory, it's useful to know that I can only access things beneath that directory. But the directory may contain a symlink that points elsewhere.

One option would be to say that a symlink is itself a capability, but this means that you could only create symlinks to things you can access yourself, which is quite a restriction. For example, you might be forbidden from extracting a tarball because tar lacked access to the target of a symlink it wanted to create.

The other option is to say that symlinks are just strings, and it's up to the user to interpret them. This is the approach FreeBSD uses. When you use a system call like openat, you pass a capability to a base directory and a string path relative to that. In the case of our web-server, we'd use a capability for htdocs, but use strings to reference things inside it, allowing the server to follow symlinks within that sub-tree, but not outside.

The main problem is that it makes the API a bit confusing. Consider:

save_to (htdocs / "uploads")

It might look like save_to is only getting access to the "uploads" directory, but in Eio it actually gets access to the whole of htdocs. If you want to restrict access, you have to do that explicitly (as we did when creating htdocs from fs).

The advantage, however, is that we don't break software that relies on symlinks. Also, restricting access is quite expensive on some systems (FreeBSD has the handy O_BENEATH open flag, and Linux has RESOLVE_BENEATH, but not all systems provide this), so it might not be a good default. I'm not completely satisfied with the current API, though.

Time and randomness

It is also possible to use capabilities to restrict access to time and randomness. The security benefits here are less clear. Tracking access to time can be useful in preventing side-channel attacks that depend on measuring time accurately, but controlling access to randomness makes it difficult to e.g. randomise hash functions to help prevent denial-of-service attacks.

However, controlling access to these does have the advantage of making code deterministic by default, which is a great benefit, especially for expect-style testing. Your top-level test function is called with no arguments, and therefore has no access to non-determinism; instead, it creates deterministic mocks to use with the code under test. You can then record a known-good trace of the test's operations and check that it doesn't change.
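
For example, here's a minimal sketch of a mock clock (the interface is invented) that makes time-dependent code deterministic:

(* The mock clock is just a ref-cell: sleep advances it instantly, so a test
   using it produces the same trace on every run. *)
let mock_clock () =
  let now = ref 0.0 in
  let sleep d = now := !now +. d in
  let time () = !now in
  sleep, time

let () =
  let sleep, time = mock_clock () in
  sleep 5.0;
  assert (time () = 5.0)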

Power boxes

Interactive applications that load and save files present a small problem: since the user might load or save anywhere, it seems they need access to the whole file-system. The solution is a "powerbox". The powerbox has access to the file-system and the rest of the application only has access to the powerbox. When the application wants to save a file, it asks the powerbox, which pops up a GUI asking the user to choose the location. Then it opens the file and passes that back to the application.
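
A minimal sketch of such an interface (all names are invented for illustration):

(* Only the powerbox closes over the full file-system fs; the application
   receives just save_as, and each save goes wherever the user chooses. *)
let make_save_as fs choose_path =
  fun ~title write ->
    let path = choose_path ~title fs in   (* GUI asks the user where to save *)
    write path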

Conclusions

Currently-popular security mechanisms are complex and have many shortcomings. Yet, the lambda calculus already contains an excellent security mechanism, and making use of it requires little more than avoiding global variables.

This is known as "capability-based security". The word "capabilities" has also been used for several unrelated concepts (such as "POSIX capabilities"), and for clarity much of the community rebranded a while back as "Object Capabilities", but this can make it seem irrelevant to functional programmers. In fact, I wrote this blog post because several OCaml programmers have asked me what the point of capabilities is. I was expecting it to be quite short (basically: applying functions to arguments good, global variables bad), but it's got quite long; it seems there is a fair bit that follows from this simple idea!

Instead of seeing security as an extra layer that runs separately from the code and tries to guess what it meant to do, capabilities fit naturally into the language. The key difference with traditional security is that the ability to do something depends on the reference used to do it, not on the identity of the caller. This way of thinking about security works not only for controlling access to resources within a single program, but also for controlling interactions between processes running on a machine, and between machines on a network. We can group together resources and zoom out to see the overall picture, or expand groups to zoom in and get a closer bound on the behaviour.

Even ignoring security, a key question is: what can a function do? Should a function call be able to do anything at all that the process can do, or should its behaviour be bounded in some way that is obvious just by looking at it? If we say that you must read the source code of a function to see what it does, then this applies recursively: we must also read all the functions that it calls, and so on. To understand the main function, we end up having to read the code of every library it uses!

If you want to read more, the What Are Capabilities? blog post provides a good overview; Part II of Robust Composition contains a longer explanation; Capability Myths Demolished does a good job of enumerating security properties provided by capabilities; my own SERSCIS Access Modeller paper shows how to analyse systems where some components have unknown behaviour; and, for historical interest, see Dennis and Van Horn's 1966 Programming Semantics for Multiprogrammed Computations, which introduced the idea.