Java Security
=============

what's this paper about?
  confinement of "mobile code"
    untrusted code running on user machines (Javascript, Java, etc)
  want to prevent mobile code from doing bad things (corrupt disk, steal data)
  want to allow mobile code to do something (draw images, download data, ..)
  why don't we use the things we already know about?
    privilege separation, KeyKOS capabilities, Unix processes, etc
    portability: each OS has different mechanisms (windows vs linux vs keykos)
    performance: these other mechanisms rely on hardware isolation
      expensive to switch between protection domains
        why?  often need to do a context switch, change page tables
        this invalidates many hardware caches, thus incurs a perf. penalty
      didn't KeyKOS, OKWS perform OK?
        yes, but this paper argues it's at the expense of ease of programming
          requires programmer to carefully think about protection domain crossings
        language-level protection domain crossings might be 1000x cheaper
          good reason to explore this technique!
  software protection/confinement vs. a single language
    instead of a single OS we're now tied to a single language
    next lecture: XFI (software isolation for binary code)

what are the two kinds of protection/security this paper talks about?
  memory protection
    making sure one program does not corrupt another program's state
    how does java achieve this?
      verify bytecode before running it
        bytecode can only access data, invoke code "safely"
        no way to directly access memory, invoke syscall, ..
      rules: type safety and private/public methods/fields
        java: all code in classes, all data in objects
        object fields public or private (to code in that class)
      one program does not have access to another program's objects,
        because it doesn't have a valid pointer and can't make one up
      would range checking suffice?  e.g. object is of a given size?
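a small runnable aside (not from the paper): the JVM bounds-checks every array access, so an out-of-range index raises an exception instead of reading neighboring memory; range checks alone, though, wouldn't stop code from synthesizing pointers (next point)

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] secret = {1, 2, 3};
        try {
            // the verifier/JVM enforce a bounds check here; there is no
            // way to turn this into a read of adjacent memory
            System.out.println(secret[10]);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("access denied by the JVM");
        }
    }
}
```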
        probably not: can synthesize pointers
  secure services
    some application-level notion of security
    e.g.: compiler in "confused deputy", okws services, unix root daemons
    OS techniques: setuid, daemons
      too many privileges held by trusted code
      policy cannot be enforced outright in the language
    goal: language support to make it easier to build them

what's the basic java sandbox model?
  two kinds of code: trusted and untrusted
    differentiated by their classloaders
      classloader: responsible for taking strings and turning them into code
      resulting java classes keep a pointer to their original classloader
    one classloader used to load java code from the local disk
    another classloader used to load java code from the network
    aside: is this classloader-based separation a good idea?
      assumes attacker can't get local classloader to load his/her code
      but lots of untrusted data is reachable by the local classloader
        e.g. browser cache, network paths, temporary files, ..
  both co-exist in the same JVM
  well-known set of privileged operations: FS access, network access
    unprivileged: communicating with applet's origin server
  rule: when a privileged operation is performed, if any frame on the
    stack came from untrusted code, then deny the operation
  why do we need stack inspection?  is it OK to just check the caller?
    might have "trusted" code that unintentionally performs sensitive ops
    trusted void convertToUppercase(File f)
      should it be allowed to access files?
  why does the paper argue the model is too restrictive?
    probably want to perform some privileged operations in trusted code
      carefully-written code known to be "safe" even if invoked by untrusted code
    semi-trusted java applets might need limited privileges
      grant only screen-drawing privs to youtube.com
      grant access to local storage to gmail code
      (so need to differentiate between kinds of untrusted code)
  is it sufficiently restrictive, though?
    corrupt data in trusted objects to trick them into doing something bad later?
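one concrete instance, as a hedged sketch (class names invented): trusted code that calls an attacker-supplied accessor twice, once to check and once to use, can be steered into opening a file the check never saw

```java
interface Named { String getFileName(); }

// attacker-controlled: harmless answer for the check, sensitive one for the use
class EvilName implements Named {
    private int calls = 0;
    public String getFileName() {
        return (calls++ == 0) ? "/tmp/harmless.txt" : "/etc/passwd";
    }
}

public class TrustedOpener {
    // trusted code: checks the name, then fetches it again for the open
    static String open(Named o) {
        if (o.getFileName().startsWith("/tmp/")) {
            return "opened " + o.getFileName();   // second call: different name!
        }
        return "denied";
    }

    public static void main(String[] args) {
        System.out.println(TrustedOpener.open(new EvilName()));
    }
}
```

the fix is for trusted code to copy untrusted data once (String name = o.getFileName()) and use only the copy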
    trusted code: open(o.getFileName()), where o is an untrusted object
    register a swing/awt callback with specially-crafted arguments?
      all stack frames will be trusted, but untrusted code "caused" the op

so how do the more flexible alternatives work?
  need a notion of a principal
    principal = some party that signs a piece of code
    at some level, very general: anything we want can be a signing key
    expected use: software developer signs.  why is this the right model?
      what doesn't this work for?  applet wants to priv-separate itself?
  need a notion of what's being protected
    not explicitly defined
    up to the application or trusted code to decide what matters
    java objects, access to network, different parts of the file system, ..
  need a policy
    not very clearly spelled out in this paper what the policy looks like
    rules that grant access to (app-level) operations to diff. principals

capabilities
  what's their strawman design?
  how does this help vs. the sandbox policy?
    trusted code can invoke privileged operations (if it has the capability)
    semi-trusted code can be selectively granted capabilities
  example: figure 2
    can attacker get fs from SubFS?
    can attacker change rootPath?
    can attacker make a new SubFS?
  why aren't Java pointers good enough for capability-based isolation?
    global namespaces inside Java: can access things using class names
      - invoke class constructor
      - access global static objects
      - access static class methods
    global namespaces outside of Java (e.g. can use ".." on a directory)
    pure capability systems have no global names (eg KeyKOS)
  why not use capabilities?
    doesn't fit with existing java apps/libraries (rely on global objects)
    tricky to undo granting a capability

stack introspection
  what's the strawman design?
  what do you pass to checkPriv() et al ("target")?
    free-form strings -- just need to match up between check and enable
    in reality it's a hierarchical namespace of targets
    e.g.
      { java.io.FilePermission, "read,write", "/home/alice/*" }
  who is allowed to call enablePrivilege()?
    that depends on the policy
    "trusted" local code can enablePrivilege() any target
    untrusted code must be granted the privilege in the policy
  how does this help us, vs. the simple sandbox policy?
    trusted code can call enablePrivilege() to perform a trusted op
      even if it was called by untrusted code higher in the stack
  what goes wrong with the strawman?
    simple stuff: need to ensure privileges are local to a thread
      soln: keep track of privileges per thread
    simple stuff: might forget to disablePrivilege()
      soln: disable privileges on function return
    more tricky: trusted code invokes untrusted code ("luring" attack)
      untrusted code might be able to use all enablePrivilege()'s
      soln: entire call stack from checkPrivilege() to enablePrivilege()
        must be composed of principals allowed that privilege
  example: figure 4
  some traces of a capability design
    pointer to a FileInputStream gives you rights, even if you can't enable
    inside the java.io.FileInputStream() there's a checkPerm("UnivFileRead")
  who gets to define the targets?  why does it matter?
    might want third-party code to control its own privileges
    what does it mean to define a target?  really about who can call enable
      anyone can always call checkPriv() on any target they want, presumably
    MS design is really a subset of the netscape design here
    netscape design: add an extra level of hierarchy to target names: the principal
      (System, "FileAccess", "/home/alice/*")
      (google.com, "GmailFolderAccess", "Spam")
      gmail code might grant the latter priv to a client-side spam filter
  how does code signing work?
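mechanically, a code signature is just a public-key signature over the code bytes; a minimal sketch using the JDK's java.security API (keys generated on the fly here; real schemes bind them to certificates)

```java
import java.security.*;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] code = "class Applet {}".getBytes();

        // the principal's signing key pair
        KeyPair principal = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // principal signs the code
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(principal.getPrivate());
        s.update(code);
        byte[] sig = s.sign();

        // the browser verifies the signature before granting any targets
        Signature v = Signature.getInstance("SHA256withRSA");
        v.initVerify(principal.getPublic());
        v.update(code);
        System.out.println("valid: " + v.verify(sig));       // true

        // tampered code fails verification
        v.initVerify(principal.getPublic());
        v.update("class Evil {}".getBytes());
        System.out.println("tampered: " + v.verify(sig));    // false
    }
}
```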
    microsoft and netscape have different schemes
    netscape scheme:
      principals sign code
      code asks for targets
      policy says what principals should be granted what targets
    microsoft scheme:
      principals sign code and target sets
      policy says what principals should be granted what targets
      more explicit: principal never grants unintended targets
        suppose in the NS design a principal got extra privileges over time
          any code it signed in the past will now get those privileges!
        nice principle for designing secure systems: always be explicit

namespace management
  what's the mechanism?
    kind-of a capability design, except for class names and not object ptrs
    controlled mapping for global class names
      - can give a specialized impl (that might have more restrictions)
      - can provide no mapping (will get an undefined class exception)
  what are the pitfalls?
    might be able to abuse code that _does_ have access to protected classes
    sort-of like a confused deputy problem: implicit privileges
      no analog of enablePrivilege()

are there things these mechanisms don't protect us from?
  denial of service -- need accountability to track down
  covert channels
  impl bugs in the Java VM or the protection mechanism

mediation
  why do they think it's a bad idea to grant privileges?
    in keykos this seemed helpful in constructing least-privilege domains
    worry: you might be wrong about granting privileges to some applet
      want to revoke privileges and not worry about where they leaked
  can stack inspection leak privileges?
    avoids the luring attack by checking every frame on the path to enablePrivilege()
  can namespace management leak privileges?
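a sketch of the worry (invented names): even if the namespace policy hides the FileInputStream class name from confined code, an instance can still flow in under the visible base type InputStream

```java
import java.io.*;

public class NamespaceLeak {
    // confined code: written against InputStream only; it never names
    // FileInputStream, yet it can read from whatever stream it is handed
    static int confinedRead(InputStream in) throws IOException {
        return in.read();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("secret", ".txt");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write('S');
        }
        // code with FileInputStream access leaks it by upcasting
        try (InputStream leaked = new FileInputStream(f)) {
            System.out.println((char) confinedRead(leaked));  // S
        }
        f.delete();
    }
}
```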
    maybe, if there's a base class like InputStream
      someone could give you a FileInputStream object as an InputStream
      might not be so bad because the file name is fixed in this case

accountability
  capabilities: can know who got the capability at first, but not who used it
  stack inspection: know who called enablePrivilege()
  namespace management: no idea what happened, aside from what the namespace policy was

least privilege
  how do the systems stack up?
    capability trick: passing privileges through untrusted code in a wrapper

in practice, stack inspection seems to be the prevalent Java security model now
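to make the prevalent model concrete, a toy model of the stack walk (nothing like the real JVM implementation; names invented): frames are inspected most-recent-first, an explicit enablePrivilege() on a frame grants the operation, but hitting an untrusted frame first denies it

```java
import java.util.*;

public class StackInspection {
    record Frame(String principal, boolean trusted, Set<String> enabled) {}

    static final Deque<Frame> stack = new ArrayDeque<>();

    static void push(String principal, boolean trusted) {
        stack.push(new Frame(principal, trusted, new HashSet<>()));
    }

    static void enablePrivilege(String target) {
        stack.peek().enabled().add(target);     // current frame vouches for target
    }

    static boolean checkPrivilege(String target) {
        for (Frame f : stack) {                 // walk most-recent-first
            if (f.enabled().contains(target)) return true;  // explicitly enabled
            if (!f.trusted()) return false;     // untrusted frame reached first
        }
        return true;                            // all-trusted stack
    }

    public static void main(String[] args) {
        push("applet", false);     // untrusted caller
        push("file-lib", true);    // trusted library it invokes
        System.out.println(checkPrivilege("FileRead"));   // false: applet on stack
        enablePrivilege("FileRead");                      // trusted code vouches
        System.out.println(checkPrivilege("FileRead"));   // true
    }
}
```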