Trusted Computing
=================

what's the problem these guys want to solve?
  you're talking to a server over the network
  you're worried the server might be compromised
  want to have some assurance of the server's integrity
  but you might not trust what that server says on its own -- it can lie
  ideal model: something tells you if the server is compromised or not

what are the possible definitions of integrity?
  Biba: information flow control, like Flume, only approved inputs
    problem: too strict; on the Internet you will have low-integrity inputs
  Clark-Wilson: you can have low-integrity inputs through approved transforms
    problem: hard to verify all of the approved transforms

what's their definition of integrity?
  server started out running a known-good Linux kernel
  application binary was known-good, shared libraries, config files, etc

couple of ways to get their version of integrity
  "secure boot": every piece of code must be signed, boot loaders check sig
    to prove to clients you're OK, store some private key
    clients must trust that your system is doing secure boot
    .. and that there's no way to get the key without secure boot
  "authenticated boot": remember what code was run, no sigs needed
    to prove to clients you're OK, present a proof of the code you ran

what kinds of problems does this guard against?
  network attackers redirecting packets to another machine (but that's SSL)
  attacker steals admin's password, replaces Linux kernel, app binaries,
    config files
  attacker gains physical control of your machine, boots it from a USB drive
  attacker steals the SSL certificate (and its private key) and tries to
    impersonate the server?
    well, attacker can probably replicate the exact same configuration
      will look OK if attacker is running approved binaries
    solutions?
      client can remember exact signing key used by server's TPM
      server can keep data encrypted on disk (so attacker can't steal it,
        even if they clone the server)
      server can use keys stored in TPM sealed storage
        this way, must run on the original hardware, for what that's worth..
  for all of this, you need to tie the application's crypto protocol to the
    verification protocol.. paper doesn't describe how that works

what kinds of problems don't they protect against?
  find a buffer overflow in the application, or in the kernel
  attacker gaining root access: can debug, modify any process
  application logic bugs, what Resin finds: XSS, SQL inj, password disclosures
  attacker physically tampers with server's DRAM

TPM hardware, overall architecture

              DRAM       /-- BIOS
               |         |
   CPU --- northbridge --+-- TPM

what does the TPM hardware provide?
  internally: PCRs, a private key, trusted counters
  reset: should only happen when the entire system resets
    sets all PCRs to a fixed value (all-ones)
  measurement: extends PCRs with a new "measurement"  (see sketch below)
    PCRx = SHA1 ( PCRx || measurement )
  attest: use TPM's key to sign user-supplied message and some PCRs
  seal/unseal: (not used in this paper)
    use one of TPM's private keys to encrypt/decrypt data
    seal (authenticated-encryption) data along with some PCR values
    TPM will only decrypt data when PCR values match sealed values
    can be used to keep secret keys accessible only to specific software
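a minimal Python sketch (mine, not the paper's) of extend-and-replay: given the
full sequence of measurements, anyone can recompute the PCR value the TPM
should hold, and no entry can be dropped or reordered without changing it

  # sketch: replaying TPM PCR extends in software
  # the TPM computes PCRx = SHA1(PCRx || measurement); a verifier holding the
  # full measurement list can recompute the PCR it expects the TPM to report
  import hashlib

  PCR_SIZE = 20                      # SHA-1 digest length in bytes

  def reset_pcr() -> bytes:
      # the notes say PCRs reset to a fixed all-ones value at system reset
      return b"\xff" * PCR_SIZE

  def extend(pcr: bytes, measurement: bytes) -> bytes:
      # measurement is itself the SHA-1 hash of the loaded code or file
      return hashlib.sha1(pcr + measurement).digest()

  def replay(measurements) -> bytes:
      pcr = reset_pcr()
      for m in measurements:
          pcr = extend(pcr, m)
      return pcr

  # example with two made-up "files": order matters, and dropping either
  # measurement produces a different final PCR value
  m1 = hashlib.sha1(b"bootloader image").digest()
  m2 = hashlib.sha1(b"kernel image").digest()
  print(replay([m1, m2]).hex())
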
why should clients trust the TPM's signature?
  certificate from the hardware manufacturer claims TPM key is "authentic"
  means that you trust every TPM key in the world not to be compromised

TPM attacks
  usually hard to tamper with CPU's internals or with TPM internals
  tampering with the bus connecting the TPM is much easier
    can reset the TPM without resetting the CPU
    PCRs cleared, can fake anything

chained measurements
  who measures what?
    BIOS trampoline -> BIOS -> grub -> Linux kernel -> ...
  how can we trust each step?
    we measured the code that's trusted to perform the next measurement
  need to also measure important config files that affect how we run code
    paper says grub.conf not important to measure: why?
  what happens to arguments to a program, e.g. argv or kernel params?
    presumably the app should include them in the measurement somehow
    (again, paper doesn't say how that happens..)

SKINIT
  measuring everything along the way is pretty inflexible
  why this limitation?  because CPU & TPM must reset simultaneously
  with hardware virtualization, can place CPU in a known state without reset
  SKINIT instruction lets you do just that
    start a new HW VM, protected from any outside interference (at first)
    CPU sends special msg to TPM indicating it's safe to reset some PCRs
    also performs a measurement of VM contents and sends it to TPM
  avoids the need to trust BIOS, etc upfront

what threat models is TPM hardware useful for?
  you trust your HW+SW, want to prevent attackers from bypassing software
    roughly what this paper is about
  you trust some piece of hardware but don't trust all of the software on it
    SKINIT lets you start from a known state
  DRM: someone else trusts your hardware but does not trust you
    data (audio, video)
    applications (software license tied to a TPM key)

how does the paper's integrity measurement system work?
  what gets measured?
    chain-measurement from BIOS -> loader -> kernel
    kernel keeps a list of measured files: measurement list (ML)
      every loaded kernel module
      every executable that ran on the system
      every config file or other file that the application asked to measure
  assumption: any attack (that this model is concerned about) must leave a
    trace in either the initial chain measurement, or in the kernel's
    measurement list
    attacker runs some malicious code on the system as root: added to ML
  what if an attacker tampers with the kernel's ML?
    each measurement list entry extends a particular PCR
    modified measurement list won't match the PCR anymore
  measuring code for interpreted languages
    runtime responsible for adding a measurement of the interpreted code
      bash, java, etc
    can think of the application's config files as interpreted code
  TOCTTOU races in measurement
    problem: file measured, then modified by attacker, then read by application
    solution: kernel invalidates measurement if it detects modifications

how would a client verify the server's integrity?  (see sketch below)
  server provides a signed message that contains PCR values
    verify PCRs match what's expected (BIOS, grub, Linux, ..)
  server response includes measurement list for application integrity
  how does the client ensure freshness?
    response contains a nonce signed by the server TPM's private key
  how do we know the ML is fresh?
    should match one of the PCRs in the signed msg
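a rough Python sketch of that client-side check; the quote layout, the
signature helper, and the choice of ML PCR index are assumptions made up for
illustration, not the paper's wire format

  # sketch (assumed formats, not the paper's protocol): verifying an attestation
  import hashlib

  ML_PCR = 10              # which PCR the kernel extends for the ML (assumed)

  def replay_ml(ml_entries, initial=b"\x00" * 20) -> bytes:
      # recompute the ML PCR by replaying every measurement-list entry
      pcr = initial
      for entry_hash in ml_entries:
          pcr = hashlib.sha1(pcr + entry_hash).digest()
      return pcr

  def tpm_signature_valid(tpm_pubkey, signed_bytes, sig) -> bool:
      # placeholder: check the TPM's RSA signature with a real crypto library
      raise NotImplementedError

  def check_attestation(tpm_pubkey, nonce, quote, quote_bytes, sig,
                        ml_entries, expected_boot_pcrs) -> bool:
      # 1. authenticity: quote must be signed by the server TPM's key
      if not tpm_signature_valid(tpm_pubkey, quote_bytes, sig):
          return False
      # 2. freshness: the signed quote must embed the client's nonce,
      #    so an old quote can't be replayed
      if quote["nonce"] != nonce:
          return False
      # 3. boot chain: PCRs for BIOS/grub/kernel must match known-good values
      for idx, expected in expected_boot_pcrs.items():
          if quote["pcrs"][idx] != expected:
              return False
      # 4. measurement list: replaying the ML must reproduce the signed ML PCR,
      #    so entries can't be added, removed, or reordered after the fact
      return replay_ml(ml_entries) == quote["pcrs"][ML_PCR]
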
what does the client do with the measurements?
  hard problem: need to decide if we trust the server based on the code it ran
  what the paper seems to suggest:
    client has some idea what the server should be running
    verifies that nothing else ran (unknown root processes would be bad)
    verifies all config files match what the client expects
  problem: measurement list doesn't say what the files were used for!
    clever attacker may find a way to use one legal file in place of another
      might have unintended effects!
    e.g. running some shell script as root may be really bad, even if it's OK
      to run it under the web server's account
    e.g. there may be different config files for different services, and using
      one service's config file for another will result in an insecure service!
    recall the "be explicit" principle from the Kerberos lecture!
    paper's design seems to be missing context for measurements
      (at least as described)
  pretty inflexible: what if the admin wants to change some config files?
    e.g. change the root password?
      need to inform all clients what the new pw hash is?!
  how would upgrades work?
    presumably client needs to know about all acceptable versions of software
      (see the sketch at the end of these notes)
  alternative: server-side code that performs the "secure boot" equivalent
    checks some signature on software running on the server
    measurement includes the signer's public key / name / version?
    need to trust the piece of server-side code that will check sigs
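one possible shape of the client-side policy discussed above, sketched in
Python; the allowlist layout and hashes are made up, and it still has the
missing-context weakness: an approved file used in an unapproved role looks OK

  # sketch (assumed policy, not from the paper): deciding whether to trust a
  # server given its measurement list of (filename, hash) pairs; mapping each
  # filename to a *set* of acceptable hashes is one way to tolerate upgrades
  def evaluate_ml(ml_entries, allowlist) -> bool:
      for filename, file_hash in ml_entries:
          accepted = allowlist.get(filename)
          if accepted is None:
              return False          # unknown program or config file ran
          if file_hash not in accepted:
              return False          # known file, but not an approved version
      return True

  # hypothetical allowlist; hashes abbreviated for readability
  allowlist = {
      "/bin/bash":       {"9a..old", "b3..new"},
      "/etc/httpd.conf": {"4f..v1"},
  }
  print(evaluate_ml([("/bin/bash", "b3..new")], allowlist))   # True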