For each paper, your assignment is two-fold. By 10PM the evening before lecture:
Once you submit your own question and answer (or after the deadline
has passed), you can view the questions and answers that other
students submitted.
Suppose slot_size is set to 16 bytes.
Consider the following code snippet:
char *p = malloc(256);   /* 256-byte heap allocation */
char *q = p + 256;       /* points one byte past the end of the allocation */
char ch = *q;            /* out-of-bounds read */
Explain whether or not baggy bounds checking will raise an exception
at the dereference of q.
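As a refresher on the mechanism, here is a minimal sketch of baggy bounds metadata. The helper names are hypothetical, and the paper's out-of-bounds tolerance and pointer-tagging logic are deliberately not modeled; this only shows the padding and slot-table layout the question depends on.

```python
import math

SLOT_SIZE = 16  # the slot_size assumed in the question

def padded_size(requested):
    """Baggy bounds pads each allocation up to a power of two
    (at least one slot), so its bounds compress to log2(size)."""
    size = SLOT_SIZE
    while size < requested:
        size *= 2
    return size

def slot_entries(size):
    """The bounds table stores one log2(size) entry per 16-byte slot."""
    return [int(math.log2(size))] * (size // SLOT_SIZE)

print(padded_size(256))      # 256: already a power of two, no extra padding
print(slot_entries(256)[0])  # 8: each of the 16 slots stores log2(256)
```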
Suppose a program has a buffer overflow vulnerability which allows
an attacker to overwrite a function pointer on the stack (which is
invoked shortly after the buffer is overflowed). Explain whether or not
an attacker is able to exploit the vulnerability if the same program is run
under XFI.
What's the worst that could happen if one service in OKWS were to leak
its 20-byte database proxy authentication token?
Would a Unix application running in the Unix environment described
in the KeyKOS paper (i.e., KeyNIX) be susceptible to the confused
deputy problem? Explain.
What are the principals that Java uses for access control?
List possible causes of false negatives (missed vulnerabilities)
and false positives (reported problems that are not vulnerabilities)
in the system described by the paper.
Is the descendant policy just as secure as the child policy
for frame navigation? Either explain why it is so, or describe
a concrete counter-example.
Suppose that a web application developer wants to avoid the security
pitfalls described in the ForceHTTPS paper. The developer uses HTTPS
for the application's entire site, and marks all of the application's
cookies as "Secure". If the developer makes no mistakes in doing so,
are there still reasons to use ForceHTTPS? Explain why not, or provide
examples of specific attacks that ForceHTTPS would prevent.
What is the worst that could happen if the private key of a user is
stolen (i.e., becomes known to an adversary)? Similarly, what is
the worst that could happen if the private key of a service is
stolen? How should the compromised user or service recover? Think
about possible vulnerabilities in the recovery process if the user
or service key is known to an adversary.
What are some other situations where an adversary may be able
to learn confidential information by timing certain operations?
Propose some ideas for how an application developer might mitigate
such vulnerabilities.
Could an adversary compromise a server running the system proposed
in the paper without being detected?
Suppose an adversary steals a laptop that uses BitLocker disk encryption.
In BitLocker's design, Windows has a key to decrypt the contents of
the drive.
- What prevents the adversary from extracting this key from Windows?
- If the adversary cannot extract the key, what prevents him or her
from simply using Windows to access files?
Sketch out the Resin filter and policy objects that would be needed to
avoid cross-site scripting attacks through user profiles in zoobar.
Assume that you have a PHP function to strip out JavaScript.
What are the technical risks and benefits of running an onion router
Tor node (i.e., not just a client) on your machine?
Do you think a worm similar to Stuxnet could be designed to compromise
Linux machines? What aspects of Linux or Windows design do you think
make worms easier or harder to write?
What factors control the precision with which Vanish can make data
unreadable after exactly time T?
In Table 1, what causes the secure deallocation lifetime to be
noticeably larger (for some applications) than the ideal lifetime?
How could an adversary circumvent Backtracker, so that an administrator
cannot pinpoint the initial intrusion point?
How does the proposed system deal with an adversary that tries to
frame someone else for the denial-of-service attack by marking the
attack packets they send in some way?
Given that CAPTCHAs can be solved quite cheaply, do you think that
open web sites should continue using CAPTCHAs, switch to some other
mechanism, or not use any mechanism at all (e.g., if you believe
any mechanism will be cheap to break, like CAPTCHAs)? Explain your
reasoning.
A browser cross-site scripting filter is a common client-side XSS
prevention mechanism built into many modern browsers. Here's
a brief description of what it does, in the words of Adam Barth,
one of the creators of one such filter, XSS Auditor: "Basically,
the filter checks each script before it executes to see whether
the script appears in the request that generated the page. If it
finds a match, it blocks the script from executing. [...]". Do
you think such a filter may be effective at detecting DOM-based
(entirely client-side) cross-site scripting? Please explain.
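Barth's description can be modeled crudely as a substring check. This is only an illustration of the idea, not XSS Auditor's actual matching algorithm, and the URLs and function names are hypothetical.

```python
from urllib.parse import unquote_plus

def filter_blocks(script_source, request_url):
    """Block a script iff its source text appears (after URL decoding)
    in the request that generated the page -- i.e., it looks reflected."""
    return script_source in unquote_plus(request_url)

url = "http://victim.test/search?q=<script>alert(1)</script>"
print(filter_blocks("<script>alert(1)</script>", url))  # True: blocked
# A script assembled purely from client-side state never appears in the request:
print(filter_blocks("<script>eval(location.hash)</script>", url))  # False
```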
The paper mentions only one potential false positive, arising from
its use of regular expressions. Explain why it is indeed a false
positive.
Why is it necessary to treat innerHTML field assignments in a special
way in the Gatekeeper analysis?
Security and performance are often at odds in computer systems.
Do you feel that object views are a performant enough mechanism
for everyday use?
What are some of the disadvantages of fast-propagating worms?
The paper discusses the possibility of using memory scanning to deal with the problems of obfuscation, encryption, and polymorphism. While memory scanning will enable signature-based detection, do you see any drawbacks of this approach?
JavaScript malware often uses a variety of environment detection techniques. One such technique is to check the version of the browser, plugins such as Adobe Acrobat or Flash, operating system, etc. before delivering an exploit deliberately designed for that platform and environment configuration, as illustrated by the pseudocode below.
if (browser-is-ie-6 && adobe-flash-version == 10.1) {
    heap_spray();
}
This leads to more reliable, successful exploits for the attacker. Do you see how this pattern may lead to false negatives in a runtime detector?
The paper mentions that typical Android applications execute on top of
a Java virtual machine. What is the role of Java in ensuring overall
security?
Would it be reasonable to run TaintDroid to track what data
applications may be exfiltrating from your phone at all times?
Would it be reasonable to use TaintDroid to enforce policies like
``no application can send my IMEI to the Internet''? Explain why
or why not, and what changes would be needed to make TaintDroid
applicable, if not.
While privacy seems to be one clear benefit of client-side personalization, what are some of the disadvantages of it?
What are the disadvantages of using a human-readable, pseudonymous
identifier for the user within a federated identity system,
instead of a crypto key or a long string of hexadecimal numbers?
How could the operators of the spam value chain, studied in this
paper, make it more difficult to repeat such studies in the future?
After reading this paper, propose some ideas for how you might improve
the usability of securely accessing WebSIS (http://student.mit.edu).
Suppose you are building an online multi-person game. You are
worried that a player can cheat in various ways by modifying
the game software, since it runs on the player's own computer,
or sending arbitrary network messages to your game server.
What security properties could you get by using TrInc in your
game (e.g., a trinket comes in the box when you buy a game)?
What security problems cannot be solved with TrInc?
The authors of the Capsicum paper describe several strategies
for how to use Capsicum in several applications (Section 4).
How would you recommend using Capsicum in the different components
of OKWS? Are there features missing from Capsicum that would have
made it easier to build OKWS?
Suppose you are helping the developers of a complex web site at
http://bitdiddle.com/ to evaluate their security. This web site
uses an HTTP cookie to authenticate users. The site developers
are worried an adversary might steal the cookie from one of the
visitors to the site, and use that cookie to impersonate the
victim visitor.
What should the developers look at in order to determine if
a user's cookie can be stolen by an adversary? In other words,
what kinds of adversaries might be able to steal the cookie of one
of the visitors to http://bitdiddle.com/, what goes "wrong"
to allow the adversary to obtain the cookie, and how might the
developers prevent it?
Note: an exhaustive answer might be quite long, so you
can stop after about five substantially different issues that the
developers have to consider.
Why is it important to prevent access to scope objects?
Suppose an adversary discovers a bug in NaCl where the checker
incorrectly determines the length of a particular x86 instruction.
How could an adversary exploit this to escape the inner sandbox?
Based on the different schemes described in the paper, what do
you think would be a reasonable choice for authenticating users in
the following scenarios, and what trade-offs would you have to make:
- Logging in to a public Athena machine in a cluster.
- Checking your balance on a bank's web site via HTTPS from a private laptop.
- Accessing Facebook from an Internet cafe.
- Withdrawing cash from an ATM.
Which of the vulnerabilities described in this paper (A1 through A5)
do you think could have been found with some kind of automated tool
(such as fuzzing or program analysis) and what might such a tool
look like?
Think about other applications that you run on your mobile phone.
How might you apply Koi's techniques to help ensure privacy in
these other applications? What other techniques could be useful?
Could large email providers, such as GMail, Yahoo Mail, or
Hotmail, use ideas from SybilLimit to better detect spam email?
What assumptions would they need to check?
First, ignoring range metadata, what constraint would KINT generate
for the count variable in the code from Figure 3?
Second, how can you simplify the snippet of code in Figure 1 using
the NaN integers as described in Section 7?
Steve Bellovin's ``A Look Back'' paper was published in 2004,
almost 10 years ago (and the paper itself is a retrospective on
his earlier paper from 1989). Which of the security problems in
the TCP/IP protocol suite described in Steve Bellovin's paper are
still relevant today?
After you have read about Django's security mechanisms, think
back to ``The Tangled Web''. What security pitfalls still remain
for developers using Django? Could you extend Django to help
developers avoid those pitfalls, in a style similar to Django's
existing protections?
For the different parts of the browser state shown in Tables 1-3,
what are the security implications of a "yes"? Consider both of
the two threat models that the authors put forward for private
browsing.
What do Dropbox developers gain from the obfuscation measures
described in the paper? Could they have made it impossible for
the authors to perform this kind of reverse-engineering?
Consider the following query:
SELECT SUM(GREATEST(salary, 100)) FROM employees;
The GREATEST(a, b) function returns the larger
of a and b, so the above query returns
the sum of all salaries in the employees table,
rounding up any salaries below 100 to 100.
How could CryptDB rewrite this query to execute over
encrypted data, using the encryption schemes described
in the paper?
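To make the plaintext semantics concrete, here is the computation on hypothetical salary data; this is what any rewritten query over encrypted values must reproduce.

```python
def greatest(a, b):
    """SQL's GREATEST(a, b): the larger of the two arguments."""
    return a if a >= b else b

salaries = [50, 90, 150, 300]  # a hypothetical employees.salary column
total = sum(greatest(s, 100) for s in salaries)
print(total)  # 100 + 100 + 150 + 300 = 650
```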
For a BROP attack to succeed, the server must not rerandomize canaries
after crashing. Suppose that, after a server crashes, it creates
a new canary by SHA1-hashing the current gettimeofday() value. Is
this new scheme secure?
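The proposed scheme can be sketched as follows. The exact byte encoding of the timestamp and the canary width are assumptions made for illustration; what matters when reasoning about the question is how much of the hash input an attacker can predict.

```python
import hashlib

def new_canary(tv_sec, tv_usec):
    """Canary = SHA1(current gettimeofday() value), truncated to 8 bytes.
    gettimeofday() has only microsecond resolution, so if an attacker
    roughly knows when the server crashed, the space of candidate
    inputs is small enough to enumerate offline."""
    data = f"{tv_sec}.{tv_usec:06d}".encode()
    return hashlib.sha1(data).digest()[:8]

# Deterministic: the same crash time always yields the same canary.
print(new_canary(1700000000, 123456) == new_canary(1700000000, 123456))  # True
```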
KLEE uses a satisfiability (SAT/SMT) solver to implement symbolic
execution. What would go wrong if KLEE did not use a SAT/SMT solver,
and instead tried all branches? What would go wrong if KLEE just
guessed randomly about what a symbolic value could be?
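The scale of the problem can be made concrete with some back-of-the-envelope arithmetic (the branch count is a hypothetical number, chosen only to show the growth rates involved):

```python
# Exploring every branch concretely: the path count doubles at each
# independent symbolic branch.
branches = 32
print(2 ** branches)  # 4294967296 paths from only 32 branches

# Guessing a symbolic value at random: a single 32-bit equality check
# like `if (x == 0xdeadbeef)` is satisfied by one value out of 2**32.
print(1 / 2 ** 32)    # ~2.3e-10 chance per random guess
```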
What kinds of security vulnerabilities are still possible in an
Ur/Web application? One approach might be to keep the OWASP
Top-10 list in mind as you are reading the Ur/Web paper, and
consider whether Ur/Web's features can eliminate certain classes
of bugs, or whether it's still possible to have vulnerabilities.
A note from the paper author: this paper is a draft of a camera-ready
conference paper, and if you have any bug reports or suggestions about
the paper, the author (Adam Chlipala, adamc@csail.mit.edu) would
appreciate your feedback!
Security engineering classes often focus on technological mechanisms
such as cryptography or programming techniques (i.e., controls) to
prevent security problems, but safety and biomedical engineering
classes tend to focus on risk management to balance risks and
benefits. Consider the situation of requiring fast emergency access
to control an implanted medical device that must also remain secure.
If the overarching goal is patient safety, how might your choice
of security mechanisms differ from traditional computing contexts?
How do we achieve both safety and security while balancing risks
and benefits that ensure patient safety?