Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Introduction

A major tenet of western political philosophy is the idea that freedom of speech is an essential and unalienable human right.  Logically, this same ideal extends to information published on the Internet, for what is the Internet but a widely used, virtual, publicly accessible network of electronic printing presses?  While the concept of a free Internet permeates much of the western world, many countries do not subscribe to the idea that freedom of the press is a human right.  Such countries have taken wide-scale action to censor the Internet without completely disconnecting their citizens from such an economically vital resource.  These governments use sophisticated filtering techniques such as IP and DNS blacklisting, as well as traffic analysis and deep packet inspection, to try to distinguish “appropriate” traffic from “inappropriate” traffic.

While there have been some successful attempts at circumventing such filters in the past, all of these attempts have fundamental limitations.  For example, the most widely used anti-censorship system, Tor, has a finite number of entry nodes that can be discovered (and blacklisted) by using a Sybil attack.  To address these limitations, Wustrow, Wolchok, Goldberg and Halderman have published a paper entitled “Telex: Anticensorship in the Network Infrastructure” at this year’s USENIX Security Symposium.  In it the authors propose a new system for circumventing Internet censors (supposedly) anonymously.  Let’s see how it measures up.

The Telex Anticensorship System

Telex is designed to achieve several goals.  First, Telex must be unblockable, meaning that a censor should not be able to block Telex access without also blocking connections to an essential web service.  Second, Telex should be confidential, in that the government must be unable to differentiate between a Telex connection and a valid TLS connection to an unblocked site.  Third, Telex must be easy to deploy, meaning that a failure of Telex should not interfere with normal operations at either the ISP or the client side.  Finally, Telex should be easy to use—after a short startup procedure it should be essentially invisible to the user.

Telex relies on a “friendly” ISP (or rather, a group of friendly ISPs) outside of the censored network to recognize seemingly random tags that are embedded in the nonce field of legitimate TLS connections to an uncensored website.  In order to do so, the friendly network must have a Telex station, and that station must sit on the path between the censored network and the unblocked site.  To guarantee this, we either need many Telex-supporting ISPs around the unblocked website, or many Telex stations around the censored network, so that BGP is likely to pick a route through an ISP with a Telex station.  Assuming such a setup, a friendly network checks every TLS connection for a Telex tag by using its private key to attempt to verify the nonce field.  If a Telex tag is recognized, the ISP’s Telex station allows the TLS handshake to continue, but the client modifies the handshake so that it leaks the session key to the Telex station.  Once the TLS connection with the unblocked site is established, the Telex station immediately sends an RST packet to the server to force it to drop the connection.  It then reroutes all traffic on the connection to its actual destination.  The client tunnels its traffic over the Telex connection by treating it as a SOCKS proxy, with packets addressed to the unblocked site but actually captured and rerouted by the Telex station.  Performance tests have shown that Telex is only minimally slower than a direct TLS connection between client and server.
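The tag-recognition step above can be sketched as a toy model.  One loud caveat: real Telex builds its tags from elliptic-curve Diffie-Hellman, so the client needs only the station’s public key; the shared HMAC secret below is a simplification purely to illustrate the control flow of “looks random to everyone except the station.”  All names and key material are invented.

```python
import hashlib
import hmac
import os

# Stand-in for the Telex station's secret.  (Real Telex uses a public/private
# keypair; a shared HMAC key is used here only to keep the sketch short.)
STATION_KEY = b"station-secret"

def client_nonce(telex):
    """Client side: produce a tagged nonce for Telex, or a random one for plain TLS."""
    seed = os.urandom(16)
    if telex:
        tag = hmac.new(STATION_KEY, seed, hashlib.sha256).digest()[:16]
    else:
        tag = os.urandom(16)  # an ordinary, untagged nonce
    return seed + tag

def station_sees_tag(nonce):
    """Telex station: try to verify the tag; only the key holder can tell."""
    seed, tag = nonce[:16], nonce[16:]
    expected = hmac.new(STATION_KEY, seed, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(tag, expected)

print(station_sees_tag(client_nonce(telex=True)))   # True: this flow is diverted
print(station_sees_tag(client_nonce(telex=False)))  # False: ordinary TLS passes through
```

To a censor without the station’s key, both nonces are indistinguishable from random bytes, which is exactly the confidentiality property the paper claims.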

Feasibility

While Telex’s implementation is cryptographically secure, ingenious, and performant, I believe that it has two serious drawbacks that will prevent it from ever being deployed in the real world.  The first is that the paper relies entirely on the mythical concept of a “friendly” ISP.  This seems wholly unreasonable for several reasons.  First, an ISP is a business whose purpose is to make money, not to spread goodwill in countries in which it has no real economic interest.  In addition, implementing Telex will be costly for the ISP, since it requires the construction of Telex stations, which must perform cryptographic operations and deep packet inspection on all HTTPS connections routed through them.  Telex will also significantly increase the traffic load on these ISPs: each must now route traffic between the censored network and any network a Telex user wants to reach, rather than only the traffic that BGP would normally send through it.  This creates overhead and additional costs without increasing revenue.  Even worse, what if such an ISP ever decides to do business in a country whose government it is actively attempting to subvert?  Obviously, a country would react unfavorably to overtures from companies that are attempting to break its laws.  As a result, I see a whole bunch of downsides on the ISP’s end and absolutely no upsides to participating in Telex.  Add to this the requirement that many ISPs must participate so that HTTPS traffic to an unblocked site has a good chance of being routed through one of them, and we have a real problem with ever getting Telex deployed in the real world.

Now let’s tackle the issue of confidentiality.  The paper assumes an attacker that cannot perform statistical traffic analysis on any connections it is monitoring.  I find this idea highly unrealistic.  In reality, a country that is looking to censor the Internet can look at what a “normal” HTTPS session looks like and then compare it to what a Telex session looks like (as a hint, a Telex session is probably going to be MUCH longer than a real HTTPS session, and it will probably look different under traffic pattern analysis).  It is also important to point out that a repressive government need not prove that a citizen is using Telex–it need only develop a reasonable suspicion.  Repressive governments don’t need warrants to investigate what their citizens are up to.

Overall, I consider Telex an elegant idea from a theoretical perspective, but I don’t think it can be used in practice.  I believe that its reliance on friendly ISPs and its probable vulnerability to traffic analysis are the nails in its coffin.  Despite these weaknesses, this is still a good paper, and it is important to read and learn from papers like it, even if they are unlikely to work in practice, since many practical schemes are based on initially infeasible schemes that were later modified to be more practical.

Reference:
E. Wustrow, S. Wolchok, I. Goldberg, and J. A. Halderman. Telex: Anticensorship in the network infrastructure. To appear in the proceedings of the 20th USENIX Security Symposium, 2011.

Introduction

Attribute-Based Encryption (ABE) has become a major area of cryptographic research over the past five years.  Originally conceived as a way to add error tolerance to identity-based encryption (IBE) for applications such as biometrics, ABE has since grown into one of the most active topics in cryptography.  Ironically, IBE has been shown to be a special case of ABE, so the new technology has eclipsed the scheme it spawned from in size, complexity and power.  ABE began by generalizing the definition of an identity from a string to a set of attributes.  This allows a message to be encrypted to a given identity along with a threshold value; the owner of an identity can decrypt the message if the attribute sets underlying the public key and private key are within that threshold of similarity.  This concept was later extended to allow Boolean policies over attributes to determine the requirements for decrypting a ciphertext.  Additional research has created schemes for decentralizing ABE and for using ABE to sign messages.

Section 2 of this post will cover the basics of ABE as a generalization of IBE and will explain how Boolean policies can be created to require certain attributes to decrypt a message.  Sections 3 and 4 will deal with how these policies are attached to messages.  Section 5 details how to decentralize ABE such that attributes can be assigned by many different attribute authorities, some of which may be corrupt.  Finally, section 6 discusses the concept of attribute-based signatures to provide message authentication for ABE schemes.

Fuzzy Identity-Based Encryption and Attribute Based Systems

ABE was first conceived by Sahai and Waters in their paper “Fuzzy Identity-Based Encryption” [11] as a means of allowing for a certain amount of error tolerance in IBE.  At the time, identities used for IBE were strings associated with a given user.  For example, the string mike.jones@gmail.com may be associated with a Gmail user named Mike Jones.  In IBE, such an identity string is expected to be publicly known and is used to derive a public key for sending encrypted messages to the owner of the identity.  This makes obtaining a public key to send private messages to a user much simpler than in a typical public key infrastructure, where a user’s certificate must be obtained and verified before messages can be encrypted to him or her.

While IBE is useful in many applications, certain implementations were impossible under its original construction.  One such example is using biometrics to derive a user’s public key.  In such a situation, two biometric readings of the same person will often differ by a small amount, and IBE is useless.  Sahai and Waters set out to change this by creating an IBE scheme that allows a certain amount of error tolerance in the measurements used to derive a public key.  Their scheme views an identity as a set of descriptive attributes rather than as a string.  It allows a user holding a secret key for identity ω′ to decrypt a message encrypted under identity ω if and only if |ω ∩ ω′| ≥ d, where d is a threshold value.  In addition, the scheme issues each user a random polynomial.  Any time an attribute’s private key component is issued to a user, that user’s random polynomial is used to generate the component.  In this way, different users cannot combine their separate attributes to decrypt a message that none of them could decrypt individually, an attack known as collusion.  The Sahai-Waters construction is secure in the Fuzzy Selective-ID model of security, which reduces to the hardness of the Decisional BDH assumption.  In addition, the paper notes that the scheme can be extended to be CCA secure in a straightforward manner.
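The decryption condition above is just a set-overlap test.  Here is a minimal sketch of that condition alone; the feature names and threshold are made up for illustration, and none of the underlying pairing-based cryptography is modeled.

```python
def can_decrypt(key_identity, ct_identity, d):
    """Fuzzy IBE condition: decryption works iff |ω ∩ ω′| ≥ d."""
    return len(set(key_identity) & set(ct_identity)) >= d

# Hypothetical biometric feature sets: an enrollment reading (ω) and a
# later, slightly noisy reading (ω′) of the same person.
omega = {"ridge-1", "ridge-2", "whorl-3", "loop-4"}
omega_prime = {"ridge-1", "ridge-2", "whorl-3", "arch-9"}

print(can_decrypt(omega_prime, omega, d=3))  # True: 3 of 4 features match
print(can_decrypt(omega_prime, omega, d=4))  # False: too much measurement error
```

The threshold d is what tunes the scheme’s error tolerance: a legitimate rescan clears it, while a different person’s reading should not.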

A huge advantage of ABE over other encryption schemes is that a user may use it to store, on a public server, private documents addressed to groups of users that fit a specific set of requirements.  This works because in ABE each user is associated with a set of attributes.  A party may encrypt a message under a specific set of attributes and then set a threshold value for the number of those attributes required to decrypt the message.  For example, a user might encrypt a message under the attributes MIT and Harvard with a threshold value of 1.  In this case, any party that possesses the attribute MIT or Harvard (or both) can decrypt the message.  Similarly, if the threshold value were set to 2, then only users with both the MIT and the Harvard attributes could decrypt the message.

Secure Attribute-Based Systems with Non-Monotonic Access Structures

While the Sahai-Waters construction of ABE was an excellent first step, several limitations in their scheme left much room for expansion.  In particular, while the Sahai-Waters construction allowed a message to be encrypted to its intended recipients by selecting attributes, and allowed a threshold to be set such that only k of n attributes were required for decryption, it did not detail a way to encrypt messages under more expressive policies.  Additionally, for a k-of-n construction, k and n had to be fixed constants across all ciphertexts created in an attribute system.  Finally, the Sahai-Waters implementation of threshold values required a large number of computations, slowing down the encryption scheme.

These issues were addressed by Pirretti, Traynor, McDaniel and Waters in the paper “Secure Attribute-Based Systems” [10].  First, the threshold calculation function was replaced with a random oracle.  This has two main advantages: it speeds up the key generation and encryption algorithms dramatically, and it allows ciphertexts to contain variable numbers of attributes.  On the other hand, the random oracle model also weakens the security of the scheme by making its security rest solely on that of the cryptographic hash function used.  This optimization makes threshold computations in ABE significantly faster than computations for equivalent functionality in RSA.

The second and most important contribution of the paper is the idea of attribute policies.  An attribute policy is a plaintext Boolean expression that defines the combination of attributes necessary to decrypt a given message.  To get this flexibility, nothing but the basic structures of ABE are used, combined in novel ways.  The paper starts with atomic policies, that is, policies over subsets of the complete attribute set A.  Note that from the original ABE paper we already have a k-of-n threshold function.  The authors show that AND and OR gates can be constructed by creating an n-of-n threshold and a 1-of-n threshold over attributes, respectively.  They also show that once we get past the atomic building blocks, we can specify more complex combinations by combining policies with AND and OR gates.  To encrypt under an OR of two policies, we simply concatenate the encryption of the object under each policy: E(oi, P1) ∥ E(oi, P2).  If a user has attributes satisfying either policy, the user can decrypt at least one of the encryptions of the object.  To create a logical AND, a user encrypts an object under the first policy, encrypts the result under the second policy, and so on, nesting the encryptions.  To decrypt such a message, the receiver must decrypt under each policy sequentially.  The authors point out that this construction does allow collusion attacks, however, since one attacker may be able to decrypt the layer encrypted under one policy while other attackers decrypt the layers under the other policies.
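Assuming we already have an encryption primitive with an all-of-these-attributes (n-of-n) policy, the OR-by-concatenation and AND-by-nesting constructions can be sketched as follows.  The “encryption” here is a plain dictionary, purely to show the structure; it is not a real ABE cipher, and the attribute names are invented.

```python
# Toy "encryption": record the policy with the payload; "decryption" succeeds
# only when the user's attribute set covers the policy (an n-of-n threshold).

def enc(payload, policy):
    return {"policy": frozenset(policy), "payload": payload}

def dec(ct, attrs):
    return ct["payload"] if ct["policy"] <= set(attrs) else None

def enc_or(payload, p1, p2):       # OR: concatenate independent encryptions
    return [enc(payload, p1), enc(payload, p2)]

def dec_or(cts, attrs):
    for ct in cts:                 # satisfying either policy opens one layer
        m = dec(ct, attrs)
        if m is not None:
            return m
    return None

def enc_and(payload, p1, p2):      # AND: nest one encryption inside the other
    return enc(enc(payload, p1), p2)

def dec_and(ct, attrs):            # must peel both layers sequentially
    inner = dec(ct, attrs)
    return dec(inner, attrs) if inner is not None else None

msg = "grade report"
print(dec_or(enc_or(msg, {"MIT"}, {"Harvard"}), {"Harvard"}))          # decrypts
print(dec_and(enc_and(msg, {"MIT"}, {"Harvard"}), {"MIT", "Harvard"})) # decrypts
print(dec_and(enc_and(msg, {"MIT"}, {"Harvard"}), {"MIT"}))            # None
```

The collusion problem the authors note is visible in `dec_and`: two users who each satisfy only one layer could pool their partial decryptions to peel the nesting together.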

Finally, the paper shows that ABE can be extended to support variable k and n values in a k-of-n construction.  Sahai and Waters themselves proposed two possible solutions to this problem in their original paper on fuzzy IBE.  Their first solution was to place an upper limit on the number of attributes in a system and then provide “default attributes” that act as placeholders when fewer than the maximum number of attributes count toward the threshold.  However, this scheme is not fully expressive.  The other Sahai-Waters solution was to create separate cryptosystems, each with a different k value.  This scheme is much more expressive, but would require a large number of cryptosystems to be feasible.  The authors instead propose a hybrid of the two: the second scheme is used to create n separate cryptosystems, for 1 to n, where n is the maximum number of attributes in the universe, and the first scheme is then used to vary the k value within each cryptosystem.

These optimizations to ABE allow for many interesting applications.  One proposed application in the paper was a HIPAA compliant electronic medical record encryption system, where parts of medical records could each be encrypted with a different policy.  For example, a patient’s family medical history could be encrypted under one policy that might allow access for just doctors that take United Healthcare, while the same patient’s current prescriptions can be encrypted under a policy that might allow access for either doctors or nurses in offices that accept United Healthcare.  Another proposed application was a social networking or online dating system in which each user can specify his or her own attributes and also policies for what attributes a person must have to view their profile.

One limitation of this work is that the proposed ABE scheme was still only able to express monotonic access structures; no method existed to require the absence of an attribute in a given policy.  Ostrovsky, Sahai and Waters later proposed a modification to ABE that added this functionality in the paper “Attribute-Based Encryption with Non-Monotonic Access Structures” [9], but the details of its implementation are highly mathematical and outside the scope of this survey.  Suffice it to say that current ABE schemes can express any access structure, including non-monotonic ones (meaning that the lack of an attribute can be expressed within attribute policies).

Key-Policy Attribute Based-Encryption

Existing ABE schemes can be divided into two forms: Key-Policy ABE (KP-ABE) and Ciphertext-Policy ABE (CP-ABE).  KP-ABE was introduced in the paper “Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data” by Goyal, Pandey, Sahai and Waters [3].  It deviates from the original Sahai-Waters construction, in which users have a set of attributes, documents are encrypted under a set of attributes, and a user can decrypt a ciphertext if there is enough overlap between the user’s and the ciphertext’s attribute sets to meet the threshold.  As detailed above, once more complex access structures were built on that scheme, a fundamental limitation arose: there was no way to create an AND gate that was not susceptible to collusion attacks.  This is a critical flaw in the practicality of the scheme, since collusion resistance is considered a necessary condition for the security and usefulness of ABE.  KP-ABE and CP-ABE are complementary schemes that both address this limitation.

In Key-Policy ABE, data is annotated with attributes and access policies are associated with keys.  This sounds a bit counterintuitive, so an (admittedly contrived) example is in order.  Suppose a lieutenant wants to encrypt orders for his superior officer.  The lieutenant encrypts his message and annotates it with the attributes “captain” and “Easy Company” (the name of his company).  Now, say the captain’s superior officer (a major) wants to read the message.  The major’s access policy would be: (“major” OR “captain” OR “lieutenant” OR “private”) AND “Easy Company”.  In this way, a major can read messages addressed to his subordinates within the company.  Note that this example is a form of Hierarchical IBE (HIBE), a well-known application of ABE discussed in the paper.  More information on HIBE as implemented with ABE can be found in the paper “Unbounded HIBE and Attribute-Based Encryption” by Lewko and Waters [5].

In KP-ABE, private keys are themselves Boolean expressions composed of attributes.  When a user attempts to decrypt a message, the attributes annotating the document are matched against the attributes in the Boolean expression.  If the expression is satisfied, the user is able to decrypt the document.  The paper implements this as a tree access structure: leaf nodes are associated with attributes and interior nodes are logic gates.  A traversal of the tree determines whether or not the policy is satisfied by the attributes present in the document.
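The tree traversal can be sketched directly.  The policy below is the military example from the previous section; only the Boolean evaluation is modeled here, not the underlying secret-sharing machinery that actually enforces it.

```python
def satisfies(node, attrs):
    """Evaluate a KP-ABE style access tree: leaves are attribute names,
    interior nodes are ("AND", children) or ("OR", children)."""
    if isinstance(node, str):            # leaf: is the attribute present?
        return node in attrs
    gate, children = node
    results = (satisfies(child, attrs) for child in children)
    return all(results) if gate == "AND" else any(results)

# The major's key policy from the example above.
policy = ("AND", [("OR", ["major", "captain", "lieutenant", "private"]),
                  "Easy Company"])

print(satisfies(policy, {"captain", "Easy Company"}))   # True
print(satisfies(policy, {"captain", "Baker Company"}))  # False: wrong company
```

In the real scheme, each interior node is a threshold gate realized with secret shares (AND is n-of-n, OR is 1-of-n), so a satisfied traversal corresponds to being able to reconstruct the decryption secret.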

KP-ABE also allows a user to delegate new attributes, so long as they are more restrictive than the attributes the user already holds.  For example, a professor with the attribute “Professor” may use his secret key for that attribute to derive a key for a narrower attribute, say “Lab A Professor”, covering only professors in his own lab.  He can then distribute this key to the professors in his lab and subsequently encrypt messages to them alone.  It should be noted that because the KP-ABE paper includes a re-randomization step in the key derivation process, the derived key is completely independent of the original key, so no information about the key used to derive the new attribute key is leaked.

Ciphertext-Policy Attribute-Based Encryption

Ciphertext-Policy Attribute-Based Encryption (CP-ABE) was first conceived by Bethencourt, Sahai and Waters in the paper “Ciphertext-Policy Attribute-Based Encryption” [1].  It is essentially the mirror image of KP-ABE.  Rather than having the message annotated with attributes, the message is encrypted under an access structure formulated as a Boolean expression over a set of attributes.  Users possess sets of attributes, and if a user’s attributes satisfy the access policy of the ciphertext, the message can be decrypted and the plaintext recovered.  The CP-ABE paper makes a few key contributions to ABE.  In particular, the scheme is collusion-resistant, can be easily extended into a CCA-secure implementation as described in the paper, and provides rudimentary revocation support (that is still far from optimal).  Finally, the paper details the cpabe toolkit, a complete implementation of the CP-ABE scheme the authors developed.[1]

Collusion resistance is accomplished by randomizing each user’s private key so that attribute key components from different users cannot be combined into a valid decryption key for a document.  Unlike in Sahai-Waters, however, the secret sharing is embedded in the ciphertext.  To decrypt a given ciphertext, a user must possess all of the required key components in order to remove the blinding values from partial decryptions of the ciphertext.  If users attempt to collude, they cannot compute the value needed to remove the blinding values, because the different users’ private keys contain randomness that makes them incompatible with each other.

The paper contains a scheme for key revocation that works by baking expiration dates into keys: the policy language is extended with numerical attributes and integer comparisons, so a date can be encoded as a numerical attribute and compared within a policy.  The scheme then allows decryption of a message only if the user’s date key is more current than the date key used when encrypting the message.  Calling this a valid revocation scheme is a bit of a stretch, since a user is still able to decrypt a message even after his key has expired, as long as the message was encrypted with a date key that expired before his did.  In this sense, the scheme never causes a user to lose access to messages he once had access to.  In addition, this scheme does not allow for per-user access revocation at all.
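The expiring-key check reduces to a single inequality, which also makes the criticism above easy to see.  The dates are made up for illustration; the real scheme encodes them as numerical attributes compared bit by bit inside the policy.

```python
from datetime import date

def can_decrypt(key_date, ciphertext_date):
    """Decrypt iff the user's date key is at least as current as the
    date key the ciphertext was encrypted under."""
    return key_date >= ciphertext_date

key = date(2010, 12, 31)                   # this user's key has since "expired"
print(can_decrypt(key, date(2010, 1, 1)))  # True: old messages stay readable
print(can_decrypt(key, date(2011, 6, 1)))  # False: only newer ones are cut off
```

Because the inequality only ever compares the key’s date against the ciphertext’s, “revocation” merely stops a user from reading messages encrypted after his key lapsed; nothing he could read before is ever taken away.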


[1] At the time of this writing the toolkit could be downloaded here: http://acsc.cs.utexas.edu/cpabe/

Decentralized Attribute-Based Encryption

All of the systems previously described require a single, centralized authority to verify and issue keys for every attribute a user is to possess.  In certain cases this model is perfectly adequate, but in others it does not provide enough flexibility to reliably protect data and prevent collusion.  Several recent papers have dealt with this issue, the first of which is “Multi-Authority Attribute Based Encryption” by Chase [2].  Chase gives the example of encrypting a message addressed to drivers in the state of Rhode Island who attend Brown University and are eligible voters.  In this case, the recipient of the message possesses three attributes, but each is issued by a different authority (the RI RMV, Brown University, and the RI Elections Board).  This poses an interesting problem for the security of ABE: realistically, each authority would be responsible for verifying that applicants for private attribute keys actually possess the attributes in question and for issuing such keys.  The problem with such a scheme, however, is that it becomes difficult to prevent collusion, as different users could pool credentials obtained from separate sources.

To address this problem, Chase proposes that each user be issued a global identifier (GID) that must be presented whenever a new credential is issued.  The GID ensures that all credentials used to decrypt a message share the same identity; otherwise the ABE construction works the same way as Sahai-Waters.  Chase further extends this scheme to tolerate an arbitrary number of corrupt attribute authorities.  To do so, a central authority is introduced that may or may not also be an attribute authority, but that knows enough of the state of each authority to reconstruct the master secret a given attribute authority holds for a given user, which is then used along with the GID to derive the user’s private attribute key.  Because this master secret can be used to derive a user’s private key component for an attribute, a user can decrypt a message despite the presence of corrupt authorities; only one authority needs to be trusted for the user to recover the message.

In the paper “Decentralizing Attribute-Based Encryption” [4], Lewko and Waters significantly improved upon this idea by removing the requirement for a trusted central authority.  In their system, any party can act as an authority and issue attributes to any other party in the universe, and corrupt authorities do not affect any other authorities.

To accomplish this, Lewko and Waters use a GID to tie private key components together.  The scheme relies on the random oracle model, thus weakening the security of the system by making it depend on the quality of the hash function used.  In their scheme, a user’s GID is hashed and included in each key component for an attribute.  In this way, an attribute issued to a user is tied to that unique user, making collusion difficult.

Attribute-Based Signatures

Another useful application of ABE is in creating attribute-based signatures (ABS).  In “Attribute-Based Signatures” [8], Maji, Prabhakaran and Rosulek describe a system for doing exactly that.  They begin their paper with an interesting use case: say a whistleblower in the financial department of a major corporation wants to leak information about an illegal scheme at the company to the media.  By using ABS to sign the press release, the whistleblower can remain anonymous while still proving to the media outlets that she does in fact work in the financial department of the company she claims.

The key feature of attribute-based signatures is that they must be unforgeable and provide verification of credentials while still preserving signer anonymity.  The paper compares attribute-based signatures to group signatures, ring signatures and mesh signatures, noting that all are similar in that they are unforgeable and provide anonymity for the signer.  The authors’ ABS scheme is unique, however, in that it proves simply that a single user with a specific set of attributes has signed the message.  A key requirement of ABS is that it must not allow parties to collude on signatures, or the guarantees it provides are broken.  The other schemes mentioned are either more restrictive than ABS (ring, group) or vulnerable to collusion (mesh).

The paper gives a formal security definition of ABS and proposes a framework for constructing such schemes with associated proofs.  The paper then goes on to extend the signature scheme to multi-authority systems.  It is important to note that in this case the paper uses a centralized authority called a signature trustee to set up the parameters for the signature scheme and to provide keys to the attribute authorities, which do not have to be trusted.  This creates a bottleneck in the system similar to the one in Chase’s paper.  Future research will undoubtedly look at how to remove the requirement for a centralized, trusted signature trustee from the equation.

Finally, the paper discusses several applications of ABS schemes.  Attribute-based messaging is a relatively obvious use of ABS, in which the sender of an encrypted message signs the message himself to prove to the recipients that he has a given set of credentials; in this way ABS provides authentication for message senders.  A more interesting use of ABS detailed in the paper is as an authentication mechanism in a challenge protocol.  In such a system, a client requests access to a resource on a server, and the server sends the client a random challenge string and a policy.  The client then generates a session key and sends the server an ABS signature over (challenge, sessionkey), signed under the policy it received.  This scheme is secure against man-in-the-middle attacks.  The paper also extends the scheme into a trust negotiation protocol.
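The shape of that challenge protocol can be sketched end to end.  One loud caveat: the ABS signature itself is replaced here with a toy HMAC that can only be produced when the signer’s attributes satisfy the policy; a real ABS scheme is publicly verifiable and hides which attributes were used.  All names, attributes and keys below are invented for illustration.

```python
import hashlib
import hmac
import os

def abs_sign(message, policy, attrs, signing_key):
    """Toy stand-in for an ABS signature: producible only if the signer's
    attributes satisfy the policy (modeled as a required attribute set)."""
    if not policy <= attrs:
        return None  # a real ABS scheme simply makes signing infeasible here
    return hmac.new(signing_key, message + repr(sorted(policy)).encode(),
                    hashlib.sha256).digest()

# Server side: issue a fresh challenge and the policy the client must satisfy.
challenge = os.urandom(16)
policy = {"MegaCorp", "financial dept"}

# Client side: generate a session key and sign (challenge, session key).
session_key = os.urandom(16)
sig = abs_sign(challenge + session_key, policy,
               attrs={"MegaCorp", "financial dept", "analyst"},
               signing_key=b"toy-client-key")

print(sig is not None)  # True: the policy is satisfied, so signing succeeds
print(abs_sign(challenge + session_key, policy,
               attrs={"MegaCorp"}, signing_key=b"toy-client-key"))  # None
```

Binding the fresh challenge and the session key into one signed value is what defeats a man in the middle: a relayed signature cannot be reused for a different challenge or a different session key.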

Conclusion

ABE provides a versatile encryption framework that uniquely couples policy enforcement and access control with cryptography.  This allows fewer resources to be deployed to prevent unauthorized access to data.  It also provides greater security than access control servers, as there is no single server whose compromise gives an attacker access to all of the data that is intended to be secret.  As a result, I predict that ABE research will continue at a very fast pace, and I think that in the future we will see products that use ABE for various security applications.  As detailed above, ABE is a particularly good fit for situations where privileged data must be kept on untrusted servers.  Additionally, the papers reviewed in this survey have shown ABE to be useful for tasks as broad as biometric IBE, social network security, HIPAA-compliant healthcare security, and HIBE, and as specific as leaking sensitive documents anonymously.

Current and Future Research Topics

The most noted missing feature of ABE is revocation support.  Right now, there is no good way to revoke a user’s identity in an ABE system without revoking each of his attributes, and even attribute revocation poses problems.  Proposed schemes rely on expiring attributes, an approach whose flaws were pointed out above.  If individual attributes and user identities could be revoked on a per-user basis, rather than across the whole universe, ABE would likely find its way into many shipping products.

Another area of potential research is in creating a decentralized ABS scheme that does not rely on a trusted signature authority. As was the case with ABE before decentralized schemes were proposed, the requirement for a centralized, trusted entity creates a bottleneck in the system, since every user must access the same entity.  Similarly, if the entity ever goes offline, the whole system is disrupted, since the authority is a single point of failure.

References

[1] J. Bethencourt, A. Sahai, and B. Waters. Ciphertext-policy attribute-based encryption. In IEEE Symposium on Security and Privacy, pp. 321-334, 2007.

[2] M. Chase. Multi-authority attribute-based encryption. In Theory of Cryptography Conference (TCC), 2007.

[3] V. Goyal, O. Pandey, A. Sahai, and B. Waters. Attribute-based encryption for fine-grained access control of encrypted data. In ACM Conference on Computer and Communications Security, pp. 89-98, 2006.

[4] A. B. Lewko and B. Waters. Decentralizing attribute-based encryption. In EUROCRYPT, 2011.

[5] A. B. Lewko and B. Waters. Unbounded HIBE and attribute-based encryption. In EUROCRYPT, 2011.

[6] A. B. Lewko, T. Okamoto, A. Sahai, K. Takashima, and B. Waters. Fully secure functional encryption: Attribute-based encryption and (hierarchical) inner product encryption. In EUROCRYPT, pp. 62-91, 2010.

[7] J. Li, M. H. Au, W. Susilo, D. Xie, and K. Ren. Attribute-based signature and its applications. In ACM Symposium on Information, Computer and Communications Security (ASIACCS), pp. 60-69, 2010.

[8] H. K. Maji, M. Prabhakaran, and M. Rosulek. Attribute-based signatures. In Topics in Cryptology: CT-RSA 2011, pp. 376-392, 2011.

[9] R. Ostrovsky, A. Sahai, and B. Waters. Attribute-based encryption with non-monotonic access structures. In ACM Conference on Computer and Communications Security, pp. 195-203, 2007.

[10] M. Pirretti, P. Traynor, P. McDaniel, and B. Waters. Secure attribute-based systems. In ACM Conference on Computer and Communications Security, pp. 99-112, 2006.

[11] A. Sahai and B. Waters. Fuzzy identity-based encryption. In EUROCRYPT, 2005.

Introduction

Cloud computing is all the rage these days. Increasingly, businesses and consumers alike are moving their data away from local storage to centralized, always-on storage facilities in the cloud and are condensing their large, diverse server configurations to a virtualized and sandboxed environment on large multi-core machines. The advantages are staggering. Now, instead of worrying about hardware failures on multiple devices, and paying for the electricity to run each, we can downsize our hardware and support investments and instead focus on maintaining a smaller number of high-performance servers with the added flexibility of being able to easily provision and destroy virtualized computers on the fly. There are many uses for virtualization clusters and centralized data stores. This article is the first in a series on how to build a cloud for personal use out of commodity hardware cheaply. So cheaply, in fact, that a poor graduate student (such as myself) can do it for only a few hundred dollars.

There are several components to a private cloud, each of which will get its own article. Today I’m going to talk about why you might want to build a private cloud and what you will need to get started. I will give you an overview of the technologies involved and what sorts of software and hardware configurations we can use to get a reliable virtualization infrastructure, a robust datastore and a secure solution for connecting to them from anywhere in the world.

Why Build a Cloud?

Secure, Robust, Plentiful Storage

So first, why might you want a private cloud? Today, laptops are ubiquitous. Many people, myself included, do most of their computing on a laptop. They store all of their data on the laptop’s hard drive and back it up to a cheap external hard drive. Of course, laptop hard drives are small, and external drives provide no redundancy for data that is not also stored elsewhere. Furthermore, users are increasingly purchasing smartphones and tablets, and may even have home computers for family use. This is home computing today–this model is the norm. Unfortunately, it introduces the common problem of accessing data stored on one device from all of the others. Seeing an opportunity in the market, many businesses have sprung up with the purpose of allowing users to store their data online and to access it from any device. Such services typically give users a small amount of space for free (in the 2-5GB range, although many services are increasingly giving users up to 20GB for free). The user can then use this space as if it were a local hard drive. Anything they want to back up or share between machines can be put on the cloud space and accessed from anywhere. Not only that, but cloud space is (in theory) robust, so data loss is less likely to happen.

This is all well and good, but what if you want more than 20GB of robust space and can’t afford to pay a high monthly fee for the privilege? Or, what if you want to store sensitive data on the cloud? Does having the convenience of flexible online storage mean that you must either forfeit the privacy of the data or else use a data encryption package (that won’t run on all of your devices) in order to ensure that the third party storing your data won’t be able to read it? There is one simple solution to these issues: storing the data on servers you own, with a security configuration that you can tailor to your needs.

Virtualization – Flexible Creation of Computers and Networks

In addition to data storage, many advanced users want to run a myriad of virtualized systems today. Many developers use desktop virtualization packages such as VMware Fusion or Workstation or Oracle VirtualBox to easily create virtual machines running different operating systems, in order to test their software on a variety of platforms without buying a whole bunch of computers. Furthermore, developers writing software meant to be used over networks may want to provision and run several virtual machines at the same time so that they can test both the client and server sides of their programs. Network security professionals might be interested in virtualizing entire networks so that they can simulate attacks to and from different nodes on the network to see how the overall infrastructure is affected.

In these situations, a user must share their main computer with their test setup, and most laptops are not well equipped to deal with the requirements of running several virtualized servers alongside a bunch of consumer software. Finally, many advanced users might be interested in hosting various servers in their apartment: a website, a database server, a proxy server and several other types of servers that might be useful to him or her. In such a case, the user has several options. One option is to just set up everything on one computer, knowing that if any one of these services is compromised, so are all the rest. Another option is running each service on its own computer; of course, in this case the user now has to not only administer a bunch of hardware but also pay for the electricity to run it. The final, most logical, and most cost-effective option is to offload the requirements of running testbeds and servers to a single box dedicated to this purpose–a virtualization server.

But How Do We Access It?

Now that we have local storage and a virtualization server running at home, we’re going to want to be able to access them from anywhere over the Internet, hence the third component of our cloud–routers. In particular, we want a router that can be finely configured to allow secure access to our self-hosted services for us and the trusted users we designate, while ensuring the security of our local network. There are two philosophies here. Some users will want to be able to access each service at their home IP address directly, without any additional configuration. Of course, this means that each service has to be secured properly. If one is not, and that service is compromised, an attacker will now be on your private network–a suboptimal situation. My personal opinion is that rather than spending the time to do this, we should treat our network itself as its own private network and view all of our configured cloud services as internal. But if the services are internal, how can we access them when we are not on our network? By setting up a Virtual Private Network! A VPN allows a user to connect to a secure network over an unsecured network (i.e. the Internet) and to act as if the user’s computer were physically on that secure network. VPNs are encrypted, point-to-point links that, for the purposes of this configuration (and I know some people are going to get mad at me for saying this), can be thought of as one big, long Ethernet cable.
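The internal-services-plus-VPN philosophy can be made concrete with a small sketch. Here is a minimal OpenVPN server configuration of the sort we might run on the router or a dedicated VM; the certificate file names, subnets, and port are placeholders to adapt to your own network, not a hardened, production-ready setup:

```
# Minimal OpenVPN server sketch (placeholder paths and subnets).
port 1194
proto udp
dev tun

# Server certificate material (generate with your own CA, e.g. easy-rsa).
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem

# Hand out VPN addresses from this pool...
server 10.8.0.0 255.255.255.0
# ...and route clients to the internal LAN where the cloud services live.
push "route 192.168.1.0 255.255.255.0"

keepalive 10 120
persist-key
persist-tun
```

Once connected, a client reaches the storage and virtualization servers by their internal addresses, just as if it were plugged into the home network.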

Conclusion

So, there you have it: instead of relying on various companies for our storage, hosting and virtualization needs and paying thousands of dollars, we can build our own self-hosted solution with significantly more flexibility, configurability, control and security. But if you’ve actually read through this entire article you’re probably thinking, how on Earth are we going to do this for less than $1000? Well, that is an excellent question! Stay tuned–in the coming weeks I will describe how to buy and hack consumer-level hardware (or how to buy old enterprise hardware) in order to build each of the cloud components above cheaply.

When you last signed up for an online service do you remember solving the CAPTCHA?  You know, the distorted image at the bottom of the page that you need to fill out to prove that you aren’t a computer?  Increasingly, online websites have been making users solve these simple image recognition problems that are easy for humans to solve but that present very difficult problems in the field of Artificial Intelligence in order to make life more difficult for spammers and other malicious users that might want to create accounts.
CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart.  CAPTCHAs were developed at Carnegie Mellon University around 2000, and as a programmer I can assure you that they do effectively frustrate my efforts to script account creation.  While the scripts I write aren’t malicious, the vast majority of scripts that try to register multiple accounts are created by people who are looking to use those accounts for malicious purposes.
Even though CAPTCHAs are effective in preventing automated account generation, I find that I routinely have trouble solving them even when I try to manually create accounts for websites such as GMail and Hulu.  I’ve often wondered how many others have trouble with this same image recognition task that is supposedly simple for humans.
A recent study at Stanford entitled “How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation” seeks to answer exactly this question.  This study asserts that CAPTCHAs are supposed to be easy for humans to solve and hard for machines, but recently most CAPTCHA research has focused on making CAPTCHAs hard for machines, without considering how this changes the difficulty of the image recognition task for the user.
The Stanford researchers focused on the two most common types of CAPTCHAs:
Image-based – CAPTCHAs such as the ones mentioned above that present an image of distorted letters that the subject must recognize

Some common examples of Image-based CAPTCHA schemes.

Audio-based – CAPTCHAs that consist of a sound file that plays spoken letters with a large amount of background noise that the subject must decode

In order to answer the question of exactly how well humans are able to solve CAPTCHAs, the researchers at Stanford employed two separate services.  The first, Amazon Mechanical Turk, is a labor marketplace offered by Amazon.  Prospective workers sign up through Amazon and are then presented with a list of tasks posted by requesters, all of which offer some form of monetary compensation.  In this case, the Stanford researchers had each Mechanical Turk worker solve 39 image CAPTCHAs and 25 audio CAPTCHAs.  In total, 319,000 CAPTCHAs were given to Mechanical Turk workers, and each CAPTCHA was given to three different workers.

In addition to this service, the Stanford researchers used an underground service called Captcha-Bypass, which claimed to hire specialists to break CAPTCHAs for $0.005 per CAPTCHA.  Users could programmatically submit CAPTCHAs to the service, and a human on the other end would solve each one and return the answer to the calling program.  In total, 39,000 CAPTCHAs were submitted to this service.

The Stanford researchers then analyzed how long it took on average for workers to break CAPTCHAs, how successful the workers were, and how often they agreed with each other.

The study used CAPTCHAs from thirteen separate image-based schemes and eight different audio-based schemes, and also compared the difficulty of solving each of these CAPTCHA schemes.

The first question considered by the researchers was how long, on average, it takes to solve a CAPTCHA.  Image-based CAPTCHAs took around 9.8 seconds for the Mechanical Turk solvers but around 22.44 seconds for the Captcha-Bypass solvers.  Audio CAPTCHAs took significantly longer for the Mechanical Turk solvers, at 28.32 seconds on average.

The second question considered by the researchers was how often workers agreed on the answer to a CAPTCHA.  Here, problems with CAPTCHAs become clear:  for image-based CAPTCHAs, Mechanical Turk workers typically agreed 71% of the time and Captcha-Bypass workers agreed 67.17% of the time, but for audio-based CAPTCHAs, workers only agreed 31.18% of the time.  33.6% of the time, all three workers gave different answers for the audio-based CAPTCHAs.  This shows that audio-based CAPTCHAs are harder for humans to solve than website operators probably realize, and that they are therefore an unreliable test of whether or not a web agent is a human or a computer.
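To make the agreement metric concrete, here is a small Python sketch of the kind of tabulation involved; the `responses` tuples are made-up toy answers (three workers per CAPTCHA, matching the study's setup), not the study's actual data:

```python
# Toy data: each tuple holds the three workers' answers for one CAPTCHA.
responses = [
    ("w7gx", "w7gx", "w7gx"),   # all three agree
    ("kq2m", "kq2rn", "kq2m"),  # two of three agree
    ("x4fp", "xafp", "x4fr"),   # all three differ
]

def agreement_stats(responses):
    """Return (fraction where all workers agree, fraction where all differ)."""
    full = sum(len(set(answers)) == 1 for answers in responses)
    none = sum(len(set(answers)) == 3 for answers in responses)
    n = len(responses)
    return full / n, none / n

full_rate, none_rate = agreement_stats(responses)
print(f"all agree: {full_rate:.0%}, all differ: {none_rate:.0%}")
```

On real data, a low all-agree rate (or a high all-differ rate, as the study found for audio CAPTCHAs) is direct evidence that humans cannot reliably recover the intended answer.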

From this study the answer to my original question is clear: it is not just me who has trouble solving CAPTCHAs reliably–CAPTCHAs in general are hard to solve.  Audio CAPTCHAs are a particular problem, as the data collected at Stanford shows that workers more often disagree on a solution than agree on it.  Since audio CAPTCHAs exist primarily to help the disabled, I believe that the entire scheme should be re-evaluated and research should be done to find a suitable replacement that is more accessible.  In any case, CAPTCHAs have become so common on the Internet that it is hard to imagine they will be significantly changed or replaced until computer vision techniques become available that put reliable, automatic CAPTCHA breaking within reach.

Reference:
E. Bursztein, S. Bethard, J. C. Mitchell, D. Jurafsky, and C. Fabry. How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation. In Proc. IEEE Symposium on Security and Privacy, May 2010.

This article is not about professional logic board repair. It is not even about how to permanently repair a damaged logic board. Instead it is a chronicle of my experiences hacking out a cheap way to fix broken Apple logic boards for indefinite amounts of time. Maybe by following these instructions your computer will work forever. Perhaps the fix outlined here will only do the job long enough for you to find a replacement motherboard. Perhaps it won’t work at all. Do not try this trick if you do not know what you are doing; I will take no responsibility for any injury to person or property received as a result of this article.

A common reason Apple motherboards “break” is that the heat the components generate over a long period of time causes BGA-soldered components to come loose.  This problem is often further aggravated by jolting the computer while it is in operation.  The connection between the component and the logic board becomes fragile, resulting in frequent kernel panics or a non-booting system.

A quick fix for this issue is pressure. By adding a thermal shim below the heatsink of the unsoldered chip, you can put more pressure on the component to prevent it from moving while the system is running and to improve the contact between the component and the logic board.

But how do you find which component is the problem? To find the bad chip I generally take a long flat piece of nonconducting material, and use it to pry up on parts of the logic board while the computer is running. Inevitably, there will be a place on the motherboard where doing this will consistently cause the system to crash. The problem component will be nearby, probably near where the motherboard is flexing.

Once the component is identified, measure it and make a shim out of copper or aluminum. Take off the heatsink and mount the shim on the chip, with thermal compound on both sides of the shim. Then reattach the heatsink, being conscious of the pressure you put on the chip (firm pressure is good, but too much will crack the chip). Finally, reassemble the computer and hope everything went well. If it boots, your hack was successful!