For both the password and the key pair, I am generating a salt using OpenSSL's random number generator. For the password, I am using bcrypt to hash it with the aforementioned salt. For the key pair, it's a little more complicated: I am using OpenSSL with AES-128-CTR to encrypt the pair, and the AES key is derived from the password and the salt. Specifically, here's the code I am using in Ruby:
```ruby
iter = 20_000
key_len = 16
# unhashed password and salt from before
key = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, salt, iter, key_len)
cipher = OpenSSL::Cipher.new('AES-128-CTR')
cipher.encrypt
cipher.key = key
iv = cipher.random_iv  # random_iv also sets the IV on the cipher
encrypted = cipher.update(data) + cipher.final  # data is the key pair
```
My question is: is this system secure? Am I using best practices? As far as I can tell I am, but I also want an outside opinion to make sure I'm not missing something major. If necessary, I can provide the code for the password hashing, which is done using Ruby's bcrypt library.

Edit: to clarify, in case it is asked: I'm not asking about the validity of my code but about the security of the scheme.
The whole setup seems kosher to me, but too little is known about the application itself: did you consider the possibility of the password, or the decrypted keys, being leaked?

On some (web-based) systems you have to "store" some or all of that data in a session object, which may well mean the information is archived in plain text in a file on disk, or in a key-value server, potentially accessible to other users as well as rogue processes, and possibly remaining accessible long after the user has logged out and disconnected. On such platforms you need to manually sanitize your data structures as far as possible; also, session key assignment is non-trivial and typically requires a fairly unguessable session ID generator (which most aren't; unguessable, I mean). You appear to already know about this, but I'll mention session fixation nonetheless.

Also, memory databases and key-value stores might, for performance's sake, not properly initialize their memory and allow an attacker to recover sensitive data: typically you use a low-level API to request a 4K block and read it off before initializing it. If the server did its homework properly, you get 4096 zeroes. If it did not, you get whatever was stored in that block by the user before you (I saw this happen in a BANK, of all things).

Then, access to those databases must be protected or secured somehow: a third party shouldn't be able to access data it has no right to (for example by enumerating stored datasets and querying their contents). On some systems you could use a different, totally unrelated (and vulnerable) virtual host to gain access to the contents of such a keystore by misdirecting the server into retrieving the wrong key values. This caused an error, but sometimes the error text included precious information:
```
We're sorry. There is no document called
*?=|1395481948749|1346625|lserni|62609098affb79cd2e7c31b6793b539e|Fl==SPQR].
(Error: DB001A key was not found)
Try using the Search function, or contact us with the details of this issue.
```
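To illustrate the "fairly unguessable session ID" point: in Ruby you would typically lean on `SecureRandom` rather than anything built from timestamps or counters, such as the timestamp-looking field visible in that error message (a sketch, not code from the question):

```ruby
require 'securerandom'

# 32 random bytes (~256 bits of entropy), hex-encoded. Unlike a
# timestamp- or counter-based ID, this is infeasible to guess or
# enumerate, which defeats session-prediction attacks.
session_id = SecureRandom.hex(32)  # 64 hex characters
```

Generating the ID server-side on login (rather than accepting one supplied by the client) is also what closes the session-fixation hole mentioned above.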
Initializing memory before handing it out again is slower, but more secure ("memory shredding" or "memory scrubbing").

In such a scenario, storing data in the database is as good as having passwords in cleartext; actually it's worse, because while a user must have file-system access to the database files, or authentication to the database server, in order to reap the passwords, in many instances keystores are designed for "we're among friends" access, and anybody with local access can retrieve data (e.g. Redis "is designed to be accessed by trusted clients inside trusted environments"). This is OK, mind you, as long as you know what's happening and take countermeasures. After all, the Internet is insecure and yet here we are creating websites; but we use authentication and SSL when we need them, because we know of the insecurity.

With Redis, for example, we could turn on authentication; now the attacker has to have both local access and the password, but the password is in the webapp files, so he needs read access to those files, too. If we make it so that this is equivalent to impersonating the webapp user, we're (more or less) done, because a local attacker with webapp-impersonation capabilities could just ask any user for the password and/or mount a man-in-the-middle attack.

Of course, this is all very, very rough; you need to examine your own workflow and ask yourself: "What could someone who arrived at this point do? What else could he do? How do I take him out? How do I keep him out?"

Desktop applications also have their own insecurities related to password and plaintext data persistence, both in memory and on disk (you have to treat the sensitive memory region specially to prevent it from being swapped to disk in cleartext, and you have to scrub the memory manually before freeing it).
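For completeness, the decryption side of the question's key-pair scheme is the mirror image of the encryption code: re-derive the same AES key from the password and the stored salt, then reverse the cipher with the stored IV. This sketch shows the full round trip with illustrative stand-in values (the question's own code only shows encryption):

```ruby
require 'openssl'

# Illustrative stand-ins for values the application would already have:
password = 'correct horse'
data     = '-----BEGIN PRIVATE KEY----- (the key pair)'
salt     = OpenSSL::Random.random_bytes(16)

iter    = 20_000
key_len = 16
key = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, salt, iter, key_len)

# Encrypt, as in the question.
cipher = OpenSSL::Cipher.new('AES-128-CTR')
cipher.encrypt
cipher.key = key
iv = cipher.random_iv
encrypted = cipher.update(data) + cipher.final

# Decrypt: same key derivation, same IV, direction reversed.
decipher = OpenSSL::Cipher.new('AES-128-CTR')
decipher.decrypt
decipher.key = key
decipher.iv  = iv
decrypted = decipher.update(encrypted) + decipher.final
decrypted == data  # => true
```

Everything needed for decryption except the password (salt, IV, iteration count) can be stored in the clear next to the ciphertext; only the password must stay secret.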