User Authentication Orchestration: Painting Number One



The orchestration of one's own unique authentication scheme (or just, 'scheme' in general) feels powerful.

This write-up contains some notes, explainers, breakdowns & the philosophies that underlie my 'ethos' in creation and development.

Criticism and feedback are welcome - especially if you know better.

Backstory

OpenLDAP is extremely convenient for user management because it is:

a) Open Source

b) A recognized and established internet protocol [making it hard to get trapped in a 'walled garden' if we're using open source deployments - that's a major key]

c) Since it's a protocol, it's pretty much interoperable with any other application / software package that includes some sort of general user authentication (this allows you to manage users entirely outside the context of any one niche situation).

d) Far more secure than storing user credentials on the same server where users must authenticate (for obvious reasons)

e) Very easy to enhance & architect an authentication process that's tailor-made for our situation (again, due to the malleability afforded by protocol-based solutions).

Issue: Ensuring Argon2ID Hashing

Just recently, OpenLDAP started packaging Argon2ID password hashing with their deployments by default (and by recently, I mean like late July 2020; so no more than a few weeks ago at the time of writing).

This was a game changer.

Why This Was a Gamechanger

I'm not a fan of 'jerry-rigging' / 'hacking' protocol-based solutions (though I'm more than happy to do this with individual applications).

'Why?'

Because most of a protocol's effectiveness and flexibility comes from its extensively outlined processes & standards (ironically).

These public standards allow various developers and programmers to account for the nuances of said protocols - which makes our lives easier, because when that happens, integration becomes a simple 'drop-in'.

Tweaking Things Breaks That, Though

Again, due to the outlined standards & expected behaviors of protocols, there are usually a wealth of solutions outfitted to facilitate integration.

Which is convenient.

But on the flip side - this significantly increases the likelihood that something will 'break' or fuck up if we tweak the protocol to enhance it in a way that's not an established configuration (i.e., say you tweaked IPSec to handle packets in a manner entirely unspecified & unaddressed by the RFCs - it's likely that the corresponding client / server application would fail to adapt to this aberration unless it, too, was "tweaked" in the same way).

Why Argon2ID is Important

It's the strongest KDF (password hashing function) on planet earth - by a long shot.

Why This is the Case

Most are familiar with SHA256 (the hash algo that Bitcoin uses). SHA256 is sufficiently secure, indubitably. However, it's designed for "speed". By 'speed', I mean that a boasted feature and benefit of SHA256 is that it provides a substantial security upgrade over SHA1 (which is broken) while remaining usable by virtually all devices (it was originally designed for optimal performance on 32-bit hardware, which actually was a great move at the time this 'standard' was declared by NIST / NSA).

Speed is a good thing in certain situations - including for Bitcoin. Despite arguments suggesting that the ease with which SHA256 can be hashed is a net negative (due to the presence of massive mining farms leveraging ASICs), it made sense to want a fast hash algo (at least at the time Satoshi launched the protocol). The speed at which SHA256 can be hashed ensured that the network would be able to quickly assess proposed blocks + parse inputs to double-check & ensure no double spends are present + ensure valid signatures (that last part is not part of the 'validation' process, though). While not often mentioned, this process must be crafted with thoughtfulness because Bitcoin's transactions are designed in such a way that the time required to ensure the validity of a transaction with >12 signatures grows quadratically - O(n²) - making it a DDoS risk if the protocol does not adapt in a way that mitigates the likelihood or practicality of such an attack.

What's the Point of Everything I Wrote Above?

As awesome & secure as SHA256 is, it's not something that we want to be using for a password hashing algorithm.
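To make that gap concrete, here's a quick sketch comparing one plain SHA-256 hash against a deliberately slow KDF. I'm using PBKDF2 from Python's stdlib as the stand-in for "slow" since Argon2id needs a third-party library; Argon2id goes further still by also being memory-hard.

```python
import hashlib
import time

password = b"correct horse battery staple"

# One plain SHA-256 hash: built for speed.
t0 = time.perf_counter()
fast_digest = hashlib.sha256(password).hexdigest()
t_fast = time.perf_counter() - t0

# A deliberately slow KDF: 600,000 rounds of HMAC-SHA-256.
t0 = time.perf_counter()
slow_digest = hashlib.pbkdf2_hmac("sha256", password, b"some-salt", 600_000)
t_slow = time.perf_counter() - t0

print(f"plain SHA-256:       {t_fast * 1e6:.1f} microseconds")
print(f"PBKDF2, 600k rounds: {t_slow * 1e3:.1f} milliseconds")
```

An attacker brute-forcing a leaked database eats that slowdown on every single guess - which is exactly the point.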

Passwords Are a Lot Easier to Crack Than Wallet Addresses: Ever heard someone talk about how Bitcoin wallet addresses are super secure because it would take a supercomputer like a quadrillion billion years to reverse engineer a private key via pure brute force? (fun fact: this does not make it 'unhackable' - but we'll get to that in another piece). That unique, super convenient property of Bitcoin derives from the secp256k1 (elliptic curve) operation that must be performed in order to generate the private / public key pair that will ultimately serve as one's address (after SHA256 + RIPEMD160 operations are performed as specified by the address encoding). Conversely, passwords are typically generated from someone's imagination - and more often than not, people tend to value ease of recall over security when it comes to choosing a password.

Rainbow Tables / repeated logins / recognizable hash outputs / poor password hashing implementations have been the cause of countless breaches + compromises of user credentials.
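The rainbow table problem in particular is easy to show. A minimal sketch (salting only - a real deployment layers a slow KDF on top, as discussed above):

```python
import hashlib
import secrets

def unsalted_hash(password):
    # Same input, same output, every time - exactly what a rainbow table needs.
    return hashlib.sha256(password.encode()).hexdigest()

def salted_hash(password):
    # A fresh random salt makes each stored hash unique,
    # even when two users pick the identical password.
    salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()

# Two users who picked the same weak password:
a = unsalted_hash("hunter2")
b = unsalted_hash("hunter2")

salt_c, c = salted_hash("hunter2")
salt_d, d = salted_hash("hunter2")

print(a == b)  # True  - one precomputed table cracks both accounts
print(c == d)  # False - the precomputed table is useless
```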

How Librehash is Architecting the User Authentication Process to Protect Members

To be honest with you - I love pushing the limit on just how intricate & secure one can make the user authentication process. Makes you feel almost like you're some sort of untouchable, top-flight secret spy.

Back to serious mode.

Based on the available solutions at our disposal, there's no excuse for a situation where I find myself sending an e-mail out to users telling them that our "database got hacked", resulting in a massive extraction of user records.

I Don't Say This Because I Think We're 'Unhackable'

In fact - quite the opposite. As much as I work my ass off to ensure that never happens, you have to assume it will if you want to really beef up your platform's overall security.

'Why do you say this, Cryptomed?'

Because the only reason a 'massive data breach' is 'massive' is because the attacker was able to compromise a 'massive' amount of data.

So, intuitively, a smart sysadmin looks to federate the authentication process.

What Librehash Does Specifically

First thing that I did was devise a scheme to move all user passwords off of the server.

It goes w/o saying that this is most likely at the top of a would-be bad actor's recon list.

Separation also allows us to protect the actual location of user credentials by virtue of it being unknown (that doesn't mean the location won't ever be 'under siege', but there should be no reason for anyone to especially target said server unless it has become clear to threat actors targeting us that this is where the 'jackpot' is).

This Also Presents an Additional Attack Vector as Well Though

Since we've moved user credentials off-server, we're going to have to reach out to wherever those user credentials are stored to check user / password combos for each login attempt on the platform.

Because of this fact - we must now:

  • Ensure that we are communicating with the correct server. Our whole authentication scheme is blown up if our client (member portal) gets fooled into sending a user / password combo to a malicious server posing as the real one, unbeknownst to us.
  • Ensure there are no bad actors "sniffing" the communication between our main platform and the server holding user credentials.
  • Set up an established, standardized way of authenticating (both ways; client to server & server to client) that both servers recognize [like a 'secret handshake' between old buddies].

Extensive Measures Were Taken From This Point Forward

Doubling back to the Argon2 hash algorithm that I mentioned prior - I had to spend a substantial amount of time figuring out how to configure OpenLDAP (open source LDAP package) to hash user passwords using Argon2ID by default.

This is easier said than done. The apt repository for Ubuntu hasn't been updated to include the latest version of OpenLDAP with the argon2 module included out of the box (>2.4.5).

Fortunately, the osixia/openldap git repo always has the latest official build (to be clear - this is not an 'rc' release; I don't believe in deploying 'rc' builds in prod... maybe Cronje does).
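For reference, once a build that ships the module is in place, switching OpenLDAP's default scheme over to Argon2 generally comes down to two cn=config changes. This is a sketch, not a guaranteed recipe - the module name varies by build (pw-argon2 in older contrib builds, argon2 in newer ones), and the module index {0} depends on your existing config:

```ldif
# Load the Argon2 password module (name varies by build: argon2 or pw-argon2)
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: argon2

# Make {ARGON2} the default scheme for newly set passwords
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
replace: olcPasswordHash
olcPasswordHash: {ARGON2}
```

Existing hashes aren't rewritten by this change; they get upgraded as users next change (or rebind with, if you configure that) their passwords.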

Cutting to the Chase

The repo I mentioned above is relevant because it's probably the best way to deploy an LDAP server (with the necessary configurations) without having to worry about provisioning it across your entire 'host' - because it's a Docker image.

Docker is a tool devs use to deploy applications in containerized environments - which is the most practical deployment strategy for modern-day devops missions.

Benefits of This Deployment Strategy

(In theory) this would allow me to deploy the LDAP instance on the same host as the app portal platform (i.e., within the same local network) - while retaining the aforementioned benefits of user credential separation. Because I know a bit about deploying containerized applications, it could (in theory) be a trivial matter for me to launch with ports [389 & 636] only exposed to the internal network.

If one were to launch the LDAP server within the local environment, then mutual TLS authentication would make a ton of sense here (i.e., I'm paranoid + lateral movement is a real thing). Provisioning self-signed certs is no biggie. It's even a chance to flex X448 certs (SHA512-hashed) if we want to - a bit abnormal, though - plus ed448 ECDH + DH params at >4096-bit strength, and the usual trappings. I'm perhaps understating how streamlined that process is, but that's the beauty of Hashicorp Vault (another app deployment - we'll get to that one too).

Mutual TLS authentication also sets the stage for SASL authentication.

SASL Authentication

At this point, I don't expect you to be 'hip' to any of this unless you're a sysadmin too (or you do IT work for companies / you have crippling social anxiety that's manifested imaginary friends that you want to provide secure authenticated logins for).

SASL (Simple Authentication and Security Layer) Is Basically 'Pluggable' Authentication

Let me slow down for a second.

When two servers communicate with each other - specifically to request information (e-mail servers are a good example), those servers must "agree" on how they're going to authenticate with each other.

Typically there are 2-4 different mechanisms available for authentication, and the client chooses among them (from a pre-configured list). If the server receiving the request recognizes the chosen mechanism, then the authentication process proceeds from there.
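The negotiation logic itself is simple enough to sketch in a few lines. This isn't real SASL library code - just the "client picks the strongest mechanism both sides support" idea, with a preference list I've made up for illustration:

```python
# Client-side preference order: channel-binding ("-PLUS") variants first.
CLIENT_PREFERENCE = [
    "SCRAM-SHA-512-PLUS",
    "SCRAM-SHA-256-PLUS",
    "SCRAM-SHA-256",
    "SCRAM-SHA-1",
]

def pick_mechanism(server_advertised):
    """Return the strongest mechanism both sides support, or None."""
    offered = set(server_advertised)
    for mech in CLIENT_PREFERENCE:
        if mech in offered:
            return mech
    return None

# A server advertising channel-binding SCRAM plus a couple of weaker options:
print(pick_mechanism(["SCRAM-SHA-256-PLUS", "SCRAM-SHA-256", "PLAIN"]))
# SCRAM-SHA-256-PLUS
```

Note that PLAIN never gets picked here even though the server offers it - a client that downgrades silently defeats the whole point.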

This is where ours gets pretty sexy.

SCRAM Authentication

At the time of writing, SCRAM-SHA-256-PLUS is the algo of choice. SCRAM-SHA-512-PLUS is an option too, but interoperability concerns make it a "wish list" item (I'm not sure the relevant RFC actually lists SCRAM-SHA-512 either).

What is "SCRAM"?

Some James Bond shit.

No, seriously though:

See Isode's whitepaper: https://www.isode.com/whitepapers/scram.html
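The whitepaper above walks through the protocol properly; as a rough feel for the core math, here's a stdlib-only sketch of the SCRAM-SHA-256 key derivation and proof exchange. It's heavily simplified - real SCRAM base64-encodes everything and builds the auth message from the actual exchanged nonces - but the key property survives: the server stores only StoredKey, never the password, yet can still verify the client's proof.

```python
import hashlib
import hmac
import os

def hmac_sha256(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# --- Enrollment: the server stores only salt, iteration count, StoredKey ---
password = b"pencil"
salt = os.urandom(16)
iterations = 4096

salted_password = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
client_key = hmac_sha256(salted_password, b"Client Key")
stored_key = hashlib.sha256(client_key).digest()  # password itself is never kept

# --- Login: both sides hash the transcript ("auth message") of the exchange ---
# (simplified placeholder transcript; real SCRAM uses the exchanged nonces)
auth_message = b"n=user,r=clientnonce,r=clientnonce+servernonce,i=4096"

# The client proves knowledge of the password without ever sending it:
client_signature = hmac_sha256(stored_key, auth_message)
client_proof = xor_bytes(client_key, client_signature)

# The server verifies using only StoredKey:
recovered_key = xor_bytes(client_proof, hmac_sha256(stored_key, auth_message))
print(hashlib.sha256(recovered_key).digest() == stored_key)  # True
```

An eavesdropper who captures client_proof can't replay it - the auth message changes with every exchange's nonces, so the proof is single-use.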

Breaking Down How This Works

Remember when I mentioned the various things we need to account for during the user authentication process (MITM interception, spoofing, etc.)?

Well, it's time for us to figure out how we're going to deal with those problems.

Yes, We Do Have Certificates Generated For Mutual TLS - But That's Not Sufficient Alone

Without getting too technical: self-signed mutual TLS is a must here, because it affords us an encrypted channel between the app portal and LDAP.

Why not a CA-issued one?

While a CA-issued cert works just as well for encryption, we absolutely want to avoid that in this situation because ANYONE can obtain a valid x509 cert from a public CA - whereas a cert signed by our own private CA can only have come from us.

Isn't this enough?

No - because the certificate itself is a static resource, and while we're using Hashicorp Vault to manage all the x509 (certificate) pseudo-CA duties + cert storage, if that platform were to be compromised somehow, then we would be fucked.

Without hyperbole, that would be a disaster (especially in this context).

Take a look at this brief excerpt from Wikipedia that outlines how catastrophic a 'CA compromise' can be:

Apocalypse.

Remember that mantra I cited at the beginning - about a smart sysadmin architecting a system that federates things such that no single breach / compromise can essentially screw everyone?

Well that wisdom must be put in play here.

And that's why the SCRAM authentication protocol is perfect.

Walking Through SCRAM

I'm going to use the example that Wikipedia gives because this is my absolute favorite explainer of how this works.

Dilemma One: Ensuring We're Even Talking to Who We Think We're Talking To

No humor intended here, but a typical MITM attack is carried out by "fooling" the requesting server into thinking that it's contacting the right entity.

The person they're actually trying to contact is none the wiser of course because the request was intercepted before it ever got there.

I'm going to tweak our example ever so slightly to make things more understandable and intuitive.

Bob and Amy are Secret Russian Agents

I'm lazy and Russia is the first country that came to mind, don't read into it

And they've both been tasked with infiltrating blockchainland.

Amy has decided to stay back in Russia behind the control desk (that's stereotypical, but needed for this example - so let's make her Bob's superior #compromise).

Bob will be 'live' in the field. His instructions were to find the secret briefcase in blockchainland, then contact Amy once he has.

For safety, Bob brought no cell phone with him - but he knows Amy's Telegram username. So he just needs to get to a computer, create an account and contact her there.

Amy's Dilemma

As expected, she has received correspondence from an entity calling themselves "Bob".

But the account is new, and there's no telling whether it's Bob or the blockchainland enemies pretending to be Bob after they tortured him for information (yeah, that got kind of dark - sorry!).

Amy and Bob Did Agree On a 'Codeword' Conversation

Before Bob departed, Amy and Bob agreed that Amy would respond to Bob's contact by saying, "Who the hell is Amy? Is that some girl you're talking to?? This is Alice - your girlfriend"

Bob's response to that (if he's the real Bob), is supposed to be: "Good thing you're not my wife!"

Ba-dum Kssh

This Could Still Be the "Fake Bob"

How?

If Bob has been compromised to the point where the enemy has managed to extract the nature of his mission + Amy's contact info, then it's likely they were also able to get Bob to divulge the pre-arranged eye-wink, nudge-nudge conversation responses.

Amy Demands More Proof

Being the top flight spy she is, Amy wants to be doubly sure she's really talking to Bob.

Bob obliges and shoots her a DM on Twitter.

The Message is From Bob's Twitter Account

Which is a pretty strong additional 'vouch'.

But Amy is smart enough to realize that if Bob were captured, then the attackers likely were able to get to his laptop.

If Bob was lucky, perhaps he had time to close the lid & shut it down. But if they caught him by surprise, then they'd still be logged in on Bob's machine.

And if that's the case - accessing Bob's social media accounts could be as easy as simply visiting one of those sites on his computer to see if any of Bob's accounts are still 'logged in'.

Amy Asks Bob to Send a Message From His Hidden XMPP Account (since they're spies, they both maintain double lives)

Amy asks this of Bob because she knows for a fact that Bob can't have a lingering session open.

Also, since XMPP is a protocol, there's no greater intel a hacker could glean from that account.

Amy Also Provides Additional Proof

By virtue of Amy receiving that correspondence and responding from her known XMPP account [they share a MUC], Bob can be sure that the Amy he's talking to is probably the real deal holyfield.

All That Was Necessary For Mutual Authentication

Now we're going to get technical again about this facet of the authentication process.

Here's a basic walkthrough:

Our server contacts the pre-configured & deployed LDAP server to check whether the credentials provided during a login attempt are legit.

The upstream server mandates that this process take place within the context of one of a few available authentication schemes.

Our server elects SASL authentication (mandatory for the XMPP protocol - fun fact). And since we're equally as paranoid as the upstream server responding to us, we're going to demand that it provide some sort of demonstrable "proof" that allows us to credibly rule out the theory that the CA could have been compromised.

In order for the server to provide such 'proof', it will send us a SHA512 hash output of the pre-agreed-upon PSK.

That SHA512 signature must be signed with the same private key used to generate the TLS certificate provisioned to the upstream ldap directory server (or whatever upstream server we're communicating with). For mutual TLS authentication, both client & server (or server & server) have one another's respective x509 certificates for authentication.

The x509 certificate includes an HPKP (pinned public key; in the form of a SHA256 hashed out signature). The client (app portal) uses this data to verify whether the signed password sent over by the upstream directory was signed with the certificate's private key.
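The pin check at the end of that flow is simple to sketch. The dummy key bytes below are obviously placeholders - a real pin is computed over the actual public key extracted from the upstream cert at deploy time - but the verification logic is just hash-and-compare:

```python
import base64
import hashlib
import hmac

def spki_pin(public_key_der):
    """Pinned-public-key style check: base64(SHA-256(public key bytes))."""
    return base64.b64encode(hashlib.sha256(public_key_der).digest()).decode()

# Pin captured from the upstream server's cert at deploy time
# (dummy bytes for illustration only).
KNOWN_PIN = spki_pin(b"dummy-public-key-bytes")

def connection_allowed(presented_key_der):
    # Reject the handshake if the presented key doesn't match our pin -
    # even if the cert otherwise chains to a trusted CA.
    return hmac.compare_digest(spki_pin(presented_key_der), KNOWN_PIN)

print(connection_allowed(b"dummy-public-key-bytes"))  # True
print(connection_allowed(b"attacker-key-bytes"))      # False
```

This is what makes a compromised public CA survivable: a fraudulently issued cert still carries the wrong key, so the pin check fails and the connection never proceeds.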

Perhaps I mislabeled this authentication protocol - it really should be called 'SCRAM-SSHA512-PLUS' (usually it's referred to only by the SHA256/512 variants, with only one 'S'). And yes, there is a meaningful difference between the two variants (a significant one, in fact).

Difference Between SSHA / SHA Variants

SSHA can be considered the salted cousin of whatever SHA algo we're talking about (in this context) - the same basic idea that PBKDF2 builds on.

What does that mean?

Whereas SHA256 hashing consists solely of taking some data as an input and churning out a hashed output, PBKDF2 (which stands for Password-Based Key Derivation Function) involves a few additional operations (namely: added iterations, an adjustable salt length, and a configurable hash algo).

This is Built into Most Linux-Based Systems

Specifically, via the crypt(3) API for password generation (used in tandem with the mkpasswd command; it can be called from various other OS-level applications as well).

So, to be clear:

  • SSHA256
  • PBKDF2(SHA256)
  • crypt(sha256)

All in the same family - SHA256 with a salt layered on top (PBKDF2 and crypt add a configurable iteration count as well).
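The PBKDF2 knobs mentioned above - salt, iteration count, underlying hash - are all visible in the stdlib call. A minimal sketch:

```python
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)   # adjustable salt length
iterations = 600_000    # adjustable work factor

# PBKDF2: iterate HMAC-<hash> `iterations` times over password + salt.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Deterministic given the same inputs - that's how a login check works:
again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(derived == again)  # True

# Change any parameter (here, the salt) and the output changes completely:
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), iterations)
print(derived == other)  # False
```

So a verifier only needs to store (salt, iterations, derived) - recompute on login and compare.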

Back to the SCRAM Authentication Process We Were Outlining

We had to segue into the PBKDF2 function's relevance in this protocol because it's necessary.

Why?

As stated in the prior section (among the authentication steps), the upstream server that we're attempting to contact [i.e., the LDAP directory] must prove itself.

Channel Binding - Virtually Bulletproof

Remember when we were outlining that whole shebang about using Vault to mint our CA - imbued with the ability to essentially automate and simplify the cert generation process for end users?

And how this is an optimal solution for an authentication scheme or any similar situation where two servers / remote entities must communicate sensitive information with one another?

With that being said, we don't need to worry about making that distinction here because channel binding mandates that both be used in tandem.

Channel Binding

Channel binding dictates that the client and server first establish communication over a channel provided by TLS authentication (1.3 in our case; AES256-GCM / SHA384 cipher suite, ed25519 certificate, secp384r1-generated key).

This negotiation and authentication is no different from what takes place between your browser and the websites you visit (hence the warning you receive whenever you're about to visit a site without a CA-issued cert).

Onus is Still Upon Server to Provide Verified 'Proof'

Details are a bit hazy for some implementations vs. others - because there are a few different ways of configuring this stage of the protocol.

But, generally:

  • Password created (can be via rand or fed directly to it); this is a non-login credential
  • Parameters established (if you're looking to opt for PBKDF2 in this case)
  • Nonce established

^^ That last part (the nonce) is particularly important, because you want to ensure that each and every upstream request (i.e., login attempt) is hashed with variation - so that if there ever is an instance where an attacker is able to penetrate both layers of TLS authentication (even the normal TLS auth provided by a browser-CA-recognized entity like Let's Encrypt), then more power to them.

Because the information contained within cannot be replayed (i.e., remember, TLS 1.3 affords us "perfect forward secrecy").

Also, there is no discernible information that could be gleaned about which client was seeking authentication either (i.e., if it was 'bob@librehash.com', there's no conceivable reason / method for reverse engineering this information from the hashed / base-encoded output that's included in the communication coming from our server [app portal] to whatever upstream authentication broker / mechanism we decide to use).
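A toy illustration of that nonce-per-attempt property - this is not the actual SCRAM wire format, just the principle that binding each request to fresh randomness makes identical logins look unrelated on the wire:

```python
import hashlib
import secrets

def login_blob(username, proof):
    """Bind each login attempt to a fresh, single-use nonce."""
    nonce = secrets.token_bytes(24)
    digest = hashlib.sha256(nonce + username.encode() + proof).digest()
    return nonce, digest

# Two attempts by the same user with the same proof - yet the blobs
# an eavesdropper sees share nothing in common:
n1, d1 = login_blob("bob@librehash.com", b"same-proof")
n2, d2 = login_blob("bob@librehash.com", b"same-proof")
print(d1 == d2)  # False - nothing to replay, nothing to correlate
```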

Conclusion

I hate to stop here, but I need to delegate my time to a few other priorities.

I know that this was a lengthy read - but it's worth laying out the methodology behind how I'm trying to architect this platform.
