The Web API Authentication Guide: Signature Schemes

This post is part of a multi-part series. It builds on the first post, where I describe the framework we will use to evaluate authentication schemes. If you have not read it yet, it is probably a good idea to do so now.

Here is where we are.

II. Evaluation of standard authentication schemes

Signature schemes, at your service!

CC0 image by Jacqueline Macou

Now we are getting into the more advanced stuff. Have you ever seen a request like this?

GET /?Param2=value2&Param1=value1 HTTP/1.1
Authorization: AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/service/aws4_request, SignedHeaders=host;x-amz-date, Signature=...

That, my friend, is a signature on an HTTP request. From the looks of it, it is somewhat like HTTP Digest auth and aims to provide similar properties.

There is no standard yet, but there is an RFC draft if you are interested in the details. However, keep in mind that not all signature schemes are based on this RFC, as you will later discover.

Before jumping into the fun part, you should have a pretty good idea of what a signature is and what it can provide. Here is a post about data integrity that will refresh your memory.

Let's get to it!

How does it work?

As I said earlier, there is no single standard for this. Below, I describe the basic idea behind HTTP signatures; however, not all schemes follow this outline. Some simplify or even skip an entire step. Be cautious when choosing what to use, as you may unknowingly downgrade security.

The whole process can be summed up in the following five steps.

  1. Take every part of the request that has any significance during processing: the HTTP method, the URL, the query parameters, the headers, the body, and the time the request was made.
  2. Transform all of these values into a standard form, a so-called canonical representation.
  3. Calculate the hash of the canonical request.
  4. Sign the calculated hash with a secret key.
  5. Attach the signature and some meta parameters (time of the request, signature version, config parameters, etc.) to the request.
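The five steps above can be sketched in a few lines of Python. This is a simplified illustration, not any real scheme: the `X-Example-*` names and the canonicalization rules are made up, and real schemes like AWS SigV4 define both much more strictly.

```python
import hashlib
import hmac
import time

def sign_request(method, path, params, headers, body, secret):
    # 1-2. Build the canonical representation: sorted query parameters,
    #      lower-cased and sorted headers, hashed body, and a timestamp.
    canonical_query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    canonical_headers = "\n".join(
        f"{k.lower()}:{v.strip()}" for k, v in sorted(headers.items()))
    timestamp = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
    canonical_request = "\n".join([
        method, path, canonical_query, canonical_headers,
        hashlib.sha256(body.encode()).hexdigest(), timestamp])
    # 3. Hash the canonical request.
    hashed = hashlib.sha256(canonical_request.encode()).hexdigest()
    # 4. Sign the hash with the secret key (HMAC-SHA256 in this sketch).
    signature = hmac.new(secret.encode(), hashed.encode(),
                         hashlib.sha256).hexdigest()
    # 5. Return the signature and meta parameters to attach to the request.
    return {"X-Example-Date": timestamp,
            "X-Example-Signature": f"EXAMPLE-HMAC-SHA256 {signature}"}
```

Note that the canonicalization step is where most of the real-world complexity hides: both sides must produce byte-identical canonical requests, or the signatures will never match.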

Once a client finishes the above process, it sends the request with the signature and metadata to the server. The server follows mostly the same process with three fundamental differences.

First, the server checks the meta parameters to make sure they match the server's settings. If there is a mismatch, the processing is terminated. Such a case may occur if an incompatible algorithm is used, or if the request was intended for a different service.

Second, depending on the type of algorithm in the meta parameters, the server chooses the right key to use when checking the signature. In case of a symmetric key, this is the same one used to sign the request. On the other hand, if the algorithm used was asymmetric, then the server uses the public key corresponding to the client's private one to verify the signature. This is standard public key cryptography.

Last, the server checks for replay attacks. Remember, the incoming request contains the time it was made. This makes it possible for the server to check whether that time is near the current server time. The server will only process requests that are within a small window of time. You will find more details about this under replay protection.
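The three server-side checks can be sketched as follows. This assumes a symmetric scheme; the meta parameter names and the 5-minute window are illustrative, not taken from any real implementation.

```python
import hashlib
import hmac
import time

MAX_CLOCK_SKEW = 300  # the server's replay window, in seconds

def verify_request(meta, canonical_hash, claimed_signature, secret, now=None):
    # First: the meta parameters must match the server's settings.
    if meta.get("algorithm") != "HMAC-SHA256":
        return False  # incompatible algorithm, terminate processing
    # Second: recompute the signature with the right key (the shared
    # secret here) and compare in constant time.
    expected = hmac.new(secret.encode(), canonical_hash.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claimed_signature):
        return False  # signature mismatch
    # Last: reject replays by checking the timestamp against the window.
    now = time.time() if now is None else now
    return abs(now - meta["timestamp"]) <= MAX_CLOCK_SKEW
```

The constant-time comparison matters: a naive `==` on the signature strings could leak timing information to an attacker probing byte by byte.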

As you can see, the concept is not rocket science, but don't let that fool you. The devil is in the details. Implementing such a signature scheme can be quite some work. For instance, Amazon provides a complete test suite and pages of documentation to aid in implementing their scheme.

Amazon isn't the only party with a custom signature scheme; there is Escher (a generalized version of Amazon's v4), for instance. It is, at least on paper, compatible with AWS. There is Joyent's HTTP signature, which follows the RFC draft. And there is also a multitude of custom solutions around, but these are out of scope.

A quick note on large payloads and streaming

As you now know, calculating the canonical form requires the user to have access to the complete body of the request during processing. This may quickly become a problem if you deal with larger payloads (megabytes or gigabytes), as the body may not fit into memory.

In these cases, you can choose not to include the payload in the canonical form or utilize the streaming interface of the libraries. At the time of this writing, only the AWS SDK provides a streaming interface with signature authentication, so this is a significant limitation.
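The core idea behind such streaming interfaces is to feed the body to the hash function in chunks instead of holding it in memory. A minimal, stdlib-only sketch of the idea (not any particular library's API):

```python
import hashlib
import io

def hash_stream(fileobj, chunk_size=64 * 1024):
    # Consume the body chunk by chunk, so memory use stays constant
    # regardless of payload size.
    h = hashlib.sha256()
    for chunk in iter(lambda: fileobj.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

# Same digest as hashing the whole body at once, without loading it all.
digest = hash_stream(io.BytesIO(b"x" * 10_000_000))
```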


Implementation complexity

If you are lucky, you can find a library that implements the scheme you would like to use. If not, you will see for yourself how hard this is. You need to canonicalize requests and perform various crypto operations while watching out for lots of edge cases.

Using a battle-tested library, like the AWS SDK, is a breeze, as it hides all the complexity behind simple APIs. Interfacing with lower-level libraries, like Escher, is a bit less fun. They require extensive configuration and have more complicated APIs. Not to mention their unintentional resistance to debugging: sometimes low-level libraries just fail to authenticate and can't provide any information other than "signature mismatch." This isn't too helpful.

These properties make signature schemes the most complex authentication choice we have covered so far.

Reliance on HTTPS

Signature schemes provide integrity protection, i.e., they are resistant to tampering. They also have countermeasures to thwart replay attacks. While HTTPS is "only" an additional layer of security here, there is one thing for which signatures rely exclusively on it: encryption.

I highly recommend using this scheme only over HTTPS.

CSRF protection

CSRF attacks only apply in the browser, and signature schemes provide complete protection, as the attacker has no way of producing a valid signature without access to the secret used to sign the request.

Therefore, any attempt to send a request from outside the client application will result in a signature mismatch error. There are other problems with using a signature in the browser. You can read more about them under "Recommended use cases."

Replay protection

CC0 image by Buenosia Carol

A replay attack tries to re-issue a request at a later point in time. Previous schemes depended upon HTTPS to solve this for them. With signatures, however, there is a timestamp included in the request itself, which can be used to detect and block replay attempts. Let's consider what would happen during a replay attack.

At first, a legitimate request is issued at 10:32. The timestamp on the request is 10:32. Let's say it takes a few seconds for the request to reach the server. When it arrives, the server clock is at 10:33. The server verifies the signature and then it checks the time. There is a few seconds difference, which is within its configured acceptable threshold, so it processes the request.

While the above was happening a crafty attacker recorded the HTTP request with the intention of replaying later.

10 minutes pass.

It is now 10:42. The attacker replays the request, hoping to get it processed. It reaches the server within a few seconds. The signature checks out; it is a valid request, after all. Then the server checks the time and finds a ~10-minute difference. It is configured to accept requests with a delta of 5 minutes at most, so it terminates processing and returns an error.

As you can see, signature schemes provide replay protection, which can be configured by setting the time delta parameter. The lower this parameter, the smaller the attacker's time window to execute a successful replay.
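The window check from the scenario above fits in a few lines. The 5-minute delta and the timestamps are the illustrative values from the story, not a recommendation:

```python
from datetime import datetime, timedelta

MAX_DELTA = timedelta(minutes=5)  # the configurable time delta parameter

def within_window(request_time, server_time):
    # Accept the request only if its timestamp is close to the server clock.
    return abs(server_time - request_time) <= MAX_DELTA

sent = datetime(2018, 6, 1, 10, 32)
fresh = within_window(sent, datetime(2018, 6, 1, 10, 33))   # legitimate request
replay = within_window(sent, datetime(2018, 6, 1, 10, 42))  # replayed 10 minutes later
```

Note the `abs()`: the window is symmetric, so a client whose clock runs ahead of the server's is handled the same way as one that lags behind.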

There is a small catch, though: the client's and the server's clocks must be in sync for this to work correctly. If they drift too far apart, no successful request can be made.

Integrity protection

This is where the complexity pays off. Signature schemes, if implemented correctly, provide integrity for the full request. Every meaningful part of the request is used to calculate the signature. Therefore no attacker can modify the request and still retain a valid signature.

A classic man-in-the-middle attack would merely result in a signature mismatch error, and the server will refuse to process the request.

Recommended use cases

Signature schemes are especially great when you need to protect highly sensitive resources and relying only on HTTPS is not considered enough. A good example is the AWS API, where Amazon requires signatures for enhanced security. This is a great defense in depth mechanism. You may also see the use of signatures on specific critical endpoints, like user management. This level of security is usually only required between backend services, integrating over an insecure channel, such as the open internet.

Frontend usage is possible but has some issues that make it mostly impractical. First, consider the problem of key exchange. The user's browser has to get hold of the key, which can be used to sign requests. One option might be to fetch this key from the server right after authentication. This, of course, violates the principle that these keys should never be transmitted over the wire. So the only option is to deliver it out-of-band, which opens up new problems.

Second, think about the capabilities of an active network attacker. He can read and modify any request at will. He can't alter signed requests, but he can tamper with responses. The problem arises when the site is first loaded and the frontend application, written in JS, is downloaded from the server. The attacker may very well modify the application to leak the secret.

Of course, HTTPS protects from both of the above because of its confidentiality guarantee. However, if we rely solely on HTTPS for security, then signature schemes are not an upgrade, merely added complexity.

In conclusion, I recommend against using this for browser-server authentication and encourage you to use it for server-server calls.

Pre-signed URLs (special purpose tokens)

Being able to guarantee a request's integrity opens up a few new possibilities. For instance, you can create a token that is only valid for 5 minutes and only for a particular operation. Bear with me, here is a real-world scenario.

You operate a small service on Heroku and would like to allow your signed-in users to upload images to an AWS S3 bucket. In the classic scenario, you provide an endpoint which accepts a POST request with the file. Essentially, the user sends the file to your service on Heroku. You validate the session, and if it checks out, you upload the file to the S3 bucket from your service.

Consider the alternative. You provide an endpoint which gives the user a unique URL, already signed by your AWS key. The URL contains the bucket name, the name of the file, and any metadata you want to enforce. The user then uploads the file directly to S3, which will validate the pre-signed URL and store the file. Using this approach, you save lots of processing power and complexity, while still ensuring that only logged-in users can upload files.

So what would such a pre-signed URL look like? In Escher's case, the URL carries X-EMS- query parameters, which hold the metadata and the signature itself.
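To make the mechanism concrete, here is a minimal, stdlib-only sketch of pre-signing a URL. The `X-Example-*` parameter names are made up; real schemes like Escher's define their own parameters and a much stricter canonicalization.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign_url(base_url, params, secret, expires_in=300):
    # Attach an expiry, sign the whole URL content with HMAC-SHA256,
    # then append the signature as one more query parameter.
    params = dict(params)
    params["X-Example-Expires"] = str(int(time.time()) + expires_in)
    string_to_sign = base_url + "?" + urlencode(sorted(params.items()))
    params["X-Example-Signature"] = hmac.new(
        secret.encode(), string_to_sign.encode(), hashlib.sha256).hexdigest()
    return base_url + "?" + urlencode(sorted(params.items()))
```

Because the signature covers the expiry and every other parameter, the recipient can hand the URL around freely: any tampering with it invalidates the signature.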

Pre-signed URLs can also be used to provide single-sign-on between services. Here is how you would do it between service A and service B.

Create a pre-signed URL pointing to service B's authentication endpoint and containing claims about the user. Then, redirect the user to this URL. Service B will check the signature and if everything is okay, initialize a session for the user like it would after a standard login.

The key in both cases is the pre-signed URL. For one thing, it contains enough information for AWS and service B to know how to process the request.
Next, this URL can be given to the user, as the signature protects its content. You retain control. Furthermore, a pre-signed URL acts as a self-describing token, i.e., everything is encoded in it, and it can be passed around.

Contrast this with the HTTP Digest Auth scheme, which also provides integrity. Would it be possible to create a pre-signed URL using this scheme?

Not really. The problem is that Digest Auth expects the authentication information in the Authorization header, which cannot be controlled from the URL. This makes it impossible to create a self-describing URL that can be passed around.

Taking care of your auth scheme

There are three things to keep in mind when using signatures. The first is library compatibility. Different signature schemes are almost always incompatible. For instance, the AWS signature has nothing to do with the scheme defined in the RFC draft.

Secondly, secrets should be kept secret: load them from environment variables, or keep them encrypted in a DB/filesystem and load the decryption key from environment variables.

Thirdly, take some time to think about the cost-benefit factor. Signature schemes are complex animals, and most of the time you can probably get by utilizing something less secure and more straightforward.

Coming up next

Everything we covered so far is on the HTTP layer. The next scheme we cover goes a layer down, to the TLS protocol: client certificates. This is the same idea that websites use to authenticate themselves to us, the users. In this scenario, users are issued certificates, and that is what they use during authentication. This solution goes all-in on HTTPS and provides excellent security properties.

Are TLS client certificates the next big thing? How come they are not widespread? Chances are, you have not even seen one in the wild. What are the tradeoffs when using them? You will find the answers to all of these questions, and lots more, in the next part of the series. Stay tuned!

Spot a mistake in reasoning? Have a different opinion? Sound off in the comments!