replacing 500 error in nginx auth_request

One of the great things about nginx is the auth_request module. It allows you to make a call to another URL to authenticate or authorize a user. For my current work that is perfect, since virtually everything follows a RESTful model.

Unfortunately, there is one problem. If the auth_request fails, the server responds with an HTTP status of 500. That normally is a bad thing since it indicates a much more severe problem than a failed authentication or authorization.

The logs show

auth request unexpected status: 400 while sending to client

and nginx then returns a 500 to the client.

Nginx offers ways to intercept certain proxy errors via fastcgi_intercept_errors and uwsgi_intercept_errors, as described in this post. The suggested proxy_intercept_errors off; doesn't seem to do the trick here either.

I managed to come up with a way that returns a 401 by using the following in the location block that performs the auth_request:

auth_request /auth;
error_page 500 =401 /error/401;

This captures the returned 500 and changes it to a 401. Then I added another location block for the 401:

location /error/401 {
   return 401;
}

Now I get a 401 instead of the 500.

Much better.
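Putting it all together, here is a minimal sketch of how the pieces fit in one server block. The /protected/ and /auth locations and the upstream URL are hypothetical placeholders, and the internal directive on the error location is just there to keep clients from requesting it directly:

location /protected/ {
    auth_request /auth;
    error_page 500 =401 /error/401;
    # ... proxy_pass or other handling for the protected content ...
}

location = /auth {
    internal;
    # hypothetical auth backend that answers 2xx when the request is allowed
    proxy_pass http://auth-backend/check;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location /error/401 {
    internal;
    return 401;
}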

On a side note, it seems that someone else is also thinking about this.

@matthias


3 things I spent too much time on in cloudformation

Cloudformation is very powerful. It's very cool to be able to spin up an entire environment in one step. The servers come up with the right bits installed, networking is configured with security restrictions in place, and load balancing all works.

With that power comes some pain. Anyone who's worked with large Cloudformation templates will know what I'm referring to. In my case it's well over a thousand lines of JSON goodness. That can make things more difficult to troubleshoot.

Here are some lessons I’ve learned and, for my taste, spent too much time on.

Access to S3 bucket

When working with Cloudformation and S3 you get two choices to control access to S3. The first is the AWS::S3::BucketPolicy and the other is an AWS::IAM::Policy. Either will serve you well depending on the specific use case. A good explanation can be found in IAM policies and Bucket Policies and ACLs! Oh My! (Controlling Access to S3 Resources).

Where you'll run into issues is when you're using both. It took me the better part of a day to get an AWS::IAM::Policy to work. Everything sure looked great. Then I finally realized that there was also an AWS::S3::BucketPolicy in place.

In that case (as the Oh My link points out), the one with least privilege wins!

Once I removed the extra AWS::S3::BucketPolicy everything worked perfectly.
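For reference, this is roughly the shape of the AWS::IAM::Policy I ended up relying on. Consider it a minimal sketch: the role, bucket, and allowed actions here are hypothetical and would need to match your own stack.

"S3AccessPolicy": {
  "Type": "AWS::IAM::Policy",
  "Properties": {
    "PolicyName": "s3-access",
    "Roles": [ { "Ref": "AppInstanceRole" } ],
    "PolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [ "s3:GetObject", "s3:PutObject" ],
          "Resource": { "Fn::Join": [ "", [ "arn:aws:s3:::", { "Ref": "AppBucket" }, "/*" ] ] }
        }
      ]
    }
  }
}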

Naming the load balancer

In Cloudformation you can configure load balancers in two ways. The first kind will be accessible via the Internet at large, while the second will be internal to a VPC. This is configured by setting the "Scheme" : "internal" for the AWS::ElasticLoadBalancing::LoadBalancer.
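As a rough sketch (the resource name, subnet, and listener are placeholders), the internal variant looks something like this:

"InternalLoadBalancer": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
    "Scheme": "internal",
    "Subnets": [ { "Ref": "PrivateSubnet" } ],
    "Listeners": [
      { "LoadBalancerPort": "80", "InstancePort": "80", "Protocol": "HTTP" }
    ]
  }
}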

Now you can also add an AWS::Route53::RecordSetGroup to give that load balancer a more attractive name than the automatically generated AWS internal DNS name.

For the internet-facing (non-internal) load balancer this can be done by pointing the AliasTarget at the CanonicalHostedZoneName, and things will work like this:

"AliasTarget": {
  "HostedZoneId": {
     "Fn::GetAtt": ["PublicLoadBalancer", "CanonicalHostedZoneNameID"]
  },
  "DNSName": {
    "Fn::GetAtt": ["PublicLoadBalancer", "CanonicalHostedZoneName"]
  }
}

However, this does not work for the internal type of load balancer.

In that case you need to use the DNSName:

"AliasTarget": {
    "HostedZoneId": {
    "Fn::GetAtt": ["InternalLoadBalancer", "CanonicalHostedZoneNameID"]
  },
  "DNSName": {
    "Fn::GetAtt": ["InternalLoadBalancer", "DNSName"]
  }
}
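For context, both AliasTarget snippets above live inside a record of an AWS::Route53::RecordSetGroup. A rough sketch of the surrounding resource (the zone and record names are placeholders) looks like this:

"InternalDNSRecord": {
  "Type": "AWS::Route53::RecordSetGroup",
  "Properties": {
    "HostedZoneName": "example.internal.",
    "RecordSets": [
      {
        "Name": "app.example.internal.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": { "Fn::GetAtt": ["InternalLoadBalancer", "CanonicalHostedZoneNameID"] },
          "DNSName": { "Fn::GetAtt": ["InternalLoadBalancer", "DNSName"] }
        }
      }
    ]
  }
}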

(Template) size matters

As I mentioned earlier, templates can get big and unwieldy. We have some ansible playbooks we started using to deploy stacks and updates to stacks. Then we started getting errors about the template being too large. Turns out I'm not the only one having an issue with the max size of an uploaded template being 51200 bytes.

Cloudformation can deal with much larger templates, but they have to come from S3. To make this work, the awscli is very helpful.

Now for the large templates I use the following commands instead of the ansible playbook:

# first copy the template to S3
aws s3 cp template.json s3://<bucket>/templates/template.json
# validate the template
aws cloudformation validate-template --template-url \
    "https://s3.amazonaws.com/<bucket>/templates/template.json"
# then apply it if there was no error in validation
aws cloudformation update-stack --stack-name "thestack" --template-url \
    "https://s3.amazonaws.com/<bucket>/templates/template.json" \
    --parameters <parameters> --capabilities CAPABILITY_IAM 

Don't forget the --capabilities CAPABILITY_IAM, or the update will fail if your template creates IAM resources.
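The --parameters placeholder above takes ParameterKey/ParameterValue pairs. As a sketch with hypothetical parameter names (Environment and InstanceType are made up for illustration), the full call ends up looking something like:

# hypothetical parameter names; substitute your own stack's parameters
aws cloudformation update-stack --stack-name "thestack" \
    --template-url "https://s3.amazonaws.com/<bucket>/templates/template.json" \
    --parameters ParameterKey=Environment,ParameterValue=staging \
                 ParameterKey=InstanceType,ParameterValue=t2.micro \
    --capabilities CAPABILITY_IAM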

Overall I'm still quite fond of AWS. It's empowering for development. Nonetheless, the Cloudformation templates do leave me feeling brutalized at times.

Hope this saves someone some time.

Cheers,

@matthias