Monday 10 December 2018

Understanding PKCE as a Solution to Interception Attacks

The OAuth2 Code Grant flow allows a secure Client to be granted access to a protected resource on behalf of its owner. The Client, usually a server-side web application, must be trustworthy in order to mitigate Man-in-the-Middle attacks and interception of the client secret, which the Authorization Server uses to verify the Client's authenticity. For that reason the Code Grant flow, without a proof key, is not suitable for clients which are publicly accessible, such as a browser-based JavaScript web app or a native mobile app.

Browser-based apps are driven by an API, the resource server in the OAuth flow, and should carry no state. For this scenario the usual alternative to the Code Grant is the Implicit Grant. The Client is loaded into the browser and is visible to the user, or to anyone with access to the static content, so a client secret is redundant: it could never be kept confidential. The user is granted a token immediately in exchange for valid credentials, as there is no benefit to a code exchange without client authentication.



The Implicit Grant is inherently insecure because the authentication response, containing the token, is open to interception once it is received by the browser. The token can then be replayed to the resource server and accepted as a valid request for the protected resource.



The solution to these problems is not to use the Implicit Grant on any public client, and instead to use the Code Grant with an additional feature called 'Proof Key for Code Exchange' (PKCE) to mitigate the interception attack that comes with exposing the Client publicly. This is similar to the use of a cryptographic nonce in the OpenID Connect implicit authentication request.

PKCE introduces an additional secret into the process, present on both the initial code request and the code exchange transactions. The client generates a random code verifier with high enough entropy that guessing it is infeasible, and derives a code challenge from it (typically a SHA-256 hash). The Code Grant flow now goes like this:


  1. The User opens their browser and navigates to the web page; the browser application redirects to the authorization server, typically its hosted login screen. 
  2. The client creates a code verifier, stores it in the browser, and sends the derived code challenge on the authorization request. The user enters their credentials; the auth' server stores the challenge, validates the credentials and returns the code. 
  3. Now the client makes the code exchange: it supplies the code and the original verifier to the auth' server, which hashes the verifier and checks it against the challenge supplied in 2. The code is exchanged for a token, which is returned to the client browser application.
  4. The client browser application may now use the token to request the resource.
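
Generating the verifier and challenge takes only a few lines. Here is a minimal Java sketch (class and method names are my own) following RFC 7636's S256 method:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class PkceSketch {

        // 32 random bytes gives 256 bits of entropy, far beyond guessable
        static String newCodeVerifier() {
            byte[] bytes = new byte[32];
            new SecureRandom().nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }

        // The S256 challenge sent on the authorization request: BASE64URL(SHA-256(verifier))
        static String codeChallenge(String verifier) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(verifier.getBytes(StandardCharsets.US_ASCII));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
        }
    }

The plain verifier never leaves the client until the code exchange, which is exactly why an intercepted challenge or code is useless on its own.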



Any attacker intercepting the code is not in possession of the verifier and therefore cannot exchange the code for a token. The verifier can only be used once per authentication request, so even if it were leaked it couldn't be replayed.

Monday 19 November 2018

Implementing OAuth2 & JWT in a Micro Services Architecture with Spring Boot 2

The theme of loosely coupled, independent software components underpins the rationale of the modern microservices architecture. Of course, this is nothing new: encapsulation is one of the four principles of Object Oriented programming. The microservices approach seeks to extend this principle beyond the scope of code to the wider systems architecture. A network of small, independent units of granular functionality, which adhere to common communication standards, is the natural progression from classic n-tier software systems.

Key to this independence is statelessness: REST principles suggest that each and every request should be independent of the next. There is little point in implementing separated software components only to bind them together again to meet the constraints of other technologies which might require session state or a shared back-end resource. An example of this is authentication and authorization. Too often we go to great lengths to accomplish a clean and simple system only to shoe-horn in a legacy authentication mechanism which introduces tighter coupling between the network of independent components.

JSON Web Token (JWT), used with an OAuth 2 flow, is a solution to this. A JWT is cryptographically signed, typically with an asymmetric key pair, which allows us to guarantee the authenticity of requests across the microservice network without having to create tight coupling between these services with session state or a central token store. JWT can be configured to carry custom state within the access token, removing the need for any user information to be stored within each independent application. This has clear benefits for security compliance, testing and redundancy. It ensures that we only need to on-board and manage users in one place. It also provides the option of completely outsourcing identity management to a third party such as Okta, Auth0 or Ping Identity.

In this example I will create my own OAuth 2 authorization server which can easily be enhanced to an enterprise-scale identity management service using the rich features of Spring Security. A client application will provide the front-end functionality, supported by a separate resource exposed as a REST API. Other than the token-signing key, the resource server will be completely independent of the authorization server but will still be able to make secure access-control decisions.

All of these components will be implemented with Spring Boot 2, which is ideally suited to container-based delivery into a microservice environment. Any number of resource services can be added to the network without the overhead of shared state or a shared resource to manage authentication or authorization.

This exemplifies the principle of simple, clean, granular services which are easy to maintain and enhance to meet the rapidly changing demands from the business.

OAuth 2 Code Grant Flow

The goal of the OAuth Code Grant is to authorize a client application to access a resource on behalf of a resource owner without having to know the resource owner's security credentials. Although the process is not bound to any specific token implementation, JWT complements the Code Grant process to achieve a clean separation between the back-end resources.

Before looking at the flow it's important to understand the four different actors in the authorization use case.
  • Resource Owner - the user who makes requests to the client via HTTP, usually with a browser.
  • Authorization Server - the identity management service which hosts user identities and grants tokens to authenticated users to access other resources via a client.
  • Client Application - in the Code Grant flow the client is a web application which consumes resources from the resource server. It is important to understand that this is a server-side web app and NOT a JavaScript app executing in the browser. The Code Grant flow requires that the Client can be trusted, as it retains a Client Secret value which it uses to authenticate its requests with the Auth' Server.
  • Resource Server - a REST API which produces and consumes representations of state from and to the client application. Multiple APIs may be implemented in a network of microservices.


  1. Resource Owner (User) opens a browser and navigates to the Client app
    • The client checks for a cookie
    • If none is present it redirects to the hosted login page on the Authorization Server
  2. User provides authentication credentials, username and password
    • Upon successful authentication the Authorization Server returns a code, along with a state value, and redirects the browser back to the client app 
  3. The Client app receives the code
    • Verifies the state to check for CSRF
    • Provides the code and secret to the Authorization server
    • The Authorization server receives the code, authenticates the client's request using the secret, creates a token signed with the private key and returns it to the Client
  4. The Client receives the token and uses it to make subsequent requests to the resource server
    • The Resource server receives requests and verifies the token's signature using the public key

Our Client application is assigned a scope; only those resources in scope are accessible to the client. Individual users are granted authorities, most commonly implemented as roles. In Spring Security the role-based Access Decision Manager interrogates the authenticated user's granted authorities. With JWT we can pass these through to the resource service in the claims and filter them as roles. We can also add anything from the domain model, such as an organisation association, and maintain that information in the resource service by passing it through as custom claims in the token.

Spring Boot 2 Implementation


Now that we understand the OAuth process and the use case we intend to solve, let's walk through a real example implemented with Spring Boot 2. All three independent services can be found on GitHub.
Let's start by creating the Authorization Server. As always, we start with the build and bring the Spring Boot dependencies into the pom.xml.
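
A minimal dependency set looks something like this (versions are illustrative of the Spring Security OAuth and JWT libraries of the time):

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security.oauth</groupId>
        <artifactId>spring-security-oauth2</artifactId>
        <version>2.3.4.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-jwt</artifactId>
        <version>1.0.9.RELEASE</version>
    </dependency>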

The main configuration annotation sets up everything we need for the Authorization Server: the hosted login page, the web service and all the request and response logic of the OAuth flow.
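
In outline (the class name is my own):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
    import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;

    @Configuration
    @EnableAuthorizationServer
    public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {
        // client registrations, token store and endpoint wiring are added below
    }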

For the purposes of this example the UserDetails are stored as in-memory attributes. This could easily be extended with a custom UserDetailsService backed by a store.
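
A minimal sketch, assuming Spring Security 5's password encoding rules (hence the {noop} prefix):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
    import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

    @Configuration
    public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(AuthenticationManagerBuilder auth) throws Exception {
            // {noop} tells Spring Security 5 not to encode the password - fine for a demo only
            auth.inMemoryAuthentication()
                .withUser("john").password("{noop}password").roles("USER");
        }
    }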

The same goes for the ClientDetails. For a real-world implementation we would want to be able to manage the Clients via a UI and, as with the UserDetailsService, Spring Security allows us to implement a custom ClientDetailsService and store the details however we choose.
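
For example, added to the AuthServerConfig class above (client id, secret and redirect URI are illustrative, with the redirect pointing back at the client app on port 8082):

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
            .withClient("client-app")                      // illustrative client id
            .secret("{noop}secret")                        // again unencoded, demo only
            .authorizedGrantTypes("authorization_code", "refresh_token")
            .scopes("read")
            .redirectUris("http://localhost:8082/login");  // back to the client app
    }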

OAuth does not dictate any specific type of token, or how it is managed. Spring Security allows us to implement the token however we wish, but provides extensions for JWT.

For simplicity we'll just set a static signing key, which will be used here and in the Resource Server to decode the token. In a real-world implementation this would be an asymmetric key pair: we'd keep the private part here and export the public part to the resource service via a robust key management tool.
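
A sketch, with the key value a placeholder:

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        converter.setSigningKey("notasecret");   // placeholder symmetric key - demo only
        return converter;
    }

    @Bean
    public TokenStore tokenStore() {
        return new JwtTokenStore(accessTokenConverter());
    }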

As well as the converter, we need to create a token enhancer to add the custom claims to the access token. Here I'm just setting a static String value against an 'organization' claim. In reality our UserDetails are likely to be part of a relational schema which could also describe how the user relates to a wider organization account. This would be very important for access control of resources, and this value could again come from the UserDetailsService.
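
A minimal enhancer might look like this (the 'acme' value is a placeholder):

    import java.util.HashMap;
    import java.util.Map;
    import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken;
    import org.springframework.security.oauth2.common.OAuth2AccessToken;
    import org.springframework.security.oauth2.provider.OAuth2Authentication;
    import org.springframework.security.oauth2.provider.token.TokenEnhancer;

    public class CustomTokenEnhancer implements TokenEnhancer {

        @Override
        public OAuth2AccessToken enhance(OAuth2AccessToken accessToken,
                                         OAuth2Authentication authentication) {
            Map<String, Object> additionalInfo = new HashMap<>();
            // static placeholder; in reality this would come from the UserDetailsService
            additionalInfo.put("organization", "acme");
            ((DefaultOAuth2AccessToken) accessToken).setAdditionalInformation(additionalInfo);
            return accessToken;
        }
    }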

Then we wire the converter and enhancer into the endpoint configuration.
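
Something like this, back in the AuthServerConfig class:

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        TokenEnhancerChain chain = new TokenEnhancerChain();
        // the enhancer must run before the converter so the custom claims are signed into the JWT
        chain.setTokenEnhancers(Arrays.asList(new CustomTokenEnhancer(), accessTokenConverter()));
        endpoints.tokenStore(tokenStore()).tokenEnhancer(chain);
    }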


Client App

As with the authorization service, the Client application is configured purely from a single annotation.
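
A minimal sketch, using the spring-security-oauth2-autoconfigure support of the time:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;

    @SpringBootApplication
    @EnableOAuth2Sso
    public class ClientApplication {
        public static void main(String[] args) {
            SpringApplication.run(ClientApplication.class, args);
        }
    }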

The client settings are provided in the application properties.
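
For example (the client id, secret and signing key must match the authorization server, which here runs under the /auth context path on port 8080):

    security.oauth2.client.client-id=client-app
    security.oauth2.client.client-secret=secret
    security.oauth2.client.access-token-uri=http://localhost:8080/auth/oauth/token
    security.oauth2.client.user-authorization-uri=http://localhost:8080/auth/oauth/authorize
    security.oauth2.resource.jwt.key-value=notasecret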

The Code Grant flow is best suited to server-side web apps, so this application uses Thymeleaf to render the values returned from the resource service on a secured page which is only accessible to an authenticated user. The rest of the configuration sets up the view controllers and resource handlers for the template HTML pages.

Resource Server

The resource server security is configured for the OAuth flow with another annotation.  
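
Again in outline (class name mine):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
    import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;

    @Configuration
    @EnableResourceServer
    public class ResourceServerConfig extends ResourceServerConfigurerAdapter {
        // token store and converter beans are declared below
    }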

We add the JWT token store and a custom converter to decode the token with the key and access the custom claims.
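
A sketch of both pieces (the signing key placeholder must match the one set on the authorization server):

    import java.util.Map;
    import org.springframework.security.oauth2.provider.OAuth2Authentication;
    import org.springframework.security.oauth2.provider.token.DefaultAccessTokenConverter;

    public class CustomAccessTokenConverter extends DefaultAccessTokenConverter {

        @Override
        public OAuth2Authentication extractAuthentication(Map<String, ?> claims) {
            OAuth2Authentication authentication = super.extractAuthentication(claims);
            authentication.setDetails(claims);   // expose the custom claims to controllers
            return authentication;
        }
    }

    // and, inside ResourceServerConfig:

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        converter.setAccessTokenConverter(new CustomAccessTokenConverter());
        converter.setSigningKey("notasecret");   // must match the authorization server's key
        return converter;
    }

    @Bean
    public TokenStore tokenStore() {
        return new JwtTokenStore(accessTokenConverter());
    }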

The resource service is a simple REST service created with the Spring Web framework. In order to access the authentication details and authenticated principal in the controller we can simply include them as arguments in the method. The converter we configured earlier then adds the custom claims from the token to the object and we can pull back the details in the logic. I've also enabled method security so we can annotate the controllers with Spring Security annotations and restrict access based on SpEL expressions. In this case I'm securing the method by restricting both the scope and the authenticated user's granted authorities.
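
A sketch of such a controller (the endpoint and claim handling are illustrative; the #oauth2 expressions assume method security is enabled with an OAuth2MethodSecurityExpressionHandler):

    import java.util.Map;
    import org.springframework.http.ResponseEntity;
    import org.springframework.security.access.prepost.PreAuthorize;
    import org.springframework.security.oauth2.provider.OAuth2Authentication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class OrganizationController {

        // restricts by client scope and by the user's granted authority
        @PreAuthorize("#oauth2.hasScope('read') and hasAuthority('ROLE_USER')")
        @GetMapping("/organization")
        public ResponseEntity<String> organization(OAuth2Authentication authentication) {
            @SuppressWarnings("unchecked")
            Map<String, Object> claims = (Map<String, Object>) authentication.getDetails();
            // the custom claim added by the token enhancer on the authorization server
            return ResponseEntity.ok((String) claims.get("organization"));
        }
    }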

Running the System


All three services are set to run on different ports. Using the Maven Spring-Boot plugin run each one on localhost and navigate to the Client on http://localhost:8082

The browser doesn't yet have a cookie and so the client presents a page inviting the user to log in. Clicking the login link redirects the browser to the Authorization Server's hosted login page at http://localhost:8080/auth/login. Enter the username and password defined in the example's in-memory configuration and sign in.




The browser is now redirected back to the client app, which renders the Secured Page. Its resources are fetched from the resource server, running on http://localhost:8081, and substituted into the template.

Dangers of Stateless Authorization

A JWT's validity can only be ascertained by verifying its signature. There is no way for the Authorization Server to revoke a token once it has been issued; for this reason tokens should only be valid for a short period of time. The attack surface of a system using JWT is also large: if the private key is compromised, every identity and every resource server is compromised.

Wednesday 24 October 2018

White Listing S3 Bucket Access for AWS Resources


Limiting access to data with a white list is a security requirement of any serious data governance policy. In the Cloud, the obvious storage choices, such as S3, might not seem like suitable solutions for hosting high-risk data. However, the options available for securing data are very powerful. In this post I will show how to implement a robust white-listing policy for an S3 bucket which limits access to resources holding a given or assumed IAM role.

A common policy with high-trust data, such as Personally Identifiable Information, is to only allow access via an application: no direct access to the file store hosting the secure data should be permitted. Of course, we want to avoid storing access credentials within the application itself, the container or the machine image. The most risk-averse option is to grant our resources (EC2, ECS, Lambda) an IAM role. In an EC2 environment we can access those credentials from the Instance Profile; in Java we use the InstanceProfileCredentialsProvider to enable the application to access the S3 resource.
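
A minimal sketch with the AWS SDK for Java (v1):

    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class S3ClientFactory {

        // Credentials are resolved from the EC2 instance profile at runtime,
        // so nothing is baked into the image, container or configuration
        public static AmazonS3 instanceProfileClient() {
            return AmazonS3ClientBuilder.standard()
                    .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                    .build();
        }
    }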

The role is associated with the EC2 instance or, if we're autoscaling a cluster, specified in the Launch Configuration and associated with an instance on launch.

The role associated with the instance grants the resource access to all operations in the S3 service. This does not limit access to the specific bucket or protect the resources within it in any way. Next we need a Bucket Policy which limits access to authenticated principals holding that role, or resources which have assumed it.
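
For reference, the role's permissions policy is as broad as just described; something like:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": "*"
        }
      ]
    }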

The bucket policy is a 'Deny' with a White List of roles which are not subject to that effect. In many examples the 'NotPrincipal' element is used to define the White List. This works, but it requires us to name the instance Id, as well as the assumed role, as principals exempt from the 'Deny' effect. That causes problems in an autoscaled EC2 group, as we aren't able to add and remove specific instance Ids in the policy as instances launch and terminate. We could implement some kind of elaborate callback in UserData to amend the policy, but that would require granting the EC2 instance access to manage IAM policies, which would violate the Least Privilege principle.

A more elegant solution is to use a 'Condition' clause, instead of the 'NotPrincipal' element, with a 'StringNotLike' condition on the aws:userId key matching the role's unique ID. This means we don't need to explicitly define instance Ids in the White List.

Here's the Bucket Policy which will limit access to only those resources which are granted the role we created earlier.
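
A sketch of that policy; the bucket name, role unique ID and account ID are placeholders (the role's unique 'AROA...' ID can be found with aws iam get-role, and listing the account ID stops the account locking itself out):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyAllButWhiteListedRole",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::my-secure-bucket",
            "arn:aws:s3:::my-secure-bucket/*"
          ],
          "Condition": {
            "StringNotLike": {
              "aws:userId": [
                "AROAEXAMPLEROLEID:*",
                "111122223333"
              ]
            }
          }
        }
      ]
    }

The ":*" suffix matches any session name, which is what lets autoscaled instances come and go without policy changes.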



Tuesday 27 February 2018

Serverless Automated Lifecycle Management, For Free

Intro

As developers we should focus on the implementation and delivery of business logic which solves the customer's requirements. Everything else is ancillary to this. The customer doesn't care about our processes, so they should be slick and easy to manage, so as not to consume the time and money we have to focus on providing solutions. This isn't to say that the build and delivery processes aren't important: the continuous build process should ensure the integrity of the artifacts, and the continuous delivery of those artifacts should be repeatable and error free. The entire process should be flexible, allowing us to deliver builds into different environments, with zero overhead, at any moment in time, managed directly from version control.

In this example I will show how to achieve this using tools which are provided as services (or serverless, as far as we are concerned) for free, to create a complete automated lifecycle process resulting in production deployment managed through the correct governance of version control.



Responsibilities and Governance

The governance of version control is paramount and each team member must understand their role and responsibility in its management. In an automated continuous integration environment the master branch, or trunk, represents the source code delivered to production and is owned by the Delivery Manager. Only the Delivery Manager role can authorize Pull Requests to the master, merge in branches and tag releases. It is these actions which will trigger a production build delivered to the production environment with zero downtime.

Developers branch or fork from the master. They do this for each piece of work: a bug fix or a new feature requested by the business and raised as a documented issue. Many copies of the master may be taken, and branches or forks may be merged by the team to collate work. Continuous integration of these development branches, and builds of the source, can happen at any time, with the resulting artifact automatically deployed to another environment: staging, dev, test, etc. The important point is to ensure that we map a forked repository or branch to an environment and that the master copy remains secured by the Delivery Manager.

Example of a version control timeline

Application Architecture

The simplicity of the process, and the development team's success in utilizing it, is underpinned by the gold standard of application architecture. To me this means statelessness and a clean separation of concerns from top to bottom. A messy architecture will result in spaghetti code. Spaghetti code will result in build complexity and difficulty in testing. These problems will hinder the processes, causing the team to spend time 'shoe-horning' it to fit the configuration. The process will become bespoke and unrepeatable, which will ultimately result in mistakes and errors in production deployment, wasting yet more time.

In this example I will use a Single Page App, written in Vue.js, feeding from a Spring Boot REST API which provides a gateway to read/write functionality over data in a relational database. This provides clean separation of concerns and the statelessness to ensure scalability.

Token based authentication in an SPA architecture 

Storage and authentication of user identity is often the most difficult piece of state to remove from a system. Requests to a REST API should be completely independent of each other; there should be no user identity shared across server-side machine instances. This is achieved through token-based authentication and authorization, in this case OAuth using JWT. The token provider in this example is Auth0. The token carries the identity and its granted authorities to the resources exposed through the API. The identity store and token provider are, in this case, completely separated from the resource server under development. Again, this is key to good architecture as it separates concerns and keeps the state away from the functionality. It ensures we don't need to customize the build for different environments by moving data around, obfuscating identities and fudging security credentials.

Dependency Management

I will focus on the build process of the REST API and its underlying service and repository components. In this simple example all three tiers fall under one build. We could separate them out into different projects for added physical separation, but as long as the architecture is stateless the cleanliness is maintained. In the event of separate builds being required we must manage the build order and dependencies within each project. There are two options for this: we can trigger builds of dependencies prior to the build of the deliverable artifact, or we can keep our dependent builds completely separate and store them in an artifact repository. In my opinion the latter option is more desirable, as it adds another level of separation to the build process and therefore greater flexibility and more options for the team. In this case dependency management as a service, such as JitPack.io, is a good choice as it integrates very easily with our other tool services, especially GitHub. I use Maven to build most of my backend Java projects, so it's a simple case of associating my GitHub account with JitPack and importing the projects I want to store builds for. Additionally, I need to add the JitPack repository to the build files and change their group Ids.
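
In the consuming pom that amounts to something like this (artifact and version are illustrative; JitPack group Ids follow the com.github.<user> convention):

    <repositories>
        <repository>
            <id>jitpack.io</id>
            <url>https://jitpack.io</url>
        </repository>
    </repositories>

    <dependency>
        <groupId>com.github.johnhunsley</groupId>
        <artifactId>shared-library</artifactId>
        <version>1.0</version>
    </dependency>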



Setting up the Environment

My target environment is Heroku. I prefer it to AWS because it is free, to a point, and its container based approach removes any requirement for managing any infrastructure, even as code. As my user identities are stored with a third party I don't need to concern myself with network segregation and complex firewall settings to protect highly secure data. I can just throw the application onto a container on a VM and run it.

Before I can deploy my application to Heroku from my CI tool I need to set up the container and the apps I need. The backend relational database, the Spring Boot API and the front-end files will all deploy to different Heroku apps. I won't go into the detail of setting up the MySQL backend: I use ClearDB and just go through the basic setup to get an endpoint from which I can create a database, letting Hibernate create the schema from the JPA mappings on the domain model in my Spring Boot API application with the following setting.
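
That setting is Hibernate's schema-generation switch in application.properties; something like (or 'update', to preserve data between restarts):

    spring.jpa.hibernate.ddl-auto=create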


The application is a web app and so I've chosen to run it on a web container. This prepares the app to receive web traffic, HTTP requests from the front end, and provides some default options for execution. I need to tell the container how to run the Spring Boot app and how to connect to the database. Heroku will automatically detect that my app is Java and built with Maven, and provides a default command to start the app when the container is fired up.
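
For a Maven build the command is along these lines (shown here as the equivalent Procfile entry; the jar path is illustrative):

    web: java -Dserver.port=$PORT $JAVA_OPTS -jar target/*.jar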


In Spring Boot all configuration, such as database connection URLs and credentials, is consolidated into application.properties. I can override that by defining the properties I need for this environment as environment variables within the container, a straightforward task of setting the properties and values in the Config Vars settings. Obviously, these can vary for production, test, pre-prod or whatever environment the linked version control is meant to reflect.

Now that the container is configured I need to link it to the build as a deployment target. This is secured through an API key which I must set in the project settings for the CI tool.

Build and Deployment 

The build is triggered automatically by a push to a branch. This is achieved through the association of the GitHub and build server accounts; I'm using CircleCI, another free, container-based tool which integrates seamlessly with Git. Before importing the project we need to add a file to the source to tell CircleCI what to do, above the default of just detecting and running the Maven build. It's a simple case of adding a circle.yml file to the project root. There are many options here for managing test reports, running custom scripts and deploying the build to an environment. I'm using CircleCI 1.0, which has built-in Heroku support. This makes life much easier than v2.0, which requires some customization to deploy to a Heroku Dyno. The deployment configuration section of the file tells CircleCI where to deploy the resulting artifact when triggered from specific locations in Git.
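
A deployment section along these lines covers the two scenarios described next:

    deployment:
      staging:
        branch: staging
        heroku:
          appname: rest-api-stage
      release:
        tag: /v[0-9]+(\.[0-9]+)*/
        owner: johnhunsley
        heroku:
          appname: rest-api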



In the above configuration I have two deployment scenarios. The first tells CircleCI to deploy to a staging environment, a Heroku app named rest-api-stage, whenever there's a commit to the staging branch. The release configuration is a little different: this time CircleCI will only deploy to the production Heroku app, named rest-api, when the repository is tagged with a label matching the regex, e.g. v1.0, v1.1, v1.2.1, etc. The important control here is that only the owner of the repository, that's me as the Delivery Manager, can trigger the build. We could add multiple usernames of people added to the project repo as collaborators.

Heroku uses git to deploy code onto the containers. Under the hood, the above configuration is adding the build artifacts to a local git repository on the build container, connecting to the remote repository on the Heroku container and pushing the files.


A successful build and deploy to Heroku. See the full detail - https://circleci.com/gh/johnhunsley/returns/33

Now we've connected all the dots and implemented a complete end-to-end process for managing the life cycle of the app. If I need a new environment I simply create the new Heroku app, branch or fork the repository, edit the circle.yml and add the app name as the deployment target. The Delivery Manager just needs to ensure that s/he doesn't merge that file and sets the correct production target on the master branch; all part of their responsibility for managing the production delivery.

Unit and Integration Testing

Going slightly off track here, but it's worth noting the power of Spring Boot's testing capability. We have two types of tests: unit and integration. Both are written as unit tests and live in the test dir of the application. Unit tests isolate the individual classes we want to test using Mockito, or whichever flavour of mocking framework you usually use. I've configured my Maven build file to only execute anything named *IntegrationTest when the integration-test goal is executed; CircleCI will execute this goal as part of the default Maven build process. Integration tests usually rely on external resources, such as a real database, to achieve their goals. In this example I am testing the read/write functions of my repository types. However, I still don't want a build dependency on an external database, such as an instance of MySQL. Spring allows me to test that functionality against an in-memory HSQLDB instance which is initialized with the test and torn down at the end. It's a simple case of including the HSQLDB dependency in the test scope.
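
In the pom that means something like the following; the failsafe includes pattern implements the *IntegrationTest naming convention:

    <dependency>
        <groupId>org.hsqldb</groupId>
        <artifactId>hsqldb</artifactId>
        <scope>test</scope>
    </dependency>

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
            <includes>
                <include>**/*IntegrationTest.java</include>
            </includes>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>integration-test</goal>
                    <goal>verify</goal>
                </goals>
            </execution>
        </executions>
    </plugin>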



Caveats

Of course, we all like things for free. In this example the processes implemented with the tools I have used cost nothing. But just like a free beer, the barman isn't going to give you more than one for nothing! If you want to scale up, increase the number of users, the number of developers in the team, the number of build processes you want to run concurrently and, of course, the hits on your application, then you should expect to pay. However, I'd challenge anyone to find a lower-cost solution than the tools outlined in this example, which give you absolute control of your end-to-end lifecycle process. From setting up a dev' team right through to production delivery and support, it really can be very simple and cost effective.


**Update**

Just as I have published this post I've received an update from CircleCI saying that they're not going to support v1.0 builds after August 2018 and that everyone must migrate to v2.0. I don't mind that so much, as v2.0 gives us much more granular control of the build process, but it is lacking the simple Heroku support of v1.0. I'm currently looking into deploying to Heroku from a v2.0 build and will certainly post a blog about it when I figure it out!