Sunday 1 May 2022

Quick, Easy and Comprehensive API Validation with Open API & Atlassian

Validation is a fundamental part of API development which can often be complex and difficult to implement consistently and comprehensively across HTTP services. A large proportion of post-development bugs result from misconfigured or miscommunicated validation of request and response entities, queries and paths to resources. Implementation must be comprehensive: it should validate all aspects of the request, and the response if required, and return consistent error messages and statuses.

A wealth of validation frameworks is available within the Java and Spring space. javax.validation is probably the most obvious; it uses annotations on Java model classes to apply patterns and rules to data at the point of serialisation. Spring MVC builds on both javax.validation and Hibernate Validator to validate request objects.

Request entity data is not the only place we should apply validation. Query parameters, path variables and headers are also important, and validation can be applied to them using similar pattern-matching techniques from JSR 380 implementations. Validating these requires code at different places in the architecture: we need annotations and error handling in the controller and also on the model. We may also want to validate parts of the request which aren't modelled. For example, if a request contains a query parameter not defined in the API spec, should the request be rejected?

A simpler and more consistent approach is to validate requests, before they hit our functional controllers, against a comprehensive schema covering all aspects of the request. The OpenAPI standard is used to document and define API specifications. Once we have a spec for our API we can use it to validate the entire request at one common point, prior to our controllers or object serialisation, using a nifty framework from Atlassian.

In this example I will create a simple REST API which exposes read and write endpoints for a resource, automatically generate an OpenAPI spec document for the service and use it to apply comprehensive validation across all endpoints exposed by the service.

Build the API

This example uses Spring Boot web to quickly set up a controller which exposes a GET and a POST API that consumes and produces a simple Java model object serialised to and from JSON. Here's the Maven build file containing all the Spring Boot starters and dependencies we need to create the API.
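
A minimal sketch of the build, assuming the standard starters (versions are illustrative):

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.7</version>
    </parent>

    <dependencies>
        <!-- Spring MVC, embedded Tomcat and Jackson for JSON serialisation -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
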
Here's the model
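
(a minimal stand-in; the class and field names are illustrative)

    public class Item {

        private Long id;
        private String name;

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
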
and the Controller
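
(again a sketch; the paths and names are illustrative)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.web.bind.annotation.*;

    @RestController
    @RequestMapping("/items")
    public class ItemController {

        private final Map<Long, Item> store = new ConcurrentHashMap<>();

        // read endpoint
        @GetMapping("/{id}")
        public Item get(@PathVariable Long id) {
            return store.get(id);
        }

        // write endpoint
        @PostMapping
        public Item create(@RequestBody Item item) {
            store.put(item.getId(), item);
            return item;
        }
    }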

Create the OpenAPI Document

Now we have defined our system we need an OpenAPI JSON document against which requests to the API can be validated. Creating this is a simple case of including the OpenAPI dependency in the build, running the application and then visiting the path to the resource where the document is automatically generated.
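
Assuming the springdoc-openapi library is the generator (version illustrative), the build addition and the property that moves the document to /api-docs look like this:

    <dependency>
        <groupId>org.springdoc</groupId>
        <artifactId>springdoc-openapi-ui</artifactId>
        <version>1.6.8</version>
    </dependency>

and in application.properties (the default path is /v3/api-docs):

    springdoc.api-docs.path=/api-docs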

Visiting localhost:8080/api-docs returns the following API document, which can be copied into the resources directory of the application.

Aside from using the document for validation, we can also drop it into the online Swagger Editor, which provides a more appealing UI for browsing and testing the service. Copy the generated OpenAPI document and paste it into the editor at https://editor.swagger.io/



Configure the Validator

Once the OpenAPI document has been added to the classpath we need to include the Atlassian dependency and configure it to use the spec. You can take a look at the open source in the Atlassian Bitbucket repository - https://bitbucket.org/atlassian/swagger-request-validator/src/master/
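
The Maven coordinates for the Spring MVC module are below (version illustrative; check the repository for the latest):

    <dependency>
        <groupId>com.atlassian.oai</groupId>
        <artifactId>swagger-request-validator-springmvc</artifactId>
        <version>2.27.4</version>
    </dependency>
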
Firstly, we need our config to register the OpenApiValidationFilter in Spring Boot's servlet filter chain. Here we can tell the system to validate the request, the response or both against the specification. We then register an interceptor which holds a reference to the OpenAPI document.
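
A sketch of that configuration, based on the library's documentation (the spec file name is an assumption):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import javax.servlet.Filter;
    import com.atlassian.oai.validator.springmvc.OpenApiValidationFilter;
    import com.atlassian.oai.validator.springmvc.OpenApiValidationInterceptor;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.io.Resource;
    import org.springframework.core.io.support.EncodedResource;
    import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

    @Configuration
    public class OpenApiValidationConfig implements WebMvcConfigurer {

        private final OpenApiValidationInterceptor validationInterceptor;

        public OpenApiValidationConfig(@Value("classpath:api-docs.json") Resource apiSpec) throws IOException {
            // the interceptor holds the reference to the generated OpenAPI document
            this.validationInterceptor = new OpenApiValidationInterceptor(
                    new EncodedResource(apiSpec, StandardCharsets.UTF_8));
        }

        @Bean
        public Filter validationFilter() {
            // first flag enables request validation, second enables response validation
            return new OpenApiValidationFilter(true, true);
        }

        @Override
        public void addInterceptors(InterceptorRegistry registry) {
            registry.addInterceptor(validationInterceptor);
        }
    }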

Handling Errors and Customising Error Responses

The standard way of handling exceptions in Spring and transforming them into an HTTP response is a Controller Advice annotated exception handler class. The validator throws an InvalidRequestException when validation of any request component fails; we then implement code in the ExceptionHandler to translate that into a response status code and body.
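
A sketch of the handler (the response shape is a choice; the exception message carries the validation detail):

    import com.atlassian.oai.validator.springmvc.InvalidRequestException;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;

    @ControllerAdvice
    public class ValidationExceptionHandler {

        @ExceptionHandler(InvalidRequestException.class)
        public ResponseEntity<String> handle(InvalidRequestException e) {
            // translate any validation failure into a consistent 400 response
            return ResponseEntity.badRequest().body(e.getMessage());
        }
    }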

Making requests

Run the application using the Spring Boot Maven plugin run goal and hit the API endpoints with invalid requests. The validator checks each request against the full OpenAPI specification document we generated. This includes JSON schemas for the entities, patterns for values in the entity, queries and paths, and also the query and header names. The validation is comprehensive and covers the entire API, not just the entity, and the error messages and response statuses are consistent across the service.
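
For example, posting an entity which violates the schema (paths and fields follow the illustrative controller above) is rejected with a 400 before the controller is ever reached:

    curl -i -X POST http://localhost:8080/items \
         -H "Content-Type: application/json" \
         -d '{"id": "not-a-number"}'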

All the code for this example is available here - https://github.com/johnhunsley/OpenAPI_Request_Validator_Example

Tuesday 10 December 2019

Use SpEL to Conditionally Suppress JSON Fields During Serialisation

If you use the Jackson framework to serialise objects to JSON, exposed through a Spring REST API, then you might often come across a requirement to suppress a field dynamically during serialisation when a condition is met. This example shows how to extend the framework by implementing the PropertyFilter stereotype to use Spring Expression Language (SpEL) expressions to conditionally write out the field and value at the point of serialisation.



The Jackson framework provides customisation to conditionally filter out Java object values via the PropertyFilter interface. This is implemented within the framework by BeanPropertyFilter and SimpleBeanPropertyFilter, which provide some static functions to filter out different properties on the target bean being serialised. This is accomplished by adding the filter to the ObjectMapper within the context and then telling Jackson to apply that filter to certain beans by annotating the bean class with the JsonFilter stereotype.

These basic functions are useful but don't allow the developer to express more complex conditions and logic. For example, you might wish to include an integer JSON field and value in the output only if the value in the object is greater than a predefined threshold.

Spring Expression Language is perfect for this job; it provides a framework to inject complex conditional logic through an expression represented as a String. Using this in a new annotation which can be added to serialisable beans allows us to easily define expressions which conditionally suppress JSON properties.

I first created a new stereotype to mark a Java field and define an expression used during serialisation.
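
Something along these lines (the annotation name is illustrative; the real source is linked at the end of the post):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface SpELFilter {

        // the SpEL expression evaluated against the bean at serialisation time;
        // the field is only written out when it evaluates to true
        String value();
    }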



I then created an implementation of the Jackson PropertyFilter interface which looks for fields marked for conditional suppression and applies the expression to that field value.
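
A sketch of the filter, extending SimpleBeanPropertyFilter (which implements PropertyFilter); the substitute-character handling is simplified here:

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.ser.PropertyWriter;
    import com.fasterxml.jackson.databind.ser.impl.SimpleBeanPropertyFilter;
    import org.springframework.expression.ExpressionParser;
    import org.springframework.expression.spel.standard.SpelExpressionParser;
    import org.springframework.expression.spel.support.StandardEvaluationContext;

    public class SpELPropertyFilter extends SimpleBeanPropertyFilter {

        private final ExpressionParser parser = new SpelExpressionParser();

        @Override
        public void serializeAsField(Object pojo, JsonGenerator gen,
                                     SerializerProvider provider, PropertyWriter writer) throws Exception {
            SpELFilter annotation = writer.getAnnotation(SpELFilter.class);
            if (annotation == null) {
                // field not marked for conditional suppression, write as normal
                writer.serializeAsField(pojo, gen, provider);
                return;
            }
            // '@' stands in for SpEL's verbose root-object syntax (an illustrative choice, see below)
            String expression = annotation.value().replace("@", "#this.");
            StandardEvaluationContext context = new StandardEvaluationContext(pojo);
            Boolean include = parser.parseExpression(expression).getValue(context, Boolean.class);
            if (Boolean.TRUE.equals(include)) {
                writer.serializeAsField(pojo, gen, provider);
            }
        }
    }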



The function injects the bean instance into the SpEL evaluation context, ensures that the expression evaluates to a boolean, and throws a SpEL exception with an appropriate error code if not.

SpEL provides convenient syntax to refer to values within the context at runtime. However, typing this out in an expression for every field seems overly clunky, so I abbreviated the syntax with a substitute character which is replaced during evaluation.



I added some further configuration to ensure the SpEL Filter is injected into the Object Mapper in the Spring Context, whether it's already there or not, and an annotation to enable the functionality.
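
A sketch of that configuration (the filter id is an assumption and must match the JsonFilter annotation on the beans):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class SpELFilterConfig {

        @Bean
        public ObjectMapper objectMapper() {
            ObjectMapper mapper = new ObjectMapper();
            // register the SpEL filter under an id the beans can reference
            mapper.setFilterProvider(new SimpleFilterProvider()
                    .addFilter("spelFilter", new SpELPropertyFilter()));
            return mapper;
        }
    }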



The filter works as intended. When coupled with the existing JsonFilter annotation it is very easy to define complex logic which conditionally suppresses JSON elements.
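
Usage then looks something like this (expression syntax per the illustrative '@' abbreviation above):

    import com.fasterxml.jackson.annotation.JsonFilter;

    @JsonFilter("spelFilter")
    public class Account {

        private String name;

        // written to JSON only when the balance is greater than zero
        @SpELFilter("@balance > 0")
        private int balance;

        // getters and setters omitted
    }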



The full source can be found on my public GitHub or can be integrated into your project directly from JitPack.

Thursday 1 August 2019

Spring Boot Caching

In a microservices environment caching is an important non-functional requirement which can help reduce load, cut latency and improve the user experience. Wherever data is loaded, particularly from an HTTP endpoint on another service, caching should be considered. In Spring Boot it is easy to apply caching to any function, and the cache itself can be implemented independently of the service in which it is used.


When to Cache?

In a microservice environment multiple calls are made to and from different services within the context of a single request to a gateway. In a 'database per service' architecture the same data is retrieved from the same endpoint multiple times throughout the course of the initial request. Where this happens in the same thread it might be tempting to wrap the data and pass it down the chain. However, this approach breaks the single responsibility rule and restricts the extensibility of our independent services. Caching is a convenient answer, but it introduces potential problems.

The amount of data cached, the length of time it is cached for and the number of times it is used are all variables which must be traded off against the price of improved performance. If we cache data for too long, or don't refresh it often enough, then we run the risk of errors or even perceived corruption within the system. Conversely, if we don't cache for long enough, or refresh too often, we get no benefit from the cache.

Let's first consider how to enable caching in the Spring Boot application and introduce a simple cache to improve the performance of the system. In this example an application uses a Feign client to retrieve the same data multiple times from an endpoint on another service via HTTP. These calls are expensive and caching will clearly help improve performance.

Enable Caching in the Application

To enable caching we need to bring in the relevant dependencies and annotate a class loaded into the Spring container. As usual Spring Boot provides a convenient starter.
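
The starter brings in the cache abstraction and its auto-configuration:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>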

Add the enable caching annotation on the initialising class with the main method.
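
For example (class name illustrative):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cache.annotation.EnableCaching;

    @SpringBootApplication
    @EnableCaching
    public class Application {

        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }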

Caching the Data

The @Cacheable annotation tells Spring to cache the data returned by the method against a generated key on the first request and then take that data from the cache whenever the same parameters are used. In this case a repository method uses a Feign client to call the endpoint. It is this method which we annotate, telling Spring to cache the data it returns against a key generated from the method parameters.
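
A sketch of such a repository method (the Feign client and types are hypothetical; 'mycache' is referenced again below):

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Repository;

    @Repository
    public class DataRepository {

        private final DataClient dataClient; // hypothetical Feign client

        public DataRepository(DataClient dataClient) {
            this.dataClient = dataClient;
        }

        @Cacheable("mycache")
        public Data findById(Long id) {
            // the HTTP call is only made on a cache miss for this id
            return dataClient.getData(id);
        }
    }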

So far we've introduced the concept of caching to the system and told Spring to cache beans returned by the annotated method in the cache called 'mycache'. Now we need to implement and configure that cache. By default Spring will implement a simple in-memory Map via the default CacheManager bean. We can override that by defining our own CacheManager implementation in some custom configuration. Spring provides many different manager implementations as hooks for implementing different well-known cache providers, such as Ehcache.

Implementing a Cache

In the above example we have told the application to cache data returned from a method the first time it is called with a given parameter and then retrieve it from the cache each time that parameter is used thereafter. So far this doesn't address any of the trade-offs mentioned earlier. We only want data to be kept in the cache for as long as it will be used. If the data changes after a period of time we want the call to the method to refresh what's in the cache with data from the service that provides it. To solve this we must apply some constraints to the cache to ensure we refresh the data when required and prevent the cache growing so large that it causes resource issues.

Configuration for Ehcache requires us to introduce a config XML file to the classpath. If you're not a fan of this and prefer code configuration, as is the Spring Boot standard, then a handy SourceForge library provides some alchemy for wrapping the Ehcache config into a Spring CacheManager.
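
A sketch of that code configuration, assuming Ehcache 2.x (the net.sf.ehcache SourceForge artifact) and illustrative property names:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.config.CacheConfiguration;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.cache.CacheManager;
    import org.springframework.cache.ehcache.EhCacheCacheManager;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class CacheConfig {

        @Value("${cache.max-entries}")
        private long maxEntries;

        @Value("${cache.ttl-seconds}")
        private long timeToLiveSeconds;

        @Value("${cache.tti-seconds}")
        private long timeToIdleSeconds;

        @Bean
        public CacheManager cacheManager() {
            CacheConfiguration config = new CacheConfiguration()
                    .name("mycache")
                    .maxEntriesLocalHeap(maxEntries)       // LRU eviction once full
                    .timeToLiveSeconds(timeToLiveSeconds)  // TTL
                    .timeToIdleSeconds(timeToIdleSeconds); // TTI

            net.sf.ehcache.CacheManager ehCacheManager = net.sf.ehcache.CacheManager.create();
            ehCacheManager.addCache(new Cache(config));
            return new EhCacheCacheManager(ehCacheManager);
        }
    }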



The max entries, TTL and TTI values are loaded from our application properties/yml file, or from a config server if we're using one. Max entries limits the number of objects in the cache; a 'Least Recently Used' policy manages eviction behind the scenes. Time To Live limits the maximum amount of time objects can reside in the cache. Time To Idle limits the time an object can reside in the cache without being used and should be set to a lower value than TTL. Both these values are critical to the success of the cache. Set them too low and we reduce the effectiveness of caching. Set them too high and we run the risk of using stale data and causing unintentional errors downstream. This trade-off can only be optimised through performance testing.

Cache Keys

Along with other configuration we can create a customised KeyGenerator bean. By default Spring uses the SimpleKeyGenerator class, which either uses the hashCode of a single parameter or combines multiple parameters into a key against which objects are stored in the cache. If you're using some kind of correlation id to uniquely identify each user request to the gateway then it might be useful to pass this into the method as a parameter, or create a custom KeyGenerator bean, so that different user requests and threads don't share the same cached data. However, it really does depend on your use case.
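
A sketch of such a bean, assuming the correlation id is passed as the first method parameter:

    import org.springframework.cache.interceptor.KeyGenerator;
    import org.springframework.context.annotation.Bean;

    // inside a @Configuration class
    @Bean
    public KeyGenerator correlationKeyGenerator() {
        // keys cached data by method name plus the caller-supplied correlation id
        return (target, method, params) -> method.getName() + "-" + params[0];
    }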

Testing 

It might be tempting to test that caching is working by using a mock framework to mock the Feign client. I've found this doesn't work because the mock proxy intercepts the method call before the Cacheable interceptor. I used WireMock to stub the client call instead and verified that the number of calls made doesn't increase after the first method request. We can also autowire the CacheManager into the test to access the cache and check its contents.
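
A sketch of that style of test with WireMock (stub path and repository names are illustrative; assumes a running WireMock server):

    import org.junit.jupiter.api.Test;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    @Test
    public void dataIsOnlyFetchedOnce() {
        stubFor(get(urlEqualTo("/data/1")).willReturn(okJson("{\"id\":1}")));

        repository.findById(1L);
        repository.findById(1L); // second call is served from 'mycache'

        // the stub was only hit once despite two repository calls
        verify(1, getRequestedFor(urlEqualTo("/data/1")));
    }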

Tuesday 1 January 2019

Spring Cloud AWS - Parameter Store

Externalising configuration is key to managing and automating the deployment of an application into different environments. Since Spring Boot, all application configuration can be managed in a single properties or YAML file which can be baked into an image, added to a container or pulled from a source at runtime. Spring Cloud AWS now integrates with AWS Systems Manager, which allows application configuration to be externalised into the Parameter Store: a simple name-value store in which access to parameter values can be controlled with IAM roles. Parameter values are read from the store and automatically injected into the application by the property placeholder configurer, negating the need for application properties to be stored in a file.

As with all cloud services, the key to its effectiveness is governance. We can restrict access to different parameters, or different named parameter groups, using IAM policies attached to the resource running our application. Coupled with Spring Profiles, this allows us to create a powerful process for the management of environments within the same account and VPC.

In this simple example I'll create a Spring Boot application which will run on an EC2 instance and read its configuration from the Systems Manager Parameter Store. I will show how to ensure that the application accesses only those parameters specific to the environment in which it's deployed, using Spring Profiles and IAM policies.


First we'll create the simple Spring Boot application and import the Spring Cloud AWS SSM dependency. It's worth ensuring that we only import the Spring Cloud dependencies we need, otherwise convention over configuration will attempt to access lots of other AWS resources we don't want to use.
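
The Parameter Store starter on its own looks like this (the version is managed by the Spring Cloud release train BOM):

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-aws-parameter-store-config</artifactId>
    </dependency>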

Now we'll create two parameters in the SSM store with the same property name but for different environments: development and production. Creating a new parameter is quite simple and requires a key, description and value. Spring Cloud AWS requires the key to be named in a specific format, which can be configured in a bootstrap.yml file but defaults to:

/config/application.name/

The key must start with a forward slash. The prefix can be configured in a bootstrap parameter named aws.paramstore.prefix and defaults to config. The second path element of the key defaults to the name of the application as defined in the parameter spring.application.name.

The rest of the key, after the application name element, must be the name of the parameter referenced in the application, e.g.

/config/application.name/some.parameter.name

In the application this could be referenced with the following annotation:
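
    @Value("${some.parameter.name}")
    private String someParameter;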


The application name element of the key can be suffixed with an environment label allowing us to specify different values for development and production, e.g.

/config/application.name-production/some.parameter.name

/config/application.name-development/some.parameter.name

Now that we have two different parameters for each environment, referenced by the same name, we can put some access control in to ensure that values can only be accessed by resources running in the appropriate environment. The following IAM policy will be attached to the production EC2 instances, on which the Spring Boot app will run, and will ensure that production parameter values can only be read by production resources. A similar policy can be created for development resources.
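
A sketch of such a policy (the account id and ARN are illustrative):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ssm:GetParameter",
            "ssm:GetParameters",
            "ssm:GetParametersByPath"
          ],
          "Resource": "arn:aws:ssm:*:111122223333:parameter/config/application.name-production*"
        }
      ]
    }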

When we run the application we can activate a specific Spring Profile, development or production, and Spring will automatically attempt to access the named parameter suffixed with that profile name. For example, to run the application in production:

java -jar -Dspring.profiles.active=production my-app.jar

So long as we ensure the EC2 instance, or container, running the application has the correct IAM role associated, we can be certain that only the parameter resources for that environment will be accessed.

Monday 10 December 2018

Understanding PKCE as a Solution for Interception Attack

The OAuth2 Code Grant flow allows a secure client to be granted access to a protected resource on behalf of the owner. The client, usually a server-side web application, must be trustworthy so as to mitigate man-in-the-middle attacks and interception of the client secret which the authorization server uses to verify the client's authenticity. For that reason the Code Grant flow, without proof of key, is not suitable for clients which are publicly accessible, such as a browser-based JavaScript web app or a native mobile app.

Browser-based apps are driven by an API, the resource server in the OAuth flow, and should carry no state. For this scenario the usual alternative to the Code Grant is the Implicit Grant. The client is loaded into the browser and is visible to the user, or anyone with access to the static content, so a client secret would be redundant. The user is granted a token immediately in exchange for valid credentials, as there is no benefit in a code exchange.



The Implicit Grant is inherently insecure because the authentication response, containing the token, is open to interception once it is received by the browser. It can then be replayed to the resource server and accepted as a valid request for the protected resource.



The solution to these problems is not to use the Implicit Grant on any public client, and instead to use the Code Grant with an additional feature called 'Proof Key for Code Exchange' to mitigate the interception problem of exposing the client publicly. This is similar to the use of a cryptographic nonce in the OpenID implicit authentication request.

PKCE introduces an additional verifier into the process, present on the initial code request and on the code exchange. The verifier is a randomly generated string with high enough entropy that the probability of guessing it is negligible. The Code Grant flow now goes like this:


  1. The user opens their browser and navigates to the web page; the browser application redirects to the authorization server, probably the hosted login screen.
  2. A code verifier value is created and stored in the browser, and a code challenge derived from it (a hash) is sent on the authorization request. The user enters their credentials; the auth' server stores the challenge, validates the credentials and returns the code.
  3. Now the client makes the code exchange: it supplies the code and the verifier to the auth' server, which hashes the verifier and checks it against the challenge supplied in step 2. The code is exchanged for a token, which is returned to the client browser application.
  4. The client browser application may now use the token to request the resource. (A sketch of generating the verifier and challenge follows this list.)
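
A minimal Java sketch of creating the verifier and its S256 challenge, per RFC 7636:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;
    import java.util.Base64;

    public static String[] createPkcePair() throws NoSuchAlgorithmException {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        // the verifier stays with the client until the code exchange
        String verifier = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);

        // the challenge (a hash of the verifier) goes on the initial authorization request
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        String challenge = Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        return new String[] { verifier, challenge };
    }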



Any attacker intercepting the code is not in possession of the verifier and therefore cannot exchange the code for a token. The verifier can only be used once for each authentication request, so even if it were leaked it couldn't be replayed.

Monday 19 November 2018

Implementing OAuth2 & JWT in a Micro Services Architecture with Spring Boot 2

The theme of loosely coupled, independent software components underpins the rationale of the modern micro services architecture. Of course, this is nothing new: encapsulation is one of the four principles of Object Oriented programming. Micro services seek to extend this principle beyond the scope of code to the wider systems architecture. A network of small, independent units of granular functionality, which adhere to common communication standards, is the natural progression from classic n-tier software systems.

Key to this independence is statelessness: REST principles suggest that each and every request should be independent of the next. There is little point in implementing separated software components only to bind them together again to meet the constraints of other technologies which might require session state or a shared back-end resource. An example of this is authentication and authorization. Too often we go to great lengths to accomplish a clean and simple system only to shoe-horn in a legacy authentication mechanism which introduces tighter coupling between the network of independent components.

JSON Web Token (JWT), used with an OAuth 2 flow, is a solution to this. JWT, signed with an asymmetric key pair, allows us to guarantee the authenticity of requests across the micro service network without having to create tight coupling between these services through session state or a central token store. JWT can be configured to carry custom state within the access token, removing the need for any user information to be stored within each independent application. This has clear benefits for security compliance, testing and redundancy. It ensures that we only need to on-board and manage users in one place. It also provides the option of completely outsourcing identity management to a third party such as Okta, Auth0 or Ping Identity.

In this example I will create my own OAuth 2 authorization server which can easily be enhanced into an enterprise-scale identity management service using the rich features of Spring Security. A client application will provide the front-end functionality, supported by a separate resource exposed as a REST API. Other than the token signing key, the resource server will be completely independent of the authorization server but will be able to make secure access control decisions.

All of these components will be implemented with Spring Boot 2. This is ideally suited for a container based delivery into a micro service environment. Any number of resource services can be added to the network without the overhead of shared state or resource to manage authentication or authorization.

This exemplifies the principle of simple, clean, granular services which are easy to maintain and enhance to meet the rapidly changing demands from the business.

OAuth 2 Code Grant Flow

The goal of the OAuth Code Grant is to authorize a client application to access a resource on behalf of a resource owner without having to know the resource owner's security credentials. Although the process is not bound to any specific token implementation, JWT complements the Code Grant process to achieve a clean separation between the back end resources.

Before looking at the flow it's important to understand the four different actors in the authorization user-case.
  • Resource Owner - the user who makes requests to the client via HTTP, usually with a browser.
  • Authorization Server - The Identity Management service which hosts user identities and grants tokens to authenticated users to access other resources via a client
  • Client Application - In the Code Grant flow the client is a web application which consumes resources from the resource server. It is important to understand that this is a server side web app and NOT a Javascript app executing in the browser. The Code Grant flow requires that the Client can be trusted, as it retains a Client Secret value which it uses to authenticate its requests with the Auth' Server
  • Resource Server - A REST API which produces and consumes representations of state from and to the client application. Multiple APIs may be implemented in a network of micro services.


  1. Resource Owner, (User), opens a browser and navigates to Client app
    • The client checks for a cookie
    • If none is present it redirects to the hosted login page on the Authorization Server
  2. User provides authentication credentials, username and password
    • Upon successful authentication the Authorization Server returns a code, along with a state value, and redirects the browser back to the client app
  3. The Client app receives the code
    • Verifies the state to check for CSRF
    • Provides the code and secret to the Authorization server
    • The Authorization server receives the code, authenticates the client's request using the secret, creates a token using the private key and returns it to the Client
  4. The Client receives the token and uses it to make subsequent requests to the resource server
    • The Resource server receives requests and validates the token using the public key

Our Client application is assigned a scope; only those resources in scope are accessible to the client. Individual users are granted authorities, most commonly implemented as roles. In Spring Security the role-based access decision manager interrogates the authenticated user's granted authorities. With JWT we can pass these through to the resource service in the claims and filter them as roles. We can also add anything from the domain model, such as organisation association, and maintain that information in the resource service by passing it through as custom claims in the token.

Spring Boot 2 Implementation


Now that we understand the OAuth process and the use-case we intend to solve, let's walk through a real example implemented with Spring Boot 2. All three independent services can be found on GitHub -
Let's start by creating the Authorization Server. As always, we start with the build and bring in the Spring Boot dependencies into the pom.xml.
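
A sketch of the key dependencies, assuming the spring-security-oauth2-autoconfigure route for Boot 2 (versions illustrative):

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security.oauth.boot</groupId>
        <artifactId>spring-security-oauth2-autoconfigure</artifactId>
        <version>2.1.0.RELEASE</version>
    </dependency>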

The main configuration annotation sets up everything we need for the Authorization server, the hosted login page, web service and all the request and response logic of the OAuth flow. 
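
That annotation is @EnableAuthorizationServer (class name illustrative):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
    import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;

    @Configuration
    @EnableAuthorizationServer
    public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {
        // client, token and endpoint configuration follows below
    }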

For the purposes of this example the UserDetails are stored as in-memory attributes. This could easily be extended with a custom UserDetailsService backed by a store.
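
A sketch of the in-memory setup (credentials are illustrative; withDefaultPasswordEncoder is fine for a demo only):

    import org.springframework.context.annotation.Bean;
    import org.springframework.security.core.userdetails.User;
    import org.springframework.security.core.userdetails.UserDetailsService;
    import org.springframework.security.provisioning.InMemoryUserDetailsManager;

    @Bean
    public UserDetailsService userDetailsService() {
        return new InMemoryUserDetailsManager(
                User.withDefaultPasswordEncoder()
                    .username("john")
                    .password("password")
                    .roles("USER")
                    .build());
    }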

The same goes for the ClientDetails. For a real world implementation we would want to be able to manage the Clients via UI and as with the UserDetailsService Spring Security allows us to implement a custom ClientDetailsService and store details however we choose.
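
In the authorization server config that looks something like this (client id, secret and redirect are illustrative):

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
               .withClient("client-app")
               .secret("{noop}secret") // no encoding, demo only
               .authorizedGrantTypes("authorization_code", "refresh_token")
               .scopes("read")
               .redirectUris("http://localhost:8082/login");
    }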

OAuth does not dictate any specific type, or management of, the token. Spring Security allows us to implement the token how we wish but provides extensions for JWT.

For simplicity we'll just set a static signing key, which will be used here and in the Resource Server to decode the token. In a real world implementation this would be an asymmetric key pair: we'd sign tokens with the private part and export the public part to the resource service via a robust key management tool.
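
Something like this (the key value is illustrative):

    import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter;

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        // symmetric signing key for the example only; prefer an asymmetric key pair
        converter.setSigningKey("static-signing-key");
        return converter;
    }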

As well as the converter, we need to create a token enhancer to add the custom claims to the access token. Here I'm just setting a static String value against the 'organization' claim. In reality our UserDetails are likely to be part of a relational schema which could also describe how the user relates to a wider organization account. This would be very important for access control of resources, and the value could again come from the UserDetailsService.
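
A sketch of the enhancer (the claim value is hard-coded here, as described above):

    import java.util.HashMap;
    import java.util.Map;
    import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken;
    import org.springframework.security.oauth2.common.OAuth2AccessToken;
    import org.springframework.security.oauth2.provider.OAuth2Authentication;
    import org.springframework.security.oauth2.provider.token.TokenEnhancer;

    public class CustomTokenEnhancer implements TokenEnhancer {

        @Override
        public OAuth2AccessToken enhance(OAuth2AccessToken accessToken, OAuth2Authentication authentication) {
            // in reality this value would come from the UserDetails / domain model
            Map<String, Object> claims = new HashMap<>();
            claims.put("organization", "my-organization");
            ((DefaultOAuth2AccessToken) accessToken).setAdditionalInformation(claims);
            return accessToken;
        }
    }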

Then add the converter and enhancer to the token store.
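
For example, inside the AuthServerConfig class, building on the converter bean above:

    import java.util.Arrays;
    import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;
    import org.springframework.security.oauth2.provider.token.TokenEnhancerChain;
    import org.springframework.security.oauth2.provider.token.store.JwtTokenStore;

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
        TokenEnhancerChain chain = new TokenEnhancerChain();
        // custom claims are added first, then the JWT converter encodes the token
        chain.setTokenEnhancers(Arrays.asList(new CustomTokenEnhancer(), accessTokenConverter()));
        endpoints.tokenStore(new JwtTokenStore(accessTokenConverter()))
                 .tokenEnhancer(chain);
    }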


Client App

As with the authorization service, the Client application is configured purely from a single annotation
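
With the autoconfigure module that annotation is @EnableOAuth2Sso:

    import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableOAuth2Sso
    public class ClientSecurityConfig {
        // view controllers and resource handlers are configured separately
    }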

and the client settings are provided in the application properties
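
A sketch of those properties (the id, secret and key must match the authorization server; values illustrative):

    security.oauth2.client.client-id=client-app
    security.oauth2.client.client-secret=secret
    security.oauth2.client.access-token-uri=http://localhost:8080/auth/oauth/token
    security.oauth2.client.user-authorization-uri=http://localhost:8080/auth/oauth/authorize
    security.oauth2.resource.jwt.key-value=static-signing-key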

The Code Grant flow is best suited to server side web apps so this application uses Thymeleaf to render the values returned from the resource service on a secured page which is only accessible to an authenticated user. The rest of the configuration sets up the view controllers and resource handlers for the template html pages.

Resource Server

The resource server security is configured for the OAuth flow with another annotation.  

We add the JWT token store and a custom converter to decode the token with the key and access the custom claims
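
A sketch of the resource server configuration (the signing key must match the one used by the authorization server):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
    import org.springframework.security.oauth2.provider.token.TokenStore;
    import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter;
    import org.springframework.security.oauth2.provider.token.store.JwtTokenStore;

    @Configuration
    @EnableResourceServer
    public class ResourceServerConfig {

        @Bean
        public JwtAccessTokenConverter accessTokenConverter() {
            JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
            converter.setSigningKey("static-signing-key");
            // a custom AccessTokenConverter could be set here to surface the custom claims
            return converter;
        }

        @Bean
        public TokenStore tokenStore() {
            return new JwtTokenStore(accessTokenConverter());
        }
    }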

The resource service is a simple REST service created with the Spring Web framework. In order to access the authentication details and authenticated principal in the controller we can simply include them as arguments in the method. The converter we configured earlier then adds the custom claims from the token to the object and we can pull back the details in the logic. I've also set up method security annotations so we can annotate the controllers with the Spring Security annotations and restrict access based on SpEL expressions. In this case I'm securing the method both by restricting the scope and the authenticated user's granted authorities.
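
A sketch of such a controller method, assuming method security is enabled with the OAuth2 expression handler:

    import org.springframework.security.access.prepost.PreAuthorize;
    import org.springframework.security.oauth2.provider.OAuth2Authentication;
    import org.springframework.web.bind.annotation.GetMapping;

    @PreAuthorize("#oauth2.hasScope('read') and hasAuthority('ROLE_USER')")
    @GetMapping("/resource")
    public String resource(OAuth2Authentication authentication) {
        // custom claims travel with the decoded token; the principal is available directly
        return "hello " + authentication.getName();
    }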

Running the System


All three services are set to run on different ports. Using the Maven Spring-Boot plugin run each one on localhost and navigate to the Client on http://localhost:8082

The browser doesn't yet have a cookie and so presents a page inviting the user to log in. Clicking the login link redirects the browser to the Authorization Server's hosted login page on http://localhost:8080/auth/login. Enter the username and password defined in the example in-memory configuration and sign in.




The browser is now redirected back to the client app, which renders the Secured Page. The resources for this page are fetched from the resource server, running on http://localhost:8081, and substituted into the template.

Dangers of Stateless Authorization

A JWT's validity can only be ascertained by verifying its signature with the key. There is no way for the Authorization Server to revoke a token once issued, which is why tokens should only be valid for a short period of time. Also, the attack surface of a system using JWT is large: if the private key is compromised, all identities and resource servers are compromised.

Wednesday 24 October 2018

White Listing S3 Bucket Access for AWS Resources


Limiting access to data with a white list is a security requirement of any serious data governance policy. In the cloud, the obvious storage choices, such as S3, might not seem like suitable solutions for hosting high-risk data. However, the options available for securing that data are very powerful. In this post I will show how to implement a robust white listing policy for an S3 bucket which limits access to resources with a given or assumed IAM role.

A common policy with high-trust data, such as Personally Identifiable Information, is to only allow access via an application. No direct access to the file store hosting the secure data should be permitted. Of course, we want to avoid storing access credentials within the application itself, the container or the machine image. The most risk-averse option is to grant our resources (EC2, ECS, Lambda) an IAM role. In an EC2 environment we can access those credentials from the instance profile; with Java we use the InstanceProfileCredentialsProvider to enable the application to access the S3 resource.
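
A sketch with the AWS SDK for Java v1 (region, bucket and key are illustrative):

    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.S3Object;

    // credentials come from the instance profile; nothing is stored in the app or image
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withCredentials(InstanceProfileCredentialsProvider.getInstance())
            .withRegion(Regions.EU_WEST_1)
            .build();
    S3Object object = s3.getObject("my-secure-bucket", "data/file.csv");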

The role is associated with the EC2 instance or, if we're autoscaling a cluster, specified in the Launch Configuration and associated with an instance on launch.

The role associated with the instance grants the resource access to all operations in the S3 service. This does not limit access to the specific bucket or protect the resources within it in any way. Now we need to create a Bucket Policy which limits access to authenticated principals with that role, or resources with the assumed role.

The policy is a 'Deny' with a white list of roles which are not subject to that effect. In many examples the 'NotPrincipal' element is used to define the white list. This will work but requires us to name the instance id, as well as the assumed role, as principals not subject to the 'Deny' effect. This causes us problems in an autoscaled EC2 group, as we aren't able to add and remove specific instance ids in the policy as and when they launch. We could implement some kind of elaborate callback as UserData and amend the policy, but that would require us to grant access to manage IAM policies from the EC2 instance, which would violate the Least Privilege principle.

A more elegant solution is to use a 'Condition' clause, instead of the 'NotPrincipal' element, with a 'StringNotLike' condition on the unique role id. This means we don't need to explicitly define instance ids in the white list.

Here's the Bucket Policy which will limit access to only those resources which are granted the role we created earlier.
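
A sketch of that policy (bucket name, role id and account id are illustrative; the role id is the unique 'AROA...' identifier of the IAM role, and the bare account id keeps account-level access working):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::my-secure-bucket",
            "arn:aws:s3:::my-secure-bucket/*"
          ],
          "Condition": {
            "StringNotLike": {
              "aws:userId": [
                "AROAEXAMPLEROLEID:*",
                "111122223333"
              ]
            }
          }
        }
      ]
    }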