Intro
As developers we should focus on the implementation and delivery of the business logic that solves the customer's requirements. Everything else is ancillary. The customer doesn't care about our processes, so they should be slick and easy to manage, so as not to consume the time and money we have to spend on providing solutions. This isn't to say that the build and delivery processes aren't important. The continuous build process should ensure the integrity of the artifacts, and the continuous delivery of those artifacts should be repeatable and error free. The entire process should be flexible, allowing us to deliver builds into different environments, with zero overhead, at any moment in time, managed directly from version control.
In this example I will show how to achieve this using tools provided as services (or serverless, as far as we are concerned), for free, to create a complete automated lifecycle process resulting in production deployment managed through the correct governance of version control.
Responsibilities and Governance
The governance of version control is paramount, and each team member must understand their role and responsibility in its management. In an automated continuous integration environment the master branch, or trunk, represents the source code delivered to production and is owned by the Delivery Manager. Only the Delivery Manager role can authorize Pull Requests into master, merge in branches and tag releases. It is these actions that trigger a production build delivered to the production environment with zero downtime.
Developers branch or fork from master. They do this for each piece of work: a bug fix or a new feature requested by the business and raised as a documented issue. Many copies of master may be taken, and branches or forks may be merged by the team to collate work. Continuous integration of these development branches, and builds of their source, can happen at any time, with the resulting artifact automatically deployed to another environment: staging, dev, test, etc. The important point is to ensure that we map a forked repository or branch to an environment and that the master copy remains secured by the Delivery Manager. A sketch of this workflow in git terms follows the diagram below.
*Example of a version control timeline*
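As a rough sketch of that timeline in git terms (the issue, branch and tag names are illustrative, not taken from the project):

```bash
# Developer: branch from master for a documented issue and raise a Pull Request
git checkout -b feature/issue-42 master
git push -u origin feature/issue-42

# Delivery Manager: merge the reviewed Pull Request into master and tag the release;
# as described later, the tag is what triggers the production build and deployment
git checkout master
git merge --no-ff feature/issue-42
git tag v1.1
git push origin master v1.1
```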
Application Architecture
The simplicity of the process, and the development team's success in using it, is underpinned by the gold standard of application architecture. To me this means statelessness and a clean separation of concerns from top to bottom. A messy architecture will result in spaghetti code. Spaghetti code will result in build complexity and difficulty in testing. These problems will hinder the process, causing the team to spend time shoehorning it to fit the configuration. The process will become bespoke and unrepeatable, which ultimately results in mistakes and errors in production deployment, wasting yet more time.
In this example I will use a Single Page App (SPA), written in Vue.js, fed by a Spring Boot REST API which provides a gateway to read/write functionality over data in a relational database. This gives clean separation of concerns and the statelessness needed to ensure scalability.
*Token based authentication in an SPA architecture*
Storage and authentication of user identity is often the most difficult piece of state to remove from a system. Requests to a REST API should be completely independent of each other; there should be no user identity shared across server side machine instances. This is achieved through token based authentication and authorization, in this case OAuth using JWT. The token provider in this example is Auth0. The token carries the identity and its granted authorities into the resources exposed through the API. The identity store and token provider are, in this case, completely separated from the resource server under development. Again, this is key to good architecture as it separates concerns and keeps the state away from the functionality. It also means we don't need to customize the build for different environments by moving data around, obfuscating identities and fudging security credentials.
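As a minimal sketch of what this means on the resource server side (the class name is illustrative and not from the project; it assumes the spring-security-oauth2 dependency, with the Auth0 issuer and signing key configured through the usual Spring Boot resource server properties), every request is authenticated from the bearer token alone and no HTTP session is ever created:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;

// Illustrative configuration: the API holds no session state and every request
// must present a valid JWT issued by the external token provider (Auth0).
@Configuration
@EnableResourceServer
public class ResourceServerConfig extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.sessionManagement()
                .sessionCreationPolicy(SessionCreationPolicy.STATELESS) // no server-side session
            .and()
            .authorizeRequests()
                .antMatchers("/api/**").authenticated(); // identity comes from the bearer token
    }
}
```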
Dependency Management
I will focus on the build process of the REST API and its underlying service and repository components. In this simple example all three tiers fall under one build. We could separate them into different projects for added physical separation, but as long as the architecture is stateless the cleanliness is maintained. In the event of separate builds being required we must manage the build order and dependencies within each project. There are two options for this: we can trigger builds of the dependencies prior to the build of the deliverable artifact, or we can keep our dependent builds completely separate and store them in an artifact repository. In my opinion, the latter option is more desirable as it adds another level of separation to the build process and therefore greater flexibility and more options for the team. In this case dependency management as a service, such as JitPack.io, is a good choice as it integrates very easily with our other tool services, especially GitHub. I use Maven to build most of my backend Java projects, so it's a simple case of associating my GitHub account with JitPack and importing the projects I want to store builds for. Additionally, I need to add the JitPack repository to the build files and change their groupId.
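As a sketch, the consuming project's pom.xml gains fragments like the following (the artifact and version are hypothetical; JitPack serves builds under the com.github.&lt;user&gt; groupId):

```xml
<!-- pom.xml fragments: the JitPack repository plus a dependency on a library
     built and hosted by JitPack from its GitHub repository -->
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>com.github.your-github-user</groupId>
        <artifactId>shared-domain</artifactId>
        <version>v1.0</version>
    </dependency>
</dependencies>
```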
Setting up the Environment
My target environment is Heroku. I prefer it to AWS because it is free, up to a point, and its container based approach removes any requirement for managing infrastructure, even as code. As my user identities are stored with a third party I don't need to concern myself with network segregation and complex firewall settings to protect highly secure data. I can just throw the application onto a container on a VM and run it.
Before I can deploy my application to Heroku from my CI tool I need to set up the container and the apps I need. The backend relational database, the Spring Boot API and the front end files will all deploy to different Heroku apps. I won't go into the detail of setting up the MySQL backend; I use ClearDB and just go through the basic setup to get an endpoint from which I can create a database, then let Hibernate create the schema from the JPA mappings on the domain model in my Spring Boot API application with a single property.
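A minimal sketch of that property, assuming Hibernate's standard auto-DDL behaviour in Spring Boot:

```properties
# let Hibernate create/update the schema from the JPA entity mappings
spring.jpa.hibernate.ddl-auto=update
```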
The application is a web app and so I've chosen to run it on a web container. This prepares the app to receive web traffic, HTTP requests from the front end, and provides some default options for execution. I need to tell the container how to run the Spring Boot app and how to connect to the database. Heroku will automatically detect that my app is Java and built with Maven, and provides a default command to start the app when the container is fired up. In Spring Boot all the configuration, such as database connection URLs and credentials, is consolidated into an application.properties file. I can override that by creating all the properties I need for this environment as environment variables within the container, which is a straightforward task of setting the properties and values in the Config Vars settings. These can then be varied for production, test, pre-prod or whatever environment the branch we link the app to is meant to reflect.
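As a sketch, the start-up command Heroku detects for a Spring Boot/Maven build is essentially the line below, which could also be set explicitly in a Procfile:

```
# Procfile (optional - only needed to override Heroku's detected default)
web: java -Dserver.port=$PORT $JAVA_OPTS -jar target/*.jar
```

The Config Vars map straight onto the application.properties keys through Spring Boot's relaxed binding; the values here are placeholders:

```
SPRING_DATASOURCE_URL      = jdbc:mysql://<cleardb-host>/<database>?reconnect=true
SPRING_DATASOURCE_USERNAME = <user>
SPRING_DATASOURCE_PASSWORD = <password>
```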
Now that the container is configured I need to link it to the build as a deployment target. This is secured through an API key, which I must set in the project settings for the CI tool.
Build and Deployment
The build is triggered automatically by a push to a branch. This is achieved by associating the GitHub account with the build server, CircleCI, another free container based tool which integrates seamlessly with Git. Before importing the project we need to add a file to the source to tell CircleCI what to do beyond the default of just detecting and running the Maven build. It's a simple case of adding a circle.yml file to the project root. There are many options here for managing test reports, running custom scripts and deploying the build to an environment. I'm using CircleCI 1.0, which has built in Heroku support. This makes life much easier than using v2.0, which requires some customization to deploy to a Heroku Dyno. The deployment configuration section of the file tells CircleCI where to deploy the resulting artifact when triggered from specific locations in Git.
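A sketch of that deployment section, following CircleCI 1.0's circle.yml conventions (the app names match those described below; the tag regex and owner placeholder are illustrative):

```yaml
deployment:
  staging:
    branch: staging          # any commit to the staging branch deploys to staging
    heroku:
      appname: rest-api-stage
  release:
    tag: /v[0-9]+(\.[0-9]+)*/  # only tags such as v1.0, v1.1, v1.2.1 deploy to production
    owner: <repository-owner>  # restrict the trigger to the repository owner
    heroku:
      appname: rest-api
```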
In the above configuration I have two deployment scenarios. The first tells CircleCI to deploy to a staging environment, a Heroku app named rest-api-stage, whenever there's a commit to the staging branch. The release configuration is a little different. This time CircleCI will only deploy to the production Heroku app, named rest-api, when the repository is tagged with a label matching the regex, e.g. v1.0, v1.1, v1.2.1, etc. The important control here is that only the owner of the repository, that's me as the Delivery Manager, can trigger the build. We could add multiple usernames of people added to the project repo as collaborators.
Heroku uses git to deploy code onto the containers. Under the hood, the above configuration is adding the build artifacts to a local git repository on the build container, connecting to the remote repository on the Heroku container and pushing the files.
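The roughly equivalent manual steps, using the Heroku CLI and the staging app above, would be:

```bash
# register the Heroku app's git remote against the checked-out repository
heroku git:remote --app rest-api-stage

# push the code; Heroku builds it and releases it onto the dyno
git push heroku master
```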
Now we've connected all the dots and implemented a complete end-to-end process for managing the life cycle of the app. If I need a new environment I simply create the new Heroku app, branch or fork the repository, and edit the circle.yml to add the app name as the deployment target. The Delivery Manager just needs to ensure that s/he doesn't merge that file and sets the correct production target on the master branch. It's all part of their responsibility for managing the production delivery.
Unit and Integration Testing
Going slightly off track here, but it's worth noting the power of Spring Boot's testing capability. We have two types of tests: unit and integration. Both are written as unit tests and live in the test directory of the application. Unit tests isolate the individual classes we want to test using Mockito, or whichever flavour of mocking framework you usually use. I've configured my Maven build file to only execute anything named *IntegrationTest when the integration-test goal is executed. CircleCI will execute this goal as part of the default Maven build process. Integration tests usually rely on external resources, such as a real database, to achieve their goals. In this example I am testing the read/write functions of my repository types. However, I still don't want the build to depend on an external database, such as an instance of MySQL. Spring allows me to test that functionality against an in-memory HSQLDB instance which is initialized with the test and torn down at the end. It's a simple case of including the HSQLDB dependency in the test scope.
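A sketch of the relevant pom.xml fragments (the surefire/failsafe configuration and the naming pattern are assumptions based on the description above, not the project's exact build file):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <!-- keep integration tests out of the regular unit test phase -->
                <excludes>
                    <exclude>**/*IntegrationTest.java</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <configuration>
                <!-- run only classes named *IntegrationTest during integration-test -->
                <includes>
                    <include>**/*IntegrationTest.java</include>
                </includes>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

<dependencies>
    <!-- in-memory database used only by the integration tests -->
    <dependency>
        <groupId>org.hsqldb</groupId>
        <artifactId>hsqldb</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```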
Caveats
Of course, we all like things for free. In this example the processes implemented with the tools I have used cost nothing. But just like a free beer, the barman isn't going to give you more than one for nothing! If you want to scale up, increasing the number of users, the number of developers in the team, the number of build processes you want to run concurrently and, of course, the hits on your application, then you're going to have to expect to pay. However, I'd challenge anyone to find a lower cost solution than the tools outlined in this example, which give you absolute control of your end to end lifecycle process. From setting up a dev team right through to production delivery and support, it really can be very simple and cost effective to achieve.
**Update**
Just as I have published this post I've received an update from CircleCI saying that they're not going to support v1.0 builds any more after August 2018 and that everyone must migrate to v2.0. I don't mind that so much, as v2.0 gives us much more granular control of the build process, but it is lacking the simple Heroku support from v1.0. I'm currently looking into deploying to Heroku from a v2.0 build and will certainly post a blog about it when I figure it out!