In my journey of constantly learning, while going through week 7 of the AWS Build Accelerator 2023 on software engineering best practices, I encountered the concept of the Twelve-Factor App. Naturally, I was curious and read the document to learn about the best practices derived from the authors' experiences in building, deploying, and scaling production applications.
If you're a developer looking for a quick read on what the Twelve-Factor app is, then you've come to the right place.
In this article, I try to simplify the concepts that I've read from the Twelve-Factor App. I've also included high-level illustrations that use practical examples and AWS services.
First, let's define what a Twelve-Factor App is.
The Twelve-Factor App is a collection of best practices for building scalable applications. These were documented from the experiences and observations of contributors from Heroku.
According to them, Twelve-Factor Apps:
- automate setup processes to minimize friction when onboarding new developers
- are portable between different execution environments
- can be deployed on modern cloud providers such as AWS, GCP, and Azure.
- minimize differences between development and production environments and use CI/CD.
- can scale out without significant changes to the architecture.
To make our applications compliant with the definition of a Twelve-Factor App, there are twelve guidelines to consider.
For Codebase, a Twelve-Factor App is:
- tracked in a version control system (VCS).
- correlated to exactly one codebase.
- deployed to different environments from that same codebase, such as:
  - Production
  - Staging
  - Local development
A Twelve-Factor App doesn't:
- have multiple codebases.
  - Multiple codebases make a distributed system, not a single app.
  - Each component of the distributed system is its own app.
- share the same code with different apps.
  - Use shared libraries instead to share code across multiple apps.
For Dependencies, a Twelve-Factor App:
- explicitly declares its dependencies.
- uses package managers.
- doesn't rely on system-wide packages.
Not relying on system-wide packages means that you should declare all dependencies needed to run your application.
An alternative here is building your application with Docker, so the operating-system packages your app relies on are bundled into the image and you can be sure they are available.
By following the guidelines on app dependencies, developers can run the app locally with only the language runtime and dependency manager (JavaScript and npm, Python and pip).
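As a minimal sketch (the package names and versions below are only examples), a Python app would pin everything it needs in a requirements.txt so a fresh checkout needs nothing beyond the runtime and pip:

```
# requirements.txt - every dependency is declared explicitly and pinned
flask==3.0.0
psycopg2-binary==2.9.9
redis==5.0.1
```

Running `pip install -r requirements.txt` then gives any developer or build server the exact same dependency set, with nothing assumed from the operating system.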
For Configuration, a Twelve-Factor App:
- stores configs in the environment (for example, in a .env file).
- has different configs for each deployment.
- has no hard-coded configs in the codebase.
  - A codebase has no hard-coded configs if you could open-source it without leaking credentials.
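Here is a small sketch of reading config from the environment in Python; the variable names (DATABASE_URL, SMTP_HOST, DEBUG) are only examples, not anything prescribed by the Twelve-Factor App:

```python
import os

# Deploy-specific values come from the environment, not from the codebase.
DATABASE_URL = os.environ["DATABASE_URL"]                   # required, fails fast if missing
SMTP_HOST = os.environ.get("SMTP_HOST", "localhost")        # optional, with a default
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```

Locally these values could come from a .env file loaded by a tool like python-dotenv, while production deploys set real environment variables; the code stays the same either way.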
For Backing services, a Twelve-Factor App can:
- use local and third-party services interchangeably.
  - The app can use a local PostgreSQL and has no issues using a cloud service like Amazon RDS.
- add, remove, or replace backing services as needed.
  - If there is a database issue, you can spin up a new database restored from a recent backup.
  - There should be no code changes needed.
Backing services are any services the app consumes over the network, for example:
- Data Stores (Amazon RDS, MySQL, PostgreSQL)
- Messaging/Queueing Systems (Amazon SQS, RabbitMQ, Beanstalkd)
- SMTP (Amazon SES)
- Caching Systems (Amazon ElastiCache, Memcached, Redis)
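To illustrate the idea, here is a rough sketch that treats the database as an attached resource: the app only knows a connection URL from its config, so switching from a local PostgreSQL to Amazon RDS is a config change rather than a code change (DATABASE_URL is an assumed variable name):

```python
import os
import psycopg2  # assumes psycopg2-binary is declared as a dependency

# The same code talks to a local PostgreSQL or to Amazon RDS;
# only the DATABASE_URL value differs between deploys.
conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
```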
For Build, release, run, a Twelve-Factor App strictly separates the build, release, and run stages.
The Build stage fetches dependencies, compiles binaries and assets, and converts the code into an executable bundle.
- A build is triggered when developers push new code.
- The build stage allows complex workflows, such as running unit and integration tests.
The Release stage combines the build with deployment configs.
- A release should have a unique release ID, such as a timestamp or an incrementing number.
- Deployment tools typically include release management, letting you roll back to a previous release.
The Runtime stage runs the app in the execution environment.
For Processes, a Twelve-Factor App:
- is stateless.
- never assumes that anything cached will be available for a future request.
- stores data that needs to be persisted in a stateful backing service like a database.
- stores session state in a session store like Memcached or Redis.
The memory or filesystem can still be used as a cache, but only within a single transaction.
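As a sketch of keeping a web process stateless, session state can live in Redis instead of process memory, so any instance can handle the next request (the key format and the REDIS_URL variable are assumptions):

```python
import os
import redis  # assumes the redis package is declared as a dependency

# Session state lives in a backing service, not in the web process itself.
store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

def save_session(session_id: str, user_id: str) -> None:
    store.setex(f"session:{session_id}", 3600, user_id)  # expire after an hour

def load_session(session_id: str):
    value = store.get(f"session:{session_id}")
    return value.decode() if value else None
```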
For Port binding, a Twelve-Factor App:
- exposes HTTP endpoints by binding to a port.
- listens for incoming requests on that port.
With port binding, an app can become the backing service for another app; the consuming app simply uses the backing app's URL. For example, a frontend app calling a backend app.
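A minimal port-binding sketch using only Python's standard library; the app itself binds to whatever port the environment provides (PORT is an assumed variable, defaulting to 8000 here):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a self-contained web process\n")

# The app exports HTTP as a service by binding to a port and listening on it.
port = int(os.environ.get("PORT", "8000"))
HTTPServer(("", port), Handler).serve_forever()
```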
For Concurrency, a Twelve-Factor App:
- scales out (horizontal scaling) when it needs more capacity.
- handles diverse workloads with different process types.
  - HTTP requests are handled by a web process.
  - Long-running tasks are handled by a worker process.
Scaling out (horizontal scaling) is the preferred approach: if you follow the Twelve-Factor guidelines, your application can support more requests simply by adding more instances.
Scaling up (vertical scaling) is limited since it only upgrades the instance configuration (memory, CPU, GPU), and you usually need to stop the instance to perform the upgrade.
When building scalable applications, a mix of scaling up and scaling out is ideal.
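Since these guidelines came out of Heroku, the classic way to declare process types is a Procfile; the sketch below assumes a Flask-style app object served by gunicorn and a worker.py script, both hypothetical:

```
web: gunicorn app:app
worker: python worker.py
```

Each process type can then be scaled out independently, for example more web processes during traffic spikes and more workers when the queue grows.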
For Disposability, a Twelve-Factor App:
- can be started or stopped quickly.
- minimizes startup time.
- shuts down gracefully.
- can handle unexpected, non-graceful terminations.
For shutting down gracefully:
- finish processing current requests before shutting down.
- worker processes should return unprocessed jobs to the queue.
- jobs should be reentrant and operations should be idempotent.
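A rough sketch of a graceful shutdown in Python: trap SIGTERM, stop accepting new work, and exit once the current job is done. The job-handling part is a placeholder, not a real queue integration:

```python
import signal
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Ask the worker loop to stop after the current job finishes.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def process_next_job():
    # Placeholder for pulling a job from a queue and handling it.
    time.sleep(1)

while not shutting_down:
    process_next_job()

print("SIGTERM received, shutting down gracefully")
```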
For Dev/prod parity, a Twelve-Factor App:
- supports continuous deployment.
- minimizes time gaps, personnel gaps, and tools gaps.
- has similar backing services for development and production.
Backing services like databases, queues, and caches should have dev/prod parity.
- Time gap - A developer works on a feature or bug fix that can take days or weeks to reach production.
- Personnel gap - A developer writes code, while another developer deploys it.
- Tools gap - Different tools are used in development and production.
To minimize these gaps:
- Developers should be able to write code and have it deployed minutes or a few hours later.
- Developers who wrote the code should be closely involved in deploying and observing it in production.
- Development and production should be as similar as possible.
Tools like Docker and Vagrant can be used to allow local environments to mimic production environments.
For Logs, a Twelve-Factor App:
- doesn't route or store its output logs.
- doesn't write to or manage log files.
- writes logs to stdout as an event stream, letting the execution environment route them to services for analysis and monitoring.
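A small sketch of treating logs as an event stream: the app writes to stdout and lets the execution environment (for example, CloudWatch Logs when running on AWS) capture and route the stream:

```python
import logging
import sys

# Write log events to stdout only; collecting, routing, and storing
# the stream is the execution environment's job, not the app's.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

log = logging.getLogger(__name__)
log.info("app started")
log.info("processed request in %d ms", 42)
```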
For Admin processes, a Twelve-Factor App:
- runs admin processes in an environment identical to the app's regular processes.
- deploys admin process code along with the application code.
Examples of one-off admin processes include:
- running database migrations.
- running console commands via SSH.
- running one-time scripts (like fixing bad records).
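As a sketch, a one-off admin script lives in the same repo, reuses the same config, and runs against the same backing services as the app; the script path, table, column, and DATABASE_URL names below are invented for illustration:

```python
# scripts/fix_bad_records.py - a hypothetical one-off admin process,
# deployed with the app and run in the same environment.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    # Example cleanup: trim stray whitespace from stored email addresses.
    cur.execute("UPDATE users SET email = TRIM(email) WHERE email <> TRIM(email)")
    print(f"fixed {cur.rowcount} records")
```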
To recap, we've learned how we can make our applications more portable and scalable by following the Twelve-Factor App guidelines.
The Twelve-Factor App consists of:
- Codebase – One codebase tracked in revision control, many deploys
- Dependencies – Explicitly declare and isolate dependencies
- Configuration – Store configuration in the environment
- Backing services – Treat backing services as attached resources
- Build, release, run – Strictly separate build and run stages
- Processes – Execute the app as one or more stateless processes
- Port binding – Export services via port binding
- Concurrency – Scale out via the process model
- Disposability – Maximize robustness with fast start-up and graceful shutdown
- Dev/prod parity – Keep development, staging, and production as similar as possible
- Logs – Treat logs as event streams
- Admin processes – Run admin/management tasks as one-off processes
I recommend going through the original Twelve-Factor App document since it's always a good idea to read the source material. It should take you about an hour or two to finish reading the entire document.
Thank you for reading and if you have any questions or feedback, feel free to comment or connect with me here.