How Kinsta Improved the End-to-End Development Experience


At Kinsta, we have projects of all sizes for Application Hosting, Database Hosting, and Managed WordPress Hosting.

With Kinsta cloud hosting solutions, you can deploy applications in a number of languages and frameworks, such as NodeJS, PHP, Ruby, Go, Scala, and Python. With a Dockerfile, you can deploy any application. You can connect your Git repository (hosted on GitHub, GitLab, or Bitbucket) to deploy your code directly to Kinsta.

You can host MariaDB, Redis, MySQL, and PostgreSQL databases out of the box, saving you time to focus on developing your applications rather than wrestling with hosting configurations.

And if you choose our Managed WordPress Hosting, you get the power of Google Cloud C2 machines on the Premium Tier network, plus Cloudflare-integrated security, making your WordPress websites among the fastest and safest on the market.

Overcoming the Challenge of Developing Cloud-Native Applications on a Distributed Team

One of the biggest challenges of developing and maintaining cloud-native applications at the enterprise level is having a consistent experience through the entire development lifecycle. This is even harder for remote companies with distributed teams working on different platforms, with different setups, and communicating asynchronously. We need to provide a consistent, reliable, and scalable solution that works for:

  • Developers and quality assurance teams, regardless of their operating systems, to create a straightforward and minimal setup for developing and testing features.
  • DevOps, SysOps, and Infra teams, to configure and maintain staging and production environments.

At Kinsta, we rely heavily on Docker for this consistent experience at every step, from development to production. In this post, we walk you through:

  • How to leverage Docker Desktop to increase developers’ productivity.
  • How we build Docker images and push them to Google Container Registry via CI pipelines with CircleCI and GitHub Actions.
  • How we use CD pipelines to promote incremental changes to production using Docker images, Google Kubernetes Engine, and Cloud Deploy.
  • How the QA team seamlessly uses prebuilt Docker images in different environments.

Using Docker Desktop to Improve the Developer Experience

Running an application locally requires developers to meticulously prepare the environment, install all the dependencies, set up servers and services, and make sure they are properly configured. Running multiple applications makes this cumbersome, especially for complex projects with many dependencies. Add multiple contributors working on multiple operating systems to the mix, and chaos ensues. To prevent it, we use Docker.

With Docker, you can declare the environment configurations, install the dependencies, and build images with everything where it should be. Anyone, anywhere, with any OS can use the same images and have exactly the same experience as everyone else.

Declare Your Configuration With Docker Compose

To get started, create a Docker Compose file, docker-compose.yml. It is a declarative configuration file written in YAML format that tells Docker what your application’s desired state is. Docker uses this information to set up the environment for your application.

Docker Compose files come in very handy when you have more than one container running and there are dependencies between containers.

To create your docker-compose.yml file:

  1. Start by choosing an image as the base for your application. Search Docker Hub and try to find a Docker image that already contains your app’s dependencies. Make sure to use a specific image tag to avoid errors; using the latest tag can cause unforeseen errors in your application. You can use multiple base images for multiple dependencies, for example, one for PostgreSQL and one for Redis.
  2. Use volumes to persist data on your host if you need to. Persisting data on the host machine helps you avoid losing data if Docker containers are deleted or if you have to recreate them.
  3. Use networks to isolate your setup to avoid network conflicts with the host and other containers. It also helps your containers to easily find and communicate with each other.

Bringing it all together, we have a docker-compose.yml that looks like this:

version: '3.8'

services:
  db:
    image: postgres:14.7-alpine3.17
    hostname: mk_db
    restart: on-failure
    ports:
      - ${DB_PORT:-5432}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
      POSTGRES_DB: ${DB_NAME:-main}
    networks:
      - mk_network
  redis:
    image: redis:6.2.11-alpine3.17
    hostname: mk_redis
    restart: on-failure
    ports:
      - ${REDIS_PORT:-6379}:6379
    networks:
      - mk_network
      
volumes:
  db_data:

networks:
  mk_network:
    name: mk_network
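
The ${VARIABLE:-default} entries in this file mean every port and credential can be overridden locally. As a quick usage sketch (the override value below is just an example), you can put variables in an .env file, which Compose reads automatically, and then start the dependencies in the background:

# optional: override a default without touching docker-compose.yml
echo "DB_PORT=5433" >> .env

# start the database and cache in the background
docker compose up -d db redis

# verify that both containers are running
docker compose ps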

Containerize the Application

Build a Docker Image for Your Application

First, we need to build a Docker image using a Dockerfile, and then call that from docker-compose.yml.

To create your Dockerfile:

  1. Start by choosing an image as a base. Use the smallest base image that works for the app. Usually, alpine images are very minimal with nearly zero extra packages installed. You can start with an alpine image and build on top of that:
    FROM node:18.15.0-alpine3.17
    
  2. Sometimes you need to use a specific CPU architecture to avoid conflicts. For example, suppose that you use an arm64-based processor but need to build an amd64 image. You can do that by specifying --platform in the Dockerfile:
    FROM --platform=amd64 node:18.15.0-alpine3.17
    
  3. Define the application directory, install the dependencies, and copy the output to your root directory:
    WORKDIR /opt/app 
    COPY package.json yarn.lock ./ 
    RUN yarn install 
    COPY . .
  4. Call the Dockerfile from docker-compose.yml:
    services:
      ...redis
      ...db
      
      app:
        build:
          context: .
          dockerfile: Dockerfile
        platforms:
          - "linux/amd64"
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        networks:
          - mk_network
        depends_on:
          - redis
          - db
  5. Implement auto-reload so that when you change something in the source code, you can preview your changes immediately without having to rebuild the application manually. To do that, build the image first, then run it in a separate service:
    services:
      ... redis
      ... db
      
      build-docker:
        image: myapp
        build:
          context: .
          dockerfile: Dockerfile
      app:
        image: myapp
        platforms:
          - "linux/amd64"
        command: yarn dev
        restart: on-failure
        ports:
          - ${PORT:-4000}:${PORT:-4000}
        volumes:
          - .:/opt/app
          - node_modules:/opt/app/node_modules
        networks:
          - mk_network
        depends_on:
          - redis
          - db
          - build-docker
          
    volumes:
      node_modules:

Pro Tip: Note that node_modules is also mounted explicitly to avoid platform-specific issues with packages. Instead of using the node_modules on the host, the Docker container uses its own but maps it to the host in a separate volume.
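
With this setup in place, a single command builds the image and starts the app with live reload (a usage sketch based on the services defined above):

# build the myapp image via the build-docker service, then start app, db, and redis
docker compose up --build

Source changes on the host are picked up immediately through the mounted volume, so there is no need to rebuild the image for every edit.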

Incrementally Build the Production Images With Continuous Integration

The majority of our apps and services use CI/CD for deployment, and Docker plays an important role in the process. Every change in the main branch immediately triggers a build pipeline through either GitHub Actions or CircleCI. The general workflow is very simple: it installs the dependencies, runs the tests, builds the Docker image, and pushes it to Google Container Registry (or Artifact Registry). The part we discuss in this article is the build step.

Building the Docker Images

We use multi-stage builds for security and performance reasons.

Stage 1: Builder

In this stage, we copy the entire code base with all source and configuration, install all dependencies, including dev dependencies, and build the app. It creates a dist/ folder and copies the built version of the code there. But this image is far too large, and leaves far too big a footprint, to be used for production. Also, as we use private NPM registries, we use our private NPM_TOKEN in this stage as well. So, we definitely don’t want this stage to be exposed to the outside world. The only thing we need from this stage is the dist/ folder.

Stage 2: Production

Most people use this stage for runtime, as it is very close to what we need to run the app. However, we still need to install the production dependencies, which means we leave footprints and need the NPM_TOKEN. So this stage is still not ready to be exposed. Also, pay attention to the yarn cache clean command in this stage: that tiny command cuts our image size by up to 60%.
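
The final Dockerfile below leaves the private-registry handling out for brevity. One common pattern, sketched here with illustrative names rather than our exact setup, is to pass NPM_TOKEN as a build argument and write a temporary .npmrc that only ever exists in the builder and production stages:

# hypothetical excerpt from the production stage
ARG NPM_TOKEN
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc && \
    yarn install --production && yarn cache clean && \
    rm -f .npmrc

Because only the final runtime stage gets pushed anywhere, the token stays behind with the intermediate stages.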

Stage 3: Runtime

The last stage needs to be as slim as possible, with a minimal footprint. So we just copy the fully baked app from the production stage and move on. We leave all the yarn and NPM_TOKEN concerns behind and only run the app.

This is the final Dockerfile.production:

# Stage 1: build the source code 
FROM node:18.15.0-alpine3.17 as builder 
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install 
COPY . . 
RUN yarn build 

# Stage 2: copy the built version and build the production dependencies
FROM node:18.15.0-alpine3.17 as production
WORKDIR /opt/app 
COPY package.json yarn.lock ./ 
RUN yarn install --production && yarn cache clean 
COPY --from=builder /opt/app/dist/ ./dist/ 

# Stage 3: copy the production ready app to runtime 
FROM node:18.15.0-alpine3.17 as runtime 
WORKDIR /opt/app 
COPY --from=production /opt/app/ . 
CMD ["yarn", "start"]

Note that, for all the stages, we copy the package.json and yarn.lock files first, install the dependencies, and then copy the rest of the code base. The reason is that Docker builds each command as a layer on top of the previous one, and each build reuses existing layers if available, only building new layers, for performance.

Let’s say you have changed something in src/services/service1.ts without touching the packages. That means the first four layers of the builder stage are untouched and can be reused, which makes the build process much faster.
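
You can see the caching at work by building the production image locally twice; on the second run, the dependency layers come straight from the cache (the tag is just an example):

docker build -f Dockerfile.production -t my-app:local .

# change a file under src/ and rebuild: the package.json copy and yarn install
# layers are reused, and only the later layers are rebuilt
docker build -f Dockerfile.production -t my-app:local .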

Pushing the App To Google Container Registry Through CircleCI Pipelines

There are several ways to build a Docker image in CircleCI pipelines. In our case, we chose to use the circleci/gcp-gcr orb:

executors:
  docker-executor:
    docker:
      - image: cimg/base:2023.03
orbs:
  gcp-gcr: circleci/gcp-gcr@x.y.z # replace x.y.z with the orb version you use
jobs:
  ...
  deploy:
    description: Build & push image to Google Artifact Registry
    executor: docker-executor
    steps:
      ...
      - gcp-gcr/build-image:
          image: my-app
          dockerfile: Dockerfile.production
          tag: ${CIRCLE_SHA1:0:7},latest
      - gcp-gcr/push-image:
          image: my-app
          tag: ${CIRCLE_SHA1:0:7},latest

Minimal configuration is needed to build and push our app, thanks to Docker.
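
One detail worth noting: ${CIRCLE_SHA1:0:7} is plain Bash substring expansion, so each image is tagged with the 7-character short SHA of the commit that triggered the build. For example (the SHA value is illustrative):

CIRCLE_SHA1=0a1b2c3d4e5f67890123456789abcdef01234567
echo "${CIRCLE_SHA1:0:7}"   # prints 0a1b2c3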

Pushing the App To Google Container Registry Through GitHub Actions

As an alternative to CircleCI, we can use GitHub Actions to deploy the application continuously. We set up gcloud and build and push the Docker image to gcr.io:

jobs:
  setup-build:
    name: Setup, Build
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Get Image Tag
      run: |
        echo "TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

    - uses: google-github-actions/setup-gcloud@master
      with:
        service_account_key: ${{ secrets.GCP_SA_KEY }}
        project_id: ${{ secrets.GCP_PROJECT_ID }}

    - run: |-
        gcloud --quiet auth configure-docker

    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG" \
          --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest" \
          .

    - name: Push
      run: |-
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG"
        docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest"

With every small change pushed to the main branch, we build and push a new Docker image to the registry.
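
If you want to confirm what landed in the registry after a push, a quick check like the following works, assuming gcloud is authenticated against the same project (the variable name is illustrative):

gcloud container images list-tags "gcr.io/$GCP_PROJECT_ID/my-app" --limit=5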

Deploying Changes To Google Kubernetes Engine Using Google Delivery Pipelines

Having ready-to-use Docker images for each and every change also makes it easier to deploy to production or roll back in case something goes wrong. We use Google Kubernetes Engine to manage and serve our apps and use Google Cloud Deploy and Delivery Pipelines for our Continuous Deployment process.

When the Docker image is built after each small change (with the CI pipeline shown above), we take it one step further and deploy the change to our dev cluster using gcloud. Let’s take a look at that step in the CircleCI pipeline:

- run:
    name: Create new release
    command: >
      gcloud deploy releases create release-${CIRCLE_SHA1:0:7}
      --delivery-pipeline my-del-pipeline
      --region $REGION
      --annotations commitId=$CIRCLE_SHA1
      --images my-app=gcr.io/${PROJECT_ID}/my-app:${CIRCLE_SHA1:0:7}

This triggers a release process to roll out the changes in our dev Kubernetes cluster. After testing and getting the approvals, we promote the change to staging and then production. This is all possible because we have a slim isolated Docker image for each change that has almost everything it needs. We only need to tell the deployment which tag to use.
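
The promotion step can be scripted as well. Here is a minimal sketch with gcloud, reusing the pipeline and region from the release command above (the staging target name is illustrative):

gcloud deploy releases promote \
  --release=release-${CIRCLE_SHA1:0:7} \
  --delivery-pipeline=my-del-pipeline \
  --region=$REGION \
  --to-target=staging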

How the Quality Assurance Team Benefits From This Process

The QA team mostly needs a pre-production, cloud-hosted version of the apps for testing. However, sometimes they need to run a prebuilt app locally (with all the dependencies) to test a certain feature. In these cases, they don’t want or need to go through all the pain of cloning the entire project, installing npm packages, building the app, facing developer errors, and going through the entire development process to get the app up and running. Now that everything is already available as a Docker image on Google Container Registry, all they need is a service in the Docker Compose file:

services:
  ...redis
  ...db
  
  app:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    environment:
      - NODE_ENV=production
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgresql://${DB_USER:-user}:${DB_PASSWORD:-password}@db:5432/main
    networks:
      - mk_network
    depends_on:
      - redis
      - db

With this service, they can spin up the application on their local machines using Docker containers by running:

docker compose up
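
The only prerequisite for pulling from the private registry is authenticating Docker against it once, with the same command the CI pipeline uses:

gcloud auth configure-docker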

This is a huge step toward simplifying testing processes. Even if QA decides to test a specific tag of the app, they can easily change the image tag on line 6 and re-run the Docker Compose command. And if they decide to compare different versions of the app simultaneously, they can easily achieve that with a few tweaks, as shown below. The biggest benefit is that it keeps our QA team away from development-side challenges.
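
For example, comparing two versions side by side takes only one extra service under services:, pointing at a different tag and host port (the tag and port below are hypothetical):

  app-previous:
    image: gcr.io/${PROJECT_ID}/my-app:PREVIOUS_TAG   # hypothetical older tag
    restart: on-failure
    ports:
      - 4001:${PORT:-4000}
    networks:
      - mk_network
    depends_on:
      - redis
      - db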

Advantages of Using Docker

  • Almost zero footprint for dependencies: If you ever decide to upgrade the version of Redis or Postgres, you can just change one line and re-run the app. No need to change anything on your system. Additionally, if you have two apps that both need Redis (maybe even with different versions), you can run both in their own isolated environments without any conflicts.
  • Multiple instances of the app: There are a lot of cases where we need to run the same app with a different command, such as initializing the DB, running tests, watching DB changes, or listening to messages. In each of these cases, since we already have the built image ready, we just add another service to the Docker Compose file with a different command, and we’re done (see the sketch after this list).
  • Easier testing environment: More often than not, you just need to run the app. You don’t need the code, the packages, or any local database connections. You only want to make sure the app works properly, or you need a running instance as a backend service while you’re working on your own project. That could also be the case for QA, pull request reviewers, or even UX folks who want to make sure their design has been implemented properly. Our Docker setup makes it very easy for all of them to get things going without having to deal with too many technical issues.
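
Here is what the second point above looks like in practice: the prebuilt image is reused under a new service that differs only in its command (the migration script name is hypothetical):

services:
  ...db

  migrate:
    image: myapp
    command: yarn db:migrate # hypothetical one-off command
    networks:
      - mk_network
    depends_on:
      - db

Running it is then just a matter of docker compose run --rm migrate.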

This article was originally published on Docker.

Amin Choroomi

Software developer at Kinsta. Passionate about Docker and Kubernetes, he specializes in application development and DevOps practices.


