In the last blog post, we talked about the technical foundations of DevSecOps. Now it is time to dive into a possible implementation. Remember, combining Development, Operations, and Security (DevSecOps) is the logical next step for teams aiming to streamline their workflows without compromising on compliance and data integrity.
This blog post walks you through a straightforward DevSecOps approach, detailing how to integrate tools such as GitLab, GitLab Runner, Terraform, and Kaniko. We’ll also explore how to bring in security tools (Trivy, Cosign, TFSec, TFLint) and HashiCorp Vault to protect sensitive data.
The Overall Approach
The goal is to establish a well-structured pipeline that manages both infrastructure and application deployments in a secure manner. We leverage GitLab’s powerful CI/CD features to automate each stage, from code integration to testing and deployment. At the same time, we ensure security checks are in place to scan for vulnerabilities, enforce best practices, and protect any sensitive credentials.
Tools and Security Tools
Our DevSecOps workflow relies on:
- GitLab for source control and CI/CD.
- GitLab Runner (with different executor options) to run pipeline jobs.
- Terraform to create and manage Infrastructure as Code (IaC).
- Kaniko to build container images efficiently within Kubernetes.
- Security Tools (Trivy, TFSec, TFLint, Cosign) to scan, test, and secure both infrastructure and container images.
- HashiCorp Vault for secure storage and retrieval of secrets and credentials.
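Together, these tools map naturally onto GitLab CI/CD stages. As a rough sketch of the pipeline layout (the stage names are our own choice, not mandated by GitLab):

```yaml
# .gitlab-ci.yml — high-level stage layout (illustrative)
stages:
  - validate   # TFLint and TFSec: static checks on Terraform code
  - build      # Kaniko: build container images
  - scan       # Trivy: vulnerability scan of the built image
  - sign       # Cosign: sign the verified image
  - deploy     # Terraform: provision or update infrastructure
```

The rest of this post walks through the individual pieces.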
GitLab and GitLab Runner
GitLab is an open-source DevOps platform where you can store, version-control, and review your code. One of its biggest strengths lies in its built-in CI/CD pipelines. To run these pipelines, you need a GitLab Runner.
Types of Runners
- Shared Runner: Available to all projects in a GitLab instance.
- Group Runner: Restricted to a group and its subgroups.
- Specific Runner: Dedicated to a single project.
Choosing which runner to use depends on your project’s scope and the security boundaries you want to enforce. You can also run multiple runners simultaneously to reduce wait times if your teams frequently push new code.
Runner Executors
When registering a runner, you also choose a Runner Executor, which dictates how and where your jobs run:
- SSH: Executes commands remotely via SSH.
- Shell: Executes local shell commands on the host machine.
- VirtualBox/Parallels: Creates a clean virtual environment for each build.
- Docker: Uses Docker Engine to provide a consistent environment for each job.
- Kubernetes: Spawns a new pod for each job in a Kubernetes cluster, offering scalability.
For this pipeline we use the Kubernetes Executor, which is usually our recommendation: each job runs in its own container, ensuring isolation and reproducibility.
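With the Kubernetes Executor, jobs select the runner via tags, and every job gets a fresh pod running the image it declares. A minimal sketch (the tag and image pin are hypothetical; note the empty `entrypoint` override, which is needed for images whose entrypoint is not a shell):

```yaml
terraform-validate:
  tags: [k8s-runner]           # hypothetical tag of the registered Kubernetes runner
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]           # the image's default entrypoint is the terraform binary
  script:
    - terraform init -backend=false
    - terraform validate
```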
Automating Infrastructure with Terraform
Terraform is a powerful tool to define and provision infrastructure in a consistent manner. By writing IaC modules, you can version-control your infrastructure, reuse configurations, and track changes over time.
Storing and Reusing Modules
We store Terraform modules in the GitLab Infrastructure Registry, making them easy to reuse across multiple deployments. A typical workflow might look like this:
- Create or update a Terraform module.
- Open a Merge Request (MR) in GitLab.
- Run tests to ensure that the Terraform module can be deployed successfully.
- Once tests pass, merge the MR into the main branch.
- Tag the new version of the module, triggering a CI/CD pipeline that packages and uploads it to the registry.
With this setup, developers on other projects can easily consume these modules for quick, standardized deployments.
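The tag-triggered packaging step can be sketched as a single job that archives the module and uploads it through GitLab's Terraform Module Registry API. Module name, provider, and paths below are hypothetical; check the registry API documentation for your GitLab version:

```yaml
upload-module:
  image: alpine:latest
  rules:
    - if: $CI_COMMIT_TAG        # run only for tags, e.g. "1.2.0"
  script:
    - apk add --no-cache curl tar
    # Package the module source (path is illustrative)
    - tar -czf module.tgz .
    # Upload to the project's Terraform Module Registry
    - >-
      curl --fail
      --header "JOB-TOKEN: ${CI_JOB_TOKEN}"
      --upload-file module.tgz
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/terraform/modules/my-module/aws/${CI_COMMIT_TAG}/file"
```

Other projects can then reference the module from the registry instead of copying its source.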
Building Container Images with Kaniko
To build and push container images without using Docker-in-Docker (which can introduce security and performance issues), we employ Kaniko. This tool allows us to build container images from a Dockerfile inside a Kubernetes cluster without requiring privileged containers.
- Code changes trigger the CI/CD pipeline in GitLab.
- Kaniko builds the container image within the pipeline.
- Once built, the image is pushed to a container registry (e.g., Docker Hub or GitLab’s own registry).
Using Kaniko helps avoid the overhead and security concerns of Docker-in-Docker, while still giving you flexibility in customizing the build process.
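A Kaniko build job can look roughly like this, following the pattern from GitLab's documentation. The `debug` image tag provides a shell for the `script` section, and the first step writes the registry credentials Kaniko needs:

```yaml
build-image:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Authenticate Kaniko against GitLab's container registry
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without a Docker daemon or privileged mode
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```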
Scanning for Vulnerabilities with Trivy
Trivy is a versatile scanner that checks for vulnerabilities in container images, Kubernetes resources, and even code dependencies. It draws on public vulnerability databases, including the National Vulnerability Database (NVD), to detect potential issues. Trivy works in two modes:
- Standalone: Perfect for running scans in a single runner container.
- Client/Server: Uses a central server for multiple clients, reducing database-download overhead.
For most CI/CD pipelines, especially when only a single runner is scanning at a time, Standalone mode is sufficient.
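A standalone scan job can be sketched as follows. The non-zero `--exit-code` makes the pipeline fail on findings of the listed severities, and Trivy picks up the registry credentials from its `TRIVY_USERNAME`/`TRIVY_PASSWORD` environment variables:

```yaml
scan-image:
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  variables:
    TRIVY_USERNAME: "${CI_REGISTRY_USER}"
    TRIVY_PASSWORD: "${CI_REGISTRY_PASSWORD}"
  script:
    # Fail the job if HIGH or CRITICAL vulnerabilities are found
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```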
Signing Images with Cosign
After successfully building a container image, we use Cosign to sign it. This step ensures:
- Integrity: Verifies that nobody has tampered with the image.
- Authenticity: Confirms the identity of the signer.
By signing and verifying the container images, you protect your pipeline from supply chain attacks and maintain trust in the artifacts you deploy.
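A signing job might look like the sketch below. It assumes the private key is stored as a CI/CD file variable (`$COSIGN_KEY` here is our own name) and that its passphrase is available as `COSIGN_PASSWORD`, which Cosign reads automatically; the image tag pin is illustrative, so substitute a current release:

```yaml
sign-image:
  image:
    name: gcr.io/projectsigstore/cosign:v2.2.4   # illustrative pin; use a current release
    entrypoint: [""]
  script:
    - cosign login "${CI_REGISTRY}" -u "${CI_REGISTRY_USER}" -p "${CI_REGISTRY_PASSWORD}"
    # --yes skips interactive confirmation prompts in CI
    - cosign sign --yes --key "${COSIGN_KEY}" "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

In production you would typically sign the image digest rather than a mutable tag, since a tag can be repointed after signing.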
Scanning Infrastructure with TFSec and TFLint
Terraform code can contain misconfigurations that lead to security breaches. Two popular tools for analyzing Terraform are:
- TFSec: Uses static analysis to detect potential misconfigurations and vulnerabilities in cloud infrastructure.
- TFLint: A Terraform linter that warns about deprecated syntax, ensures naming conventions, and enforces best practices.
Running these tools as part of your pipeline helps you catch security issues before any infrastructure is provisioned.
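Both checks can run as independent jobs early in the pipeline, so misconfigured Terraform never reaches an apply step. A sketch using the tools' published images:

```yaml
tflint:
  image:
    name: ghcr.io/terraform-linters/tflint:latest
    entrypoint: [""]
  script:
    - tflint --init   # download ruleset plugins, if any are configured
    - tflint

tfsec:
  image:
    name: aquasec/tfsec:latest
    entrypoint: [""]
  script:
    - tfsec .
```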
Storing Secrets with HashiCorp Vault
Often, Terraform scripts and container builds need credentials or tokens to access external services. Instead of hardcoding them in your code or environment variables, store them in HashiCorp Vault. Vault provides:
- Secure Storage: Safeguards secrets in an encrypted system.
- Access Control: Allows only authorized processes or users to retrieve specific secrets.
- Auditability: Provides clear logs on who accessed which credentials, and when.
Integrating Vault into your GitLab pipelines ensures that any sensitive data remains protected at all times.
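GitLab can fetch Vault secrets natively through the `secrets` keyword (a Premium feature), authenticating with a pipeline-issued JWT. A sketch, assuming Vault has a matching JWT auth role configured and the `VAULT_SERVER_URL` and `VAULT_AUTH_ROLE` CI/CD variables are set; the Vault address, paths, and names below are hypothetical:

```yaml
deploy:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com       # must match the bound audience in Vault
  secrets:
    AWS_SECRET_ACCESS_KEY:
      vault: cloud/aws/secret_key@secrets  # field "secret_key" at path "cloud/aws" in engine mount "secrets"
      token: $VAULT_ID_TOKEN
      file: false                          # expose as an env var instead of a file path
  script:
    - terraform init
    - terraform apply -auto-approve
```

The secret never appears in the repository or in plain CI/CD variables; it is fetched at job runtime and scoped to that job.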
Structuring GitLab
Before implementing a DevSecOps approach for Infrastructure as Code (IaC), you need a clear GitLab structure. In this example, GitLab’s SaaS offering is used, so there’s no need to host your own GitLab instance. Here’s what a potential setup can look like:
- Root Group (e.g., “GitOps-Shared”)
  - Acts as the main group where all subgroups and projects reside.
  - Registers a Group Runner that handles CI/CD jobs across all subgroups.
  - Stores necessary credentials and access tokens as CI/CD variables (though most secrets will be managed in Vault).
- IaC Subgroup
  - Holds critical Terraform modules and container images.
  - Sub-subgroups include “Terraform-Modules,” “Terraform-Registry,” and “Container,” each focusing on module development, hosting the registry, or building container images.
- Pipeline-Collection Project
  - Manages CI/CD and IaC pipelines that interact with the Terraform modules and containers.
By organizing GitLab in this hierarchical structure, you streamline development, centralize key configuration and secrets, and lay the groundwork for a secure DevSecOps pipeline. Each subgroup can focus on its specific tasks—such as building modules or images—while still tapping into shared resources and runners at the root level.
That’s enough for today. In the next blog post, we’ll dig into the implementation in more detail.