Supply Chain Series

Exploring Tekton Supply Chain for Cloud Native Development

This blog series offers an in-depth look at Tekton Supply Chain, a key tool for automating CI/CD workflows in cloud-native environments. Tekton, designed for Kubernetes, streamlines the development process by facilitating the build, test, and deployment of applications across diverse cloud infrastructures. Throughout this series, we will dissect Tekton’s components, including Pipelines and Triggers, demonstrating their role in enhancing efficiency and scalability in software delivery. Aimed at developers, DevOps professionals, and cloud technology enthusiasts, these posts will provide practical insights and strategies for optimizing cloud-native application development with Tekton.

Part 1: Overview

Part 2: Frameworks & Tools

Part 3: Introducing the SLSA framework

Part 4: SLSA Levels & Tracks

Part 5: SLSA Attestation

Part 6: Introduction to Tekton

Part 7: How to work with Tekton

Part 8: Tasks and Pipelines

Part 9: Workspaces and Secrets

Part 10: Configuring Tekton Chains and Tekton Dashboard

Part 10: Supply Chain – Configuring Tekton Chains and Tekton Dashboard

Introduction

This is the final post in our Tekton tutorial. In the previous posts, we discussed how to install and configure Tekton. Now we want to briefly discuss how to configure Tekton Chains and the Tekton Dashboard.

Tekton Chains

As discussed, an important requirement of the SLSA framework is attestation. For this, the Tekton ecosystem provides Tekton Chains.

In addition to generating attestations, Tekton Chains makes it possible to sign task run results with, for example, X.509 certificates or a KMS, and to store the signature in one of numerous storage backends. Normally, the attestation is stored together with the artifact in the OCI registry. It is also possible to define a dedicated storage location independent of the artifact. Alternative storage options include Sigstore's Rekor transparency log, document stores such as Firestore, DynamoDB, and MongoDB, and Grafeas.
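Which attestation format is used and where attestations end up is controlled via the chains-config ConfigMap in the tekton-chains namespace. A minimal sketch (key names follow the Tekton Chains documentation; the values are examples, not the only valid choices):

kubectl patch configmap chains-config -n tekton-chains \
  -p '{"data": {"artifacts.taskrun.format": "in-toto", "artifacts.taskrun.storage": "oci", "artifacts.oci.storage": "oci"}}'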

An important aspect of Tekton Chains is its ability to integrate with the Sigstore project, in particular Fulcio and Rekor, which were explained in more detail in Part 2. SLSA's requirements for provenance (which can be met with Rekor and Fulcio) are that keys must be stored in a secure location and that there is no way for the tenant to subsequently change the attestation. Although key management via a KMS is just as valid as using Fulcio, and both solutions would meet the requirements of SLSA, Rekor in particular satisfies the requirement of immutability. As already mentioned, Rekor's core is based on Merkle trees, which make deletion or modification impossible. Through the services provided by Sigstore, both Fulcio and Rekor represent an important trust connection between producer and consumer.
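Keyless signing via Fulcio and uploading entries to Rekor can be switched on in the same ConfigMap. A sketch assuming the public Sigstore instances (key names as documented by Tekton Chains):

kubectl patch configmap chains-config -n tekton-chains \
  -p '{"data": {"signers.x509.fulcio.enabled": "true", "signers.x509.fulcio.address": "https://fulcio.sigstore.dev", "transparency.enabled": "true", "transparency.url": "https://rekor.sigstore.dev"}}'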

Tekton Chains offers the advantage that neither the signature nor the attestation needs to be produced through custom steps in the pipeline itself. For example, even if a developer integrates Cosign into a task to sign an image, Chains works regardless. The only requirement in the pipeline is the use of so-called 'Results'. These allow Tekton to clearly communicate which artifacts should be signed. Results have two areas of application: a result can be passed through the pipeline into the parameters of another task or into the when functionality, or the data of a result can be output.

The data output by results serves the user as a source of information, for example about the digest of a built container image or the commit SHA of a cloned repository. The Tekton Chains controller uses results to determine which artifacts should be attested. The controller searches the individual tasks for results ending in "*_IMAGE_URL" and "*_IMAGE_DIGEST", where IMAGE_URL is the URL of the artifact, IMAGE_DIGEST is the digest of the artifact, and the asterisk is any name.
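A minimal sketch of a task that declares such results so that Chains picks them up (the task name, result prefix, and build step are illustrative; a real task would use a builder image such as Kaniko):

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-image
spec:
  results:
  - name: APP_IMAGE_URL
    description: URL of the built image
  - name: APP_IMAGE_DIGEST
    description: Digest of the built image
  steps:
  - name: build-and-push
    image: alpine  # placeholder for a real builder image
    script: |
      #!/bin/sh
      # ... build and push the image here, then record what was produced:
      echo -n "registry.gitlab.com/example/app" > $(results.APP_IMAGE_URL.path)
      echo -n "sha256:..." > $(results.APP_IMAGE_DIGEST.path)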

Tekton Dashboard

The Tekton Dashboard is a powerful tool that makes managing Tekton easier. The dashboard can be used in two modes: read-only mode or read/write mode.

The authorizations of the dashboard can be configured by means of service accounts, as is typical for Kubernetes. However, this poses a problem, because the dashboard itself does not come with authentication or authorization in either mode. There are no options for regulating the dashboard's permissions per user through RBAC: RBAC only applies to the dashboard's ServiceAccount, not to individual users. In practice, this means that all authorizations of the Tekton Dashboard's service account also apply to every person accessing the dashboard. This is a big problem, especially if the dashboard is publicly accessible.

Kubernetes does not have native user management because, unlike service accounts, users are not objects managed by the API server. For example, it is not possible to regulate authentication via user name and password. However, there are several authentication methods that use certificates, bearer tokens, or an authenticating proxy.

Two of these methods can be used to secure the Tekton Dashboard: OIDC tokens on the one hand, and Kubernetes user impersonation on the other.

OIDC is an extension of the Open Authorization 2.0 (OAuth2) framework. OAuth2 is an authorization framework that allows an application to carry out actions or gain access to data on behalf of a user without having to handle the user's credentials. OIDC extends OAuth 2.0 by adding standards for user authentication and the provision of user information.

Kubernetes user impersonation allows a user to act as another user, with all the rights of the user they are impersonating. Kubernetes achieves this through impersonation headers: after a request to the Kubernetes API server has been authenticated, but before authorization, the user information of the authenticated user is replaced with that of the impersonated user.
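kubectl exposes these impersonation headers via the --as and --as-group flags, which is a quick way to test what an impersonated user may do (user and group names are illustrative):

kubectl get pods --as jane@example.com --as-group developers -n tekton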

There are different tools to achieve this. One of these tools is OpenUnison from Tremolo. OpenUnison offers some advantages: it is possible to implement single sign-on (SSO) for graphical user interfaces as well as session-based access to Kubernetes via the command line. When using OpenUnison or similar technologies, communication no longer takes place directly with the Kubernetes API server, but runs through OpenUnison. OpenUnison uses Jetstack's reverse proxy for OIDC.

When a user wants to access the Tekton Dashboard, OpenUnison redirects the user to the configured Identity Provider (IDP). After the user has authenticated with the IDP, they receive an id_token. The id_token is a JSON Web Token (JWT) and contains information about the authenticated user, such as name, email, group memberships, and the token expiration time.

The reverse proxy uses the IDP's public key to validate the id_token. After successful validation, the reverse proxy appends the impersonation headers to the request to the Kubernetes API server. The Kubernetes API server checks the impersonation headers to see whether the impersonated user has the appropriate permissions to execute the request. If so, the Kubernetes API server executes the request as the impersonated user. The reverse proxy then forwards the response it received from the Kubernetes API server to the user.

The following steps describe the configuration of the dashboard with OAuth2:

Create a namespace:

kubectl create ns consumerrbac
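Both charts used in this section come from public Helm repositories; if they are not yet configured, they can be added first (standard repository locations assumed):

helm repo add jetstack https://charts.jetstack.io
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update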

Installation of Cert Manager:

helm install \
  cert-manager jetstack/cert-manager \
  --namespace consumerrbac \
  --version v1.11.0 \
  --set installCRDs=true

In order to create certificates, an issuer is needed:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: consumerrbac
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email:
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
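Assuming the manifest is saved as issuer.yaml, it can be applied with:

kubectl apply -f issuer.yaml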

Now nginx can be installed:

helm install nginx-ingress ingress-nginx/ingress-nginx --namespace consumerrbac

Now, the ingress can be created.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.consumerrbac.svc.cluster.local/oauth2/auth
    nginx.ingress.kubernetes.io/auth-signin: https://dashboard.35.198.151.194.nip.io/oauth2/sign_in?rd=https://$host$request_uri
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: dashboard.35.198.151.194.nip.io
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: tekton-dashboard
            port:
              number: 9097

OAuth2 Proxy (we use Google as the provider and expect the OAuth application to be created there):

  • In the Google Cloud dashboard, select APIs & Services
  • On the left, select Credentials
  • Press CREATE CREDENTIALS and select OAuth client ID
  • For Application Type, select Web application
  • Give the app a name and enter Authorized JavaScript origins and Authorized redirect URIs
  • Click Create and note the Client ID and Client Secret
  • A values.yaml must be created for the installation, as shown below.
config:
  clientID:
  clientSecret:

extraArgs:
  provider: google
  whitelist-domain: .35.198.151.194.nip.io
  cookie-domain: .35.198.151.194.nip.io
  redirect-url: https://dashboard.35.198.151.194.nip.io/oauth2/callback
  cookie-secure: 'false'
  cookie-refresh: 1h
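With the values in place, oauth2-proxy itself can be installed. A sketch using the community Helm chart (chart repository location assumed):

helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm install oauth2-proxy oauth2-proxy/oauth2-proxy --namespace consumerrbac -f values.yaml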

Configuration of OpenUnison

Create a namespace:

kubectl create ns openunison

Add Helm repo:

helm repo add tremolo https://nexus.tremolo.io/repository/helm/

helm repo update

Before OpenUnison can be deployed, OAuth must be configured in Google Cloud.

  • In Credentials, under APIs & Services, click CREATE CREDENTIALS
  • Then on OAuth client ID
  • Select Web application as the application type and then give it a name
  • Authorized JavaScript origins: https://k8sou.apps.x.x.x.x.nip.io

Now OpenUnison can be installed:

helm install openunison tremolo/openunison-operator –namespace openunison

Finally, OpenUnison has to be configured with the appropriate settings.
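The details depend on the environment, but the core is a values file for the OpenUnison charts that points to the Google IDP and enables impersonation. A rough sketch (key names based on the OpenUnison chart documentation; all hosts and values are illustrative):

network:
  openunison_host: k8sou.apps.x.x.x.x.nip.io
  dashboard_host: k8sdb.apps.x.x.x.x.nip.io
enable_impersonation: true
oidc:
  client_id: <client-id-from-google>
  issuer: https://accounts.google.com
  user_in_idtoken: true
  scopes: openid email profile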

This concludes our series on Tekton. Hope you enjoyed it.

Part 9: Supply Chain – Workspaces and Secrets

Introduction

As mentioned in the last blog post, the next things we want to discuss are authentication, workspaces, and secrets. Let's begin with workspaces.

Workspaces

As already mentioned, in Tekton each task runs in its own pod. The concept of workspaces exists in Tekton so that pods can share data with each other. Workspaces can also help with other things: they can be used to mount secrets, config maps, tools, or a build cache into a pod. Tekton workspaces work similarly to Kubernetes volumes, and this also applies to their configuration.

The binding of a workspace to an actual volume is done in the PipelineRun, the TaskRun, or in a TriggerTemplate.

Configuring a workspace is very similar to configuring a Kubernetes volume. The example below creates a workspace that is used to share the Dockerfile and associated resources between the pod that clones the repository and the pod that builds and uploads the image. In Tekton, volumeClaimTemplates are used to create a PersistentVolumeClaim and its associated volume when executing a TaskRun or PipelineRun (Tekton Workspaces, n.d.). The further configuration of the workspace is similar to that of a PersistentVolumeClaim in Kubernetes. The accessMode specifies how and which pods have access to a volume; ReadWriteOnce means that pods on the same node have read and write access to the volume. The storage size in this configuration is one gigabyte.
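A minimal sketch of such a volumeClaimTemplate-backed workspace in a PipelineRun (the workspace name is illustrative):

workspaces:
- name: shared-data
  volumeClaimTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi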

Of course, the steps to clone the repository and to build and upload the container image to a registry require appropriate permissions. These can be provided via the following two options:

  • First, the corresponding Kubernetes secrets with the credentials are mounted into the pod via a workspace.
  • Second, authentication is implemented via a Kubernetes service account. The mounted volume is a Kubernetes secret volume. The data in this volume is read-only and is held in the container's memory via the temporary file system (tmpfs), making the volume volatile. Secrets can be specified under workspaces in the YAML configuration as follows.
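A minimal sketch of such a secret-backed workspace binding (using the git-credentials secret created in the Authentication section below):

workspaces:
- name: git-credentials
  secret:
    secretName: git-credentials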

Tekton can also isolate workspaces. This helps to make data accessible only for certain steps in a task or sidecars. However, this option is still an alpha feature and therefore cannot (yet) be used.

Secret Management

Kubernetes secrets are not encrypted by default, only base64-encoded. This means that anyone with appropriate permissions can access the secrets via the cluster or the etcd store. It should also be noted that anyone who has the rights to create a pod has read access to the secrets in the corresponding namespace.
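This is easy to demonstrate: anyone who may read a secret can decode it immediately (secret name and namespace as used later in this post):

kubectl get secret dockerconfig -n tekton -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d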

Kubernetes offers two ways to deal with this problem. Option one is encrypting the secrets at rest in the etcd store. This means that the secrets are still kept within Kubernetes.

Option two involves the utilization of third-party applications and the Container Storage Interface (CSI) driver. In this case, secrets are not managed directly by Kubernetes and are therefore not on the cluster.

One popular tool for the second approach is HashiCorp Vault. Like comparable tools, Vault follows the just-in-time access approach: a system gets access to a secret only for a specific time and as needed. This approach reduces the blast radius if the build system is compromised.

In addition, this minimizes the configuration effort, because extra Role-Based Access Control (RBAC) rules for secrets, for example in the namespaces for development, test, and production, do not have to be created, and the secrets do not have to be stored in each of these namespaces.

The Secrets Store CSI Driver makes it possible to mount secrets from Vault into pods. To tell the CSI driver which secrets should be mounted from the provider, SecretProviderClass objects are configured; these are custom resources in Kubernetes. When a pod is started, the driver communicates with the provider to obtain the information specified in the SecretProviderClass.
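A rough sketch of such a SecretProviderClass for the Vault provider (address, role, and secret paths are illustrative):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-registry-creds
  namespace: tekton
spec:
  provider: vault
  parameters:
    vaultAddress: https://vault.example.com:8200
    roleName: tekton-builds
    objects: |
      - objectName: registry-password
        secretPath: secret/data/registry
        secretKey: password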

Authentication

In the following two use cases Tekton needs secrets for authentication:

  • Authentication against Git (for example cloning)
  • Authentication against the Container Registry

As described in the last blog post in the PipelineRun example, secrets can be mounted. The following examples show how to create those secrets:
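A minimal sketch of the two manifests, following Tekton's documented annotation scheme (tekton.dev/git-0 tells Tekton which host the credentials belong to; the names git-credentials and dockerconfig match those referenced below, while the namespace and placeholder values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  namespace: tekton
  annotations:
    tekton.dev/git-0: https://gitlab.com
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <personal-access-token>
---
apiVersion: v1
kind: Secret
metadata:
  name: dockerconfig
  namespace: tekton
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded config.json>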

Both manifest files can be created via the kubectl command (kubectl apply -f <file>).

If a config.json file does not yet exist, it has to be generated first. To do this, log in to the desired registry via Docker:

docker login registry.gitlab.com

Within the dockerconfig secret, the credentials from the Docker config.json must be specified base64-encoded:

cat ~/.docker/config.json | base64

It is important to ensure that the login does not happen via Docker Desktop, because then the "credsStore": "desktop" field is included in the config.json. It must be ensured that the config.json has the following format:

{
  "auths": {
    "registry.gitlab.com": {
      "auth": ""
    }
  }
}

Furthermore, the secrets can be added to the ServiceAccount, which is specified via the serviceAccountName field.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
  namespace: tekton
secrets:
  - name: git-credentials
  - name: dockerconfig

If the credentials are not provided via the ServiceAccount, they must be defined in the pipeline run under the pod template.

podTemplate:
    securityContext:
      fsGroup: 65532
    imagePullSecrets:
    - name: dockerconfig
    - name: git-credentials

After the pipelinerun.yaml has been configured, it can be executed:

kubectl create -f pipelinerun.yaml

Pipeline run logs can be viewed using the tkn command line tool:

tkn pr logs clone-read-run- -f -n tekton

After the pipeline has run through, you can check whether it has been signed and attested:

kubectl get tr [TASKRUN_NAME] -o json | jq -r .metadata.annotations
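If Tekton Chains processed the task run successfully, the annotations should include chains.tekton.dev/signed with the value "true" (output abbreviated; the exact set of annotations depends on the configuration):

{
  "chains.tekton.dev/signed": "true",
  ...
}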

Part 2: Supply Chain – Frameworks & Tools

Frameworks

Secure Software Development Framework

The Secure Software Development Framework (SSDF) is a framework published by the National Institute of Standards and Technology (NIST) and comprises software development practices, based on established security practices, that make the software development life cycle more secure. The SSDF provides a set of software development practices that can be incorporated into an existing software development lifecycle.

In-toto Attestation Framework

The aim of the in-toto Attestation Framework is to define a uniform and flexible standard for software attestation. By using so-called predicates within the attestation definition, different information can be represented. Examples of this are the SLSA Provenance predicate and predicates for the two SBOM formats Software Package Data Exchange (SPDX) and CycloneDX.

Supply-chain Levels for Software Artifacts

The SLSA framework is an incrementally implementable framework that serves software supply chain security.

Tools

Notary V2

Docker began development on Notary in 2015 and worked on it until the project was handed over to the Cloud Native Computing Foundation in 2017. Notary's functionality includes signing and validating Open Container Initiative (OCI) artifacts. Signing and validation are done using public and private keys: the public key is stored in a container registry and the artifacts are signed with the private key. The artifact with its signature is then uploaded to the registry. The authenticity of an artifact can be verified using the public key from the registry.

Sigstore

Sigstore is an open source project of the Linux Foundation with support from many companies such as Google and Red Hat. Sigstore simplifies the signing and attestation of artifacts and the associated distribution of signatures and attestations. Sigstore mainly consists of three technologies: Cosign, Fulcio, and Rekor.

Cosign is a command-line tool that is responsible for signing and verifying software artifacts. Cosign also supports in-toto attestation, making it SLSA-compliant. Another feature of Cosign is its keyless signing mode, which uses another technology from the Sigstore project: Fulcio. Fulcio is a code-signing certificate authority that generates short-lived certificates. The advantage of this approach is that developers do not have to worry about key and certificate management themselves. The identity of the signer is ensured by the OpenID Connect (OIDC) protocol. For example, when Cosign makes a request to Fulcio to obtain a short-lived certificate, the user must log in with their GitHub or Google account to authenticate. The user's identity is stored within the certificate.

Signatures should be verifiable by everyone, hence they are stored in a central location called Rekor. Rekor is a transparency log to which digital signatures are appended. Entries can only be appended, never deleted or changed. To ensure this, Rekor uses Trillian, which in turn is based on Merkle trees. In practice, the workflow with Sigstore looks as follows:

Developers can request a certificate from the Fulcio certificate authority; authentication is done with OpenID Connect. Developers can then publish the signed artifact as well as the signing certificate. On the consumer side, developers can find and download artifacts and check their signatures in the transparency log.

Part 1: Supply Chain – Overview

What is a Supply Chain?

Software supply chains are comparable to supply chains in the real world. Very few companies, whether food or automobile manufacturers, produce all the components required for the end product themselves, and the same holds for software. In addition, hardly any software contains no open source code: according to the "Open Source Security and Risk Analysis 2023" by Synopsys, 96% of the 1,703 software products analyzed contain open source software, and according to the "Octoverse Report 2022" from GitHub, 90% of all companies use open source software.

The list of open source software is long and covers pretty much every area. Programming languages such as Python, Java, or JavaScript and associated frameworks such as Django, Spring, or Angular are widely used for software development. Linux operating systems are widespread, and container technologies, Kubernetes, and Kubernetes-native applications are common in software development.

Risks and dangers

Risks and dangers can occur at any step of software development, whether at the beginning or end of the supply chain or in between. Attacks rarely target the entire supply chain, but rather focus on individual steps within the chain. Attack vectors can be categorized into three areas:

  • The first area is the source code itself. This includes the version management system, for example GitHub or GitLab. These attacks are usually carried out by people within an organization and involve making unauthorized changes to the source code or administrative changes to the management system or its infrastructure. The consequence is that the build process uses modified source code. This is what happened during the SushiSwap attack. SushiSwap is an open source, decentralized cryptocurrency platform. A user published an unauthorized Git commit with the aim of introducing malicious code into the system. This resulted in the theft of $3 million.
  • The second area targets software dependencies that are used during the build. If incorrect or modified source files are included in the build process, the built artifacts are compromised. An example of this was the event-stream backdoor. In 2018, the malicious package "flatmap-stream" was released and subsequently added as a dependency of the Node.js package "event-stream". In the end, there were over 8 million downloads of the infected package.
  • The third area includes all kinds of attacks where an attacker manages to change packages without making any changes to the source code or dependencies. This affects the build system and process as well as the package repository. An example of this kind of attack on a build system was the attack on the monitoring software "Orion" from SolarWinds. The attackers were able to introduce a backdoor into the software build cycle, which was then rolled out to over 30,000 customers via a SolarWinds update.

Another attack that does not target a source code repository itself, but can be categorized in this area and can occur throughout the entire software development process, is so-called "typosquatting". This attack is particularly easy to carry out and targets a developer's carelessness: it involves uploading software packages to a package manager with names very similar to those of existing software packages. Due to the similarity of the names, the two packages can be confused, so that instead of the intended package the malicious one is downloaded.