Supply Chain Series

Exploring Tekton Supply Chain for Cloud Native Development

This blog series offers an in-depth look at Tekton Supply Chain, a key tool for automating CI/CD workflows in cloud-native environments. Tekton, designed for Kubernetes, streamlines the development process by facilitating the build, test, and deployment of applications across diverse cloud infrastructures. Throughout this series, we will dissect Tekton’s components, including Pipelines and Triggers, demonstrating their role in enhancing efficiency and scalability in software delivery. Aimed at developers, DevOps professionals, and cloud technology enthusiasts, these posts will provide practical insights and strategies for optimizing cloud-native application development with Tekton.

Part 1: Overview

Part 2: Frameworks & Tools

Part 3: Introducing the SLSA framework

Part 4: SLSA Levels & Tracks

Part 5: SLSA Attestation

Part 6: Introduction to Tekton

Part 7: How to work with Tekton

Part 8: Tasks and Pipelines

Part 9: Workspaces and Secrets

Part 10: Configuring Tekton Chains and Tekton Dashboard

Part 10: Supply Chain – Configuring Tekton Chains and Tekton Dashboard


This is the last blog post of our Tekton tutorial. In the previous blog posts, we discussed how to install and configure Tekton. Now we want to briefly discuss how to configure Tekton Chains and the Tekton Dashboard.

Tekton Chains

As discussed, an important requirement of the SLSA framework is attestation. Tekton provides Tekton Chains for that, which is part of the Tekton ecosystem.

In addition to generating attestations, Tekton Chains makes it possible to sign task run results with, for example, X.509 certificates or a KMS, and to store the signature in one of numerous storage backends. Normally the attestation is stored together with the artifact in the OCI registry. It is also possible to define a dedicated storage location independent of the artifact. Alternative storage options include Sigstore’s Rekor server, document stores such as Firestore, DynamoDB and MongoDB, and Grafeas.

An important aspect of Tekton Chains is the ability to integrate the Sigstore project, in particular Fulcio and Rekor, which were explained in more detail earlier in this series. SLSA’s requirements for provenance (which can be met by Rekor and Fulcio) are that keys must be stored in a secure location and that there is no possibility for the tenant to subsequently change the attestation in any way. Although key management via a KMS is just as valid as using Fulcio, and both solutions would meet the requirements of SLSA, Rekor in particular satisfies the requirement of immutability. As already mentioned, Rekor’s core is based on Merkle trees, which make deletion or modification impossible. Both Fulcio and Rekor represent an important trust connection between producer and consumer through the services provided by Sigstore.

Tekton Chains offers the advantage that neither the signature nor the attestation needs to be provided through custom steps in the pipeline itself. For example, even if a developer integrates Cosign into a task to sign an image, Chains works regardless. The only requirement in the pipeline is the use of so-called ‘Results’. These allow Tekton to clearly communicate which artifacts should be signed. Results have two areas of application: a result can be passed through the pipeline into the parameters of another task or into the ‘when’ functionality, or the data of a result can be output.

The data output by results serves the user as a source of information, for example about which digest a built container image has or which commit SHA a cloned repository has. The Tekton Chains controller uses results to determine which artifacts should be attested. The controller searches for results of the individual tasks ending in “*IMAGE_URL” and “*IMAGE_DIGEST”, where IMAGE_URL is the URL to the artifact, IMAGE_DIGEST is the digest of the artifact, and the asterisk stands for an arbitrary name.
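As a sketch, a task that exposes such results for Chains could look like this (the task name, image and values are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  results:
    # The *IMAGE_URL / *IMAGE_DIGEST suffixes tell the Chains
    # controller which artifact to attest; "APP" is an arbitrary name.
    - name: APP_IMAGE_URL
      description: URL of the built image
    - name: APP_IMAGE_DIGEST
      description: Digest of the built image
  steps:
    - name: report
      image: alpine
      script: |
        # In a real task, the build step would write the actual values:
        echo -n "registry.example.com/app" > $(results.APP_IMAGE_URL.path)
        echo -n "sha256:..." > $(results.APP_IMAGE_DIGEST.path)
```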

Tekton Dashboard

The Tekton dashboard is a powerful tool that makes managing Tekton easier. The dashboard can be used in two modes: read only mode or read/write mode.

The dashboard’s authorizations can be configured by means of service accounts, as is typical for Kubernetes. However, this poses a problem because the dashboard itself does not come with authentication or authorization in either mode. There are no options for regulating the dashboard’s permissions per user through RBAC: RBAC would only apply to the dashboard’s ServiceAccount, not to individual users. In practice, this means that all authorizations of the Tekton Dashboard service account also apply to anyone accessing the dashboard. This is a big problem, especially if the dashboard is publicly accessible.

Kubernetes does not have native management of users because, unlike service accounts, they are not manageable objects of the API server. For example, it is not possible to regulate authentication via user name and password. However, there are several authentication methods that use certificates, bearer tokens or an authentication proxy.

Two of these methods can be used to secure the Tekton dashboard: OIDC tokens on the one hand and Kubernetes user impersonation on the other.

OIDC (OpenID Connect) is an extension of the Open Authorization 2.0 (OAuth2) framework. OAuth2 is an authorization framework that allows an application to carry out actions or access data on behalf of a user without having to handle the user’s credentials. OIDC extends OAuth 2.0 by standardizing user authentication and the provision of user information.

Kubernetes user impersonation allows a user to act as another user and thereby gain all the rights of the user they are impersonating. Kubernetes achieves this through impersonation headers: when a request is made to the Kubernetes API server, the user information of the authenticated user is replaced with that of the impersonated user after authentication has taken place.
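For this to work, the impersonating identity (for example the proxy’s ServiceAccount) needs the right to impersonate; a minimal ClusterRole for this could look as follows (the role name is an assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
  # The "impersonate" verb allows setting impersonation headers
  # for the listed resource kinds.
  - apiGroups: [""]
    resources: ["users", "groups", "serviceaccounts"]
    verbs: ["impersonate"]
```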

There are different tools to achieve this. One of these tools is OpenUnison from Tremolo Security. OpenUnison offers some advantages: it is possible to implement single sign-on (SSO) for graphical user interfaces and session-based access to Kubernetes via the command line interface. When using OpenUnison or similar technologies, communication no longer takes place directly with the Kubernetes API server, but runs via OpenUnison. OpenUnison uses Jetstack’s reverse proxy for OIDC.

When a user wants to access the Tekton dashboard, OpenUnison redirects the user to the configured Identity Provider (IDP). After the user has authenticated with the IDP, they receive an id_token. The id_token contains information about the authenticated user such as name, email, group membership and the token expiration time. The id_token is a JSON Web Token (JWT).

The reverse proxy uses the IDP’s public key to validate the id_token. After successful validation, the reverse proxy appends the impersonation headers to the request to the Kubernetes API server. The Kubernetes API server checks the impersonation headers to see whether the impersonated user has the appropriate permissions to execute the request. If so, the Kubernetes API server executes the request as the impersonated user. The reverse proxy then forwards the response it received from the Kubernetes API server to the user.

The following steps describe the configuration of the dashboard with OAuth2:

Create a namespace:

kubectl create ns consumerrbac

Installation of Cert Manager:

helm install \
  cert-manager jetstack/cert-manager \
  --namespace consumerrbac \
  --version v1.11.0 \
  --set installCRDs=true

In order to create certificates, an issuer is needed:


apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: consumerrbac
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

Now nginx can be installed:

helm install nginx-ingress ingress-nginx/ingress-nginx --namespace consumerrbac

Now, the ingress can be created.


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.consumerrbac.svc.cluster.local/oauth2/auth
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$host$request_uri
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: # <dashboard hostname>
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: tekton-dashboard
                port:
                  number: 9097

OAuth Proxy (we use Google and expect that the application has been created there):

  • In the Google Cloud dashboard, select APIs & Services
  • On the left, select Credentials
  • Press CREATE CREDENTIALS and select OAuth client ID
  • For Application Type, select Web application
  • Give the app a name and enter Authorized JavaScript origins and Authorized redirect URIs
  • Click Create and remember Client ID and Client Secret
  • A values.yaml must be created for the installation.





# values.yaml for the oauth2-proxy chart (client and cookie values are placeholders)
config:
  clientID: <client-id>
  clientSecret: <client-secret>
  cookieSecret: <cookie-secret>
extraArgs:
  provider: google
  cookie-secure: 'false'
  cookie-refresh: 1h

Configuration of OpenUnison

Create a namespace:

kubectl create ns openunison

Add Helm repo:

helm repo add tremolo

helm repo update

Before OpenUnison can be deployed, OAuth must be configured in the Google Cloud.

  • In Credentials, under Apis & Services, click CREATE CREDENTIALS
  • Then on OAuth Client ID
  • Select Web application as the application type and then give it a name
  • Authorized JavaScript origins:

Now OpenUnison can be installed:

helm install openunison tremolo/openunison-operator –namespace openunison

Finally, OpenUnison has to be configured with the appropriate settings.

This concludes our series on Tekton. Hope you enjoyed it.

Part 9: Supply Chain – Workspaces and Secrets


As mentioned in the last blog post, the next things we want to discuss are authentication, workspaces and secrets. Let’s begin with workspaces.


As already mentioned, in Tekton each task runs in a pod. The concept of workspaces exists in Tekton so that pods can share data with each other. Workspaces can also help with other things: they can be used to mount secrets, config maps, tools or a build cache into a pod. Tekton workspaces work similarly to Kubernetes volumes. This also applies to their configuration.

The configuration of the workspace is done in the pipeline, task run or in a TriggerTemplate.

Configuring a workspace is very similar to configuring a Kubernetes volume. This example creates a workspace that is used to mount the Dockerfile and associated resources from the pod that clones the repository to the pod that builds and uploads the image. In Tekton, VolumeClaimTemplates are used to create a PersistentVolumeClaim and its associated volume when executing a TaskRun or PipelineRun. (Tekton Workspaces, n.d.) The further configuration of the workspace is similar to that of a PersistentVolumeClaim in Kubernetes. The accessMode specifies how and which pods have access to a volume. ReadWriteOnce means that the volume can be mounted read-write by a single node, so pods on that node have read and write access to it. The storage space size in this configuration is one gigabyte.
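A hedged sketch of such a workspace declaration in a PipelineRun (the workspace name is illustrative):

```yaml
workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce   # mounted read-write by a single node
        resources:
          requests:
            storage: 1Gi    # one gigabyte, as described above
```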

Of course, the steps to clone the repository and build and upload the container image to a registry require appropriate permissions. This can be done via the two following options:

  • First, the corresponding Kubernetes secrets with the credentials are mounted in the pod.
  • Second, authentication is implemented via a Kubernetes ServiceAccount. The mounted volume is a Kubernetes secret volume. The data in this volume is read-only and is held in the container’s memory via a temporary file system (tmpfs), making the volume volatile. Secrets can be specified under workspaces in the YAML configuration as follows.
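For the second option, a secret can be bound to a workspace roughly like this (the names are illustrative):

```yaml
workspaces:
  - name: git-credentials
    secret:
      secretName: git-credentials   # mounted read-only via tmpfs
```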

Tekton can also isolate workspaces. This helps to make data accessible only to certain steps in a task or to sidecars. However, this option is still an alpha feature and therefore should not (yet) be relied upon.

Secret Management

Kubernetes secrets are not encrypted by default, only encoded. This means that anyone with appropriate permissions can access the secrets via the cluster or the etcd store. It should also be noted that anyone who has the rights to create a pod has read access to the secrets in the corresponding namespace.

Kubernetes offers two ways to deal with this problem. Option one is the encrypting of secrets in the etcd store. This means that the secrets are still kept within Kubernetes.

Option two involves the utilization of third-party applications and the Container Storage Interface (CSI) driver. In this case, secrets are not managed directly by Kubernetes and are therefore not on the cluster.

One popular tool for the second approach is HashiCorp Vault. Like the other tools, Vault follows the just-in-time access approach: a system gets access to a secret only for a specific time and as needed. This approach reduces the blast radius if the build system is compromised.

In addition, this minimizes the configuration effort: extra Role-Based Access Control (RBAC) rules for secrets, for example in the namespaces for development, test and production, do not have to be created, and the secrets do not have to be stored redundantly in each of these namespaces.

The Secrets Store CSI Driver makes it possible to mount secrets from Vault into pods as CSI volumes. To tell the CSI driver which secrets should be mounted from the provider, SecretProviderClass objects are configured. These are custom resources in Kubernetes. When a pod is started, the driver communicates with the provider to obtain the information specified in the SecretProviderClass.
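A sketch of a SecretProviderClass for the Vault provider (the address, role, paths and names are assumptions):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"
    roleName: "app-role"
    objects: |
      # Which Vault secrets to mount, and under which file name
      - objectName: "db-password"
        secretPath: "secret/data/db"
        secretKey: "password"
```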


In the following two use cases Tekton needs secrets for authentication:

  • Authentication against Git (for example cloning)
  • Authentication against the Container Registry

As described in the last blog post within the PipelineRun example, Secrets can be mounted. The following examples show how to create those secrets:
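Hedged sketches of the two secrets (the key names follow the conventions of the git-clone and kaniko hub tasks; all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  namespace: tekton
data:
  # base64-encoded Git credential helper files
  .git-credentials: <base64-encoded credentials line>
  .gitconfig: <base64-encoded git config>
---
apiVersion: v1
kind: Secret
metadata:
  name: dockerconfig
  namespace: tekton
data:
  # base64-encoded Docker config.json for registry authentication
  config.json: <base64-encoded Docker config.json>
```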

Both manifest files can be created via the kubectl command.

If a config.json file does not yet exist, you have to generate it first. To do this, you must log in to the desired registry via docker.

docker login

Within the secret, the credentials from the Docker config.json must be specified base64-encoded.

cat ~/.docker/config.json | base64

It is important to ensure that this does not happen via Docker Desktop, because then the "credsStore": "desktop" field is included in the config.json. It must be ensured that the config.json has the following format:


{
  "auths": {
    "": {
      "auth": ""
    }
  }
}
Furthermore, the secrets can be added to the ServiceAccount, which is specified via the serviceAccountName field.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
  namespace: tekton
secrets:
  - name: git-credentials
  - name: dockerconfig

If the credentials are not provided via the ServiceAccount, they must be defined in the pipeline run under the pod template.

  podTemplate:
    securityContext:
      fsGroup: 65532
  workspaces:
    - name: dockerconfig
      secret:
        secretName: dockerconfig
    - name: git-credentials
      secret:
        secretName: git-credentials

After the pipelinerun.yaml has been configured, it can be executed:

kubectl create -f pipelinerun.yaml

Pipeline run logs can be viewed using the tkn command line tool:

tkn pr logs clone-read-run- -f -n tekton

After the pipeline has run through, you can check whether the artifact has been signed and attested:

kubectl get tr [TASKRUN_NAME] -o json | jq -r .metadata.annotations

Part 8: Supply Chain – Tasks and Pipelines


Now it is time to gain a better understanding of tasks and pipelines. Before we create a pipeline, let’s first create a Tekton namespace:

kubectl create ns tekton

In Tekton, a pipeline can consist of one or more tasks, which can be executed one after the other or in parallel with one another.

The pipeline includes the fetch-source, show-content and build-push tasks. fetch-source clones the repo in which the Dockerfile is located, and build-push builds the image and uploads it to a registry. The show-content task displays the artifacts obtained through fetch-source.

One of the biggest advantages of Tekton is that not all tasks and pipelines need to be written from scratch, because Tekton provides the Tekton Hub, where users can share their tasks and pipelines with each other. Two tasks from the Tekton Hub are used in this example.

The first task, called git-clone, clones a Git repository and saves the data to a workspace. Workspaces will be discussed in more detail later.
The second task, also originating from the Tekton Hub, builds an image and uploads it to a container registry of your choice. The task uses Kaniko to build the image. It also saves the name and digest of the image in a result so that Tekton Chains can sign the image and create an attestation. Tekton Chains and results will be discussed at a later point.

The example of the “git-clone” task shows the name the task has in the context of the pipeline. The “taskRef” field is used to reference the individual tasks, in this case git-clone. You can also define here which parameters and workspaces should be passed to the task.
The “url” parameter of the task is assigned the “repo-url” parameter of the pipeline. The names of the parameters can differ between pipeline and task. The notation $(params.repo-url) refers to the parameter in the “params” field. Parameters that come from a TaskRun or PipelineRun are set in this field.
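Putting the pieces together, the pipeline could be sketched roughly as follows (parameter, task and workspace names are illustrative and should be checked against the hub tasks):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-build-push
  namespace: tekton
spec:
  params:
    - name: repo-url
      type: string
    - name: image-reference
      type: string
  workspaces:
    - name: shared-data
    - name: dockerconfig
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone            # from the Tekton Hub
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output
          workspace: shared-data
    - name: show-content
      runAfter: ["fetch-source"]
      taskRef:
        name: show-readme          # the custom task described above
      workspaces:
        - name: source
          workspace: shared-data
    - name: build-push
      runAfter: ["fetch-source"]
      taskRef:
        name: kaniko               # from the Tekton Hub
      params:
        - name: IMAGE
          value: $(params.image-reference)
      workspaces:
        - name: source
          workspace: shared-data
        - name: dockerconfig
          workspace: dockerconfig
```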

In order to use those tasks, we must not forget to install them. Here is the first task:

It can be applied either via tkn or kubectl apply:

tkn hub install task git-clone

kubectl apply -f

The second task can also be found on Tekton Hub and can be installed as follows:

tkn hub install task kaniko

kubectl apply -f

If you have trouble with the builder image, you can change the appropriate section as follows:

The last task is the show-readme.

To apply the pipeline, we just enter the following command:

kubectl apply -f pipeline.yaml

Installation of Tekton Chains

Since Chains is not installed via the operator, it must be installed separately:

kubectl apply --filename

If you prefer to install a specific version, you can issue the following command:

kubectl apply -f${VERSION}/release.yaml

After installation, the configmap can be configured.

Configuration in the manifest:

kubectl patch configmap chains-config -n tekton-chains -p='{"data":{"artifacts.taskrun.format":"in-toto", "": "tekton, oci", "": "tekton, oci"}}'

Furthermore, the keyless signing mode can be activated, which uses Fulcio from the Sigstore project.

kubectl patch configmap chains-config -n tekton-chains -p='{"data":{"signers.x509.fulcio.enabled": "true"}}'

Chains supports automatic binary uploads to a transparency log and uses Rekor by default. If activated, all signatures and attestations are logged.

kubectl patch configmap chains-config -n tekton-chains -p='{"data":{"transparency.enabled": "true"}}'

After the ConfigMap has been patched, it is recommended to delete the Chains pod so that the changes are picked up when the pod restarts:

kubectl delete po -n tekton-chains -l app=tekton-chains-controller

Pipeline Runs

A run, whether TaskRun or PipelineRun, is instantiated to execute pipelines and tasks. When a PipelineRun is executed, TaskRuns are automatically created for the individual tasks. Among other things, the run references the pipeline that is to be executed, defines the parameters that are to be used in a task, and sets the pod template. The template acts as a blueprint for the executed pods: for example, environment variables can be provided for each pod, and scheduling can be configured via nodeSelectors, tolerations and affinities.

After everything has been configured, the pipeline run can be executed.

Within the param section, you can specify the git repo for cloning and also choose where the image should be uploaded to.
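A matching PipelineRun might then look like this (a sketch; all values are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: clone-read-run-
  namespace: tekton
spec:
  pipelineRef:
    name: clone-build-push
  params:
    - name: repo-url
      value: https://github.com/<org>/<repo>.git
    - name: image-reference
      value: <registry>/<image>:latest
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```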

So that’s it for today. In the next blog post, we will discuss authentication, workspaces, secrets and more. So there is still a lot of interesting stuff left.

Part 7: Supply Chain – How to work with Tekton

How to work with Tekton

In the last blog post, we briefly discussed what Tekton is and how an installation can take place. In this blog post, we go a step further and show how to work with Tekton when we want to build a supply chain.

Let’s start with the installation first:

The components required for Tekton are installed via the Tekton Operator. This can be done in three different ways:

  • Via the Operator Lifecycle Manager
  • With a release file
  • By code

We chose the installation via the release file. However, there is one disadvantage: you have to take care of lifecycle management yourself:

kubectl apply -f

Now the Tekton CRDs have been installed. Later on, we will show how to configure Tekton via a TektonConfig file, but let’s discuss some theory first.

As already mentioned, the provenance must be immutable in order to reach build level three. This assumes that the user-created build steps have no ability to inject code into the source or modify the content in any way that is not intended. Therefore, we have to take care of the Tekton pipeline. In detail this means:

  • Artifacts must be managed in a version management system.
  • When working and changing artifacts, the identities of the actors must be clear. The identity of the person who made the changes and uploaded the changes to the system and those of the person who approved the changes must be identifiable.
  • All actors involved must be verified using two-factor verification or similar authentication mechanisms.
  • Every change must be reviewed by another person before, for example, a branch can be merged into git.
  • All changes to an artifact must be traceable through a change history. This includes the identities of the people involved, the timestamp of the change, a review, a description and justification for the change, the content of the change and the higher-level revisions.
  • Furthermore, the version and change history must be stored permanently and deletion must be impossible unless there is a clear and transparent policy for deletion, for example based on legal or political requirements. In addition, it should not be possible to change the history.

In order to ensure that the security chain is not interrupted, Tekton provides the option of resolvers.

Basically, a Tekton resolver is a component within the Tekton Pipelines. In the context of Tekton, a resolver is responsible for handling “references” to external sources. These external sources can be anything from Git repositories to OCI images, among others.

Tekton uses resolvers to deploy Tekton resources as tasks and pipelines from remote sources. Hence, Tekton provides resolvers to access resources in Git repositories or OCI registries, for example. Resolvers can be used for both public and private repositories.

The configuration of the resolver can be divided into two parts:

  • The first part of the configuration can be found in a ConfigMap. Tekton uses the ConfigMap to store, among other things, default values such as the default URL to a repository, the default Git revision, the fetch timeout and the API token.
  • The second part is in the PipelineRun and TaskRun definition. Within the PipelineRun and TaskRun definition, the repository URL, the revision and the path to the pipeline or task are defined under the spec field.

These settings can be provided via a TektonConfig manifest.
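A hedged sketch of such a TektonConfig (a minimal example; check the operator documentation for the exact field names):

```yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all                       # install all available components
  targetNamespace: tekton-pipelines
  pipeline:
    enable-git-resolver: true        # turn on the git resolver
```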

The TektonConfig can easily be deployed:

kubectl apply -f tekton-config.yaml

One disadvantage of using resolvers is that the git-resolver-configmap configuration applies to the entire cluster. Only one API token can be specified within the configuration. This means that every user of a resolver has access to the same repositories, which would make multi-tenancy impossible.

Another disadvantage is that resolvers cannot exclude the possibility that resources not coming from a version control system may be used. To ensure that the resolvers are not bypassed, there is an option to sign resources. Policies can then check whether a resource has a valid signature. This ensures that only resources with the correct signature can be executed.

You can use the Tekton CLI tool to sign the resources.

The CLI supports signing key files with the format Elliptic Curve Digital Signature Algorithm (ecdsa), Edwards-curve 25519 (Ed25519) and Rivest-Shamir-Adleman (RSA) or a KMS including Google Cloud Platform (GCP), Amazon Web Services (AWS), Vault and Azure.

The verification of the signatures is done via policies. Filters and keys can be defined in a policy. The filters are used to define the repositories from where the pipelines and tasks can come from. Keys are used to verify the signature.

When evaluating whether one or more policies are applicable, the filters check whether the source URL matches one of the specified patterns. If one or more filters apply, the corresponding policies are used for further review. If multiple filters apply, the resource must pass validation by all matching policies. The filters are specified as regular expressions (regex).

After filtering, the signature verification of the resources is carried out using keys. These keys can be specified in three different ways:

  • As a Kubernetes secret
  • As an encoded string,
  • Or via a KMS system 

The policies have three operating modes: ignore, warn, and fail. In “ignore” mode, a mismatch with the policy is ignored and the Tekton resource is still executed. In “warn” mode, if a mismatch occurs, a warning is generated in the logs, but the run continues to execute. In “fail” mode, the run will not start if no suitable policy is found or the resource does not pass a check.
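A hedged sketch of such a policy (the VerificationPolicy API was still alpha at the time of writing; field names and accepted mode values should be verified against the current documentation):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: trusted-repos
  namespace: tekton-pipelines
spec:
  resources:
    # regex filter: which source URLs this policy applies to
    - pattern: "https://github.com/example-org/.*"
  authorities:
    # key for signature verification, here via a Kubernetes secret
    - name: internal-key
      key:
        secretRef:
          name: verification-secrets
          namespace: tekton-pipelines
  mode: warn   # one of the operating modes described above
```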

That’s it for today. In the next part, we will talk about Pipelines and Tasks.

Part 6: Supply Chain – Introduction to Tekton

What is Tekton?

Tekton is an open source CI/CD tool and one of the founding projects of the Continuous Delivery Foundation. Kubernetes is used as the underlying platform for Tekton and is extended by means of Custom Resource Definitions (CRDs). These act as an extension of the Kubernetes Application Programming Interface (API). This way, Tekton integrates into Kubernetes and can be used like any other Kubernetes resource.

The smallest unit in the Tekton workflow is a step, and each step is executed in its own container. Several steps taken together are called a task. Each task runs in a Kubernetes pod. This means that the containers in the pod share the same resources, such as volumes. If steps need to run in multiple pods, or for advanced automation requirements, multiple tasks can be combined into pipelines. Pipelines are formed from several tasks. In addition, the individual components are reusable: steps can be integrated into different tasks, and tasks into different pipelines.
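As a small sketch of these units, here is a task with two steps, each running in its own container inside the same pod (name, images and commands are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: greet             # first step, own container
      image: alpine
      script: echo "Hello from step one"
    - name: list-workspace    # second step, shares the pod's volumes
      image: alpine
      script: ls /workspace
```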

The concept of steps, tasks and pipelines is depicted in the following picture.

The actual execution of tasks and pipelines is managed by two resources: TaskRun and PipelineRun. The TaskRun is responsible for managing the execution of tasks, while the PipelineRun monitors and controls the execution of pipelines. Furthermore, the run resources determine when tasks and pipelines should be executed. These can be triggered at a specific time or by an event. Runs represent the link between resources and tasks and pipelines to ensure the modular structure. The parameters required for the steps are passed to the pipelines and tasks through the runs.


As with many other Kubernetes applications, operators prove indispensable for Tekton and help ensure that the actual state of the Tekton application matches the desired state. Through their specialized knowledge, operators extend the functions of the Kubernetes API to automate applications.

The Tekton Operator is responsible for installing Tekton and managing the individual Tekton resources, including the Tekton pipeline and trigger resources. The resources are managed via the TektonConfig custom resource. The scope of manageable resources depends on the platform on which Tekton is deployed. For example, on Kubernetes it is possible to manage the Tekton Dashboard via TektonConfig but not on OpenShift, and vice versa, TektonAddons can be managed on OpenShift but not on Kubernetes. This is reflected in the profiles.

There are three profiles to choose from in the Tekton operator version Operator-v0.67.0:

  • The “All” profile installs all resources available to the platform. As already mentioned, the dashboard is deployed for Kubernetes in addition to the resources and the TektonAddon is deployed on OpenShift.
  • The “Basic” profile installs the components TektonPipeline, TektonTrigger and TektonChain.
  • The third profile, “Lite”, only installs the TektonPipelines resource.

Part 5: Supply Chain – SLSA Attestation


After having explained the basics of the SLSA framework, we want to give some insights into SLSA attestations now.

What is an attestation? An attestation is a way to generate authenticated metadata about an artifact. This makes it possible for a consumer of software to find out how it was built, who built it and which build system it was built with. Attestations also provide the option of having metadata verified by a policy engine, such as in-toto or Binary Authorization. For this purpose, the SLSA framework offers the slsa-verifier, which verifies the SLSA provenance format.

General Model

The SLSA framework defines a general model according to which an attestation should be built.

The figure shows the general SLSA model, which consists of four main components: the bundle, the envelope, the statement and the predicate. The outermost component of the model is the bundle, which describes a bundle of several attestations. Different attestations from different points in the software supply chain can be bundled in one place. These include, for example, vulnerability scans, the build process and the artifact. Combining the attestations in a bundle makes it easier for the software consumer to make well-founded decisions.


The actual attestation of an artifact is located within the envelope, which in turn consists of two components: The signature and the message. The signature contains information about the issuer of the attestation and the message contains further information. The third component is the statement. The statement binds the attestation to a specific artifact and must contain at least one subject and one predicate. The subject indicates which artifacts the predicate applies to. The Predicate is the innermost part of an attestation and contains information about the subject. The SLSA framework does not contain precise definitions regarding the scope of information that can be included in an attestation. However, links are suggested. The links serve to represent a hypergraph. In this hypergraph, the artifacts are represented as nodes and the attestations as hyperedges. The links are intended to enable predicate-agnostic processing of the graph.
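This layering follows the in-toto attestation format; a skeleton statement could look like this (all values are placeholders):

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "registry.example.com/app",
      "digest": { "sha256": "<artifact-digest>" }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": { "id": "<builder-id>" },
    "buildType": "<build-type>"
  }
}
```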

Furthermore, the SLSA framework defines that there should be a central storage in which the attestations are stored and retrieved and can be viewed by a consumer.


Provenance Predicate

The Provenance predicate provides verifiable information about a software artifact: where, when and by which process the artifact was created. This gives software users confidence that the artifact meets expectations and can be reproduced independently if necessary.
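As an illustration, a SLSA v1 provenance predicate could look roughly like this. The `buildDefinition`/`runDetails` split follows the published SLSA v1 schema, while the repository and builder identifiers are invented:

```python
# Sketch of a SLSA v1 provenance predicate: where, when, and how an
# artifact was built. All URIs and identifiers below are made up.
provenance_predicate = {
    "buildDefinition": {
        "buildType": "https://example.com/build-types/generic@v1",
        "externalParameters": {          # tenant-supplied inputs
            "repository": "https://github.com/example/app",
            "ref": "refs/heads/main",
        },
        "resolvedDependencies": [        # what the build fetched
            {"uri": "git+https://github.com/example/app@refs/heads/main"}
        ],
    },
    "runDetails": {
        "builder": {"id": "https://example.com/builders/ci@v1"},
        "metadata": {"invocationId": "build-1234"},
    },
}
```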

Verification Summary Attestation (VSA)

The VSA describes at which SLSA level an artifact or group of artifacts was verified and other details about the verification process, including the SLSA level at which the dependencies were verified. This makes it easier to decide whether to use an artifact. This means that a consumer does not have to include all attestations of an artifact and the attestations of their dependencies in the decision-making process. It also allows software producers to keep details of their build process confidential while still being able to communicate that certain verifications have been carried out. The model is based on the existence of a verifier, which has a position of trust with the consumer. The verifier verifies the artifact and the associated attestations against a policy and then generates the VSA.
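A VSA predicate might look roughly like the following sketch; the field names follow the SLSA VSA specification, and all values (verifier, policy, resource URI) are invented:

```python
# Sketch of a Verification Summary Attestation (VSA) predicate:
# the trusted verifier states which SLSA level the artifact was verified
# at, without exposing the producer's full build details.
vsa_predicate = {
    "verifier": {"id": "https://example.com/verifier"},
    "timeVerified": "2024-01-01T00:00:00Z",
    "resourceUri": "pkg:oci/app",                # artifact being vouched for
    "policy": {"uri": "https://example.com/policy/v1"},
    "verificationResult": "PASSED",
    "verifiedLevels": ["SLSA_BUILD_LEVEL_2"],
    # summary of the dependencies' levels: 3 deps verified at level 1
    "dependencyLevels": {"SLSA_BUILD_LEVEL_1": 3},
}
```

A consumer who trusts the verifier only needs to check this one attestation instead of the whole attestation bundle.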

Part 4: Supply Chain – SLSA Levels & Tracks


In the last blog post, we talked about the SLSA terminology. Now it is time to focus on SLSA levels and tracks.

Within the SLSA framework, there are levels and tracks. Together they make it possible to incrementally harden and improve different areas of the supply chain. Tracks focus on individual sub-areas of the supply chain, for example the build or provenance area, which allows each area to be hardened independently. The levels within a track indicate the quality of the hardening within that area, and each level defines requirements that must be met in order to reach it. In practice, it is possible to be at level 3 in the build track while only reaching level 1 in the source track. The levels are designed in such a way that the implementation cost increases with each level: level 1 is basically easy to implement, whereas level 3 is difficult to achieve.

Currently, levels 0 to 3 and the build track are available. The lowest level, 0, means that no requirements of the framework have been implemented. As already mentioned, each track defines different requirements for each level. In the following we will focus on the build track.

The build track focuses on the provenance and verifiability of artifacts. It is about defining how an artifact was built, who built it, what process built it, and what it contains. This gives a consumer of software the certainty that the artifact they want to use is what it claims to be.


  • Level 1 focuses on documentation and error prevention. In this case, documentation means that the build process is documented by provenance data. This makes it easier to trace errors, for example if an incorrect source version was used. The SLSA framework specifies certain requirements for the build process and provenance data: the build process must be consistent, and the provenance data, which contains information about how the artifact was created, must be generated by the build platform. This data includes information about who created the artifact, what build process was used, and what dependencies and parameters were used in the build. It is also important that the provenance data is accessible to consumers of the artifact. This ensures transparency and trustworthiness because the origin and construction of the artifact are traceable. Since level 1 is considered the start of the build track, it is easy for any company or development team to implement; few or no changes to the workflow are required. The provenance data does not have to be complete or signed, so at this level it is still possible to manipulate the provenance data.
  • Build Level 2 focuses on ensuring that an artifact can no longer be manipulated after the build. Unlike Level 1, the software build at Level 2 must be carried out on a hosted build platform. This means that it cannot be carried out on a developer’s laptop, for example. In order to exclude manipulation of the artifact after the build, the provenance data is generated and signed by the build platform itself. The signature prevents manipulation and reduces the attack surface.
  • The third level focuses on the build process itself to prevent manipulation during the build. This is mainly about hardening the build platform. The build platform must be architected in such a way that builds cannot influence each other. This includes, for example, that the key used to sign the provenance data is not accessible to the build scripts. Level 3 is the most complex to implement of the three levels, as entire workflows and system structures sometimes have to be restructured. However, risks are maximally reduced during the build, ensuring confidentiality, integrity and availability.
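The three levels above can be summarized in a small, purely illustrative decision function; this is our own sketch, not part of any SLSA tooling:

```python
def build_level(provenance: dict, signed: bool, hosted: bool, isolated: bool) -> int:
    """Rough, illustrative mapping of the SLSA build-track levels:
    level 1 requires provenance to exist, level 2 adds a signature
    produced on a hosted build platform, and level 3 adds an isolated,
    hardened build environment."""
    if not provenance:
        return 0            # no framework requirements implemented
    if not (signed and hosted):
        return 1            # provenance exists, but may be manipulated
    if not isolated:
        return 2            # tamper-evident after the build
    return 3                # tamper-resistant during the build as well

# Provenance exists and is signed on a hosted platform,
# but builds are not isolated from each other:
level = build_level({"builder": "ci"}, signed=True, hosted=True, isolated=False)
print(level)  # → 2
```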


Having described the importance of the individual levels, we will now discuss the technical requirements that must be met in order to reach the corresponding level.

The build track divides the requirements into two parts: the first part must be fulfilled by the producer, and the second part must be fulfilled by the build system itself.

The producer can be anyone who is responsible for creating an artifact and passing it on to third parties, for example an entire organization or a team of developers. The producer has three tasks: selecting the right build platform, defining the build process, and distributing the provenance data.

To choose the right build platform, it is crucial that the build platform is capable of achieving the desired SLSA level.

SLSA version 1 requires the build process to be consistent. This gives the verifier a clear understanding of how an artifact was created and makes it easier for others to assess the artifact’s quality, reliability and security. If a package is distributed through a package ecosystem that requires explicit metadata about the build process in the form of a configuration file, the producer must complete the configuration file and keep it up to date. This metadata can contain information about the artifact’s source code repository and build parameters.

The requirements for the build platform consist of two major parts: the generation of provenance data and isolation.

Regarding provenance generation, SLSA defines three points, each of which correlates with a corresponding SLSA level: to reach level 1 it is sufficient that this data exists, for level 2 the data must be authentic, and for level 3 it must be unchangeable.

Level 1 defines that at the end of a build process there is provenance data about the build process, and that this data must be uniquely identifiable by a cryptographic digest. The recommended format for provenance data is SLSA Provenance. Other formats can be used, but they must carry the same amount of information and should be bidirectionally translatable into SLSA Provenance.
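Computing such a cryptographic digest is straightforward; a minimal sketch in Python:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest that uniquely identifies an
    immutable artifact or provenance document."""
    return hashlib.sha256(data).hexdigest()

# Any change to the bytes yields a completely different digest,
# which is what makes the digest usable as a unique identifier.
artifact = b"example artifact bytes"
print("sha256:" + sha256_digest(artifact))
```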

To reach level 2, the provenance data must be generated by the build service’s control plane; unlike level 1, where it did not matter who or what created the provenance data. There are two exceptions: the subject field and fields marked as not necessary for level 2 may be generated by a tenant of the build system. The provenance data must be authentic, and its authenticity must be verifiable. This is achieved by signing the provenance data with a private key, which may only be accessible to the build platform and not to the tenant.
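The signing step can be sketched as follows. Note that this is illustrative only: a real build platform would sign with an asymmetric key, typically held in a KMS, whereas this sketch uses an HMAC as a stand-in so it runs without extra dependencies:

```python
import hashlib
import hmac
import json

# HMAC stands in for a real asymmetric signature here. The essential
# property is the same: the key is known only to the build platform,
# never to the tenant's build steps.
PLATFORM_KEY = b"platform-only-secret"

def sign_provenance(provenance: dict) -> dict:
    """Serialize the provenance deterministically and attach a signature."""
    payload = json.dumps(provenance, sort_keys=True).encode()
    sig = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify_provenance(envelope: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(PLATFORM_KEY, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

envelope = sign_provenance({"builder": {"id": "https://example.com/ci"}})
assert verify_provenance(envelope)
```

Any modification of the payload after signing makes verification fail, which is exactly the tamper evidence level 2 is after.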

At level 3 there are three requirements for the build system. First, cryptographic data such as the private key must be stored in a secure environment, for example in a Key Management System (KMS). Second, the cryptographic data must be isolated from the custom build steps. Third, all fields in the provenance data must be generated by the build system itself, and the custom build steps must not be able to change the contents of this data.

SLSA requires that builds run in isolation from each other, so that there is no influence between the builds. Likewise, the build environment must be isolated from the control plane of the build system. Regarding isolation, SLSA has two requirements. First, all steps of a build must take place on a hosted build platform and not, for example, on a developer’s laptop; this requirement is part of SLSA level 2. Second, all steps of a build must run in isolation from each other, so that there can be no external influences on the steps other than those previously defined as external parameters, and these must be recorded as such in the provenance data. Isolation also includes that one build cannot access cryptographic data of another build, such as signing keys, and that two builds running in parallel cannot change each other’s memory. A short-lived environment must be provided for each build, and when a cache is used during the build, a build must not be able to modify it. Each run of a build must always produce the same result, whether a cache is used or not.

Part 3: Supply Chain – Introducing the SLSA framework

The SLSA Framework

In the last blog, we showed different frameworks for supply chains, including the Secure Software Development Framework (SSDF), the in-toto Attestation Framework and Supply-chain Levels for Software Artifacts (SLSA). We also addressed tools including Notary and Sigstore (with Cosign, Fulcio and Rekor).

In the following blog post, we will lay the foundations for the SLSA framework, beginning with a definition and terminology. SLSA is a framework that helps to secure a software supply chain. It gives software developers a guide on how to secure development and provides consumers with information about the security of software artifacts. This includes a domain-specific language, a way to evaluate the trustworthiness of artifacts, and a checklist of requirements for reaching compliance with the framework.

Finally, we also want to mention what SLSA does not cover.

SLSA does not guarantee that the code written for an artifact followed secure coding practices. In addition, it does not protect against companies that deliberately incorporate malware into their software. However, it can minimize the risk of introducing vulnerabilities from internal sources. It is also worth noting that the SLSA level is, by design, not transitive: a level describes the integrity protections of an artifact’s build process and top-level source, but says nothing about the artifact’s dependencies. Each artifact’s SLSA rating is therefore independent of its dependencies. The reason is that software development would otherwise be slowed down too much, because one would have to wait until every dependency meets the same standards as the artifact that uses it.


SLSA defines several roles within a software supply chain. A person or an organization can have multiple roles, and a role can also be held by several people or companies.

The producer is the person or organization that makes software available for others to consume. The consumer is whoever uses or consumes the producer’s software. A producer can also be a consumer at the same time, since they may, for example, use libraries or frameworks from others in their own software. The third role is the verifier, who checks the authenticity of an artifact. The fourth and final role is the infrastructure provider, which provides software and services, for example package registries or build platforms, to the other roles.

Build Model

The figure shows the build model:
  1. A tenant invokes the build by specifying external parameters through an interface, either directly or via some trigger. Usually, at least one of these external parameters is a reference to a dependency. (External parameters are literal values while dependencies are artifacts.)
  2. The build platform’s control plane interprets these external parameters, fetches an initial set of dependencies, initializes a build environment, and then starts the execution within that environment.
  3. The build then performs arbitrary steps, which might include fetching additional dependencies, and then produces one or more output artifacts. The steps within the build environment are under the tenant’s control. The build platform isolates build environments from one another to some degree (which is measured by the SLSA Build Level).

Package Model

After an artifact has been created, it is necessary to distribute it in the form of packages. A package is an identifiable software unit that is intended for distribution. The term “package” can refer to both the artifact itself and the associated package name. An artifact is an immutable object, such as a file, while the package name is the name of a changeable collection of objects. All artifacts in such a collection represent the same software in different versions, with the name being ecosystem-specific. The term “package ecosystem” describes a set of rules and conventions that determine how packages are distributed and how clients can resolve a package name into one or more specific artifacts. This is where a package manager client comes into play: a client-side tool used to interact with a package ecosystem. Finally, the package registry is responsible for mapping package names to artifacts. A single ecosystem can support multiple registries to ensure the efficient management and distribution of software packages.

For containers, the package ecosystem can be the Open Container Initiative (OCI), which aims to establish standards for containers and container platforms. In OCI terms, the package name is called a repository, and the artifact is an image identified by its digest. To reference a specific immutable artifact, the reference is composed of the repository and the digest.
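Composing such an immutable reference is simple string assembly; a small sketch with an invented repository name and placeholder digest:

```python
def oci_reference(repository: str, hex_digest: str) -> str:
    """Compose an immutable OCI reference: <repository>@sha256:<digest>.
    Unlike a tag, a digest reference always points at the same bytes."""
    return f"{repository}@sha256:{hex_digest}"

# Placeholder digest, not a real image.
ref = oci_reference("registry.example.com/team/app", "c0ffee" + "0" * 58)
print(ref)
```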

Verification Models

After artifacts have been built and a suitable way to distribute them has been found, a way is needed to verify the artifacts and the build platform on which they were built. The SLSA framework suggests that the operator of the build platform certifies the build platform at regular intervals and makes the certification available to consumers. Furthermore, artifacts are checked regularly. Expectations are a set of constraints on the package’s provenance metadata. Provenance verification involves checking the artifacts from the package ecosystem to ensure that the package’s expectations are met before the package is used. During build platform verification, build platforms are certified for their compliance with SLSA requirements at the specified level.
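Checking expectations can be sketched as a simple comparison of provenance fields against consumer-defined constraints; the field names and the helper below are hypothetical, not taken from the SLSA specification:

```python
# Hypothetical sketch: before using an artifact, verify that its
# provenance matches the consumer's expectations (a trusted builder
# and the expected source repository).
def meets_expectations(provenance: dict, expectations: dict) -> bool:
    return (
        provenance.get("builder_id") in expectations["trusted_builders"]
        and provenance.get("source_repo") == expectations["source_repo"]
    )

expectations = {
    "trusted_builders": {"https://example.com/builders/ci@v1"},
    "source_repo": "https://github.com/example/app",
}
good = {
    "builder_id": "https://example.com/builders/ci@v1",
    "source_repo": "https://github.com/example/app",
}
assert meets_expectations(good, expectations)
```

In a real ecosystem this check would run after the provenance signature has been verified, so that the fields being compared are themselves trustworthy.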

Part 2: Supply Chain – Frameworks & Tools


Secure Software Development Framework

The Secure Software Development Framework (SSDF) is a framework published by the National Institute of Standards and Technology (NIST). It includes software development practices based on established security practices that make the software development life cycle more secure. The SSDF provides a set of practices that can be incorporated into an existing software development lifecycle.

In-toto Attestation Framework

The aim of the in-toto Attestation Framework is to define a uniform and flexible standard for software attestation. By using so-called predicates within the attestation definition, different kinds of information can be represented, for example the SLSA Provenance predicate or the two SBOM formats Software Package Data Exchange (SPDX) and CycloneDX.

Supply-chain Levels for Software Artifacts

The SLSA framework is an incrementally adoptable framework that serves software supply chain security.


Notary V2

Docker began development on Notary in 2015 and worked on it until the project was handed over to the Cloud Native Computing Foundation in 2017. Notary’s functionality includes signing and validating Open Container Initiative (OCI) artifacts. Signing and validation are done using public and private keys: the public key is stored in a container registry and the artifacts are signed with the private key. The artifact with the signature is then uploaded to the registry, and the authenticity of an artifact can be verified using the registry’s public key.

Sigstore

Sigstore is an open source project of the Linux Foundation with support from many companies such as Google and Red Hat. Sigstore simplifies the signing and attestation of artifacts and the associated distribution of signatures and attestations. It mainly consists of three technologies: Cosign, Fulcio and Rekor. Cosign is a command-line tool that is responsible for signing and verifying software artifacts. Cosign also supports in-toto attestation, making it SLSA-compliant. Another feature of Cosign is the keyless signing mode, which uses another technology from the Sigstore project: Fulcio. Fulcio is a code-signing certificate authority that generates short-lived certificates. The advantage of this approach is that developers do not have to worry about key and certificate management themselves. The identity of the signer is ensured by the OpenID Connect (OIDC) protocol. For example, when Cosign makes a request to Fulcio to obtain a short-lived certificate, the user must log in with their GitHub or Google account to authenticate, and the user’s identity is stored within the certificate. Signatures should be checkable by everyone, hence they are stored in a central location called Rekor. Rekor is a transparency log to which digital signatures are appended. Entries can only be appended; they cannot be deleted or changed. To ensure this, Rekor uses Trillian, which in turn is based on Merkle trees. The following figure shows how developers can work with Sigstore:


Developers can request a certificate from the Fulcio certificate authority; authentication is done with OpenID Connect. Developers can then publish the signed artifact as well as the signing certificate. On the other side, consumers can find and download artifacts and check their signatures against the log.
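The tamper evidence of such a transparency log rests on Merkle trees. A minimal sketch of computing a Merkle root follows; it uses the common duplicate-last-node simplification, whereas Trillian's actual RFC 6962 construction differs in detail:

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root: hash each leaf, then repeatedly combine
    pairs of hashes until a single root remains. Changing any leaf
    changes the root, which makes an append-only log tamper-evident."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # odd count: duplicate the last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"signature-entry-1", b"signature-entry-2", b"signature-entry-3"])
print(root.hex())
```

Because the root summarizes every entry, a verifier holding only the root can detect any retroactive modification of the log.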