Part 7: Supply Chain – How to work with Tekton

In the last blog post, we briefly discussed what Tekton is and how it can be installed. In this blog post, we go a step further and show how to work with Tekton when we want to build a Supply Chain.

Let’s start with the installation first:

The components required for Tekton are installed via the Tekton Operator. This can be done in 3 different ways: 

  • Via the Operator Lifecycle Manager
  • With a release file
  • By code

We chose the installation via the release file. However, this has one disadvantage: you have to handle lifecycle management yourself:

kubectl apply -f https://storage.googleapis.com/tekton-releases/operator/previous/v0.66.0/release.yaml

Now the Tekton CRDs have been installed. Later on, we will show how to configure Tekton via a TektonConfig file, but let’s discuss some theory first.

As already mentioned, the provenance must be immutable in order to reach build level three. This assumes that the user-created build steps have no way to inject code into the source or to modify the content in any unintended way. Therefore, we have to secure the Tekton pipeline. In detail, this means:

  • Artifacts must be managed in a version management system.
  • When artifacts are changed, the identities of the actors must be clear: both the person who made and uploaded the changes and the person who approved them must be identifiable.
  • All actors involved must be verified using two-factor verification or similar authentication mechanisms.
  • Every change must be reviewed by another person before, for example, a branch can be merged into git.
  • All changes to an artifact must be traceable through a change history. This includes the identities of the people involved, the timestamps of the changes, a review, a description and justification for the change, the content of the change, and the parent revisions.
  • Furthermore, the version and change history must be stored permanently, and deletion must be impossible unless there is a clear and transparent deletion policy, for example based on legal or policy requirements. In addition, it must not be possible to alter the history.

In order to ensure that the security chain is not interrupted, Tekton provides the option of resolvers.

Basically, a Tekton resolver is a component within Tekton Pipelines. In the context of Tekton, a resolver is responsible for handling references to external sources. These external sources can be anything from Git repositories to OCI images, among others.

Tekton uses resolvers to fetch Tekton resources such as tasks and pipelines from remote sources. To this end, Tekton provides resolvers for accessing resources in Git repositories or OCI registries, for example. Resolvers can be used with both public and private repositories.

The configuration of the resolver can be divided into two parts: 

  • The first part of the configuration can be found in a ConfigMap. Tekton uses the ConfigMap to store, among other things, default values such as the default URL of a repository, the default Git revision, the fetch timeout, and the API token.
  • The second part is in the PipelineRun and TaskRun definition. Within the PipelineRun and TaskRun definition, the repository URL, the revision and the path to the pipeline or task are defined under the spec field.
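The two parts described above can be sketched as follows. This is only an illustration: the namespace and ConfigMap name follow the git resolver defaults, while the repository URL, secret name, and file path are placeholders.

```yaml
# Part 1: cluster-wide defaults for the git resolver
# (repository URL and secret name are placeholder values)
apiVersion: v1
kind: ConfigMap
metadata:
  name: git-resolver-config
  namespace: tekton-pipelines-resolvers
data:
  default-url: "https://github.com/example-org/pipelines.git"
  default-revision: "main"
  fetch-timeout: "1m"
  # token used for authenticated access to the Git provider
  api-token-secret-name: "git-api-token"
  api-token-secret-key: "token"
---
# Part 2: the PipelineRun references the remote pipeline via the resolver
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-from-git
spec:
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: "https://github.com/example-org/pipelines.git"
      - name: revision
        value: "main"
      - name: pathInRepo
        value: "pipelines/build.yaml"
```

TaskRuns work analogously: instead of pipelineRef, the taskRef field carries the resolver and its parameters.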

The following snippet shows a sample config:

<script src=https://gist.github.com/gsoeldner-sc/762d463a3b10faa752e6520e0213f6bf.js></script>

The TektonConfig can easily be deployed:

kubectl apply -f tekton-config.yaml

One disadvantage of using resolvers is that the git-resolver-configmap configuration applies to the entire cluster. Only one API token can be specified within the configuration. This means that every user of a resolver has access to the same repositories, which would make multi-tenancy impossible.

Another disadvantage is that resolvers alone cannot rule out the use of resources that do not come from a version control system. To ensure that the resolvers are not bypassed, there is an option to sign resources. Policies can then check whether a resource has a valid signature, ensuring that only resources with a correct signature can be executed.

You can use the Tekton CLI tool to sign the resources.

The CLI supports signing keys in the Elliptic Curve Digital Signature Algorithm (ECDSA), Edwards-curve 25519 (Ed25519), and Rivest-Shamir-Adleman (RSA) formats, or via a KMS such as Google Cloud Platform (GCP), Amazon Web Services (AWS), Vault, or Azure.

The verification of the signatures is done via policies, in which filters and keys can be defined. The filters define the repositories from which pipelines and tasks may come; the keys are used to verify the signatures.

When evaluating whether one or more policies apply, the filters check whether the source URL matches one of the specified patterns. If one or more filters match, the corresponding policies are used for further verification. If multiple filters match, the resource must pass validation by all matching policies. The filters are specified as regular expressions (regex).

After filtering, the signature verification of the resources is carried out using keys. These keys can be specified in three different ways:

  • As a Kubernetes secret
  • As an encoded string
  • Via a KMS system

The policies have three operating modes: ignore, warn, and fail. In “ignore” mode, a mismatch with the policy is ignored and the Tekton resource is still executed. In “warn” mode, if a mismatch occurs, a warning is generated in the logs, but the run continues to execute. In “fail” mode, the run will not start if no suitable policy is found or the resource does not pass a check.
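Filters, keys, and mode come together in a VerificationPolicy resource. The following is a minimal sketch, assuming the v1alpha1 API; the policy name, namespace, pattern, and secret name are placeholders. The key here is referenced from a Kubernetes secret, but an inline encoded key (data) or a KMS reference (kms) can be used analogously.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: trusted-git-policy
  namespace: tekton-pipelines
spec:
  # Filters: regex patterns matched against the source URL of the resource
  resources:
    - pattern: "https://github.com/example-org/.*"
  # Keys: here taken from a Kubernetes secret; alternatives are an
  # inline public key (data) or a KMS reference (kms)
  authorities:
    - name: sc-key
      key:
        secretRef:
          name: verification-key
          namespace: tekton-pipelines
        hashAlgorithm: sha256
  # Mode: ignore, warn, or fail ("fail" blocks runs that do not verify)
  mode: fail
```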

That’s it for today. In the next part, we will talk about Pipelines and Tasks.

Authors

Denny Fehler

Consultant

Dr. Guido Söldner

Managing Director

Guido Söldner is Managing Director and Principal Consultant at Söldner Consult. His areas of expertise include cloud infrastructure, automation and DevOps, Kubernetes, machine learning, and enterprise programming with Spring.