Deploy to Kyma Runtime

You can run your CAP application in the Kyma Runtime, the SAP-managed offering of the Kyma project on SAP Business Technology Platform. This guide shows you how to run your CAP applications on SAP BTP Kyma Runtime.

Overview

Like Kubernetes, Kyma is a platform for running containerized workloads. A service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else needed to run them are described by Kubernetes resources.

Consequently, two kinds of artifacts are needed to run applications on Kubernetes:

  1. Container images
  2. Kubernetes resources

The following diagram shows the steps to run your application on the SAP BTP Kyma Runtime:

A CAP Helm chart is added to your project. Then you build your project as container images and push those images to a container registry of your choice. As the last step, the Helm chart is deployed to your Kyma cluster, where service instances of SAP BTP services are created and pods pull the previously built container images from the container registry.

  1. Add a Helm chart
  2. Build container images
  3. Push container images to a container registry
  4. Deploy your application by applying Kubernetes resources

Prerequisites

TIP

If you're new to this topic, the end-to-end tutorial Deploy Your CAP Application on SAP BTP Kyma Runtime has you covered: it explains all the steps and how to fulfill the prerequisites in detail.

WARNING

Familiarize yourself with Kyma and Kubernetes. CAP doesn't provide consulting on them.

Prepare for Production

The detailed procedure is described in the Deploy to Cloud Foundry guide. Run this command to fast-forward:

sh
cds add hana,xsuaa --for production

Add Helm Chart

CAP provides a configurable Helm chart for Node.js and Java applications.

sh
cds add helm

This command adds the Helm chart to the chart folder of your project.

The files in the chart folder support the deployment of your CAP service, database, and UI content, and the creation of instances for SAP BTP services.

Learn more about CAP Helm chart.

Build Images

We recommend using Cloud Native Buildpacks to transform source code (or artifacts) into container images. For local development, you can easily consume Cloud Native Buildpacks with the pack CLI, which is a prerequisite for the next steps.

Learn more about Cloud Native Buildpacks
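
If you don't have the pack CLI installed yet, one way to get it is via Homebrew (an assumption; see the Buildpacks documentation for other installation options):

sh
# Install the pack CLI via the Buildpacks Homebrew tap (one option among several)
brew install buildpacks/tap/pack
# Verify the installation
pack --version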

Build CAP Node.js Image

Do the productive build for your application, which writes into the gen/srv folder:

sh
cds build --production

Build the image:

sh
pack build bookshop-srv \
     --path gen/srv \
     --buildpack gcr.io/paketo-buildpacks/nodejs \
     --builder paketobuildpacks/builder:base \
     --env BP_NODE_RUN_SCRIPTS=

The pack CLI builds an image that contains the build result from the gen/srv folder and the required npm packages, using the Paketo Node.js Buildpack on top of the Paketo base builder.

You can find the resulting Docker image bookshop-srv in your local Docker registry:

sh
docker images

Build CAP Java Image

Add the cds-feature-k8s dependency to your pom.xml:

xml
<dependencies>
	<!-- Features -->
	<dependency>
		<groupId>com.sap.cds</groupId>
		<artifactId>cds-feature-k8s</artifactId>
		<scope>runtime</scope>
	</dependency>
</dependencies>

Build your Java project:

sh
mvn package

Build the docker image using the SapMachine and Java buildpacks:

sh
pack build bookshop-srv \
        --path srv/target/*-exec.jar \
        --buildpack gcr.io/paketo-buildpacks/sap-machine \
        --buildpack gcr.io/paketo-buildpacks/java \
        --builder paketobuildpacks/builder:base \
        --env SPRING_PROFILES_ACTIVE=cloud \
        --env BP_JVM_VERSION=11

We recommend SapMachine as the Java buildpack.

Build Approuter Image

sh
pack build bookshop-approuter \
     --path app \
     --buildpack gcr.io/paketo-buildpacks/nodejs \
     --builder paketobuildpacks/builder:base \
     --env BP_NODE_RUN_SCRIPTS=

Build Database Image

Do the productive build:

sh
cds build --production

Build the docker image. For Node.js projects, the build result is in the gen/db folder:

sh
pack build bookshop-hana-deployer \
     --path gen/db \
     --buildpack gcr.io/paketo-buildpacks/nodejs \
     --builder paketobuildpacks/builder:base \
     --env BP_NODE_RUN_SCRIPTS=

For Java projects, use the db folder:

sh
pack build bookshop-hana-deployer \
     --path db \
     --buildpack gcr.io/paketo-buildpacks/nodejs \
     --builder paketobuildpacks/builder:base \
     --env BP_NODE_RUN_SCRIPTS=

UI Deployment

For UI access, you can use either the standalone or the managed approuter, as explained in this blog.

The cds add helm command supports deployment to the HTML5 application repository which can be used with both options.

For that, create a container image that contains your UI files and is configured with the HTML5 application deployer.
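
As an illustration, such an image can be built with pack just like the other images in this guide; the image name and the path to your HTML5 deployer folder are placeholders, not prescribed by the chart:

sh
# Illustrative sketch: build an image containing the HTML5 application deployer
# configuration and your UI files (path and image name are assumptions)
pack build bookshop-html5-deployer \
     --path <folder with HTML5 application deployer configuration and UI files> \
     --buildpack gcr.io/paketo-buildpacks/nodejs \
     --builder paketobuildpacks/builder:base \
     --env BP_NODE_RUN_SCRIPTS=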

You can find an example with SAPUI5 applications in the Kyma Launchpad Tutorial of the BTP End-To-End tutorial series.

The cds add helm command also supports the deployment of a standalone approuter.

For that, create a container image as described in Build Approuter Image above.

To configure backend destinations, have a look at the approuter configuration section.

WARNING

Approuter deployment is only supported for @sap/approuter:12.0.1 and above.

Push Images

The Kyma runtime needs reliable access to the container images you provide, which a local registry alone can't guarantee. Therefore, upload the images to your container registry service.

Log in to Your Container Registry

sh
docker login <your-registry> -u <your-user>

Push Images to Your Container Registry

Docker images can be identified by their hash or by one or more tags. You have already tagged your Docker images during the build. Add tags starting with your container registry's hostname to push the images there.

Upload your docker images by repeating the following steps for each image:

  1. Add a tag for the remote container registry to a local docker image:

    sh
    docker tag <image-name>[:<image-version>] \
            <your-container-registry>/<image-name>[:<image-version>]

    For example:

    sh
    docker tag bookshop-srv[:<image-version>] \
               your-sample-registry.com/bookshop-srv[:<image-version>]
  2. Push a docker image:

    sh
    docker push your-sample-registry.com/bookshop-srv[:<image-version>]

Deploy Helm Chart

Once your Helm chart is created, your container images are uploaded to a registry and your cluster is prepared, you're almost set for deploying your Kyma application.

Create Service Instances for SAP HANA Cloud

  1. Enable SAP HANA for your project as explained in the CAP guide for SAP HANA.
  2. Create an SAP HANA database.
  3. To create HDI containers from Kyma, you need to create a mapping between your namespace and SAP HANA Cloud instance.

The tools plan of the SAP HANA Cloud service isn't available in trial accounts. But you can still use HDI containers created from Cloud Foundry with Kyma:

  1. Create an HDI container for your application using a Cloud Foundry account.

  2. Create a Kubernetes secret with the credentials from a service key of the Cloud Foundry account (see the sketch after this list).

  3. Add additional properties to the Kubernetes secret.

    yaml
    stringData:
      # <…>
      .metadata: |
        {
          "credentialProperties":
            [
              { "name": "certificate", "format": "text"},
              { "name": "database_id", "format": "text"},
              { "name": "driver", "format": "text"},
              { "name": "hdi_password", "format": "text"},
              { "name": "hdi_user", "format": "text"},
              { "name": "host", "format": "text"},
              { "name": "password", "format": "text"},
              { "name": "port", "format": "text"},
              { "name": "schema", "format": "text"},
              { "name": "url", "format": "text"},
              { "name": "user", "format": "text"}
            ],
          "metaDataProperties":
            [
              { "name": "plan", "format": "text" },
              { "name": "label", "format": "text" },
              { "name": "type", "format": "text" },
              { "name": "tags", "format": "json" }
            ]
        }
      type: hana
      label: hana
      plan: hdi-shared
      tags: '[ "hana", "database", "relational" ]'
  4. In the chart/values.yaml file, replace the serviceInstanceName property inside the db binding of the srv section with fromSecret:

    yaml
    srv:
      bindings:
        db:
          fromSecret: <your secret>
  5. In the chart/values.yaml file, replace the serviceInstanceName property inside the hana binding of the hana-deployer section with fromSecret:

    yaml
    hana-deployer:
      bindings:
        hana:
          fromSecret: <your secret>
  6. Delete the hana property in the chart/values.yaml file.
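
For step 2 of this list, here's a minimal sketch of creating such a secret, assuming you read the credentials from a Cloud Foundry service key first; the instance, key, and secret names are placeholders:

sh
# Print the credentials of the service key created in the Cloud Foundry account
cf service-key <your-hdi-instance> <your-service-key>

# Create a Kubernetes secret from the credential properties of that service key
kubectl create secret generic <your secret> -n bookshop-namespace \
    --from-literal=host=<host> \
    --from-literal=port=<port> \
    --from-literal=user=<user> \
    --from-literal=password=<password> \
    --from-literal=hdi_user=<hdi_user> \
    --from-literal=hdi_password=<hdi_password> \
    --from-literal=schema=<schema> \
    --from-literal=url=<url> \
    --from-literal=certificate=<certificate> \
    --from-literal=database_id=<database_id> \
    --from-literal=driver=<driver>

The additional properties from step 3 (.metadata, type, label, plan, and tags) can then be added to the same secret, for example with kubectl edit.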

WARNING

Make sure that your HANA Cloud instance can be accessed from your Kyma cluster by setting the trusted source IP addresses.

You can find an example in the Kyma HANA Cloud Tutorial of the BTP End-To-End tutorial series.

Deploy using CAP Helm Chart

Before deployment, you need to set the container image and cluster specific settings.

Configure Access to Your Container Images

Add your container image settings to your chart/values.yaml:

yaml
global:
  imagePullSecret:
    name: [<image pull secret name>]
...
srv:
  image:
    repository: <your-container-registry>/<srv-image-name>
    tag: <srv-image-version>

If you use the SAP HANA deployer, you additionally need to configure:

yaml
hana-deployer:
  image:
    repository: <your-container-registry>/<db-deployer-image-name>
    tag: <db-deployer-image-version>

If you use HTML5 applications, you additionally need to configure:

yaml
html5-apps-deployer:
  image:
    repository: <your-container-registry>/<html5-deployer-image-name>
    tag: <html5-deployer-image-version>

To use images from private container registries, you need to create an image pull secret.

If you didn't specify a version in the image build, set the tag property to latest.

Configure Cluster Domain

Specify the domain of your cluster in the chart/values.yaml file so that the URL of your CAP service can be generated:

yaml
...
domain: <cluster domain>

You can use the preconfigured domain name of your Kyma cluster, which you can read from the kyma-gateway:

sh
kubectl get gateway -n kyma-system kyma-gateway \
        -o jsonpath='{.spec.servers[0].hosts[0]}'

Configure Approuter Specifications

  1. Configure access to your approuter image:

    yaml
    approuter:
      image:
        repository: <your-container-registry>/<approuter-image-name>
        tag: <approuter-image-version>
  2. Replace <your-cluster-domain> with your cluster domain in the xsuaa section of the values.yaml file:

    yaml
    xsuaa:
      serviceOfferingName: xsuaa
      servicePlanName: application
      parameters:
        xsappname: bookshop
        tenant-mode: dedicated
        oauth2-configuration:
          redirect-uris:
            - https://*.<your-cluster-domain>/**
  3. Add the destinations under backendDestinations in the values.yaml file:

    yaml
    backendDestinations:
      backend:
        service: srv

    backend is the name of the destination. service points to the deployment whose URL is used for this destination.

Deploy CAP Helm Chart

  1. Log in to your Kyma cluster

  2. Deploy using helm command:

    sh
    helm upgrade --install bookshop ./chart \
         --namespace bookshop-namespace \
         --create-namespace

    This installs the Helm chart from the chart folder with the release name bookshop in the namespace bookshop-namespace.

    TIP

    With the helm upgrade --install command you can install a new chart as well as upgrade an existing chart.
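
To check that the release came up, you can inspect its status and pods; this sketch uses the names from the example above:

sh
# Show the status of the Helm release
helm status bookshop -n bookshop-namespace
# List the pods created by the release
kubectl get pods -n bookshop-namespace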

Learn more about using a private registry with your Kyma cluster.
Learn more about the CAP Helm chart settings.
Learn more about using helm upgrade.

TIP

Try out the CAP SFLIGHT and CAP for Java examples on Kyma.

Customize Helm Chart

About CAP Helm Chart

The following files are added to a chart folder by executing cds add helm:

| File/Pattern | Description |
|---|---|
| values.yaml | Configuration of the chart; the initial configuration is determined from your CAP project |
| Chart.yaml | Chart metadata that is initially determined from the package.json file |
| templates/NOTES.txt | Message printed after installing or upgrading the Helm charts |
| templates/*.yaml | Template files for the Kubernetes resources |
| templates/*.tpl | Template libraries used in the template resources |

Learn how to create a Helm chart from scratch from the Helm documentation.

Configure

CAP's Helm chart can be configured by the settings explained below. Mandatory settings are marked with a checkmark (✓).

You can change the configuration by editing the chart/values.yaml file. When you call cds add helm again, your changes will be persisted and only missing default values are added.

The general chart settings and used subcharts can be edited in the chart/Chart.yaml file.

The helm CLI also offers you other options to overwrite settings from chart/values.yaml file:

  • Overwrite properties using the --set parameter.
  • Overwrite properties from a YAML file using the -f parameter.

TIP

It is recommended to do the main configuration in the chart/values.yaml file and have additional YAML files for specific deployment types (dev, test, productive) and targets.
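
For example, here's a sketch of combining the main configuration with a deployment-specific file and a one-off override; the file name values-prod.yaml and the domain value are assumptions:

sh
# chart/values.yaml stays the base; the -f file and the --set flag override it
helm upgrade --install bookshop ./chart \
     --namespace bookshop-namespace \
     -f values-prod.yaml \
     --set domain=prod.example.com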

Global Properties

| Property | Description | Mandatory |
|---|---|---|
| imagePullSecret → name | Name of the secret to access the container registry | (✓)¹ |
| domain | Kubernetes cluster ingress domain (used for application URLs) | ✓ |

¹ Mandatory only for private docker registries

Deployment Properties

The following properties are available for the srv key:

| Property | Description | Mandatory |
|---|---|---|
| bindings | Service Bindings | |
| resources | Kubernetes Container resources | |
| env | Map of additional env variables | |
| health | Kubernetes Liveness, Readiness and Startup Probes | |
| → liveness → path | Endpoint for liveness and startup probe | |
| → readiness → path | Endpoint for readiness probe | |
| → startupTimeout | Wait time in seconds until the health checks are started | |
| image | Container image | ✓ |
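
To illustrate the arrow notation in this table, here's a sketch of nested health settings in chart/values.yaml; the endpoint paths and the timeout value are assumptions:

yaml
# Illustrative snippet; probe paths and timeout are assumed values
srv:
  health:
    liveness:
      path: /health
    readiness:
      path: /health
    startupTimeout: 30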

You can explore more configuration options in the subchart's directory chart/charts/web-application.

SAP BTP Services

The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the chart/values.yaml file based on the services used in the requires section of the CAP configuration (for example, the package.json file).

You can use the following services in your configuration:

| Property | Description | Mandatory |
|---|---|---|
| xsuaa | Enables the creation of an XSUAA service instance. See details for Node.js and Java projects. | |
| parameters → xsappname | Name of the XSUAA application. Overwrites the value from the xs-security.json file. (unique per subaccount) | |
| parameters → HTML5Runtime_enabled | Set to true for use with the Launchpad Service | |
| connectivity | Enables on-premise connectivity | |
| event-mesh | Enables SAP Event Mesh; see the messaging guide and how to enable SAP Event Mesh | |
| html5-apps-repo-host | HTML5 Application Repository | |
| hana | HDI Shared Container | |
| service-manager | Service Manager Container | |
| saas-registry | SaaS Registry | |
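
As an illustration, each of these keys accepts the common service-instance options described below; the offering and plan names in this sketch are assumptions, so check the service marketplace for the exact values:

yaml
# Illustrative entry in chart/values.yaml to create an SAP Event Mesh instance
event-mesh:
  serviceOfferingName: enterprise-messaging
  servicePlanName: default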

Learn how to configure services in your Helm chart

SAP HANA

The deployment job that deploys your database content to an HDI container can be configured in the hana-deployer section with the following properties:

| Property | Description | Mandatory |
|---|---|---|
| bindings | Service binding to the HDI container's secret | |
| image | Container image of the HDI deployer | |
| resources | Kubernetes Container resources | |
| env | Map of additional environment variables | |

HTML5 Applications

The deployment job of HTML5 applications can be configured using the html5-apps-deployer section with the following properties:

| Property | Description | Mandatory |
|---|---|---|
| image | Container image of the HTML5 application deployer | |
| bindings | Service bindings to the XSUAA, destinations, and HTML5 Application Repository Host services | |
| resources | Kubernetes Container resources | |
| env | Map of additional environment variables | |
| → SAP_CLOUD_SERVICE | Name for your business service (unique per subaccount) | |

TIP

Run cds add html5-repo to automate the setup for HTML5 application deployment.

Backend Destinations

Backend destinations may be required for HTML5 applications or for approuter deployment. They can be configured in the backendDestinations section with the following properties:

| Property | Description |
|---|---|
| (key) | Name of the backend destination |
| service: (value) | The target Kubernetes service (like srv) |

Connectivity Service

Use cds add connectivity to add a volume to your srv deployment.

WARNING

Create an instance of the SAP BTP Connectivity service with plan connectivity_proxy and a service binding, before deploying the first application that requires it. Using this plan, a proxy to the connectivity service gets installed into your Kyma cluster. This may take a few minutes. The connectivity proxy uses the first created instance in a cluster for authentication. This instance must not be deleted as long as connectivity is used.

The volume you've added to your srv deployment is needed to provide additional connection information beyond what's available from the service binding.

yaml
srv:
[...]
  additionalVolumes:
    - name: connectivity-secret
      volumeMount:
        mountPath: /bindings/connectivity
        readOnly: true
      projected:
        sources:
          - secret:
              name: <your-connectivity-binding>
              optional: false
          - secret:
              name: <your-connectivity-binding>
              optional: false
              items:
                - key: token_service_url
                  path: url
          - configMap:
              name: "RELEASE-NAME-connectivity-proxy-info"
              optional: false

In the volumes added, replace the value of <your-connectivity-binding> with the binding that you created earlier. If the binding is created in a different namespace, you need to create a secret with details from the binding and use that secret.

TIP

You don't have to edit RELEASE-NAME in the configMap property. It is passed as a template string and will be replaced with your actual release name by Helm.

SaaS Registry Service

The saas-registry service is configured by two keys in the values.yaml file:

| Property | Description | Mandatory |
|---|---|---|
| saas-registry | Used to create the SaaS registry service instance | |
| saasRegistryParameters | Used to specify the parameters for the SaaS registry service | |

Example:

yaml
[...]
saas-registry:
  serviceOfferingName: saas-registry
  servicePlanName: application
  parametersFrom:
    - secretKeyRef:
        name: "RELEASE-NAME-saas-registry-secret"
        key: parameters
saasRegistryParameters:
  xsappname: bookshop
  appName: bookshop
  displayName: bookshop
  description: A simple self-contained bookshop service.
  category: "CAP Application"
  appUrls:
    getDependencies: "/-/cds/saas-provisioning/dependencies"
    onSubscription: "/-/cds/saas-provisioning/tenant/{tenantId}"
    onSubscriptionAsync: true
    onUnSubscriptionAsync: true
    onUpdateDependenciesAsync: true
    callbackTimeoutMillis: 300000

TIP

You don't have to edit RELEASE-NAME in the secretKeyRef property. It is passed as a template string and will be replaced with your actual release name by Helm.

Arbitrary Service

These are the steps to create and bind to an arbitrary service, using the binding of the feature toggle service to the CAP application as an example:

  1. In the chart/Chart.yaml file, add an entry to the dependencies array.

    yaml
    dependencies:
      ...
      - name: service-instance
        alias: feature-flags
        version: 0.1.0
  2. Add the service configuration and the binding in the chart/values.yaml file:

    yaml
    feature-flags:
      serviceOfferingName: feature-flags
      servicePlanName: lite
    ...
    srv:
      bindings:
        feature-flags:
          serviceInstanceName: feature-flags

    The alias property in the dependencies array must match the property added in the root of chart/values.yaml and the value of serviceInstanceName in the binding.

WARNING

There should be at least one service instance created by cds add helm if you want to bind an arbitrary service.

Configuration Options for Services

Services have the following configuration options:

| Property | Type | Description | Mandatory |
|---|---|---|---|
| fullNameOverride | string | Use instead of the generated name | |
| serviceOfferingName | string | Technical service offering name from the service catalog | ✓ |
| servicePlanName | string | Technical service plan name from the service catalog | ✓ |
| externalName | string | The name of the service instance in SAP BTP | |
| customTags | array of string | List of custom tags describing the service instance; copied to the ServiceBinding secret under the key tags | |
| parameters | object | Object with service parameters | |
| jsonParameters | string | Some services support the provisioning of additional configuration parameters; for the list of supported parameters, check the documentation of the particular service offering | |
| parametersFrom | array of object | List of secrets from which parameters are populated | |

The jsonParameters key can also be specified using the --set-file flag when installing or upgrading the Helm release. For example, jsonParameters for the xsuaa property can be defined using the following command:

sh
helm install bookshop ./chart --set-file xsuaa.jsonParameters=<path to a json file>

You can explore more configuration options in the subchart's directory chart/charts/service-instance.

Configuration Options for Service Bindings

| Property | Description | Mandatory |
|---|---|---|
| (key) | Name of the service binding | |
| secretFrom | Bind to a Kubernetes secret | (✓)¹ |
| serviceInstanceName | Bind to a service instance within the Helm chart | (✓)¹ |
| serviceInstanceFullname | Bind to a service instance using the absolute name | (✓)¹ |
| parameters | Object with service binding parameters | |

¹ Exactly one of these properties needs to be specified

Configuration Options for Container Images

| Property | Description | Mandatory |
|---|---|---|
| repository | Full container image repository name | ✓ |
| tag | Container image version tag (default: latest) | |

Modify

Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template.

You can run cds add helm again to update your Helm chart. It has the following behavior for modified files:

  1. Your changes to the chart/values.yaml file are persisted. Only missing or new properties are added by cds add helm.
  2. If you modify any of the other generated files, they will no longer be updated by cds add helm; the command issues a warning about that. To withdraw your changes, just delete the modified files and run cds add helm again.

Extend

  1. Adding new files to the Helm chart does not conflict with cds add helm.
  2. A modification-free approach to changing files is to use Kustomize as a post-processor for your Helm chart, as sketched below. This might be usable for small changes if you don't want to branch out from the generated cds add helm content.
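
A minimal sketch of the Kustomize approach, assuming you render the chart first and keep your patches in an overlay folder; all file and folder names are placeholders:

sh
# Render the chart to plain Kubernetes resources (names from the example above)
helm template bookshop ./chart --namespace bookshop-namespace > overlay/rendered.yaml
# The overlay folder contains a kustomization.yaml that lists rendered.yaml
# as a resource plus your patch files; apply the patched result
kubectl apply -k overlay/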

Additional Information

SAP BTP Services and Features

You can find a list of SAP BTP services in the Discovery Center. To find out whether a service is supported in the Kyma and Kubernetes environments, go to the Service Marketplace of your subaccount in the SAP BTP cockpit and select Kyma or Kubernetes in the environment filter.

You can find information about planned SAP BTP, Kyma Runtime features in the product road map.

About Cloud Native Buildpacks

Cloud Native Buildpacks embrace best practices and secure standards. For example:

  • Resulting images use an unprivileged user.
  • Builds are reproducible.
  • A Software Bill of Materials (SBoM) for all dependencies is baked into the image.
  • Auto-detection: there's no need to manually select base images.

Additionally, Cloud Native Buildpacks can easily be plugged together to fulfill more complex requirements. For example, the ca-certificates buildpack enables adding certificates to the system trust store at build and runtime. When using Cloud Native Buildpacks, you continuously benefit from best practices coming from the community without any changes required.

Learn more about Cloud Native Buildpacks Concepts

One way of using Cloud Native Buildpacks in CI/CD is by utilizing the cnbBuild step of Project "Piper". This does not require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps Pipelines.
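
Here's a sketch of a possible Project "Piper" configuration for the cnbBuild step; the values are illustrative, and the full parameter list is in the step documentation:

yaml
# .pipeline/config.yml (illustrative values for a Node.js CAP service)
steps:
  cnbBuild:
    containerImageName: bookshop-srv
    containerImageTag: latest
    containerRegistryUrl: your-sample-registry.com
    path: gen/srv
    buildpacks:
      - gcr.io/paketo-buildpacks/nodejs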

Learn more about Support for Cloud Native Buildpacks in Jenkins

Get Access to a Cluster

You can either purchase a Kyma cluster from SAP, create a personal trial account, or sign up for the free tier offering to get an SAP-managed Kyma Kubernetes cluster.

Get Access to a Container Registry

SAP BTP doesn't provide a container registry.

You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premise or in your own cloud infrastructure. However, keep in mind that the Kubernetes cluster must be able to reach the container registry from its network.

  • The use of a public container registry gives everyone access to your container images.
  • In a private container registry, your container images are protected. You will need to configure a pull secret to allow your cluster to access it.

Set Up Your Cluster for a Public Container Registry

Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.

Set Up Your Cluster for a Private Container Registry

To use a docker image from a private repository, you need to create an image pull secret and configure this secret for your containers.
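
For example, you can create a docker-registry secret with kubectl; the registry, user, and namespace values are placeholders:

sh
# Create an image pull secret for the private registry
kubectl create secret docker-registry <image pull secret name> \
    --docker-server=<your-container-registry> \
    --docker-username=<your-user> \
    --docker-password=<your-password-or-api-token> \
    -n bookshop-namespace

This is the secret name referenced by the global.imagePullSecret.name setting shown earlier.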

WARNING

It's recommended to use a technical user with read-only permission for this secret, because users with access to the Kubernetes cluster can easily reveal the password from the secret.