
    Deploy to Kyma Runtime

    You can run your CAP application in the Kyma Runtime. This runtime of the SAP Business Technology Platform is the SAP-managed offering for the Kyma project. This guide helps you run your CAP applications on SAP BTP Kyma Runtime.

    There are still some limitations for using CAP on Kyma. Most notably, if you're using a trial account, service instances for SAP HANA Cloud and HDI containers need to be created on Cloud Foundry.


    Like Kubernetes, Kyma is a platform for running containerized workloads. The service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else that is needed to run them are described by Kubernetes resources.

    In consequence, two kinds of artifacts are needed to run applications on Kubernetes:

    1. Container images
    2. Kubernetes resources

    The following diagram shows the steps to run on the SAP BTP Kyma Runtime:

    1. Add a Helm chart
    2. Build container images
    3. Push container images to a container registry
    4. Deploy your application by applying Kubernetes resources


    If you’re new to this topic, we’ve got you covered with the End-to-End tutorial Deploy Your CAP Application on SAP BTP Kyma Runtime covering all the steps and how to fulfill the prerequisites in detail.

    Make yourself familiar with Kyma and Kubernetes. CAP doesn’t provide consulting on it.

    Prepare for Production

    The detailed procedure is described in the Deploy to Cloud Foundry guide. Run this command to fast-forward:

    cds add hana,xsuaa --for production

    Add Helm Chart

    CAP provides a configurable Helm chart for Node.js and Java applications.

    cds add helm

    This command adds the Helm chart to the chart folder of your project.

    The files in the chart folder support the deployment of your CAP service, database, and UI content, and the creation of instances for SAP BTP services.

    Learn more about CAP Helm chart.

    Build Images

    We recommend using Cloud Native Buildpacks to transform source code (or artifacts) into container images. For local development, Cloud Native Buildpacks can be consumed using the pack CLI, which is a prerequisite for the next steps.

    Learn more about Cloud Native Buildpacks
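    As an example, one way to install the pack CLI locally is the Buildpacks project's Homebrew tap (one of several installation options; see the Buildpacks documentation for other platforms):

    ```shell
    # Install the pack CLI via the Buildpacks Homebrew tap (macOS/Linux)
    brew install buildpacks/tap/pack

    # Verify the installation
    pack --version
    ```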

    Build CAP Node.js Image

    Do the productive build for your application, which writes into the gen/srv folder:

    cds build --production

    Build the image:

    pack build bookshop-srv \
         --path gen/srv \
         --buildpack gcr.io/paketo-buildpacks/nodejs \
         --builder paketobuildpacks/builder:base \
         --env BP_NODE_RUN_SCRIPTS=""

    The pack CLI builds the image that contains the build result in the gen/srv folder and the required npm packages by using the Paketo Node.js Buildpack that is based on the Paketo base builder.

    Find the resulting docker image bookshop-srv in your local docker registry:

    docker images

    Build CAP Java Image

    Add the cds-feature-k8s dependency to your pom.xml:

    	<!-- Features -->
    	<dependency>
    		<groupId>com.sap.cds</groupId>
    		<artifactId>cds-feature-k8s</artifactId>
    		<scope>runtime</scope>
    	</dependency>

    Build your Java project:

    mvn package

    Build the docker image using the SapMachine and Java buildpacks:

    pack build bookshop-srv \
            --path srv/target/*-exec.jar \
            --buildpack gcr.io/paketo-buildpacks/sap-machine \
            --buildpack gcr.io/paketo-buildpacks/java \
            --builder paketobuildpacks/builder:base \
            --env SPRING_PROFILES_ACTIVE=cloud \
            --env BP_JVM_VERSION=11

    We recommend SapMachine as the Java buildpack.

    Build Approuter Image

    pack build bookshop-approuter \
         --path app \
         --buildpack gcr.io/paketo-buildpacks/nodejs \
         --builder paketobuildpacks/builder:base \
         --env BP_NODE_RUN_SCRIPTS=""

    Build Database Image

    Do the productive build:

    cds build --production

    Build the docker image:

    For Node.js projects, the deployer content is in the gen/db folder:

    pack build bookshop-hana-deployer \
         --path gen/db \
         --buildpack gcr.io/paketo-buildpacks/nodejs \
         --builder paketobuildpacks/builder:base \
         --env BP_NODE_RUN_SCRIPTS=""

    For Java projects, the deployer content is in the db folder:

    pack build bookshop-hana-deployer \
         --path db \
         --buildpack gcr.io/paketo-buildpacks/nodejs \
         --builder paketobuildpacks/builder:base \
         --env BP_NODE_RUN_SCRIPTS=""

    UI Deployment

    For UI access, you can use either the standalone or the managed Approuter, as explained in this blog.

    The cds add helm command supports deployment to the HTML5 application repository which can be used with both options.

    For that, create a container image with your UI files configured with the HTML5 application deployer.
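    As a sketch, such an image can also be built with pack; the folder gen/app and the image name bookshop-html5-deployer are assumptions that depend on your project layout:

    ```shell
    # Build an image containing the UI files for the HTML5 application deployer
    # (path and image name are illustrative)
    pack build bookshop-html5-deployer \
         --path gen/app \
         --buildpack gcr.io/paketo-buildpacks/nodejs \
         --builder paketobuildpacks/builder:base
    ```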

    You can find an example with SAPUI5 applications in the Kyma Launchpad Tutorial of the BTP End-To-End tutorial series.

    The cds add helm command also supports deployment of standalone Approuter.

    To deploy the standalone Approuter, create a container image for it.

    To configure backend destinations, have a look at the approuter configuration section.

    Approuter deployment is only supported for @sap/approuter:12.0.1 and above.

    Push Images

    The Kyma runtime needs reliable access to the docker images you provide, which a local registry alone can't guarantee. Therefore, upload the images to your container registry service.

    Log in to Your Container Registry

    docker login <your-registry> -u <your-user>

    Push Images to Your Container Registry

    Docker images can be identified by their hash or by one or more tags. The build has already tagged your docker images. Add tags starting with your container registry's hostname to push the images to it.

    Upload your docker images by repeating the following steps for each image:

    1. Add a tag for the remote container registry to a local docker image:

           docker tag <image-name>[:<image-version>] \
                <your-container-registry>/<image-name>[:<image-version>]

    2. Push the docker image:

           docker push <your-container-registry>/<image-name>[:<image-version>]
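    For example, with a hypothetical registry host registry.example.com, tagging and pushing the service image looks like this:

    ```shell
    # Tag the locally built image for the remote registry (hypothetical host and version)
    docker tag bookshop-srv registry.example.com/bookshop-srv:1.0.0

    # Push the tagged image
    docker push registry.example.com/bookshop-srv:1.0.0
    ```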

    Deploy Helm Chart

    Once your Helm chart is created, your container images are uploaded to a registry and your cluster is prepared, you’re almost set for deploying your Kyma application.

    Create Service Instances for SAP HANA Cloud

    1. Enable SAP HANA for your project as explained in the CAP guide for SAP HANA.
    2. Create an SAP HANA database.
    3. To create HDI containers from Kyma, you need to create a mapping between your namespace and SAP HANA Cloud instance.

    The tools plan of the SAP HANA Cloud service isn't available in trial accounts. But you can still use HDI containers created from Cloud Foundry with Kyma:

    1. Create an HDI container for your application using a Cloud Foundry account.
    2. Create a Kubernetes secret with the credentials from a service key from the Cloud Foundry account.
    3. Add additional properties to the Kubernetes secret.

         # <...>
         stringData:
           .metadata: |
             {
               "credentialProperties": [
                 { "name": "certificate", "format": "text" },
                 { "name": "database_id", "format": "text" },
                 { "name": "driver", "format": "text" },
                 { "name": "hdi_password", "format": "text" },
                 { "name": "hdi_user", "format": "text" },
                 { "name": "host", "format": "text" },
                 { "name": "password", "format": "text" },
                 { "name": "port", "format": "text" },
                 { "name": "schema", "format": "text" },
                 { "name": "url", "format": "text" },
                 { "name": "user", "format": "text" }
               ],
               "metaDataProperties": [
                 { "name": "plan", "format": "text" },
                 { "name": "label", "format": "text" },
                 { "name": "type", "format": "text" },
                 { "name": "tags", "format": "json" }
               ]
             }
           type: hana
           label: hana
           plan: hdi-shared
           tags: '[ "hana", "database", "relational" ]'
    4. Change the serviceInstanceName property inside the db binding in the srv section to fromSecret in the chart/values.yaml file:

           srv:
             bindings:
               db:
                 fromSecret: <your secret>
    5. Change the serviceInstanceName property inside the hana binding in the hana-deployer section to fromSecret in the chart/values.yaml file:

           hana-deployer:
             bindings:
               hana:
                 fromSecret: <your secret>
    6. Delete the hana property in the chart/values.yaml file.

    Please make sure that your HANA Cloud instance can be accessed from your Kyma cluster by setting the trusted source IP addresses.

    You can find an example in the Kyma HANA Cloud Tutorial of the BTP End-To-End tutorial series.

    Deploy using CAP Helm Chart

    Before deployment, you need to set the container image and cluster specific settings.

    Configure Access to Your Container Images

    Add your container image settings to your chart/values.yaml:

        global:
          imagePullSecret:
            name: [<image pull secret name>]
        srv:
          image:
            repository: <your-container-registry>/<srv-image-name>
            tag: <srv-image-version>

    If you use the HANA deployer, you additionally need to configure:

        hana-deployer:
          image:
            repository: <your-container-registry>/<db-deployer-image-name>
            tag: <db-deployer-image-version>

    If you use HTML5 applications, you additionally need to configure:

        html5-apps-deployer:
          image:
            repository: <your-container-registry>/<html5-deployer-image-name>
            tag: <html5-deployer-image-version>

    To use images on private container registries you need to create an image pull secret.
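    One way to create such a secret with kubectl (names here are placeholders; use the credentials of a technical user with read-only access):

    ```shell
    # Create an image pull secret for a private container registry
    kubectl create secret docker-registry container-registry \
        --docker-server=<your-registry> \
        --docker-username=<your-user> \
        --docker-password=<your-password> \
        --namespace <your-namespace>
    ```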

    If you didn’t specify a version in the image build, set the tag property to latest.

    Configure Cluster Domain

    Specify the domain of your cluster in the chart/values.yaml file so that the URL of your CAP service can be generated:

    global:
      domain: <cluster domain>

    You can use the pre-configured domain name for your Kyma cluster:

    kubectl get gateway -n kyma-system kyma-gateway \
            -o jsonpath='{.spec.servers[0].hosts[0]}'

    Configure Approuter Specifications

    1. Configure access to your Approuter image:

           approuter:
             image:
               repository: <your-container-registry>/<approuter-image-name>
               tag: <approuter-image-version>
    2. Replace <your-cluster-domain> with your cluster domain in the xsuaa section of values.yaml:

           xsuaa:
             serviceOfferingName: xsuaa
             servicePlanName: application
             parameters:
               xsappname: bookshop
               tenant-mode: dedicated
               oauth2-configuration:
                 redirect-uris:
                   - https://*.<your-cluster-domain>/**
    3. Add the destinations under backendDestinations in the values.yaml file:

           backendDestinations:
             backend:
               service: srv

      backend is the name of the destination. service points to the deployment name whose URL is used for this destination.

    Deploy CAP Helm Chart

    1. Log in to your Kyma cluster
    2. Deploy using helm command:

       helm upgrade --install bookshop ./chart \
            --namespace bookshop-namespace

      This installs the Helm chart from the chart folder with the release name bookshop in the namespace bookshop-namespace.

      With the helm upgrade --install command you can install a new chart as well as upgrade an existing chart.
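    After the deployment, you can inspect the release and watch the pods come up, for example:

    ```shell
    # Show the status of the Helm release
    helm status bookshop --namespace bookshop-namespace

    # List the pods created by the deployment
    kubectl get pods --namespace bookshop-namespace
    ```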

    Learn more about using a private registry with your Kyma cluster.

    Learn more about the CAP Helm chart settings.

    Learn more about using helm upgrade.

    Try out the CAP SFLIGHT and CAP for Java examples on Kyma.

    Customize Helm Chart

    About CAP Helm Chart

    The following files are added to a chart folder by executing cds add helm:

    File/Pattern Description
    values.yaml Configuration of the chart; The initial configuration is determined from your CAP project.
    Chart.yaml Chart metadata that is initially determined from the package.json file
    templates/NOTES.txt Message printed after installing or upgrading the Helm charts
    templates/*.yaml Template files for the Kubernetes resources
    templates/*.tpl Template libraries used in the template resources
    *.json Config files that are copied from your project folder or are generated by cds add

    Learn how to create a Helm chart from scratch from the Helm documentation.


    CAP’s Helm chart can be configured by the settings as explained below. Mandatory settings are marked as such in the tables.

    You can change the configuration by editing the chart/values.yaml file. When you call cds add helm again, your changes will be persisted and only missing default values are added.

    The general chart settings and used subcharts can be edited in the chart/Chart.yaml file.

    The helm CLI also offers you other options to overwrite settings from chart/values.yaml file:

    • Overwrite properties using the --set parameter.
    • Overwrite properties from a YAML file using the -f parameter.

    It is recommended to do the main configuration in the chart/values.yaml file and have additional YAML files for specific deployment types (dev, test, productive) and targets.
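    For example, a deployment-specific file values-dev.yaml could adjust single settings on top of chart/values.yaml; the file name and the overridden key are illustrative:

    ```shell
    # Deploy with an additional values file and a single property override
    helm upgrade --install bookshop ./chart \
        --namespace bookshop-namespace \
        -f values-dev.yaml \
        --set srv.image.tag=1.0.1
    ```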

    Service configuration files, such as xs-security.json, need to be inside the chart folder to be accessible by Helm. If cds add helm finds these files in the project root folder, it copies them into the chart folder. Otherwise, it generates the file in the chart folder, if that is supported for the file, or displays a warning.

    If you change the content of the config files in the chart folder, they will no longer be updated by cds add helm.

    Global Properties

    Property Description Mandatory
    imagePullSecret → name Name of secret to access the container registry (✓)1
    domain Kubernetes cluster ingress domain (used for application URLs)

    1: Mandatory only for private docker registries

    Deployment Properties

    The following properties are available for the srv key:

    Property Description Mandatory
    bindings Service Bindings  
    resources Kubernetes Container resources
    env Map of additional env variables  
    health_check Kubernetes Liveness, Readiness and Startup Probes  
    → liveness → path Endpoint for liveness and startup probe
    → readiness → path Endpoint for readiness probe
    → startupTimeout Wait time in seconds until the health checks are started  
    image Container image  

    You can explore more configuration options in the subchart’s directory chart/charts/web-application.

    SAP BTP Services

    The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the chart/values.yaml file based on the used services in the requires section of the CAP configuration (for example, package.json) file.

    You can use the following services in your configuration:

    Property Description Mandatory
    xsuaa Enables the creation of a XSUAA service instance. See details for Node.js and Java projects.  
    config Name of xs-security.json file in the chart folder.
    parameters → xsappname Name of XSUAA application. Overwrites the value from the xs-security.json file. (unique per subaccount)
    destinations Enables destination service; Use destinations  
    parameters → HTML5Runtime_enabled Set to true for use with Launchpad Service  
    connectivity Enables on-premise connectivity  
    event_mesh Enables SAP Event Mesh; messaging guide, how to enable the SAP Event Mesh  
    config Name of the event mesh configuration file  
    html5_apps_repo_host HTML5 Application Repository  

    Learn how to configure services in your Helm chart


    The deployment job of your database content to an HDI container can be configured using the hana-deployer section with the following properties:

    Property Description Mandatory
    bindings Service binding to the HDI container’s secret
    image Container image of the HDI deployer
    resources Kubernetes Container resources
    env Map of additional environment variables  

    HTML5 Applications

    The deployment job of HTML5 applications can be configured using the html5-apps-deployer section with the following properties:

    Property Description Mandatory
    image Container image of the HTML5 application deployer
    bindings Service bindings to XSUAA, destinations and HTML5 Application Repository Host services
    resources Kubernetes Container resources
    env Map of additional environment variables  
    → SAP_CLOUD_SERVICE Name for your business service (unique per subaccount)

    Run cds add html5-repo to automate the setup for HTML5 application deployment.

    Backend Destinations

    Backend destinations may be required for HTML5 applications or for Approuter deployment. They can be configured using the backendDestinations section with the following properties:

    Property Description Mandatory
    (key) Name of backend destination  
    → service Target Kubernetes service (e.g. srv)  

    Connectivity Service

    Use cds add connectivity to add a volume to your srv deployment and a connectivity section that creates the service instance of the SAP BTP Connectivity service with the service plan connectivity_proxy.

    Create an instance of the SAP BTP Connectivity service with plan connectivity_proxy and a service binding, before deploying the first application that requires it. Using this plan, a proxy to the connectivity service gets installed into your Kyma cluster. This may take a few minutes. The connectivity proxy uses the first created instance in a cluster for authentication and therefore it must not be deleted as long as connectivity is used.

    The volume you’ve added to your srv deployment is needed to add additional connection information beyond what’s available from the service binding.

    Please note that the Helm release name is not prepended to the name of the connectivity service binding secret (<your app name>-connectivity-binding). The same applies to the configuration map with the additional connection information (<your app name>-connectivity-proxy-info).

    Arbitrary Service

    These are the steps to create and bind to an arbitrary service, using the binding of the feature toggle service to the CAP application as an example:

    1. In the templates folder, create a file named feature-flags.yaml with the following content:
       {{- include "cap.service-instance" $ }}
    2. Add the service configuration and the binding in the values.yaml file:

           feature-flags:
             serviceOfferingName: feature-flags
             servicePlanName: lite
             config: feature-flags.json
           srv:
             bindings:
               feature-flags:
                 serviceInstanceName: feature-flags

      The file name has to match the value of the serviceInstanceName property.

    Multiple .yaml files need to be created to bind to multiple services.

    Configuration Options for Services

    Services have the following configuration options:

    Property Description Mandatory
    fullNameOverride Use instead of the generated name  
    enabled Service instance will be created (default: true)  
    serviceOfferingName Technical service offering name from service catalog
    servicePlanName Technical service plan name from service catalog
    config File name of JSON configuration file in chart folder  
    parameters Object with service parameters  

    If both parameters and config are specified, the values in parameters override those read from config.

    Configuration Options for Service Bindings

    Property Description Mandatory
    (key) Name of the service binding  
    secretFrom Bind to Kubernetes secret (✓)1
    serviceInstanceName Bind to service instance within the Helm chart (✓)1
    serviceInstanceFullname Bind to service instance using the absolute name (✓)1
    parameters Object with service binding parameters  

    1: Exactly one of these properties needs to be specified

    Configuration Options for Container Images

    Property Description Mandatory
    repository Full container image repository name
    tag Container image version tag (default: latest)  


    Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template.

    You can run cds add helm again to update your Helm chart. It has the following behavior for modified files:

    1. Your changes of the chart/values.yaml are persisted. Only missing or new properties will be added by cds add helm.
    2. If you modify any of the other generated files, they will no longer be updated by cds add helm. The command issues a warning about that. To withdraw your changes, just delete the modified files and run cds add helm again.


    1. Adding new files to the Helm chart does not conflict with cds add helm.
    2. A modification-free approach to changing files is to use Kustomize as a post-processor for your Helm chart. This might be suitable for small changes if you don’t want to branch out from the generated cds add helm content.

    Additional Information

    SAP BTP Services and Features

    You can find a list of SAP BTP services in the Discovery Center. To find out if a service is supported in the Kyma and Kubernetes environment, go to the Service Marketplace of your Subaccount in the SAP BTP Cockpit and select Kyma or Kubernetes in the environment filter.

    You can find information about planned SAP BTP, Kyma Runtime features in the product road map.

    About Cloud Native Buildpacks

    Cloud Native Buildpacks provide advantages such as embracing best practices and secure standards, for example:

    • Resulting images use an unprivileged user.
    • Builds are reproducible.
    • A Software Bill of Materials (SBoM) for all dependencies is baked into the image.
    • Auto-detection: there’s no need to manually select base images.

    Additionally, Cloud Native Buildpacks can easily be plugged together to fulfill more complex requirements. For example, the ca-certificates buildpack enables adding additional certificates to the system trust store at build and run time. When using Cloud Native Buildpacks, you can continuously benefit from the best practices of the community without any changes required.

    Learn more about Cloud Native Buildpacks Concepts

    One way of using Cloud Native Buildpacks in CI/CD is by utilizing the cnbBuild step of Project “Piper”. This does not require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps Pipelines.

    Learn more about Support for Cloud Native Buildpacks in Jenkins

    Get Access to a Cluster

    You can either purchase a Kyma cluster from SAP, create a personal trial account, or sign up for the free tier offering to get an SAP-managed Kyma Kubernetes cluster.

    Get Access to a Container Registry

    SAP BTP doesn’t provide a container registry.

    You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premises or in your own cloud infrastructure. Keep in mind that the Kubernetes cluster must be able to reach the container registry from its network.

    • The use of a public container registry gives everyone access to your container images.
    • In a private container registry, your container images are protected. You need to configure a pull secret to allow your cluster to access it.

    Setup Your Cluster for a Public Container Registry

    Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.

    Setup Your Cluster for a Private Container Registry

    To use a docker image from a private repository, you need to create an image pull secret and configure this secret for your containers.

    It is recommended to use a technical user with only read permission for this secret, because users with access to the Kubernetes cluster can easily reveal the password from the secret.