You can run your CAP application in the Kyma Runtime. This runtime of SAP Business Technology Platform is the SAP-managed offering for the Kyma project. This guide helps you run your CAP applications on the SAP BTP Kyma Runtime.
This guide is available for Node.js and Java. Press v to switch, or use the toggle.
Like Kubernetes, Kyma is a platform for running containerized workloads. The service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else needed to run them are described by Kubernetes resources.
Consequently, two kinds of artifacts are needed to run applications on Kubernetes:

- Container images
- Kubernetes resources
The following diagram shows the steps to run on the SAP BTP Kyma Runtime:
We recommend using Cloud Native Buildpacks to transform source code (or artifacts) into container images. For local development, Cloud Native Buildpacks can be easily consumed using the pack CLI, which is a prerequisite for the next steps.
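As a sketch of such a local build, the following command uses the pack CLI with a Paketo builder. The image name, project path, builder, and buildpack are example values, not prescribed by this guide; adjust them to your project and registry.

```shell
# Build a container image from the compiled service with the pack CLI.
# Builder/buildpack names are examples from the Paketo project; the
# registry hostname and image name are placeholders.
pack build my-registry.example.com/my-app-srv \
  --path gen/srv \
  --builder paketobuildpacks/builder-jammy-base \
  --buildpack paketo-buildpacks/nodejs
```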
Docker images are identified by their hash or by one or more tags. The build has already tagged your Docker images. To push the images to your container registry, add tags that start with the registry's hostname.
Upload your docker images by repeating the following steps for each image:
Add a tag for the remote container registry to a local docker image:
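The two steps can be sketched as follows; the registry hostname, image name, and tag are placeholders for your own values.

```shell
# Tag the locally built image with the remote registry's hostname,
# then push it. Hostname, repository, and tag are placeholders.
docker tag my-app-srv my-registry.example.com/my-app-srv:latest
docker push my-registry.example.com/my-app-srv:latest
```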
The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the chart/values.yaml file based on the services used in the requires section of the CAP configuration (for example, in package.json).
You can use the following services in your configuration:
Enables the creation of an XSUAA service instance. See details for Node.js and Java projects.
parameters → xsappname
Name of the XSUAA application. Overwrites the value from the xs-security.json file. Must be unique per subaccount.
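As an illustrative config fragment, an XSUAA entry in chart/values.yaml might look similar to the following; the application name is a placeholder, and the exact keys generated for your project are authoritative.

```yaml
# Hypothetical excerpt from chart/values.yaml: an XSUAA service instance
# with an explicit xsappname overriding the value from xs-security.json.
xsuaa:
  serviceOfferingName: xsuaa
  servicePlanName: application
  parameters:
    xsappname: my-app   # placeholder; must be unique per subaccount
```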
Use cds add connectivity to add a volume to your srv deployment.
Create an instance of the SAP BTP Connectivity service with plan connectivity_proxy and a service binding before deploying the first application that requires it. With this plan, a proxy to the connectivity service is installed into your Kyma cluster, which may take a few minutes. The connectivity proxy uses the first instance created in a cluster for authentication. This instance must not be deleted as long as connectivity is used.
The volume you've added to your srv deployment is needed to provide additional connection information beyond what's available from the service binding.
In the added volumes, replace the value of <your-connectivity-binding> with the name of the binding that you created earlier. If the binding was created in a different namespace, create a secret with the details from the binding and use that secret instead.
You don't have to edit RELEASE-NAME in the configMap property. It is passed as a template string and will be replaced with your actual release name by Helm.
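The file generated by cds add connectivity is authoritative; purely as an illustrative sketch (names and nesting here are assumptions), such a volume definition could look similar to:

```yaml
# Illustrative sketch only -- rely on the file generated by
# `cds add connectivity`. Replace <your-connectivity-binding> with the
# name of your binding; RELEASE-NAME is replaced by Helm.
srv:
  additionalVolumes:
    - name: connectivity-secret
      volumeMount:
        mountPath: /bindings/connectivity
        readOnly: true
      projected:
        sources:
          - secret:
              name: <your-connectivity-binding>
          - configMap:
              name: "RELEASE-NAME-connectivity-proxy-info"
              optional: false
```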
Services have the following configuration options:

| Property | Type | Description |
| --- | --- | --- |
| fullnameOverride | string | Use instead of the generated name |
| serviceOfferingName | string | Technical service offering name from service catalog |
| servicePlanName | string | Technical service plan name from service catalog |
| externalName | string | The name for the service instance in SAP BTP |
| customTags | array of string | List of custom tags describing the service instance, copied to the ServiceBinding secret in the key called tags |
| parameters | object | Object with service parameters |
| parametersFrom | array of object | List of secrets from which parameters are populated |

Some services support the provisioning of additional configuration parameters. For the list of supported parameters, check the documentation of the particular service offering.
The jsonParameters key can also be specified using the --set-file flag while installing or upgrading the Helm release. For example, jsonParameters for the xsuaa property can be defined using the following command:
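A minimal sketch of such a command, assuming a release named my-release and the chart in the ./chart folder (both placeholders):

```shell
# Pass the content of xs-security.json as jsonParameters for the
# xsuaa property while installing the Helm release.
helm install my-release ./chart \
  --set-file xsuaa.jsonParameters=xs-security.json
```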
Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template.
You can run cds add helm again to update your Helm chart. It has the following behavior for modified files:
Your changes to chart/values.yaml are preserved. Only missing or new properties are added by cds add helm.
If you modify any of the other generated files, they will no longer be updated by cds add helm. The command issues a warning about that. To discard your changes, delete the modified files and run cds add helm again.
Adding new files to the Helm chart does not conflict with cds add helm.
A modification-free approach to changing files is to use Kustomize as a post-processor for your Helm chart. This can be useful for small changes if you don't want to branch off from the generated cds add helm content.
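Helm supports this via its --post-renderer option, which pipes the rendered manifests through an executable. As a sketch (script and file names are assumptions), a minimal wrapper could look like this, together with a kustomization.yaml in the same directory that lists all.yaml under resources and contains your patches:

```shell
#!/bin/sh
# kustomize-post-renderer.sh: Helm pipes the rendered manifests to stdin.
# Write them to a file referenced by kustomization.yaml, then let
# kustomize apply the patches and emit the result to stdout.
cat > all.yaml
kustomize build .
```

It would then be invoked with something like `helm upgrade my-release ./chart --post-renderer ./kustomize-post-renderer.sh`.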
You can find a list of SAP BTP services in the Discovery Center. To find out whether a service is supported in the Kyma and Kubernetes environment, go to the Service Marketplace of your subaccount in the SAP BTP Cockpit and select Kyma or Kubernetes in the environment filter.
You can find information about planned SAP BTP, Kyma Runtime features in the product road map.
Auto detection: no need to manually select base images.
Additionally, Cloud Native Buildpacks can be easily plugged together to fulfill more complex requirements. For example, the ca-certificates buildpack enables adding additional certificates to the system trust store at build and runtime. When using Cloud Native Buildpacks, you continuously benefit from best practices coming from the community without any changes required.
One way of using Cloud Native Buildpacks in CI/CD is by utilizing the cnbBuild step of Project "Piper". This does not require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps Pipelines.
You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premise or in your own cloud infrastructure. Keep in mind, however, that the Kubernetes cluster must be able to reach the container registry from its network.
The use of a public container registry gives everyone access to your container images.
In a private container registry, your container images are protected. You will need to configure a pull secret to allow your cluster to access it.
Set Up Your Cluster for a Public Container Registry
Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.
Set Up Your Cluster for a Private Container Registry
To use a Docker image from a private repository, you need to create an image pull secret and configure this secret for your containers.
It's recommended to use a technical user with only read permission for this secret, because users with access to the Kubernetes cluster can easily reveal the password from the secret.
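As a sketch of creating such a pull secret with kubectl (the secret name, registry hostname, and credentials are placeholders):

```shell
# Create an image pull secret for the private registry, using a
# technical user with read-only permission. All values are placeholders.
kubectl create secret docker-registry container-registry \
  --docker-server=my-registry.example.com \
  --docker-username=technical-user \
  --docker-password='<read-only-token>'
```

The secret can then be referenced as an image pull secret in your chart's values, so the cluster can pull your images.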