Deploy to Kyma Runtime
You can run your CAP application in the Kyma Runtime. This runtime of the SAP Business Technology Platform is the SAP-managed offering for the Kyma project. This guide helps you run your CAP applications on the SAP BTP Kyma Runtime.
This guide is available for Node.js and Java.
Use the toggle in the title bar or press v to switch.
Overview
Like Kubernetes, Kyma is a platform for running containerized workloads. The service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else needed to run them are described by Kubernetes resources.
Consequently, two kinds of artifacts are needed to run applications on Kubernetes:
- Container images
- Kubernetes resources
The following diagram shows the steps to run on the SAP BTP Kyma Runtime:
Prerequisites
- You prepared your project as described in the Deploy to Cloud Foundry guide.
- Use a Kyma enabled Trial Account or learn how to get access to a Kyma cluster.
- You need a Container Image Registry
- Get the required SAP BTP service entitlements
- Download and install the following command line tools:
WARNING
Make yourself familiar with Kyma and Kubernetes. CAP doesn't provide consulting on it.
Prepare for Production
The detailed procedure is described in the Deploy to Cloud Foundry guide. Run this command to fast-forward:
cds add hana,xsuaa --for production
Add Helm Chart
CAP provides a configurable Helm chart for Node.js and Java applications.
cds add helm
This command adds the Helm chart to the chart folder of your project with three files: values.yaml, Chart.yaml, and values.schema.json.
During cds build, the gen/chart folder is generated. This folder contains all the files required to deploy the Helm chart. Files from the chart folder in the root of the project are copied into the generated folder.
The files in the gen/chart folder support the deployment of your CAP service, database and UI content, and the creation of instances for BTP services.
Learn more about CAP Helm chart.
Build Images
We'll be using the Containerize Build Tool to build the images. The modules are configured in a containerize.yaml descriptor file, which we generate with:
cds add containerize
Configure Image Repository
Specify the repository where you want to push the images:
```yaml
...
repository: <your-container-registry>
```
WARNING
You should be logged in to the above repository to be able to push images to it. You can use docker login <your-container-registry> -u <your-user> to log in.
Now, we use the ctz build tool to build all the images:
ctz containerize.yaml
This will start containerizing your modules based on the configuration in the specified file. Once it's done, it asks whether you want to push the images. Type y and press Enter to push your images. You can also add the --push flag to the above command to skip this prompt. For more detailed logs, use the --log flag.
Learn more about Containerize Build Tool
UI Deployment
For UI access, you can use either the standalone or the managed App Router as explained in this blog.
The cds add helm command supports deployment to the HTML5 application repository, which can be used with both options.
For that, create a container image with your UI files configured with the HTML5 application deployer.
The cds add helm command also supports deployment of a standalone approuter.
To configure backend destinations, have a look at the approuter configuration section.
Deploy Helm Chart
Once your Helm chart is created, your container images are uploaded to a registry and your cluster is prepared, you're almost set for deploying your Kyma application.
Create Service Instances for SAP HANA Cloud
- Enable SAP HANA for your project as explained in the CAP guide for SAP HANA.
- Create an SAP HANA database.
- To create HDI containers from Kyma, you need to create a mapping between your namespace and SAP HANA Cloud instance.
Set trusted source IP addresses
Make sure that your SAP HANA Cloud instance can be accessed from your Kyma cluster by setting the trusted source IP addresses.
Deploy using CAP Helm Chart
Before deployment, you need to set the container image and cluster specific settings.
Configure Access to Your Container Images
Add your container image settings to your chart/values.yaml:
```yaml
...
global:
  domain: <your-kyma-domain>
  imagePullSecret:
    name: <your-imagepull-secret>
  image:
    registry: <your-container-registry>
    tag: latest
```
You can use the pre-configured domain name for your Kyma cluster:
kubectl get gateway -n kyma-system kyma-gateway \
-o jsonpath='{.spec.servers[0].hosts[0]}'
To use images on private container registries you need to create an image pull secret.
For the image registry, use the same value you specified in containerize.yaml.
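For a private registry, one way to create the required image pull secret is with kubectl. The secret name, user, token, and namespace below are examples; substitute your own registry credentials:

```shell
# Example only: replace registry host, user, token, and namespace
# with your own values before running.
kubectl create secret docker-registry container-registry \
  --docker-server=<your-container-registry> \
  --docker-username=<your-user> \
  --docker-password=<your-api-token> \
  --namespace <your-namespace>
```

The resulting secret name is what you put under global → imagePullSecret → name in chart/values.yaml.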
Configure Approuter Specifications
By default, srv-api and mtx-api (only in multitenant applications) are configured. If you're using any other destination, or your xs-app.json file has a different destination, update the destinations under the backendDestinations key in the values.yaml file:
```yaml
backendDestinations:
  backend:
    service: srv
```
backend is the name of the destination. service points to the deployment name whose URL will be used for this destination.
Deploy CAP Helm Chart
1. Execute cds build --production to generate the Helm chart in the gen folder.
2. Deploy using the helm command:

```sh
helm upgrade --install bookshop ./gen/chart \
  --namespace bookshop-namespace --create-namespace
```

This installs the Helm chart from the gen/chart folder with the release name bookshop in the namespace bookshop-namespace.
TIP
With the helm upgrade --install command, you can install a new chart as well as upgrade an existing chart.
This process can take a few minutes to complete and creates log output like the following:
[…]
The release bookshop is installed in namespace [namespace].
Your services are available at:
[workload] - https://bookshop-[workload]-[namespace].[configured-domain]
[…]
Copy and open this URL in your web browser. It's the URL of your application.
INFO
If a standalone approuter is present, the srv and sidecar aren't exposed and only the approuter URL is logged. If no approuter is present, srv and sidecar are exposed and their URLs are logged as well.
Learn more about using a private registry with your Kyma cluster.
Learn more about the CAP Helm chart settings.
Learn more about using helm upgrade.
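To verify the deployment, you can inspect the release and its workloads. The namespace and release name below follow the bookshop example above:

```shell
# Check that all pods are running and review the release status.
kubectl get pods -n bookshop-namespace
helm status bookshop -n bookshop-namespace
# Kyma exposes workloads through Istio; list the generated virtual services
# to see the configured hosts.
kubectl get virtualservices -n bookshop-namespace
```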
TIP
Try out the CAP SFLIGHT and CAP for Java examples on Kyma.
Customize Helm Chart
About CAP Helm Chart
The following files are added to a chart folder by executing cds add helm:
File/Pattern | Description |
---|---|
values.yaml | Configuration of the chart; The initial configuration is determined from your CAP project. |
Chart.yaml | Chart metadata that is initially determined from the package.json file |
values.schema.json | JSON Schema for values.yaml file |
The following files are added to a gen/chart folder, along with all the files in the chart folder in the root of the project, by executing cds build after adding helm:
File/Pattern | Description |
---|---|
templates/*.tpl | Template libraries used in the template resources |
templates/NOTES.txt | Message printed after installing or upgrading the Helm charts |
templates/*.yaml | Template files for the Kubernetes resources |
Learn how to create a Helm chart from scratch from the Helm documentation.
Configure
CAP's Helm chart can be configured with the settings explained below. Mandatory settings are marked with ✓.
You can change the configuration by editing the chart/values.yaml file. When you call cds add helm again, your changes are persisted and only missing default values are added.
The helm CLI also offers you other options to overwrite settings from the chart/values.yaml file:
- Overwrite properties using the --set parameter.
- Overwrite properties from a YAML file using the -f parameter.
TIP
It's recommended to do the main configuration in the chart/values.yaml file and have additional YAML files for specific deployment types (dev, test, production) and targets.
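For example, a production deployment could layer an override file on top of the defaults. The file name values-prod.yaml and the tag value are examples, not chart requirements:

```shell
# Base values come from gen/chart/values.yaml; values-prod.yaml overrides
# production-specific settings, and --set pins the image tag.
helm upgrade --install bookshop ./gen/chart \
  --namespace bookshop-namespace --create-namespace \
  -f values-prod.yaml \
  --set global.image.tag=1.2.3
```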
Global Properties
Property | Description | Mandatory |
---|---|---|
imagePullSecret → name | Name of secret to access the container registry | (✓)1 |
domain | Kubernetes cluster ingress domain (used for application URLs) | ✓ |
image → registry | Name of the container registry from where images are pulled | ✓ |
1: Mandatory only for private docker registries
Deployment Properties
The following properties are available for the srv key:
Property | Description | Mandatory |
---|---|---|
bindings | Service Bindings | |
resources | Kubernetes Container resources | ✓ |
env | Map of additional env variables | |
health | Kubernetes Liveness, Readiness and Startup Probes | |
→ liveness → path | Endpoint for liveness and startup probe | ✓ |
→ readiness → path | Endpoint for readiness probe | ✓ |
→ startupTimeout | Wait time in seconds until the health checks are started | |
image | Container image |
You can explore more configuration options in the subchart's directory gen/chart/charts/web-application.
SAP BTP Services
The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the chart/values.yaml file based on the services used in the requires section of the CAP configuration (for example, package.json).
You can use the following services in your configuration:
Property | Description | Mandatory |
---|---|---|
xsuaa | Enables the creation of a XSUAA service instance. See details for Node.js and Java projects. | |
parameters → xsappname | Name of XSUAA application. Overwrites the value from the xs-security.json file. (unique per subaccount) | ✓ |
parameters → HTML5Runtime_enabled | Set to true for use with Launchpad Service | |
connectivity | Enables on-premise connectivity | |
event-mesh | Enables SAP Event Mesh; messaging guide, how to enable the SAP Event Mesh | |
html5-apps-repo-host | HTML5 Application Repository | |
hana | HDI Shared Container | |
service-manager | Service Manager Container | |
saas-registry | SaaS Registry Service |
Learn how to configure services in your Helm chart
SAP HANA
The deployment job of your database content to an HDI container can be configured using the hana-deployer section with the following properties:
Property | Description | Mandatory |
---|---|---|
bindings | Service binding to the HDI container's secret | ✓ |
image | Container image of the HDI deployer | ✓ |
resources | Kubernetes Container resources | ✓ |
env | Map of additional environment variables |
HTML5 Applications
The deployment job of HTML5 applications can be configured using the html5-apps-deployer section with the following properties:
Property | Description | Mandatory |
---|---|---|
image | Container image of the HTML5 application deployer | ✓ |
bindings | Service bindings to XSUAA, destinations and HTML5 Application Repository Host services | ✓ |
resources | Kubernetes Container resources | ✓ |
env | Map of additional environment variables | |
→ SAP_CLOUD_SERVICE | Name for your business service (unique per subaccount) | ✓ |
TIP
Run cds add html5-repo to automate the setup for HTML5 application deployment.
Backend Destinations
Backend destinations may be required for HTML5 applications or for App Router deployment. They can be configured using the backendDestinations section with the following properties:
Property | Description |
---|---|
(key) | Name of backend destination |
service: (value) | Value is the target Kubernetes service (like srv ) |
If you want to add an external destination, you can do so by providing the external property like this:
```yaml
...
backendDestinations:
  srv-api:
    service: srv
  ui5:
    external: true
    name: ui5
    Type: HTTP
    proxyType: Internet
    url: https://ui5.sap.com
    Authentication: NoAuthentication
```
Our Helm chart will remove the external key and add the rest of the keys as-is to the environment variable.
Connectivity Service
Use cds add connectivity to add a volume to your srv deployment.
WARNING
Create an instance of the SAP BTP Connectivity service with plan connectivity_proxy and a service binding before deploying the first application that requires it. Using this plan, a proxy to the connectivity service gets installed into your Kyma cluster. This can take a few minutes. The connectivity proxy uses the first created instance in a cluster for authentication. This instance must not be deleted as long as connectivity is used.
The volume you've added to your srv deployment is needed to provide additional connection information beyond what's available from the service binding.
```yaml
srv:
  ...
  additionalVolumes:
    - name: connectivity-secret
      volumeMount:
        mountPath: /bindings/connectivity
        readOnly: true
      projected:
        sources:
          - secret:
              name: <your-connectivity-binding>
              optional: false
          - secret:
              name: <your-connectivity-binding>
              optional: false
              items:
                - key: token_service_url
                  path: url
          - configMap:
              name: "RELEASE-NAME-connectivity-proxy-info"
              optional: false
```
In the volumes added, replace the value of <your-connectivity-binding> with the binding that you created earlier. If the binding was created in a different namespace, you need to create a secret with the details from the binding and use that secret.
TIP
You don't have to edit RELEASE-NAME in the configMap property. It's passed as a template string and will be replaced with your actual release name by Helm.
Arbitrary Service
These are the steps to create and bind to an arbitrary service, using the binding of the feature toggle service to the CAP application as an example:
1. In the chart/Chart.yaml file, add an entry to the dependencies array:

```yaml
dependencies:
  ...
  - name: service-instance
    alias: feature-flags
    version: 0.1.0
```
2. Add the service configuration and the binding in the chart/values.yaml file:

```yaml
feature-flags:
  serviceOfferingName: feature-flags
  servicePlanName: lite
...
srv:
  bindings:
    feature-flags:
      serviceInstanceName: feature-flags
```
The alias property in the dependencies array must match the property added in the root of chart/values.yaml and the value of serviceInstanceName in the binding.
WARNING
There should be at least one service instance created by cds add helm if you want to bind an arbitrary service.
Configuration Options for Services
Services have the following configuration options:
Property | Type | Description | Mandatory |
---|---|---|---|
fullNameOverride | string | Use instead of the generated name | |
serviceOfferingName | string | Technical service offering name from service catalog | ✓ |
servicePlanName | string | Technical service plan name from service catalog | ✓ |
externalName | string | The name for the service instance in SAP BTP | |
customTags | array of string | List of custom tags describing the service instance, will be copied to ServiceBinding secret in the key called tags | |
parameters | object | Object with service parameters | |
jsonParameters | string | Some services support the provisioning of additional configuration parameters. For the list of supported parameters, check the documentation of the particular service offering. | |
parametersFrom | array of object | List of secrets from which parameters are populated. |
The jsonParameters key can also be specified using the --set-file flag while installing or upgrading the Helm release. For example, jsonParameters for the xsuaa property can be defined using the following command:
helm install bookshop ./chart --set-file xsuaa.jsonParameters=xs-security.json
You can explore more configuration options in the subchart's directory gen/chart/charts/service-instance.
Configuration Options for Service Bindings
Property | Description | Mandatory |
---|---|---|
(key) | Name of the service binding | |
secretFrom | Bind to Kubernetes secret | (✓)1 |
serviceInstanceName | Bind to service instance within the Helm chart | (✓)1 |
serviceInstanceFullname | Bind to service instance using the absolute name | (✓)1 |
parameters | Object with service binding parameters |
1: Exactly one of these properties must be specified
Configuration Options for Container Images
Property | Description | Mandatory |
---|---|---|
repository | Full container image repository name | ✓ |
tag | Container image version tag (default: latest ) |
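For example, the image of the srv workload could be pinned to a specific tag. The repository name below is an illustrative assumption; use your own registry path:

```yaml
srv:
  image:
    repository: <your-container-registry>/bookshop-srv  # example name
    tag: 1.0.0
```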
Modify
Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template.
You can run cds add helm again to update your Helm chart. It has the following behavior for modified files:
- Your changes to the chart/values.yaml and chart/Chart.yaml files will not be overwritten. Only new or missing properties will be added by cds add helm.
- To modify any of the generated files, such as templates or subcharts, copy the files from the gen/chart folder and place them at the same level inside the chart folder. After the next cds build execution, the generated chart will contain the modified files.
- If you want to add custom files, such as templates or subcharts, place them in the chart folder at the same level where you want them to be in the gen/chart folder. They will be copied as is.
Extend
- Adding new files to the Helm chart doesn't conflict with cds add helm.
- A modification-free approach to changing files is to use Kustomize as a post-processor for your Helm chart. This might be useful for small changes if you don't want to branch out from the generated cds add helm content.
Additional Information
SAP BTP Services and Features
You can find a list of SAP BTP services in the Discovery Center. To find out if a service is supported in the Kyma and Kubernetes environments, go to the Service Marketplace of your subaccount in the SAP BTP Cockpit and select Kyma or Kubernetes in the environment filter.
You can find information about planned SAP BTP, Kyma Runtime features in the product road map.
Using Service Instances Created on Cloud Foundry
To bind service instances created on Cloud Foundry to a workload (srv, hana-deployer, html5-deployer, approuter, or sidecar) in the Kyma environment, do the following:
In your cluster, create a secret with credentials from the service key of that instance.
Use the fromSecret property inside the bindings key of the workload.
For example, if you want to use an hdi-shared instance created on Cloud Foundry:
Create a Kubernetes secret with the credentials from a service key from the Cloud Foundry account.
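One way to do this, sketched with example names (service instance bookshop-db, service key bookshop-db-key, secret bookshop-db), is to read the service key with the cf CLI and store its credential properties in a secret:

```shell
# Example names only; requires the cf CLI and kubectl access to the cluster.
# Display the service key credentials:
cf service-key bookshop-db bookshop-db-key
# Create a secret with one entry per credential property from the key:
kubectl create secret generic bookshop-db \
  --from-literal=host=<host> \
  --from-literal=port=<port> \
  --from-literal=user=<user> \
  --from-literal=password=<password> \
  --from-literal=schema=<schema> \
  --from-literal=url=<url> \
  -n bookshop-namespace
```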
Add additional properties to the Kubernetes secret.
```yaml
stringData:
  # <…>
  .metadata: |
    {
      "credentialProperties": [
        { "name": "certificate", "format": "text" },
        { "name": "database_id", "format": "text" },
        { "name": "driver", "format": "text" },
        { "name": "hdi_password", "format": "text" },
        { "name": "hdi_user", "format": "text" },
        { "name": "host", "format": "text" },
        { "name": "password", "format": "text" },
        { "name": "port", "format": "text" },
        { "name": "schema", "format": "text" },
        { "name": "url", "format": "text" },
        { "name": "user", "format": "text" }
      ],
      "metaDataProperties": [
        { "name": "plan", "format": "text" },
        { "name": "label", "format": "text" },
        { "name": "type", "format": "text" },
        { "name": "tags", "format": "json" }
      ]
    }
  type: hana
  label: hana
  plan: hdi-shared
  tags: '[ "hana", "database", "relational" ]'
```
TIP
Update the values of the properties accordingly.
Change the serviceInstanceName property to fromSecret for each workload that has that service instance in bindings in the chart/values.yaml file:

```yaml
…
srv:
  bindings:
    db:
      fromSecret: <your secret>
```

```yaml
…
hana-deployer:
  bindings:
    hana:
      fromSecret: <your secret>
```
Delete the hana property in the chart/values.yaml file:

```yaml
…
hana:
  serviceOfferingName: hana
  servicePlanName: hdi-shared
…
```
Remove the corresponding hana entry from the dependencies array in the chart/Chart.yaml file:

```yaml
…
dependencies:
  …
  - name: service-instance
    alias: hana
    version: ">0.0.0"
…
```
About Cloud Native Buildpacks
Cloud Native Buildpacks provide advantages such as embracing best practices and secure standards:
- Resulting images use an unprivileged user.
- Builds are reproducible.
- A Software Bill of Materials (SBoM) for all dependencies is baked into the image.
- Automatic detection; there's no need to manually select base images.
Additionally, Cloud Native Buildpacks can easily be plugged together to fulfill more complex requirements. For example, the ca-certificates buildpack enables adding additional certificates to the system trust store at build time and runtime. When using Cloud Native Buildpacks, you continuously benefit from the best practices coming from the community, without any changes required.
Learn more about Cloud Native Buildpacks Concepts
One way of using Cloud Native Buildpacks in CI/CD is by utilizing the cnbBuild step of Project "Piper". This doesn't require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps pipelines.
Learn more about Support for Cloud Native Buildpacks in Jenkins
Get Access to a Cluster
You can either purchase a Kyma cluster from SAP, create your personal trial account, or sign up for the free tier offering to get an SAP-managed Kyma Kubernetes cluster.
Get Access to a Container Registry
SAP BTP doesn't provide a container registry.
You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premise or in your own cloud infrastructure. Keep in mind, however, that the Kubernetes cluster needs to access the container registry from its network.
- The use of a public container registry gives everyone access to your container images.
- In a private container registry, your container images are protected. You will need to configure a pull secret to allow your cluster to access it.
Set Up Your Cluster for a Public Container Registry
Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.
Set Up Your Cluster for a Private Container Registry
To use a container image from a private repository, you need to create an image pull secret and configure this secret for your containers.
WARNING
It's recommended to use a technical user with read-only permission for this secret, because users with access to the Kubernetes cluster can easily reveal the password from the secret.