Deploy to Kyma Runtime
You can run your CAP application in the Kyma Runtime. This runtime of the SAP Business Technology Platform is the SAP managed offering for the Kyma project. This guide helps you to run your CAP applications on SAP BTP Kyma Runtime.
This guide is available for Node.js and Java.
Overview
Like Kubernetes, on which it builds, Kyma is a platform for running containerized workloads. The service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else that is needed to run them are described by Kubernetes resources.
Consequently, two kinds of artifacts are needed to run applications on Kubernetes:
- Container images
- Kubernetes resources
The following diagram shows the steps to run on the SAP BTP Kyma Runtime:
- Add a Helm chart
- Build container images
- Push container images to a container registry
- Deploy your application by applying Kubernetes resources
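Assuming the bookshop sample used throughout this guide and a placeholder registry name, the four steps map roughly to the following commands (a sketch only; each step is explained in detail below):

```sh
# Add a Helm chart to the project (creates the chart/ folder)
cds add helm

# Build a container image from the productive build output
cds build --production
pack build bookshop-srv --path gen/srv \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base

# Push the image to your container registry
# (your-sample-registry.com is a placeholder)
docker tag bookshop-srv your-sample-registry.com/bookshop-srv
docker push your-sample-registry.com/bookshop-srv

# Deploy by applying the Kubernetes resources via Helm
helm upgrade --install bookshop ./chart \
  --namespace bookshop-namespace --create-namespace
```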
⓪ Prerequisites
- You prepared your project as described in the Deploy to Cloud Foundry guide.
- Use a Kyma-enabled trial account or learn how to get access to a Kyma cluster.
- You need a container image registry.
- Get the required SAP BTP service entitlements.
- Download and install the command line tools used in this guide: `kubectl`, `docker`, `pack`, and `helm`.
End-to-End tutorial
If you're new to this topic, we've got you covered with the End-to-End tutorial Deploy Your CAP Application on SAP BTP Kyma Runtime, which covers all the steps and explains how to fulfill the prerequisites in detail.
WARNING
Make yourself familiar with Kyma and Kubernetes; CAP doesn't provide consulting on them.
① Prepare for Production
The detailed procedure is described in the Deploy to Cloud Foundry guide. Run this command to fast-forward:
```sh
cds add hana,xsuaa --for production
```
② Add Helm Chart
CAP provides a configurable Helm chart for Node.js and Java applications.
```sh
cds add helm
```
This command adds the Helm chart to the chart folder of your project.
The files in the chart folder support the deployment of your CAP service, database, and UI content, and the creation of instances for SAP BTP services.
Learn more about CAP Helm chart.
③ Build Images
We recommend using Cloud Native Buildpacks to transform source code (or artifacts) into container images. For local development, Cloud Native Buildpacks can be easily consumed using the `pack` CLI, which is a prerequisite for the next steps.
Learn more about Cloud Native Buildpacks
Build CAP Node.js Image
Do the productive build for your application, which writes into the gen/srv folder:
```sh
cds build --production
```
Build the image:
```sh
pack build bookshop-srv \
  --path gen/srv \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=
```
The `pack` CLI builds the image that contains the build result in the gen/srv folder and the required npm packages, using the Paketo Node.js Buildpack that is based on the Paketo base builder.
Find the resulting docker image `bookshop-srv` in your local docker registry:
```sh
docker images
```
Build CAP Java Image
Add the `cds-feature-k8s` dependency to your pom.xml:
```xml
<dependencies>
  <!-- Features -->
  <dependency>
    <groupId>com.sap.cds</groupId>
    <artifactId>cds-feature-k8s</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```
Build your Java project:
```sh
mvn package
```
Build the docker image using the SapMachine and Java buildpacks:
```sh
pack build bookshop-srv \
  --path srv/target/*-exec.jar \
  --buildpack gcr.io/paketo-buildpacks/sap-machine \
  --buildpack gcr.io/paketo-buildpacks/java \
  --builder paketobuildpacks/builder:base \
  --env SPRING_PROFILES_ACTIVE=cloud \
  --env BP_JVM_VERSION=11
```
We recommend SapMachine as Java buildpack.
Build Approuter Image
```sh
pack build bookshop-approuter \
  --path app \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=
```
Build Database Image
Do the productive build:
```sh
cds build --production
```
Build the docker image. For Node.js projects, the database artifacts are written to the gen/db folder:

```sh
pack build bookshop-hana-deployer \
  --path gen/db \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=
```

For Java projects, use the db folder instead:

```sh
pack build bookshop-hana-deployer \
  --path db \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=
```
UI Deployment
For UI access, you can use either the standalone or the managed approuter, as explained in this blog.
The `cds add helm` command supports deployment to the HTML5 application repository, which can be used with both options. For that, create a container image with your UI files configured with the HTML5 application deployer. You can find an example with SAPUI5 applications in the Kyma Launchpad Tutorial of the BTP End-To-End tutorial series.
The `cds add helm` command also supports the deployment of a standalone approuter. For deploying a standalone approuter, create a container image. To configure backend destinations, have a look at the approuter configuration section.
WARNING
Approuter deployment is only supported for `@sap/approuter` version 12.0.1 and above.
④ Push Images
The Kyma runtime needs reliable access to the docker images you provide. A local registry alone doesn't guarantee this. Therefore, upload the images to your container registry service.
Log in to Your Container Registry
```sh
docker login <your-registry> -u <your-user>
```
Push Images to Your Container Registry
Docker images can be identified by their hash, or by one or multiple tags. The build has already tagged your docker images. Add tags starting with your container registry's hostname to push the images to it.
Upload your docker images by repeating the following steps for each image:
Add a tag for the remote container registry to a local docker image:

```sh
docker tag <image-name>[:<image-version>] \
  <your-container-registry>/<image-name>[:<image-version>]
```

For example:

```sh
docker tag bookshop-srv[:<image-version>] \
  your-sample-registry.com/bookshop-srv[:<image-version>]
```

Push a docker image:

```sh
docker push your-sample-registry.com/bookshop-srv[:<image-version>]
```
⑤ Deploy Helm Chart
Once your Helm chart is created, your container images are uploaded to a registry and your cluster is prepared, you're almost set for deploying your Kyma application.
Create Service Instances for SAP HANA Cloud
- Enable SAP HANA for your project as explained in the CAP guide for SAP HANA.
- Create an SAP HANA database.
- To create HDI containers from Kyma, you need to create a mapping between your namespace and SAP HANA Cloud instance.
The *Tools* plan of the SAP HANA Cloud service isn't available in trial accounts. But you can still use HDI containers created from Cloud Foundry with Kyma:
1. Create an HDI container for your application using a Cloud Foundry account.
2. Create a Kubernetes secret with the credentials from a service key from the Cloud Foundry account.
3. Add additional properties to the Kubernetes secret:
```yaml
stringData:
  # <…>
  .metadata: |
    {
      "credentialProperties": [
        { "name": "certificate", "format": "text" },
        { "name": "database_id", "format": "text" },
        { "name": "driver", "format": "text" },
        { "name": "hdi_password", "format": "text" },
        { "name": "hdi_user", "format": "text" },
        { "name": "host", "format": "text" },
        { "name": "password", "format": "text" },
        { "name": "port", "format": "text" },
        { "name": "schema", "format": "text" },
        { "name": "url", "format": "text" },
        { "name": "user", "format": "text" }
      ],
      "metaDataProperties": [
        { "name": "plan", "format": "text" },
        { "name": "label", "format": "text" },
        { "name": "type", "format": "text" },
        { "name": "tags", "format": "json" }
      ]
    }
  type: hana
  label: hana
  plan: hdi-shared
  tags: '[ "hana", "database", "relational" ]'
```
Change the `serviceInstanceName` property inside the `db` binding in the `srv` section to `fromSecret` in the chart/values.yaml file:

```yaml
…
srv:
  bindings:
    db:
      fromSecret: <your secret>
```
Change the `serviceInstanceName` property inside the `hana` binding in the `hana-deployer` section to `fromSecret` in the chart/values.yaml file:

```yaml
…
hana-deployer:
  bindings:
    hana:
      fromSecret: <your secret>
```
Delete the `hana` property in the chart/values.yaml file.
WARNING
Make sure that your SAP HANA Cloud instance can be accessed from your Kyma cluster by setting the trusted source IP addresses.
You can find an example in the "Set Up SAP HANA Cloud for Kyma" tutorial of the BTP End-To-End tutorial series.
Deploy using CAP Helm Chart
Before deployment, you need to set the container image and cluster specific settings.
Configure Access to Your Container Images
Add your container image settings to your chart/values.yaml:
```yaml
...
global:
  imagePullSecret:
    name: [<image pull secret name>]
...
srv:
  image:
    repository: <your-container-registry>/<srv-image-name>
    tag: <srv-image-version>
```
If you use the SAP HANA deployer, you additionally need to configure:
```yaml
hana-deployer:
  image:
    repository: <your-container-registry>/<db-deployer-image-name>
    tag: <db-deployer-image-version>
```
If you use HTML5 applications, you additionally need to configure:
```yaml
html5-apps-deployer:
  image:
    repository: <your-container-registry>/<html5-deployer-image-name>
    tag: <html5-deployer-image-version>
```
To use images on private container registries you need to create an image pull secret.
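For a private registry, the image pull secret can be created with `kubectl`; a sketch with placeholder values (the secret name `container-registry` is an example, use whatever name you configure under `imagePullSecret → name`):

```sh
# Create a docker-registry secret in the target namespace
# (server, user, and password are placeholders for your registry
#  and a read-only technical user)
kubectl create secret docker-registry container-registry \
  --docker-server=your-sample-registry.com \
  --docker-username=<technical-user> \
  --docker-password=<read-only-token> \
  --namespace bookshop-namespace
```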
If you didn't specify a version in the image build, set the `tag` property to `latest`.
Configure Cluster Domain
Specify the domain of your cluster in the chart/values.yaml file so that the URL of your CAP service can be generated:
```yaml
...
domain: <cluster domain>
```
You can use the pre-configured domain name for your Kyma cluster:
```sh
kubectl get gateway -n kyma-system kyma-gateway \
  -o jsonpath='{.spec.servers[0].hosts[0]}'
```
Configure Approuter Specifications
Configure access to your approuter image:
```yaml
approuter:
  image:
    repository: <your-container-registry>/<approuter-image-name>
    tag: <approuter-image-version>
```
Replace `<your-cluster-domain>` with your cluster domain in the `xsuaa` section of the values.yaml file:

```yaml
xsuaa:
  serviceOfferingName: xsuaa
  servicePlanName: application
  parameters:
    xsappname: bookshop
    tenant-mode: dedicated
    oauth2-configuration:
      redirect-uris:
        - https://*.<your-cluster-domain>/**
```
Add the destinations under `backendDestinations` in the values.yaml file:

```yaml
backendDestinations:
  backend:
    service: srv
```
`backend` is the name of the destination. `service` points to the deployment name whose URL will be used for this destination.
Deploy CAP Helm Chart
1. Log in to your Kyma cluster.
2. Deploy using the `helm` command:

```sh
helm upgrade --install bookshop ./chart \
  --namespace bookshop-namespace --create-namespace
```

This installs the Helm chart from the chart folder with the release name `bookshop` in the namespace `bookshop-namespace`.
TIP
With the `helm upgrade --install` command you can install a new chart as well as upgrade an existing chart.
Learn more about using a private registry with your Kyma cluster.
Learn more about the CAP Helm chart settings.
Learn more about using helm upgrade.
TIP
Try out the CAP SFLIGHT and CAP for Java examples on Kyma.
Customize Helm Chart
About CAP Helm Chart
The following files are added to the chart folder by executing `cds add helm`:
| File/Pattern | Description |
|---|---|
| values.yaml | Configuration of the chart; the initial configuration is determined from your CAP project. |
| Chart.yaml | Chart metadata that is initially determined from the package.json file |
| templates/NOTES.txt | Message printed after installing or upgrading the Helm charts |
| templates/*.yaml | Template files for the Kubernetes resources |
| templates/*.tpl | Template libraries used in the template resources |
Learn how to create a Helm chart from scratch from the Helm documentation.
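Based on the table above, the generated chart folder looks roughly like this (the concrete file names under templates/ vary by project):

```
chart/
├── Chart.yaml          # chart metadata
├── values.yaml         # chart configuration
└── templates/
    ├── NOTES.txt       # message printed after install/upgrade
    ├── *.yaml          # Kubernetes resource templates
    └── *.tpl           # template libraries
```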
Configure
CAP's Helm chart can be configured by the settings as explained below. Mandatory settings are marked with ✓.
You can change the configuration by editing the chart/values.yaml file. When you call `cds add helm` again, your changes are persisted and only missing default values are added.
The general chart settings and used subcharts can be edited in the chart/Chart.yaml file.
The `helm` CLI also offers you other options to overwrite settings from the chart/values.yaml file:

- Overwrite properties using the `--set` parameter.
- Overwrite properties from a YAML file using the `-f` parameter.
TIP
It is recommended to do the main configuration in the chart/values.yaml file and have additional YAML files for specific deployment types (dev, test, productive) and targets.
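Both override styles can be combined in one call; a sketch, where values-prod.yaml is a hypothetical deployment-specific file:

```sh
# Override a single property on the command line
# and layer a deployment-specific values file on top of chart/values.yaml
helm upgrade --install bookshop ./chart \
  --namespace bookshop-namespace \
  --set domain=my-cluster.example.com \
  -f values-prod.yaml
```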
Global Properties
| Property | Description | Mandatory |
|---|---|---|
| imagePullSecret → name | Name of secret to access the container registry | (✓)1 |
| domain | Kubernetes cluster ingress domain (used for application URLs) | ✓ |
1: Mandatory only for private docker registries
Deployment Properties
The following properties are available for the `srv` key:
| Property | Description | Mandatory |
|---|---|---|
| bindings | Service Bindings | |
| resources | Kubernetes Container resources | ✓ |
| env | Map of additional env variables | |
| health | Kubernetes Liveness, Readiness and Startup Probes | |
| → liveness → path | Endpoint for liveness and startup probe | ✓ |
| → readiness → path | Endpoint for readiness probe | ✓ |
| → startupTimeout | Wait time in seconds until the health checks are started | |
| image | Container image | |
You can explore more configuration options in the subchart's directory chart/charts/web-application.
SAP BTP Services
The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the chart/values.yaml file based on the services used in the `requires` section of the CAP configuration (for example, the package.json file).
You can use the following services in your configuration:
| Property | Description | Mandatory |
|---|---|---|
| xsuaa | Enables the creation of an XSUAA service instance. See details for Node.js and Java projects. | |
| parameters → xsappname | Name of XSUAA application. Overwrites the value from the xs-security.json file. (unique per subaccount) | ✓ |
| parameters → HTML5Runtime_enabled | Set to true for use with Launchpad Service | |
| connectivity | Enables on-premise connectivity | |
| event-mesh | Enables SAP Event Mesh; messaging guide, how to enable the SAP Event Mesh | |
| html5-apps-repo-host | HTML5 Application Repository | |
| hana | HDI Shared Container | |
| service-manager | Service Manager Container | |
| saas-registry | SaaS Registry | |
Learn how to configure services in your Helm chart
SAP HANA
The deployment job of your database content to an HDI container can be configured using the `hana-deployer` section with the following properties:
| Property | Description | Mandatory |
|---|---|---|
| bindings | Service binding to the HDI container's secret | ✓ |
| image | Container image of the HDI deployer | ✓ |
| resources | Kubernetes Container resources | ✓ |
| env | Map of additional environment variables | |
HTML5 Applications
The deployment job of HTML5 applications can be configured using the `html5-apps-deployer` section with the following properties:
| Property | Description | Mandatory |
|---|---|---|
| image | Container image of the HTML5 application deployer | ✓ |
| bindings | Service bindings to XSUAA, destinations and HTML5 Application Repository Host services | ✓ |
| resources | Kubernetes Container resources | ✓ |
| env | Map of additional environment variables | |
| → SAP_CLOUD_SERVICE | Name for your business service (unique per subaccount) | ✓ |
TIP
Run `cds add html5-repo` to automate the setup for HTML5 application deployment.
Backend Destinations
Backend destinations may be required for HTML5 applications or for approuter deployment. They can be configured using the `backendDestinations` section with the following properties:
| Property | Description |
|---|---|
| (key) | Name of backend destination |
| service: (value) | Value is the target Kubernetes service (like `srv`) |
Connectivity Service
Use `cds add connectivity` to add a volume to your `srv` deployment.
WARNING
Create an instance of the SAP BTP Connectivity service with plan `connectivity_proxy` and a service binding before deploying the first application that requires it. With this plan, a proxy to the connectivity service gets installed into your Kyma cluster, which may take a few minutes. The connectivity proxy uses the first created instance in a cluster for authentication. This instance must not be deleted as long as connectivity is used.
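On Kyma, such an instance and binding can be declared with the SAP BTP service operator's custom resources; a sketch, assuming the operator is installed in your cluster and with placeholder resource names:

```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: connectivity
spec:
  serviceOfferingName: connectivity
  servicePlanName: connectivity_proxy
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: connectivity-binding
spec:
  serviceInstanceName: connectivity
```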
The volume you've added to your `srv` deployment is needed to add additional connection information beyond what's available from the service binding.
```yaml
srv:
  [...]
  additionalVolumes:
    - name: connectivity-secret
      volumeMount:
        mountPath: /bindings/connectivity
        readOnly: true
      projected:
        sources:
          - secret:
              name: <your-connectivity-binding>
              optional: false
          - secret:
              name: <your-connectivity-binding>
              optional: false
              items:
                - key: token_service_url
                  path: url
          - configMap:
              name: "RELEASE-NAME-connectivity-proxy-info"
              optional: false
```
In the volumes added, replace the value of `<your-connectivity-binding>` with the binding that you created earlier. If the binding is created in a different namespace, you need to create a secret with details from the binding and use that secret.
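One way to mirror the binding's secret into your application namespace is to export and re-apply it; a sketch with placeholder names, assuming `jq` is available:

```sh
# Copy the binding secret from its source namespace into the app namespace,
# stripping fields that are specific to the original object
kubectl get secret <your-connectivity-binding> -n <source-namespace> -o json \
  | jq 'del(.metadata.namespace, .metadata.resourceVersion,
            .metadata.uid, .metadata.creationTimestamp,
            .metadata.ownerReferences)' \
  | kubectl apply -n bookshop-namespace -f -
```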
TIP
You don't have to edit `RELEASE-NAME` in the `configMap` property. It is passed as a template string and will be replaced with your actual release name by Helm.
SaaS Registry Service
The configuration for the `saas-registry` service is done by two keys in the values.yaml file:
| Property | Description | Mandatory |
|---|---|---|
| saas-registry | Used to create SaaS registry service instance | ✓ |
| saasRegistryParameters | Used to specify the parameters for SaaS registry service | ✓ |
Example:
```yaml
[...]
saas-registry:
  serviceOfferingName: saas-registry
  servicePlanName: application
  parametersFrom:
    - secretKeyRef:
        name: "RELEASE-NAME-saas-registry-secret"
        key: parameters
saasRegistryParameters:
  xsappname: bookshop
  appName: bookshop
  displayName: bookshop
  description: A simple self-contained bookshop service.
  category: "CAP Application"
  appUrls:
    getDependencies: "/-/cds/saas-provisioning/dependencies"
    onSubscription: "/-/cds/saas-provisioning/tenant/{tenantId}"
  onSubscriptionAsync: true
  onUnSubscriptionAsync: true
  onUpdateDependenciesAsync: true
  callbackTimeoutMillis: 300000
```
TIP
You don't have to edit `RELEASE-NAME` in the `secretKeyRef` property. It is passed as a template string and will be replaced with your actual release name by Helm.
Arbitrary Service
These are the steps to create and bind to an arbitrary service, using the binding of the feature toggle service to the CAP application as an example:
In the chart/Chart.yaml file, add an entry to the `dependencies` array:

```yaml
dependencies:
  ...
  - name: service-instance
    alias: feature-flags
    version: 0.1.0
```
Add the service configuration and the binding in the chart/values.yaml file:

```yaml
feature-flags:
  serviceOfferingName: feature-flags
  servicePlanName: lite
...
srv:
  bindings:
    feature-flags:
      serviceInstanceName: feature-flags
```
The `alias` property in the `dependencies` array must match the property added in the root of the chart/values.yaml file and the value of `serviceInstanceName` in the binding.
WARNING
There should be at least one service instance created by `cds add helm` if you want to bind an arbitrary service.
Configuration Options for Services
Services have the following configuration options:
| Property | Type | Description | Mandatory |
|---|---|---|---|
| fullNameOverride | string | Use instead of the generated name | |
| serviceOfferingName | string | Technical service offering name from service catalog | ✓ |
| servicePlanName | string | Technical service plan name from service catalog | ✓ |
| externalName | string | The name for the service instance in SAP BTP | |
| customTags | array of string | List of custom tags describing the service instance, will be copied to ServiceBinding secret in the key called tags | |
| parameters | object | Object with service parameters | |
| jsonParameters | string | Some services support the provisioning of additional configuration parameters. For the list of supported parameters, check the documentation of the particular service offering. | |
| parametersFrom | array of object | List of secrets from which parameters are populated. | |
The `jsonParameters` key can also be specified using the `--set-file` flag while installing or upgrading a Helm release. For example, `jsonParameters` for the `xsuaa` property can be defined using the following command:

```sh
helm install bookshop ./chart --set-file xsuaa.jsonParameters=xs-security.json
```
You can explore more configuration options in the subchart's directory chart/charts/service-instance.
Configuration Options for Service Bindings
| Property | Description | Mandatory |
|---|---|---|
| (key) | Name of the service binding | |
| secretFrom | Bind to Kubernetes secret | (✓)1 |
| serviceInstanceName | Bind to service instance within the Helm chart | (✓)1 |
| serviceInstanceFullname | Bind to service instance using the absolute name | (✓)1 |
| parameters | Object with service binding parameters | |

1: Exactly one of these properties needs to be specified
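For example, binding the `srv` deployment to a service instance defined in the same chart, with optional binding parameters, could look like this sketch in chart/values.yaml (names and values are illustrative):

```yaml
srv:
  bindings:
    my-service:                       # name of the service binding
      serviceInstanceName: my-service # instance defined in this chart
      parameters:                     # optional binding parameters
        key: value
```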
Configuration Options for Container Images
| Property | Description | Mandatory |
|---|---|---|
| repository | Full container image repository name | ✓ |
| tag | Container image version tag (default: `latest`) | |
Modify
Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template.
You can run `cds add helm` again to update your Helm chart. It has the following behavior for modified files:
- Your changes to the chart/values.yaml file are persisted. Only missing or new properties will be added by `cds add helm`.
- If you modify any of the other generated files, they will no longer be updated by `cds add helm`. The command will issue a warning about that. To withdraw your changes, just delete the modified files and run `cds add helm` again.
Extend
- Adding new files to the Helm chart doesn't conflict with `cds add helm`.
- A modification-free approach to changing files is to use Kustomize as a post-processor for your Helm chart. This might be usable for small changes if you don't want to branch out from the generated `cds add helm` content.
Additional Information
SAP BTP Services and Features
You can find a list of SAP BTP services in the Discovery Center. To find out if a service is supported in the Kyma and Kubernetes environment, go to the Service Marketplace of your subaccount in the SAP BTP Cockpit and select Kyma or Kubernetes in the environment filter.
You can find information about planned SAP BTP, Kyma Runtime features in the product road map.
About Cloud Native Buildpacks
Cloud Native Buildpacks provide advantages such as embracing best practices and secure standards like:
- Resulting images use an unprivileged user.
- Builds are reproducible.
- A Software Bill of Materials (SBoM) for all dependencies is baked into the image.
- Auto detection: no need to manually select base images.
Additionally, Cloud Native Buildpacks can be easily plugged together to fulfill more complex requirements. For example, the ca-certificates buildpack enables adding additional certificates to the system trust store at build and run time. When using Cloud Native Buildpacks, you can continuously benefit from the best practices coming from the community without any changes required.
Learn more about Cloud Native Buildpacks Concepts
One way of using Cloud Native Buildpacks in CI/CD is to utilize the `cnbBuild` step of Project "Piper". This doesn't require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps Pipelines.
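A minimal Piper configuration for this step could look like the following sketch (parameter names as per Project "Piper"'s cnbBuild documentation; all values are placeholders):

```yaml
# .pipeline/config.yml
steps:
  cnbBuild:
    containerImageName: bookshop-srv
    containerImageTag: latest
    containerRegistryUrl: your-sample-registry.com
    buildpacks:
      - gcr.io/paketo-buildpacks/nodejs
```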
Learn more about Support for Cloud Native Buildpacks in Jenkins
Get Access to a Cluster
You can either purchase a Kyma cluster from SAP, create your personal trial account, or sign up for the free tier offering to get an SAP-managed Kyma Kubernetes cluster.
Get Access to a Container Registry
SAP BTP doesn't provide a container registry.
You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premise or in your own cloud infrastructure. However, you need to consider that the Kubernetes cluster needs to access the container registry from its network.
- The use of a public container registry gives everyone access to your container images.
- In a private container registry, your container images are protected. You will need to configure a pull secret to allow your cluster to access it.
Set Up Your Cluster for a Public Container Registry
Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.
Set Up Your Cluster for a Private Container Registry
To use a docker image from a private repository, you need to create an image pull secret and configure this secret for your containers.
WARNING
It is recommended to use a technical user for this secret that has only read permission, because users with access to the Kubernetes cluster can easily reveal the password from the secret.