Using Databases

This guide is about defining and deploying databases for your application — from a local SQLite database for fast development roundtrips to SAP HANA Cloud for production.



The following sections give you an overview of your deployment options and explain each of them in detail.


The fastest way to get your application running is to use a local SQLite database via the npm module sqlite3, which is a devDependency of your project. The command cds deploy --to sqlite deploys the database parts of the project to a local SQLite database. It:

  • Creates an SQLite database file in your project.
  • Drops existing tables and views, and re-creates them according to the CDS model.
  • Deploys CSV files with initial data.
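The steps above can be sketched as follows (assuming the cds CLI from the @sap/cds-dk package is installed):

```sh
# Deploy the database parts to a local SQLite file:
# drops/re-creates tables and views, loads CSV data
cds deploy --to sqlite

# Run the server locally against that database
cds run
```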



Moving closer to production, you can use SAP HANA Cloud in your project. There are two ways to include SAP HANA in your setup: use SAP HANA in hybrid mode, meaning you run your services locally and connect to your database in the cloud, or run your whole application on SAP Cloud Platform. Both options work in trial as well as in productive accounts. The following configuration steps assume that you’ve provisioned, set up, and started an SAP HANA Cloud instance, for example in the trial environment. If you need to prepare your SAP HANA instance first, see How to Get an SAP HANA Instance on SAP Cloud Platform Cloud Foundry Environment to learn about your options.

Enhance Project Configuration for SAP HANA Cloud

On SAP HANA Cloud, CDS models are deployed through the hdbtable and hdbview formats instead of hdbcds. That way, you can still deploy on any SAP HANA but are ready to switch to SAP HANA Cloud at any time.

Add the following in the package.json file in the root folder of your project:

  "cds": {
    "hana" : { "deploy-format": "hdbtable" }
  }

As a result, .hdbtable and .hdbview files are generated in the (gen/)db/src/gen/ folder.

Do not use the db/package.json file. For Java projects, you can add this configuration to .cdsrc.json instead:
{ "hana" : { "deploy-format": "hdbtable" } }

For Java

See here for the rest of the configuration.

For Node.js

Back in package.json, make sure that there’s a db data source of kind sql. With that you declare the requirement for an SQL database. By default (development profile), this equals sqlite. With the production profile this equals hana.

"cds": {
  "requires": {
    "db": {
      "kind": "sql"
    }
  }
}

This way you don’t need to modify this file if you want to switch between the two databases. Use the --production parameter in commands like cds build to enforce the production profile.
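Combined with the deploy-format setting from earlier, a minimal cds section in the root package.json could look like this (a sketch — real projects typically contain further entries):

```json
{
  "cds": {
    "hana": { "deploy-format": "hdbtable" },
    "requires": {
      "db": { "kind": "sql" }
    }
  }
}
```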

For the Node.js runtime system to connect to an SAP HANA Cloud instance, add the @sap/hana-client driver for SAP HANA as a dependency to your project:

npm add @sap/hana-client --save

With hdb, there’s a leaner Node.js driver available, which can be installed in the same way. See here for a feature comparison.

Deploy Using cds deploy

cds deploy --to hana lets you deploy just the database parts of the project to an SAP HANA instance. The server application (the Node.js or Java part) still runs locally and connects to the remote database instance, allowing for fast development roundtrips.

Make sure that you’re logged in to Cloud Foundry. Then in the project root folder, just execute:

cds deploy --to hana

Behind the scenes, cds deploy --to hana does the following:

  • Compiles the CDS model to SAP HANA files (usually in gen/db, or db/gen)
  • Generates hdbtabledata files for the CSV files in the project. If an hdbtabledata file is already present next to the CSV files, no new file is generated.
  • Creates a Cloud Foundry service of type hdi-shared, which creates an HDI container. Also, you can explicitly specify the name like so: cds deploy --to hana:<myService>
  • Starts @sap/hdi-deploy locally. Should you need a tunnel to access the database, you can specify its address with --tunnel-address <host:port>.
  • Puts default-env.json in the project root. With this information cds watch/run can connect to the HDI container at runtime using the production profile (--production).
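The generated default-env.json essentially mirrors the VCAP_SERVICES environment variable that Cloud Foundry would provide at runtime. A heavily abbreviated sketch (service name illustrative, credentials elided):

```json
{
  "VCAP_SERVICES": {
    "hana": [
      {
        "name": "my-project-db",
        "tags": ["hana"],
        "credentials": { "...": "connection data for the HDI container" }
      }
    ]
  }
}
```

Treat this file like a secret — it contains live database credentials and shouldn’t be committed to version control.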

If you run into issues, see the Troubleshooting guide.

Deploy Using cf deploy or cf push

See the Deploying to Cloud Foundry guide for how to deploy the complete application to SAP Cloud Platform.

Providing Initial Data

CSV files in your project are picked up by deployments for both SQLite and SAP HANA. The following conventions apply:

  • The files must be located in the folders db/csv, db/data, or db/src/csv.
  • They contain data for one entity each. File names must follow the pattern <namespace>-<entity>.csv, for example, my.bookshop-Books.csv.
  • They must start with a header line that lists the needed element names.
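For example, assuming an entity Books in the namespace my.bookshop with elements ID, title, and stock, a matching file db/data/my.bookshop-Books.csv could look like this:

```csv
ID,title,stock
201,Wuthering Heights,12
207,Jane Eyre,11
```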