    Using Databases

    This guide will help you learn about providing initial data, deploying your database content to SQLite and SAP HANA, using native database features, and managing schema evolution.

    Providing Initial Data

    CSV files in your project are picked up by deployments for both SQLite and SAP HANA. If you’ve accidentally deployed such data to a productive database, see this troubleshooting entry on how to recover from this situation.

    Please note though that for SAP HANA, initial data provided by reuse modules is not deployed automatically. Instead, copy the files manually or through a custom script that is executed before cds build.

    The following conventions apply:

    • The files must be located in the db/csv, db/data, or db/src/csv folders.
    • Each file contains data for one entity.
    • File names must follow the pattern namespace-entity.csv.
      Pattern for nested entities: namespace-entity.nestedEntity.csv.
      Examples: my.bookshop-Books.csv, or my.bookshop-Books.ISBN.csv.
    • Each file must start with a header line that lists the needed element names.
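
    A minimal example of such a file, here db/data/my.bookshop-Books.csv, assuming the Books entity has ID, title, and author elements (illustrative names):

    ID,title,author
    201,Wuthering Heights,Emily Brontë
    207,Jane Eyre,Charlotte Brontë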

    🚫 Danger
    On SAP HANA, only use CSV files for configuration data that can’t be changed by application users. CSV files are deployed as hdbtabledata, which assumes exclusive ownership of the data: it’s overwritten with the next application deployment. To avoid such a situation, you can use the include_filter option that hdbtabledata offers.

    Deployment

    Get an overview of your deployment options.

    SQLite

    The fastest way to get your application running is to use a local SQLite database via the sqlite3 npm module, which is a devDependency of your project. The cds deploy --to sqlite command deploys the database parts of the project to a local SQLite database. It does the following:

    • Creates an SQLite database file in your project.
    • Drops existing tables and views, and re-creates them according to the CDS model.
    • Deploys CSV files with initial data.
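
    You can also deploy to a dedicated database file, for example (the file name is illustrative):

    cds deploy --to sqlite:my.db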

    See it in action

    SAP HANA

    When you move from the development phase to the production phase, use SAP HANA Cloud as your database. There are two ways to include SAP HANA in your setup: use SAP HANA in hybrid mode, meaning your services run locally and connect to your database in the cloud, or run your whole application on SAP Business Technology Platform. Both options work in trial as well as in productive accounts. The following configuration steps assume that you’ve provisioned, set up, and started an SAP HANA Cloud instance, for example in the trial environment. If you need to prepare your SAP HANA instance first, see How to Get an SAP HANA Cloud Instance for SAP Business Technology Platform, Cloud Foundry environment to learn about your options.

    Enhance Project Configuration for SAP HANA Cloud

    To prepare the project, execute:

    cds add hana
    

    This configures deployment for SAP HANA to use the hdbtable and hdbview formats. The default format of hdbcds isn’t available on SAP HANA Cloud.
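
    In package.json, this corresponds to the following setting, shown here as a minimal sketch:

    {
      "cds": {
        "hana": { "deploy-format": "hdbtable" }
      }
    }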

    For Node.js

    No further configuration is necessary. For your information, this is what the previous command changed in package.json (a sketch of the result follows the list):

    • The hdb driver for SAP HANA is added as a dependency.
    • A datasource of type sql is added in the cds.requires.db block. See Node.js configuration for more details.
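
    A minimal sketch of the resulting package.json (details such as the hdb version vary):

    {
      "dependencies": {
        "hdb": "^0.18.0" // version is illustrative
      },
      "cds": {
        "requires": {
          "db": { "kind": "sql" }
        }
      }
    }
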
    For Java

    See the Use SAP HANA as the Database for a CAP Java Application tutorial for the rest of the configuration.

    Deploy using cds deploy

    cds deploy --to hana lets you deploy just the database parts of the project to an SAP HANA instance. The server application (the Node.js or Java part) still runs locally and connects to the remote database instance, allowing for fast development roundtrips.

    Make sure that you’re logged in to Cloud Foundry. Then in the project root folder, just execute:

    cds deploy --to hana
    

    Behind the scenes, cds deploy --to hana does the following:

    • Compiles the CDS model to SAP HANA files (usually in gen/db or db/gen).
    • Generates hdbtabledata files for the CSV files in the project. If an hdbtabledata file is already present next to the CSV files, no new file is generated.
    • Creates a Cloud Foundry service of type hdi-shared, which creates an HDI container. Also, you can explicitly specify the name like so: cds deploy --to hana:<myService>.
    • Starts @sap/hdi-deploy locally. Should you need a tunnel to access the database, you can specify its address with --tunnel-address <host:port>.
    • Puts default-env.json in the project root. With this information, cds watch/run can connect to the HDI container at runtime using the production profile (--production).
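
    With default-env.json in place, you can then run the application locally against the remote HDI container, for example:

    cds watch --production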

    Learn more about the deployment using HDI

    If you run into issues, see the Troubleshooting guide.

    Deploy Using cf deploy or cf push

    See the Deploying to Cloud Foundry guide for information about how to deploy the complete application to SAP BTP.

    Native Features

    Using SQLite and SAP HANA Functions

    CAP samples demonstrate how you can use native database functions of SQLite and SAP HANA in one code base. There is also a code tour available for it.

    Functions

    You can use native database functions inside your CDS model, as long as they conform to the standard syntax of function(param1, param2). The compiler doesn’t do any expansions for such cases - references to structured elements remain structured. In addition to specifying the function with parameters, you also need to provide an alias for the select item and a type.

    entity Books {
        key id: Integer;
        name: String;
        description: String;
        author: String;
    };
    
    entity BookPreview as select from Books {
        id,
        // HANA function IFNULL
        IFNULL(description, name) as shorttext: String,
        author
    };
    

    Using native functions means that deployment fails on other databases where these functions don’t exist. These errors only come up at deployment time, not at compile time. To use native SAP HANA functions but still use SQLite for development, some configuration is needed. There’s a CAP sample illustrating this. In short: the database-specific files are separated into a sqlite and a hana folder. With the development profile, cds build is configured to pull in the sqlite sources; with the production profile, the sources from hana are used.
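
    One way to make the datasource follow the active profile is a profile-aware configuration in package.json, shown here as a minimal sketch (the sample additionally separates the database-specific sources into the sqlite and hana folders):

    {
      "cds": {
        "requires": {
          "db": {
            "[development]": { "kind": "sqlite" },
            "[production]": { "kind": "hana" }
          }
        }
      }
    }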

    For more complex functions, where the syntax isn’t supported by the compiler, a native .hdbview can be used. See Using Native SAP HANA Artifacts for more details.
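
    For the BookPreview example above, a hand-written .hdbview source might look roughly like this (a sketch; the actual table and column names are derived from your CDS model):

    VIEW BookPreview AS
      SELECT id,
             IFNULL(description, name) AS shorttext,
             author
      FROM Books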

    Schema Evolution

    CAP supports database schema updates by detecting changes to the CDS model when executing the CDS build. If the underlying database offers built-in schema migration techniques, compatible changes can be applied to the database without any data loss or the need for additional migration logic. Incompatible changes like deletions are also detected, but require manual resolution, as they would lead to data loss.

    Change                                 Detected Automatically
    Adding fields                          Yes
    Deleting fields                        Yes
    Renaming fields                        No (1)
    Changing datatype of fields            Yes
    Changing type parameters               Yes
    Changing associations/compositions     Yes
    Renaming associations/compositions     No (1)
    Renaming entities                      No

    (1) Rename operations on fields or associations aren’t detected as such. Instead, corresponding ADD and DROP statements are rendered, requiring manual resolution.

    ❗ Warning
    Currently there’s no framework support for incompatible schema changes that require scripted data migration steps (like changing a field constraint from NULL to NOT NULL). However, the CDS build does detect those changes and renders them as non-executable statements, requesting the user to take manual resolution steps. We recommend avoiding those changes in productive environments.

    Schema Evolution and Multitenancy/Extensibility

    There’s full support for schema evolution when the cds-mtx library is used for multitenancy handling. It ensures that all schema changes during base-model upgrades are rolled out to the tenant databases.

    ❗ Warning
    Tenant-specific extensibility using the cds-mtx library isn’t supported yet. Right now you can’t activate extensions on entities annotated with @cds.persistence.journal.

    Schema Updates with SAP HANA

    All schema updates in SAP HANA are applied using SAP HANA Deployment Infrastructure (HDI) design-time artifacts, which are auto-generated during CDS build execution. For backward compatibility, the default artifact type is still .hdbcds.

    This will be changed to .hdbtable/.hdbview artifact generation, as support for .hdbcds has been discontinued in SAP HANA Cloud.

    Compared to .hdbcds, schema updates using .hdbtable deployments are a challenge for tables with large data volumes. Schema changes with .hdbtable are applied using temporary table generation to preserve the data. As this can lead to long deployment times, support for .hdbmigrationtable artifact generation has been added. The Migration Table artifact type uses explicit versioning and migration tasks. Modifications of the database table are explicitly specified in the design-time file and carried out on the database table exactly as specified. This saves the cost of an internal table-copy operation. When a new version of an already existing table is deployed, HDI performs only the migration steps that haven’t been applied yet.

    Deploy Artifact Transitions as Supported by HDI

    Current format       To hdbcds    To hdbtable    To hdbmigrationtable
    hdbcds               -            Yes            No
    hdbtable             No           -              Yes
    hdbmigrationtable    No           Yes            -

    ❗ Warning
    Direct migration from .hdbcds to .hdbmigrationtable isn’t supported by HDI. A deployment using .hdbtable is required upfront. Learn more in the Enhance Project Configuration for SAP HANA Cloud section. During the transition from .hdbtable to .hdbmigrationtable you have to deploy version=1 of the .hdbmigrationtable artifact, which must not include any migration steps.

    HDI supports the hdbcds > hdbtable > hdbmigrationtable migration flow without data loss. Even going back from .hdbmigrationtable to .hdbtable is possible. Keep in mind that you lose the migration history in this case. For all transitions you want to execute in HDI, you need to specify an undeploy allowlist as described in HDI Delta Deployment and Undeploy Allow List in the SAP HANA documentation.
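
    For example, when transitioning from .hdbcds to .hdbtable, the undeploy allowlist in db/undeploy.json could look like this (the path pattern is illustrative and depends on your build layout):

    [
      "src/gen/**/*.hdbcds"
    ]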

    Enabling hdbmigrationtable Generation for Selected Entities During CDS Build

    If you are migrating an already deployed scenario to .hdbmigrationtable deployment, consider the remarks in Deploy Artifact Transitions as Supported by HDI.

    By default, all entities are still compiled to .hdbtable; you selectively choose the entities for which you want .hdbmigrationtable generation by annotating them with @cds.persistence.journal.

    Example:

    namespace data.model;

    @cds.persistence.journal
    entity LargeBook {
      key id : Integer;
      the_title : String(100);
      full_text : LargeString;
    }
    

    CDS build generates .hdbmigrationtable source files for annotated entities as well as a last-dev/csn.json source file representing the CDS model state of the last build.

    These source files have to be checked into the version control system.
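
    For the LargeBook entity above, the initially generated file might look roughly like this (a sketch; the exact DDL is produced by the compiler):

    == version=1
    COLUMN TABLE data_model_LargeBook (
      id INTEGER NOT NULL,
      the_title NVARCHAR(100),
      full_text NCLOB,
      PRIMARY KEY(id)
    )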

    Subsequent model changes are applied automatically as respective migration versions including the required schema update statements to accomplish the new target state. There are cases where you have to resolve or refactor the generated statements, like for reducing field lengths. As they can’t be executed without data loss (for example, String(100) -> String(50)), the required migration steps are only added as comments for you to process explicitly.

    Example:

    >>>>> Manual resolution required - DROP statements causing data loss are disabled
    >>>>> by default.
    >>>>> You may either:
    >>>>>   uncomment statements to allow incompatible changes, or
    >>>>>   refactor statements, e.g. replace DROP/ADD by single RENAME statement
    >>>>> After manual resolution delete all lines starting with >>>>>
    -- generated by cds-compiler version 1.49.0
    -- ALTER TABLE my_bookshop_Books DROP (title);
    -- ALTER TABLE my_bookshop_Books ADD (title NVARCHAR(50));
    

    Changing the type of a field causes CDS build to create a corresponding ALTER TABLE statement. Data type conversion rules are applied by the SAP HANA database as part of the deployment step. This may cause the deployment to fail if the column contents can’t be converted to the new format.

    Examples:

    1. Changing the type of a field from String to Integer may cause tenant updates to fail if existing content exceeds the Integer value range.
    2. Changing the type of a field from Decimal to Integer can succeed, but decimal places are truncated. Conversion fails if the content exceeds the Integer value range.
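
    For the first example, the rendered migration step in the .hdbmigrationtable file might look roughly like this (names are illustrative):

    == migration=2
    ALTER TABLE data_model_LargeBook ALTER (the_title INTEGER);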

    We recommend keeping .hdbtable deployment for entities where you expect low data volume. Every .hdbmigrationtable artifact becomes part of your versioned source code, creating a new migration version on every model change/build cycle. In turn, each such migration can require manual resolution. You can switch large-volume tables to .hdbmigrationtable at any time, keeping in mind that the existing .hdbtable design-time artifact needs to be undeployed.

    Sticking to .hdbtable for the actual application development phase avoids lots of initial migration versions that would need to be applied to the database schema.

    CDS build performs rudimentary checks on generated .hdbmigrationtable files:

    • CDS build fails if inconsistencies are encountered between the generated .hdbmigrationtable files and the last-dev/csn.json model state. For example, the last migration version not matching the table version is such an inconsistency.
    • An error is logged if manual resolution comments starting with >>>>> exist in one of the generated .hdbmigrationtable files. This ensures that manual resolution is performed before deployment.

    Advanced Options

    The following CDS configuration options are supported to manage .hdbmigrationtable generation.

    This hasn’t been finalized yet.

    {
      "hana" : {
        "journal": {
          "enable-drop": false,
          "change-mode": "alter" // "drop"
        },
        // ...
      }
    }
    

    The "enable-drop" option determines whether incompatible model changes are rendered as is (true) or manual resolution is required (false). The default value is false.

    The change-mode option determines whether ALTER TABLE ... ALTER ("alter") or ALTER TABLE ... DROP ("drop") statements are rendered for data type related changes. To ensure that any kind of model change can be successfully deployed to the database, you can switch the "change-mode" to "drop", keeping in mind that any existing data will be deleted for the corresponding column. See hdbmigrationtable Generation for more details. The default value is "alter".

    Reserved Words and Keywords

    Reserved words have a special meaning and can’t be used, for example, as identifiers. The list of reserved words depends on the database.

    The CDS compiler issues warnings if a reserved word of SQLite or SAP HANA is used. However, for other databases this is not the case. Consult the reference documentation of the respective database for its list of reserved words.

    There are also reserved words related to SAP Fiori.