
    Using Databases

    This guide covers how CAP applications use databases: providing initial data, deploying to SQLite and SAP HANA, using native database features, schema evolution, and database constraints.

    Providing Initial Data

    CSV files in your project are picked up by deployments for both SQLite and SAP HANA. If you’ve accidentally deployed such data to a productive database, see this troubleshooting entry on how to recover from this situation.

    Please note though that for SAP HANA, initial data provided by reuse modules is not deployed automatically. Instead, copy the files manually or through a custom script that is executed before cds build.

    The following conventions apply:

    • The files must be located in the db/csv, db/data, or db/src/csv folders.
    • Each file contains data for one entity.
    • File names must follow the pattern namespace-entity.csv.
      Pattern for nested entities: namespace-entity.nestedEntity.csv.
      Examples: my.bookshop-Books.csv, or my.bookshop-Books.ISBN.csv.
    • They must start with a header line that lists the needed element names.
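    Following these conventions, a hypothetical my.bookshop-Books.csv (assuming a Books entity with elements ID, title, and stock; the values are made up for illustration) could look like this:

    ```csv
    ID;title;stock
    201;Wuthering Heights;12
    207;Jane Eyre;11
    ```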

    🚫 Danger
    On SAP HANA, only use CSV files for configuration data that can’t be changed by application users. CSV files are deployed as .hdbtabledata which assumes exclusive ownership of the data. It’s overwritten with the next application deployment. To avoid such a situation, you can use the include_filter option that .hdbtabledata offers.
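    As a rough sketch of how the include_filter option is used, an .hdbtabledata file placed next to the CSV file could look like the following. The table name, file name, and filter column are made up; consult the SAP HDI artifact documentation for the exact schema before relying on it:

    ```json
    {
      "format_version": 1,
      "imports": [
        {
          "target_table": "MY_BOOKSHOP_CONFIG",
          "source_data": {
            "data_type": "CSV",
            "file_name": "my.bookshop-Config.csv",
            "has_header": true
          },
          "import_settings": {
            "include_filter": [
              { "TYPE": "system" }
            ]
          }
        }
      ]
    }
    ```

    With an include_filter, the deployment only claims ownership of the rows matching the filter, leaving other rows in the table untouched.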

    CSV Import of Array-Typed Elements

    Values for elements with arrayed types can be specified in CSV files as well. The values need to be presented as JSON arrays as shown in the following example:

    entity Samples : cuid {
        records : array of {
            index: Integer;
            flag: Boolean
        }
    }
    

    A CSV file Samples.csv in folder db/data containing a single entity instance to be imported could look like this:

    ID;records
    ...;[{"index": 1, "flag": true}, {"index": 2, "flag": false}]
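    If such rows are produced by a data-preparation script, the array value can simply be serialized with JSON.stringify. A minimal Node.js sketch, with a made-up ID value:

    ```javascript
    // Build one CSV row for an entity with an arrayed element.
    // The ID value "8aa1cfa1" is made up for illustration.
    const records = [
      { index: 1, flag: true },
      { index: 2, flag: false }
    ];

    // Semicolon-separated, matching the example above; JSON.stringify
    // produces the compact JSON array expected in the CSV cell.
    const row = ["8aa1cfa1", JSON.stringify(records)].join(";");
    console.log(row);
    ```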
    

    Deployment

    Get an overview of your deployment options.

    SQLite

    The fastest way to get your application running is to use a local SQLite database via the sqlite3 npm module, which is a devDependency of your project. The cds deploy --to sqlite command deploys the database parts of the project to a local SQLite database. It does the following:

    • Creates an SQLite database file in your project.
    • Drops existing tables and views, and re-creates them according to the CDS model.
    • Deploys CSV files with initial data.
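    The corresponding datasource configuration in package.json typically looks like this (the database file name is just an example):

    ```json
    {
      "cds": {
        "requires": {
          "db": {
            "kind": "sqlite",
            "credentials": { "url": "db.sqlite" }
          }
        }
      }
    }
    ```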

    See it in action

    SAP HANA

    When you move from the development phase to the production phase, use SAP HANA Cloud as your database. There are two ways to include SAP HANA in your setup: use SAP HANA in hybrid mode, meaning your services run locally and connect to your database in the cloud, or run your whole application on SAP Business Technology Platform. Both are possible in trial as well as in productive accounts. The following configuration steps assume that you’ve provisioned, set up, and started an SAP HANA Cloud instance, for example in the trial environment. If you need to prepare your SAP HANA instance first, see How to Get an SAP HANA Cloud Instance for SAP Business Technology Platform, Cloud Foundry environment to learn about your options.

    Enhance Project Configuration for SAP HANA Cloud

    To prepare the project, execute:

    cds add hana
    

    This configures deployment for SAP HANA to use the hdbtable and hdbview formats. The default format of hdbcds isn’t available on SAP HANA Cloud.

    For Node.js

    No further configuration is necessary. For your information, this is what the previous command changed in package.json:

    • The hdb driver for SAP HANA is added as a dependency.
    • A datasource of type sql is added in the cds.requires.db block. See Node.js configuration for more details.
    For Java

    See the Use SAP HANA as the Database for a CAP Java Application tutorial for the rest of the configuration.

    Deploy using cds deploy

    cds deploy --to hana lets you deploy just the database parts of the project to an SAP HANA instance. The server application (the Node.js or Java part) still runs locally and connects to the remote database instance, allowing for fast development roundtrips.

    Make sure that you’re logged in to Cloud Foundry. Then in the project root folder, just execute:

    cds deploy --to hana
    

    Behind the scenes, cds deploy --to hana does the following:

    • Compiles the CDS model to SAP HANA files (usually in gen/db, or db/gen)
    • Generates .hdbtabledata files for the CSV files in the project. If an .hdbtabledata file is already present next to the CSV files, no new file is generated.
    • Creates a Cloud Foundry service of type hdi-shared, which creates an HDI container. Also, you can explicitly specify the name like so: cds deploy --to hana:<myService>.
    • Starts @sap/hdi-deploy locally. Should you need a tunnel to access the database, you can specify its address with --tunnel-address <host:port>.
    • Puts default-env.json in the project root. With this information, cds watch/run can connect to the HDI container at runtime using the production profile (--production).
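    The generated default-env.json is a local stand-in for the Cloud Foundry environment. Schematically it contains the HDI container's credentials under VCAP_SERVICES, roughly like the following (service name abbreviated, credential values replaced by placeholders):

    ```json
    {
      "VCAP_SERVICES": {
        "hana": [
          {
            "name": "myService",
            "tags": ["hana"],
            "credentials": {
              "url": "...",
              "user": "...",
              "password": "...",
              "schema": "..."
            }
          }
        ]
      }
    }
    ```

    Treat this file like a secret and keep it out of version control.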

    Learn more about the deployment using HDI

    If you run into issues, see the Troubleshooting guide.

    Deploy Using cf deploy or cf push

    See the Deploying to Cloud Foundry guide for information about how to deploy the complete application to SAP BTP.

    Native Features

    Using SQLite and SAP HANA Functions

    CAP samples demonstrate how you can use native database functions of SQLite and SAP HANA in one code base. There is also a code tour available for it.

    Functions

    You can use native database functions inside your CDS model, as long as they conform to the standard syntax of function(param1, param2). The compiler doesn’t do any expansions for such cases - references to structured elements remain structured. In addition to specifying the function with parameters, you also need to provide an alias for the select item and a type.

    entity Books {
        key id: Integer;
        name: String;
        description: String;
        author: String;
        price: Decimal;
    };
    
    entity BookPreview as select from Books {
        id,
        // HANA function IFNULL
        IFNULL(description, name) as shorttext: String,
        author
    };
    

    Using native functions means that deployment fails on other databases where these functions don’t exist. These errors only come up at deployment time, not at compile time. To use native SAP HANA functions but still use SQLite for development, some configuration is needed; see the section Using SQLite and SAP HANA Functions.

    The SQL standard defines some functions that are called without parentheses: current_date, current_time, current_timestamp, current_user, session_user, system_user. They can be used in CDS without parentheses, too.

    SAP HANA functions with non-standard syntax

    SAP HANA defines some more functions that are called without parentheses (current_connection, current_schema, current_transaction_isolation_level, current_utcdate, current_utctime, current_utctimestamp, sysuuid). In CDS, you have to call them with the standard syntax with parentheses, like current_connection().

    CDS supports SAP HANA aggregate functions with an additional order by clause in the argument list, like

    first_value(name order by price desc)
    

    For other functions, where the syntax isn’t supported by the compiler (for example, replace_regexp(... in ... with ...)), a native .hdbview can be used. See Using Native SAP HANA Artifacts for more details.

    Native Database Clauses

    CDS generates CREATE TABLE or CREATE VIEW statements for entity definitions. Not all clauses supported by SQL in CREATE TABLE or CREATE VIEW can be written directly in CDS syntax, in particular clauses that are specific to a certain database. To use such native database clauses in a CAP CDS model, you can provide arbitrary SQL snippets with the annotations @sql.prepend and @sql.append.

    For HDBCDS and HDBTABLE/HDBVIEW, the compiler does not check the SQL snippets in any way. For “normal” SQL, the character sequences ;, --, /* and */ are forbidden. The compiler does not evaluate or process these SQL snippets, but simply inserts them into the generated CREATE statements:

    • The value of @sql.append provided for an entity is added at the end of the statement generated by the compiler.
    • The value of @sql.append provided for an element of an entity is added at the end of the respective column definition, if a database table is generated for the entity. The annotation may not be used for elements of an entity that turns into a view.
    • Annotation @sql.prepend is only supported for entities that are translated to a table. The annotation must not be provided on element level. The annotation value is inserted immediately before the keyword TABLE. When generating HDBTABLE, there is an implicit @sql.prepend: 'COLUMN' that is overwritten by an explicitly provided @sql.prepend.

    Model:

    @sql.append: ```sql
                 GROUP TYPE foo
                 GROUP SUBTYPE bar
                 ```
    entity E {
      ...,
      @sql.append: 'FUZZY SEARCH INDEX ON'
      text: String(100);
    }
    
    @sql.append: 'WITH DDL ONLY'
    entity V as select from E { ... };
    

    Result:

    CREATE TABLE E (
      ...,
      text NVARCHAR(100) FUZZY SEARCH INDEX ON
    ) GROUP TYPE foo
    GROUP SUBTYPE bar;
    
    CREATE VIEW V AS SELECT ... FROM E WITH DDL ONLY;
    
    • If you refer to a column name in the annotation, you need to take care of a potential name mapping yourself, for example, for structured elements.
    • If you want to provide clauses for different databases, use separate annotation files with respective annotate statements and control them via build configurations, like explained previously for Using SQLite and SAP HANA Functions.
    • The annotations @sql.prepend and @sql.append are not allowed in a SaaS extension project.

    If you use native database clauses in entities where schema evolution has been enabled using annotation @cds.persistence.journal, see Schema Evolution Support of Native Database Clauses.

    ❗ Warning
    The compiler doesn’t check or process the provided SQL snippets in any way. You are responsible to ensure that the resulting statement is valid and doesn’t negatively impact your database or your application. We don’t provide support for problems caused by using this feature.

    Using Special Variables

    The CDS language supports some special variables that translate to database-specific expressions. When the compiler translates CQL to SQL, some of these special variables have a default translation to a suitable SQL expression. The translation can be controlled via the option variableReplacements. These variables start with a $; supported are $user, $session, $now, and $at.

    Configuring Variables

    Variables can be replaced using the compiler option variableReplacements. One way to pass this option is via the cdsc subsection of the cds configuration in package.json.

    The option expects an object whose top-level keys are the root variable names. Nested keys are the path steps, and the leaf values are the replacement values. The following sample illustrates this:

    {
      "variableReplacements": {
        "$user": {
          // replacement for $user and $user.id
          "id": "userid",
          // replacement for $user.locale
          "locale": "DE",
          "foo": {
            // replacement for $user.foo.bar
            "bar": "foobar"
          }
        },
        "$session": {
          "some": {
            "path": {
              // replacement for $session.some.path.firstName
              "firstName": "firstName",
              // replacement for $session.some.path.lastName
              "lastName": "lastName"
            }
          }
        }
      }
    }
    

    For different databases, different replacement values need to be defined. This can be achieved using profiles.
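    As a sketch of how this could look in package.json, the profile key (here [development], a made-up example) selects the replacement values per profile; the exact placement of profile keys may vary, so check the CDS configuration documentation:

    ```json
    {
      "cds": {
        "cdsc": {
          "[development]": {
            "variableReplacements": {
              "$user": { "id": "anonymous" }
            }
          }
        }
      }
    }
    ```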

    $user

    The $user variable can be used as-is, or its child properties id and locale can be used. $user is a shortcut for $user.id.

    The variables $user and $user.id are only supported out of the box for SAP HANA; for other database dialects, they need to be configured. For the dialects plain and sqlite, the default locale en is used for $user.locale.

    There is also the possibility to use arbitrary child properties - but without a valid configuration, those will lead to an error when compiling into database artifacts.

              $user(.id)                            $user.locale
    hana      SESSION_CONTEXT('APPLICATIONUSER')    SESSION_CONTEXT('LOCALE')
    sqlite    -                                     'en'
    plain     -                                     'en'

    $session

    The $session variable (and any arbitrary child property) is accepted during compilation, but without a valid configuration it leads to errors when generating database artifacts.

    $now

    The $now variable can be used to reference the current timestamp. The variable does not have any child properties. Configuration of this variable is not possible.

                             $now
    hana | sqlite | plain    CURRENT_TIMESTAMP

    $at

    The $at variable is used in the context of temporal data, but it can also be used independently. $at is a structured element with two subelements from and to. Configuration of this variable is not possible.

              $at.from                                       $at.to
    hana      TO_TIMESTAMP(SESSION_CONTEXT('VALID-FROM'))    TO_TIMESTAMP(SESSION_CONTEXT('VALID-TO'))
    sqlite    strftime('%Y-%m-%dT%H:%M:%S.000Z', 'now')      strftime('%Y-%m-%dT%H:%M:%S.000Z', 'now')
    plain     current_timestamp                              current_timestamp

    Schema Evolution

    CAP supports database schema updates by detecting changes to the CDS model when executing the CDS build. If the underlying database offers built-in schema migration techniques, compatible changes can be applied to the database without any data loss or the need for additional migration logic. Incompatible changes like deletions are also detected, but require manual resolution, as they would lead to data loss.

    Change                                  Detected Automatically
    Adding fields                           Yes
    Deleting fields                         Yes
    Renaming fields                         No (1)
    Changing datatype of fields             Yes
    Changing type parameters                Yes
    Changing associations/compositions      Yes
    Renaming associations/compositions      No (1)
    Renaming entities                       No

    (1) Rename operations on fields or associations aren’t detected as such. Instead, corresponding ADD and DROP statements are rendered, requiring manual resolution.

    ❗ Warning Currently there’s no framework support for incompatible schema changes that require scripted data migration steps (like changing field constraints NULL > NOT NULL). However, the CDS build does detect those changes and renders them as non-executable statements, requesting the user to take manual resolution steps. We recommend avoiding those changes in productive environments.

    Schema Evolution and Multitenancy/Extensibility

    There’s full support for schema evolution when the cds-mtx library is used for multitenancy handling. It ensures that all schema changes during base-model upgrades are rolled out to the tenant databases.

    ❗ Warning Tenant-specific extensibility using the cds-mtx library isn’t supported yet. Right now you can’t activate extensions on entities annotated with @cds.persistence.journal.

    Schema Updates with SAP HANA

    All schema updates in SAP HANA are applied using SAP HANA Deployment Infrastructure (HDI) design-time artifacts, which are auto-generated during CDS build execution. For backward compatibility, the default artifact type is still .hdbcds.

    This will be changed to .hdbtable/.hdbview artifact generation, as the support of .hdbcds has been discontinued in SAP HANA Cloud.

    Schema updates using .hdbtable deployments are a challenge for tables with large data volume compared to .hdbcds. Schema changes with .hdbtable are applied using temporary table generation to preserve the data. As this could lead to long deployment times, the support for .hdbmigrationtable artifact generation has been added. The Migration Table artifact type uses explicit versioning and migration tasks. Modifications of the database table are explicitly specified in the design-time file and carried out on the database table exactly as specified. This saves the cost of an internal table-copy operation. When a new version of an already existing table is deployed, HDI performs the migration steps that haven’t been applied.

    Deploy Artifact Transitions as Supported by HDI

    Current format       hdbcds    hdbtable    hdbmigrationtable
    hdbcds               -         yes         -
    hdbtable             -         -           yes
    hdbmigrationtable    -         yes         -

    ❗ Warning Direct migration from .hdbcds to .hdbmigrationtable isn’t supported by HDI. A deployment using .hdbtable is required upfront. Learn more in the Enhance Project Configuration for SAP HANA Cloud section. During the transition from .hdbtable to .hdbmigrationtable, you have to deploy version=1 of the .hdbmigrationtable artifact, which must not include any migration steps.

    HDI supports the hdbcds > hdbtable > hdbmigrationtable migration flow without data loss. Even going back from .hdbmigrationtable to .hdbtable is possible. Keep in mind that you lose the migration history in this case. For all transitions you want to execute in HDI, you need to specify an undeploy allowlist as described in HDI Delta Deployment and Undeploy Allow List in the SAP HANA documentation.

    Enabling hdbmigrationtable Generation for Selected Entities During CDS Build

    If you are migrating an already deployed scenario to .hdbmigrationtable deployment, consider the remarks in Deploy Artifact Transitions as Supported by HDI.

    By default, all entities are still compiled to .hdbtable and you only selectively choose the entities for which you want to build .hdbmigrationtable by annotating them with @cds.persistence.journal.

    Example:

    namespace data.model;
    
      @cds.persistence.journal
      entity LargeBook {
        key id : Integer;
        title : String(100);
        content : LargeString;
      }
    

    CDS build generates .hdbmigrationtable source files for annotated entities as well as a last-dev/csn.json source file representing the CDS model state of the last build.

    These source files have to be checked into the version control system.

    Subsequent model changes are applied automatically as respective migration versions including the required schema update statements to accomplish the new target state. There are cases where you have to resolve or refactor the generated statements, like for reducing field lengths. As they can’t be executed without data loss (for example, String(100) -> String(50)), the required migration steps are only added as comments for you to process explicitly.

    Example:

    >>>>> Manual resolution required - DROP statements causing data loss are disabled
    >>>>> by default.
    >>>>> You may either:
    >>>>>   uncomment statements to allow incompatible changes, or
    >>>>>   refactor statements, e.g. replace DROP/ADD by single RENAME statement
    >>>>> After manual resolution delete all lines starting with >>>>>
    -- ALTER TABLE my_bookshop_Books DROP (title);
    -- ALTER TABLE my_bookshop_Books ADD (title NVARCHAR(50));
    

    Changing the type of a field causes CDS build to create a corresponding ALTER TABLE statement. Data type conversion rules are applied by the SAP HANA database as part of the deployment step. This may cause the deployment to fail if the column contents can’t be converted to the new format.

    Examples:

    1. Changing the type of a field from String to Integer may cause tenant updates to fail if existing content cannot be converted.
    2. Changing the type of a field from Decimal to Integer can succeed, but decimal places are truncated. Conversion fails if the content exceeds the maximum Integer length.

    We recommend keeping .hdbtable deployment for entities where you expect low data volume. Every .hdbmigrationtable artifact becomes part of your versioned source code, creating a new migration version on every model change/build cycle. In turn, each such migration can require manual resolution. You can switch large-volume tables to .hdbmigrationtable at any time, keeping in mind that the existing .hdbtable design-time artifact needs to be undeployed.

    Sticking to .hdbtable for the actual application development phase avoids lots of initial migration versions that would need to be applied to the database schema.

    CDS build performs rudimentary checks on generated .hdbmigrationtable files:

    • CDS build fails if inconsistencies are encountered between the generated .hdbmigrationtable files and the last-dev/csn.json model state. For example, the last migration version not matching the table version is such an inconsistency.
    • CDS build fails if manual resolution comments starting with >>>>> exist in one of the generated .hdbmigrationtable files. This ensures that manual resolution is performed before deployment.

    Native Database Clauses

    Not all clauses supported by SQL can directly be written in CDL syntax. To use native database clauses also in a CAP CDS model, you can provide arbitrary SQL snippets with the annotations @sql.prepend and @sql.append as described in Native Database Features. In this section we are focusing on schema evolution specific details.

    Schema evolution requires that any changes are applied by corresponding ALTER statements. See ALTER TABLE statement reference for more information. A new migration version will be generated whenever an @sql.append or @sql.prepend annotation is added, changed, or removed. ALTER statements define the individual changes that create the final database schema. This schema has to match the schema defined by the TABLE statement in the .hdbmigrationtable artifact. Please note that the compiler does not evaluate or process these SQL snippets. Any snippet will be taken as is and inserted into the TABLE statement and the corresponding ALTER statement. The deployment will fail in case of syntax errors.

    CDS Model:

    @cds.persistence.journal
    @sql.append: 'PERSISTENT MEMORY ON'
    entity E {
      ...,
      @sql.append: 'FUZZY SEARCH INDEX ON'
      text: String(100);
    }
    

    Result in hdbmigrationtable file:

    == version=2
    COLUMN TABLE E (
      ...,
      text NVARCHAR(100) FUZZY SEARCH INDEX ON
    ) PERSISTENT MEMORY ON
    
    == migration=2
    ALTER TABLE E PERSISTENT MEMORY ON;
    ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON);
    

    It’s important to understand that during deployment new migration versions are applied on the existing database schema. If the resulting schema doesn’t match the schema as defined by the TABLE statement, deployment fails and any changes are rolled back. Consequently, when removing or replacing an existing @sql.append annotation, the original ALTER statements need to be undone. As the required statements can’t be determined automatically, manual resolution is required. The CDS build generates comments starting with >>>>> in order to provide some guidance and enforce manual resolution.

    Generated file with comments:

    == migration=3
    >>>>> Manual resolution required - insert ALTER statement(s) as described below.
    >>>>> After manual resolution delete all lines starting with >>>>>
    >>>>> Insert ALTER statement for: annotation @sql.append of artifact E has been removed (previous value: "PERSISTENT MEMORY ON")
    >>>>> Insert ALTER statement for: annotation @sql.append of element E:text has been removed (previous value: "FUZZY SEARCH INDEX ON")
    

    Manually resolved file:

    == migration=3
    ALTER TABLE E PERSISTENT MEMORY DEFAULT;
    ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX OFF);
    

    Appending text to an existing annotation is possible without manual resolution. A valid ALTER statement will be generated in this case. For example, appending the NOT NULL column constraint to an existing FUZZY SEARCH INDEX ON annotation generates the following statement:

    ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON NOT NULL);
    

    ❗ Warning You can use @sql.append to partition your table initially, but you cannot subsequently change the partitions using schema evolution techniques as altering partitions isn’t supported yet.

    Advanced Options

    The following CDS configuration options are supported to manage .hdbmigrationtable generation.

    This hasn’t been finalized yet.

    {
      "hana" : {
        "journal": {
          "enable-drop": false,
          "change-mode": "alter" // "drop"
        },
        // ...
      }
    }
    

    The "enable-drop" option determines whether incompatible model changes are rendered as is (true) or manual resolution is required (false). The default value is false.

    The change-mode option determines whether ALTER TABLE ... ALTER ("alter") or ALTER TABLE ... DROP ("drop") statements are rendered for data type related changes. To ensure that any kind of model change can be successfully deployed to the database, you can switch the "change-mode" to "drop", keeping in mind that any existing data will be deleted for the corresponding column. See hdbmigrationtable Generation for more details. The default value is "alter".

    Database Constraints

    The information about foreign key relations contained in the associations of CDS models can be used to generate foreign key constraints on the database tables.

    Switch on generation of foreign key constraints on the database with:

    cds.features.assert_integrity: 'db'
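    In package.json, this setting corresponds to:

    ```json
    {
      "cds": {
        "features": {
          "assert_integrity": "db"
        }
      }
    }
    ```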
    

    On the Node.js stack, this setting automatically disables the runtime integrity checks.

    Database constraints are not intended for checking user input. Instead, they protect the integrity of your data in the database layer against programming errors.

    If a constraint violation occurs, the error messages coming from the database aren’t standardized by the runtimes but presented as-is.

    For Managed To-One Associations

    A foreign key constraint is generated for each managed to-one association or composition (including the generated backlinks of managed compositions of aspects) in an entity, if that entity is represented by a table in the database. The constraint ties the foreign key fields (that are automatically added for a managed association) to the respective primary key fields of the target table.

    In the following example, association author of entity Books triggers the generation of a foreign key constraint.

    Modeled association:

    entity Books {
      key ID : Integer;
      ...
      author : Association to Authors;
    }
    entity Authors {
      key ID : Integer;
      ...
    }
    

    Generated database tables with constraint:

    CREATE TABLE Authors (
      ID INTEGER NOT NULL,  -- primary key referenced by the constraint
      ...,
      PRIMARY KEY(ID)
    );
    CREATE TABLE Books (
      ID INTEGER NOT NULL,
      author_ID INTEGER,    -- generated foreign key field
      ...,
      PRIMARY KEY(ID),
      CONSTRAINT Books_author
        FOREIGN KEY(author_ID)  -- link generated foreign key field author_ID ...
        REFERENCES Authors(ID)  -- ... with primary key field ID of table Authors
        ON UPDATE RESTRICT
        ON DELETE RESTRICT
        VALIDATED           -- validate existing entries when constraint is created
        ENFORCED            -- validate changes by insert/update/delete
        INITIALLY DEFERRED  -- validate only at commit
    )
    

    No constraints are generated for associations annotated with @assert.integrity: false.

    For Compositions

    If an association <assoc> is the backlink of a composition <comp> (that means, the ON condition of <comp> is <comp>.<assoc> = $self), then the delete rule for the association’s constraint is changed to CASCADE. There is no constraint for the composition itself.

    entity Genres {
      key ID   : Integer;
      ...
      parent   : Association to Genres;
      children : Composition of many Genres on children.parent = $self;
    }
    

    A constraint is generated for parent, which is on delete cascade due to the composition children:

    CREATE TABLE Genres (
      ID INTEGER NOT NULL,  -- primary key referenced by the constraint
      ...
      parent_ID INTEGER,    -- generated foreign key field
      PRIMARY KEY(ID),
      CONSTRAINT Genres_parent
        FOREIGN KEY(parent_ID)  -- link generated foreign key field parent_ID ...
        REFERENCES Genres(ID)   -- ... with primary key field ID
        ON UPDATE RESTRICT
        ON DELETE CASCADE
        VALIDATED
        ENFORCED
        INITIALLY DEFERRED
    )
    

    SAP HANA-Specific Remarks

    For SAP HANA, the database constraints are generated as separate .hdbconstraint files. To enforce that the constraints are deleted on SAP HANA if they are no longer in the model, ensure that you have a file db/undeploy.json that contains an entry:

    "src/gen/**/*.hdbconstraint"
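    A complete db/undeploy.json containing only this entry is a plain JSON array of glob patterns:

    ```json
    [
      "src/gen/**/*.hdbconstraint"
    ]
    ```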
    

    Limitations

    Database constraints are only generated for managed to-one associations or to-one compositions. No constraints are generated for anything else, such as:

    • Unmanaged associations or compositions
    • Associations or compositions with cardinality to-many

    Constraints are only generated for SQL dialects hana and sqlite.

    Reserved Words and Keywords

    Reserved words have a special meaning and can’t be used, for example, as identifiers. The list of reserved words depends on the database.

    The CDS compiler issues warnings if a reserved word of SQLite or SAP HANA is used. However, for other databases this is not the case. Find here a collection of resources on selected databases and their reference documentation: