Using PostgreSQL
This guide focuses on the new PostgreSQL Service provided through @cap-js/postgres, which is based on the same new database services architecture as the new SQLite Service. This architecture brings significantly enhanced feature sets and feature parity, as documented in the Features section of the SQLite guide.
Learn about migrating from the former `cds-pg` in the Migration chapter.
CAP Java SDK is tested on PostgreSQL 15. Most CAP features are supported on PostgreSQL.
Learn more about features and limitations of using CAP with PostgreSQL
Setup & Configuration
Run this to use PostgreSQL for production:

```sh
npm add @cap-js/postgres
```

To run CAP Java on PostgreSQL, add a Maven dependency to the PostgreSQL feature in srv/pom.xml:
```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
```
In order to use the CDS tooling with PostgreSQL, you also need to install the module `@cap-js/postgres`:

```sh
npm add @cap-js/postgres
```
After that, you can use the `cds deploy` command to deploy to a PostgreSQL database or to create a DDL script for PostgreSQL.
Auto-Wired Configuration
The `@cap-js/postgres` package uses the `cds-plugin` technique to auto-configure your application to use a PostgreSQL database for production.
You can inspect the effective configuration using `cds env`:

```sh
cds env requires.db --for production
```
Output:

```js
{
  impl: '@cap-js/postgres',
  dialect: 'postgres',
  kind: 'postgres'
}
```
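This auto-wiring is roughly equivalent to adding the following to your package.json yourself — shown only for illustration; with the plugin installed you don't need it:

```json
{
  "cds": {
    "requires": {
      "db": {
        "[production]": { "kind": "postgres" }
      }
    }
  }
}
```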
See also the general information on installing database packages
Provisioning a DB Instance
To connect to a PostgreSQL offering from your cloud provider in production, leverage the PostgreSQL on SAP BTP, hyperscaler option.
For local development and testing convenience, you can run PostgreSQL in a Docker container.
Using Docker
You can use Docker to run a PostgreSQL database locally as follows:

1. Install and run Docker Desktop.

2. Create a file `pg.yml` in your project root directory:

   ```yaml
   services:
     db:
       image: postgres:alpine
       environment: { POSTGRES_PASSWORD: postgres }
       ports: [ '5432:5432' ]
       restart: always
   ```

3. Create and run the Docker container:

   ```sh
   docker-compose -f pg.yml up -d
   ```
TIP
With the introduction of Testcontainers support in Spring Boot 3.1, you can create PostgreSQL containers on the fly for local development or testing purposes.
Service Bindings
You need a service binding to connect to the PostgreSQL database.
In the cloud, use the techniques provided by your platform to bind a cloud-based instance of PostgreSQL to your application.
For local development, provide the credentials using a suitable `cds env` technique, like one of the following.
Configure Connection Data
If a PostgreSQL service binding exists, the corresponding `DataSource` is auto-configured.
You can also explicitly configure the connection data of your PostgreSQL database in the application.yaml file. If you run the PostgreSQL database in a Docker container, your connection data might look like this:

```yaml
spring:
  config.activate.on-profile: postgres-docker
  datasource:
    url: jdbc:postgresql://localhost:5432/postgres
    username: postgres
    password: postgres
    driver-class-name: org.postgresql.Driver
```
To start the application with the new profile `postgres-docker`, the `spring-boot-maven-plugin` can be used:

```sh
mvn spring-boot:run -Dspring-boot.run.profiles=postgres-docker
```

Learn more about the configuration of a PostgreSQL database
Service Bindings for CDS Tooling
Using Defaults with [pg] Profile
`@cap-js/postgres` comes with a set of default credentials under the profile `[pg]` that matches the defaults used in the Docker setup. So, if you stick to these defaults, you can skip to deploying your database with:

```sh
cds deploy --profile pg
```
In Your Private .cdsrc-private.json
If you don't use the default credentials and want to use just `cds deploy`, you need to configure the service bindings (connection data) for the CDS tooling. Add the connection data to your private .cdsrc-private.json:
```json
{
  "requires": {
    "db": {
      "kind": "postgres",
      "credentials": {
        "host": "localhost", "port": 5432,
        "user": "postgres",
        "password": "postgres",
        "database": "postgres"
      }
    }
  }
}
```
Configure Service Bindings
Using Defaults with [pg] Profile
`@cap-js/postgres` comes with default credentials under the profile `[pg]` that match the defaults used in the Docker setup. So, if you stick to these defaults, you can skip the next sections and just go ahead and deploy your database:

```sh
cds deploy --profile pg
```
Run your application:

```sh
cds watch --profile pg
```
Learn more about that in the Deployment chapter below.
In Your private ~/.cdsrc.json
Add it to your private ~/.cdsrc.json if you want to use these credentials on your local machine only:
```json
{
  "requires": {
    "db": {
      "[pg]": {
        "kind": "postgres",
        "credentials": {
          "host": "localhost", "port": 5432,
          "user": "postgres",
          "password": "postgres",
          "database": "postgres"
        }
      }
    }
  }
}
```
In Project .env Files
Alternatively, use a `.env` file in your project's root folder if you want to share the same credentials with your team:
```properties
cds.requires.db.[pg].kind = postgres
cds.requires.db.[pg].credentials.host = localhost
cds.requires.db.[pg].credentials.port = 5432
cds.requires.db.[pg].credentials.user = postgres
cds.requires.db.[pg].credentials.password = postgres
cds.requires.db.[pg].credentials.database = postgres
```
Using Profiles
The previous configuration examples use the `cds.env` profile `[pg]` to allow selectively testing with PostgreSQL databases from the command line as follows:

```sh
cds watch --profile pg
```
The profile name can be freely chosen, of course.
Deployment
Using cds deploy
Deploy your database as usual with:

```sh
cds deploy
```

Or with the following, if you used profile `[pg]` as introduced in the setup chapter above:

```sh
cds deploy --profile pg
```
With a Deployer App
When deploying to Cloud Foundry, this can be accomplished by providing a simple deployer app, which you can construct as follows:

1. Create a new folder named gen/pg/db:

   ```sh
   mkdir -p gen/pg/db
   ```

2. Generate a precompiled CDS model:

   ```sh
   cds compile '*' > gen/pg/db/csn.json
   ```

3. Add required .csv files, for example:

   ```sh
   cp -r db/data gen/pg/db/data
   ```

4. Add a package.json to gen/pg with this content:

   ```json
   {
     "dependencies": {
       "@sap/cds": "*",
       "@cap-js/postgres": "*"
     },
     "scripts": {
       "start": "cds-deploy"
     }
   }
   ```

   Note the dash in `cds-deploy`, which is required as we don't use `@cds-dk` for deployment and runtime, so the `cds` CLI executable isn't available.

5. Finally, package and deploy that, for example using MTA-based deployment.
Step-by-Step Instructions
Here's a step-by-step guide to add PostgreSQL to an existing project and deploy it to SAP BTP. We assume that the following prerequisites are fulfilled:

- An existing instance of PostgreSQL is running. For this example, the instance name my-postgres-db is used.
- Service definition(s) and data model are in place (content in the /srv and /db folders).
Add Postgres dependencies
```sh
npm install @cap-js/postgres
```

This automatically hooks itself into the production profile of CAP. Once the CAP service is deployed to SAP BTP and the production profile is active, the Postgres adapter is used.
Add Standard CAP Dependencies
```sh
cds add xsuaa,mta --for production
```
Modify the mta.yaml
Add the Postgres instance as an existing service to the `resources` section:

```yaml
- name: my-postgres-db
  type: org.cloudfoundry.existing-service
```
Add a deployer task/module to deploy the data model to the Postgres instance as part of the standard deployment:

```yaml
- name: pg-db-deployer
  type: hdb
  path: gen/pg
  parameters:
    buildpack: nodejs_buildpack
  requires:
    - name: my-postgres-db
```

- Make sure to use the type `hdb` and NOT `nodejs`, as the nodejs type will try to restart the service over and over again.
- The deployer path points to a gen/pg directory we need to create as part of the deployment process. See the next step.
- The deployer also defines the dependency/binding to the Postgres instance to have the credentials available at deploy time.
Add dependencies to your CAP service module:

```yaml
requires:
  - name: my-postgres-db
  - name: pg-db-deployer
```
This configuration creates a binding to the Postgres instance and waits for the deployer to finish before deploying the service.
To generate the content into the gen/pg folder, we reference a shell script in the `custom` builder section. The complete section should look like this:

```yaml
build-parameters:
  before-all:
    - builder: custom
      commands:
        - npx cds build --production
        - ./scripts/pgbuild.sh
```
Create the Shell Script
The shell script specified in the previous step is a simple combination of all the commands outlined in the CAP documentation. It creates the necessary artifacts in the gen/pg directory. Here are the steps:

1. Create a directory /scripts in the root of the project.

2. Create a file pgbuild.sh in the /scripts directory and change the permissions to make it executable:

   ```sh
   chmod +x pgbuild.sh
   ```

3. Add the following content to the pgbuild.sh file:

   ```bash
   #!/usr/bin/env bash
   echo '** Starting Postgres build **'
   echo '- creating dir gen/pg/db -'
   mkdir -p gen/pg/db
   echo '- compiling model -'
   cds compile '*' > gen/pg/db/csn.json
   echo '- copy .csv files -'
   cp -r db/data gen/pg/db/data
   echo '{ "dependencies": { "@sap/cds": "*", "@cap-js/postgres": "*" }, "scripts": { "start": "cds-deploy" } }' > gen/pg/package.json
   ```
Deploy
Package and deploy your project, for example using MTA-based deployment.
Automatic Schema Evolution
When you redeploy after changing your CDS models, for example by adding fields, automatic schema evolution is applied. Whenever you run `cds deploy` (or `cds-deploy`), it executes these steps:
1. Read the CSN of a former deployment from table `cds_model`.
2. Calculate the delta to the current model.
3. Generate and run DDL statements with:
   - `CREATE TABLE` statements for new entities
   - `CREATE VIEW` statements for new views
   - `ALTER TABLE` statements for entities with new or changed elements
   - `DROP & CREATE VIEW` statements for views affected by changed entities
4. Fill in initial data from provided .csv files using `UPSERT` commands.
5. Store a CSN representation of the current model in `cds_model`.
You can disable automatic schema evolution, if necessary, by setting `cds.requires.db.schema_evolution = false`.
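For example, assuming you keep your cds configuration in package.json (the setting works in any `cds env` source):

```json
{
  "cds": {
    "requires": {
      "db": { "schema_evolution": false }
    }
  }
}
```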
Limitations
Automatic schema evolution only allows changes without potential data loss.
Allowed
- Adding entities and elements
- Increasing the length of Strings
- Increasing the size of Integers
Disallowed
- Removing entities or elements
- Changes to primary keys
- All other type changes
For example, the following type changes are allowed:

```cds
entity Foo {
  anInteger : Int64;      // from former: Int32
  aString   : String(22); // from former: String(11)
}
```
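On PostgreSQL, such widening changes translate into `ALTER TABLE ... ALTER ... TYPE` statements, roughly like this sketch (table names are illustrative; the actual names are derived from your model's namespace):

```sql
ALTER TABLE Foo ALTER anInteger TYPE BIGINT;
ALTER TABLE Foo ALTER aString TYPE VARCHAR(22);
```

PostgreSQL applies these in place because both conversions are lossless; narrowing conversions would be rejected by schema evolution before any DDL is generated.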
TIP
If you need to apply such disallowed changes during development, just drop and re-create your database, for example by killing it in Docker and re-creating it using the `docker-compose` command, see Using Docker.
Dry-Run Offline
We can use `cds deploy` with option `--dry` to simulate and inspect how things work.
Capture your current model in a CSN file:

```sh
cds deploy --dry --model-only > cds-model.csn
```
Change your models, for example in cap/samples/bookshop/db/schema.cds:

```cds
entity Books { ...
  title : localized String(222); //> increase length from 111 to 222
  foo : Association to Foo;      //> add a new relationship
  bar : String;                  //> add a new element
}
entity Foo { key ID: UUID } //> add a new entity
```
Generate a delta DDL script:

```sh
cds deploy --dry --delta-from cds-model.csn > delta.sql
```
Inspect the generated SQL script, which should look like this:
```sql
-- Drop Affected Views
DROP VIEW localized_CatalogService_ListOfBooks;
DROP VIEW localized_CatalogService_Books;
DROP VIEW localized_AdminService_Books;
DROP VIEW CatalogService_ListOfBooks;
DROP VIEW localized_sap_capire_bookshop_Books;
DROP VIEW CatalogService_Books_texts;
DROP VIEW AdminService_Books_texts;
DROP VIEW CatalogService_Books;
DROP VIEW AdminService_Books;

-- Alter Tables for New or Altered Columns
ALTER TABLE sap_capire_bookshop_Books ALTER title TYPE VARCHAR(222);
ALTER TABLE sap_capire_bookshop_Books_texts ALTER title TYPE VARCHAR(222);
ALTER TABLE sap_capire_bookshop_Books ADD foo_ID VARCHAR(36);
ALTER TABLE sap_capire_bookshop_Books ADD bar VARCHAR(255);

-- Create New Tables
CREATE TABLE sap_capire_bookshop_Foo (
  ID VARCHAR(36) NOT NULL,
  PRIMARY KEY(ID)
);

-- Re-Create Affected Views
CREATE VIEW AdminService_Books AS SELECT ... FROM sap_capire_bookshop_Books AS Books_0;
CREATE VIEW CatalogService_Books AS SELECT ... FROM sap_capire_bookshop_Books AS Books_0 LEFT JOIN sap_capire_bookshop_Authors AS author_1 O ... ;
CREATE VIEW AdminService_Books_texts AS SELECT ... FROM sap_capire_bookshop_Books_texts AS texts_0;
CREATE VIEW CatalogService_Books_texts AS SELECT ... FROM sap_capire_bookshop_Books_texts AS texts_0;
CREATE VIEW localized_sap_capire_bookshop_Books AS SELECT ... FROM sap_capire_bookshop_Books AS L_0 LEFT JOIN sap_capire_bookshop_Books_texts AS localized_1 ON localized_1.ID = L_0.ID AND localized_1.locale = session_context( '$user.locale' );
CREATE VIEW CatalogService_ListOfBooks AS SELECT ... FROM CatalogService_Books AS Books_0;
CREATE VIEW localized_AdminService_Books AS SELECT ... FROM localized_sap_capire_bookshop_Books AS Books_0;
CREATE VIEW localized_CatalogService_Books AS SELECT ... FROM localized_sap_capire_bookshop_Books AS Books_0 LEFT JOIN localized_sap_capire_bookshop_Authors AS author_1 O ... ;
CREATE VIEW localized_CatalogService_ListOfBooks AS SELECT ... FROM localized_CatalogService_Books AS Books_0;
```
Note: If you use SQLite, `ALTER TYPE` commands are neither necessary nor supported, as SQLite is essentially typeless.
Deployment Using Liquibase
You can also use Liquibase to control when, where, and how database changes are deployed. Liquibase lets you define database changes in an SQL file; you can use `cds deploy` to quickly generate the DDL scripts that Liquibase consumes.
Add a Maven dependency to Liquibase in srv/pom.xml:

```xml
<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-core</artifactId>
  <scope>runtime</scope>
</dependency>
```
Once `liquibase-core` is on the classpath, Spring runs database migrations automatically on application startup and before your tests run.
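If you want to prevent Liquibase from running in certain profiles, for example when testing against a different database, you can switch it off with the standard Spring Boot property (a sketch; adapt the profile name to your setup):

```yaml
spring:
  config.activate.on-profile: local-test
  liquibase:
    enabled: false
```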
① Initial Schema Version
Once you're ready to release an initial version of your database schema, you can create a DDL file that defines the initial database schema. Create a db/changelog subfolder under srv/src/main/resources, and place the Liquibase change log file as well as the DDL scripts for the schema versions there. The change log is defined by the db/changelog/db.changelog-master.yml file:
```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v1/model.sql
```
Use `cds deploy` to create the v1/model.sql file:

```sh
cds deploy --profile pg --dry > srv/src/main/resources/db/changelog/v1/model.sql
```
Finally, store the CSN file that corresponds to this schema version:

```sh
cds deploy --model-only --dry > srv/src/main/resources/db/changelog/v1/model.csn
```
The CSN file is needed as an input to compute the delta DDL script for the next change set.
If you start your application with `mvn spring-boot:run`, Liquibase initializes the database schema to version `v1`, unless it has already been initialized.
WARNING
Don't change model.sql files after they have been deployed by Liquibase, as the checksum of each file is validated. These files should be checked into your version control system. Follow step ② to make changes.
② Schema Evolution
If changes of the CDS model require changes on the database, you can create a new change set that captures the necessary changes.
Use `cds deploy` to compute the delta DDL script based on the previous model version (v1/model.csn) and the current model. Write the diff into a v2/model.sql file:

```sh
cds deploy --profile pg --dry --delta-from srv/src/main/resources/db/changelog/v1/model.csn > \
  srv/src/main/resources/db/changelog/v2/model.sql
```
Next, add a corresponding change set in the changelog/db.changelog-master.yml file:
```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v1/model.sql
  - changeSet:
      id: 2
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v2/model.sql
```
Finally, store the CSN file that corresponds to this schema version:

```sh
cds deploy --model-only --dry > srv/src/main/resources/db/changelog/v2/model.csn
```
If you now start the application, Liquibase executes all change sets that haven't yet been deployed to the database.
For further schema versions, repeat step ②.
Migration
Thanks to CAP's database-agnostic cds.ql API, we're confident that the new PostgreSQL service comes without breaking changes. Nevertheless, please check the instructions in the SQLite Migration guide, which by and large also applies to the new PostgreSQL service.
cds deploy --model-only
Not a breaking change, but definitely required to migrate former `cds-pg` databases, is to prepare them for schema evolution.
To do so, run `cds deploy` once with the `--model-only` flag:

```sh
cds deploy --model-only
```
This will:

- Create the `cds_model` table in your database.
- Fill it with the current model obtained through `cds compile '*'`.
IMPORTANT: Your `.cds` models are expected to reflect the deployed state of your database.
With Deployer App
When you have a SaaS application, upgrade all your tenants using the deployer app with the CLI option `--model-only` added to the start script command of your package.json. After having done that, don't forget to remove the `--model-only` option from the start script to activate actual schema evolution.
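For the one-time migration run, the start script in the deployer's package.json would look like this (a sketch based on the deployer app shown above):

```json
{
  "scripts": {
    "start": "cds-deploy --model-only"
  }
}
```

Once all tenants are upgraded, revert the script to plain `cds-deploy`.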
MTX Support
WARNING
Multitenancy and extensibility aren't yet supported on PostgreSQL.