# Getting Started

Your First Steps {.subtitle}

Welcome to CAP, and to *capire*, its one-stop documentation.

[**CAP** [ˈkap(m)] — (unofficial) abbreviation for the *"SAP Cloud Application Programming Model"*](https://translate.google.com/details?sl=en&text=cap){.learn-more .dict}

[**capire** [ca·pì·re] — Italian for _"understand"_](https://translate.google.com/details?sl=it&tl=en&text=capire){.learn-more .dict}

## Initial Setup {#setup}

Follow the steps below for a minimalistic local setup. Alternatively, you can use CAP in [SAP Build Code](https://pages.community.sap.com/topics/build-code) or other cloud-based setups, such as GitHub Codespaces.

### Prerequisites

- [Node.js](https://nodejs.org) — required for installing the `cds` command line interface.
- [SQLite](https://sqlite.org) — included in macOS and Linux → [install it](https://sqlite.org/download.html) on Windows.
- **A Terminal** — for using the `cds` command line interface (CLI).
- **A Text Editor** → we recommend [VS Code](https://code.visualstudio.com) with the [CDS plugin](../tools/cds-editors#vscode).

### Installation

- With the prerequisites met, install the [`cds` toolkit](../tools/cds-cli) *globally*:

  ```sh
  npm add -g @sap/cds-dk
  ```

  [Visit the _Troubleshooting_ guide](troubleshooting.md) if you encounter any errors. {.learn-more}

- Run `cds` to check whether the installation was successful:

  ```sh
  cds
  ```

  You should see output like this:

  ```sh
  USAGE

      cds <command> [<args>]
      cds <src>  =  cds compile <src>
      cds        =  cds help

  COMMANDS

      i | init       jump-start cds-based projects
      a | add        add a feature to an existing project
      c | compile    compile cds models to different outputs
      s | serve      run your services in local server
      w | watch      run and restart on file changes
      r | repl       read-eval-event loop
      e | env        inspect effective configuration
      b | build      prepare for deployment
      d | deploy     deploy to databases or cloud
      v | version    get detailed version information
      ? | help       get detailed usage information

    Learn more about each command using:
    cds help <command> or
    cds <command> --help
  ```

### Optional

- [Java](https://sapmachine.io) & [Maven](https://maven.apache.org/download.cgi) — if you're going for Java development → [see instructions](../java/getting-started#local).
- [git](https://git-scm.com) — if you go for more than just some quick trials.

## Starting Projects

- Use `cds init` to start a CAP project, and then open it in VS Code:

  ```sh
  cds init bookshop
  ```
  ```sh
  code bookshop
  ```

  [Assumes you activated the `code` command on macOS as documented](../tools/cds-editors#vscode) {.learn-more}

## Project Structure

The default file structure of CAP projects is as follows:

```zsh
bookshop/        # Your project's root folder
├─ app/          # UI-related content
├─ srv/          # Service-related content
├─ db/           # Domain models and database-related content
├─ package.json  # Configuration for cds + cds-dk
└─ readme.md     # A readme placeholder
```

CAP has defaults for many things that you'd have to configure in other frameworks. The goal is that things should just work out of the box, with zero configuration, whenever possible. You can override these defaults with specific configuration if you need to do so.

::: details See an example for configuring custom project layouts...

::: code-group

```json [package.json]
{ ...
  "cds": {
    "folders": {
      "db": "database/",
      "srv": "services/",
      "app": "uis/"
    }
  }
}
```

```sh [Explore the defaults in your project]
cds env ls defaults
```

[Learn more about project-specific configuration.](../node.js/cds-env){.learn-more}

:::

::: tip Convention over configuration
We recommend sticking to CAP's way of [Convention over Configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) to benefit from things just working out of the box. Only override the defaults if you really need to do so.
:::

## Learning More {#next-steps}

After the [initial setup](#setup), we recommend continuing as follows while you grow as you go...:

| # | Guide | Description |
|---|-------------------------------------------|--------------------------------------------------------|
| 1 | [Introduction – What is CAP?](../about/) | Learn about key benefits and value propositions. |
| 2 | [Bookshop by capire](in-a-nutshell) | Build your first CAP application within 4-44 minutes. |
| 3 | [Best Practices](../about/best-practices) | Key concepts & rationales to understand → *must read*. |
| 4 | [Anti Patterns](../about/bad-practices) | Misconceptions & bad practices to avoid → *must read*. |
| 5 | [Learn More](learning-sources) | Find samples, videos, blogs, tutorials, and so on. |

## Grow as you go...

After these getting-started-level introductions, you would continuously revisit the following guides to deepen your understanding, and use them as reference docs:

| # | Guides & References | Description |
|---:|--------------------|-------------|
| 6 | [Cookbook](../guides/) | Walkthroughs for the most common tasks. |
| 7 | [CDS](../cds/) <br> [Java](../java/) <br> [Node.js](../node.js/) <br> [Tools](../tools/) | The reference docs for these respective areas. |
| 8 | [Plugins](../plugins/) | Curated list of recommended Calesi plugins. |
| 9 | [Releases](../releases/) | Your primary source to stay up to date. |
| 10 | [Resources](../resources/) | About support channels, community, ... |

This also reflects the overall structure of [this documentation](./learning-sources.md#this-documentation).

# Introduction to CAP

Value Propositions {.subtitle}

## What is CAP?

The _Cloud Application Programming Model_ (CAP) is a framework of languages, libraries, and tools for building *enterprise-grade* cloud applications. It guides developers along a *golden path* of **proven best practices**, which are **served out of the box** by generic providers cloud-natively, thereby relieving application developers from tedious, recurring tasks.

In effect, CAP-based projects benefit from a primary **focus on domain**, with close collaboration of developers and domain experts, **rapid development** at **minimized costs**, as well as **avoiding technical debt** by eliminating exposure to, and lock-in to, volatile low-level technologies.

Someone once said: "CAP is like ABAP for the non-ABAP world" {.quote}

... which is not completely true, of course
... ABAP is much older \:-)

## Jumpstart & Grow As You Go...

###### grow-as-you-go

### Jumpstarting Projects

To get started with CAP, only a [minimalistic initial setup](../get-started/index.md) is required. Starting a project is a matter of seconds. No tedious, long-lasting platform onboarding ceremonies are required; instead you can (and should):

- Start new CAP projects within seconds.
- Create functional apps with full-fledged servers within minutes.
- Do all of that without prior onboarding to, or being connected to, the cloud.

```sh
cds init
cds watch
```

> [!tip]
>
> Following the principle of *convention over configuration*, CAP uses built-in configuration presets and defaults for different profiles. For the development profile, everything is set up for jumpstart development.

In parallel, ops teams could set up the cloud, to be ready for first deployments later in time.

### Growing as You Go...

Add things only when you need them, or when you know more. Avoid any premature decisions or up-front overhead. For example, typical CAP projects adopt an *iterative* and *evolutionary* workflow like this:

1. **Jumpstart a project** → no premature decisions made at that stage, just the name.
2. **Rapidly create** fully functional first prototypes or proof-of-concept versions.
3. Work in **fast inner loops** in airplane mode, and only occasionally go hybrid.
4. Anytime, **add new features** like Fiori UIs, message queues, different databases, etc.
5. Do a first **ad-hoc deployment** to the cloud some days later.
6. Set up your **CI/CD pipelines** some weeks later.
7. Switch on **multitenancy** and **extensibility** for SaaS apps before going live.
8. Optionally cut out some **microservices**, only if necessary, and months later at the earliest.

```sh
cds add hana,redis,mta,helm,mtx,multitenancy,extensibility...
```

> [!tip]
>
> Avoid futile up-front setups and overhead; rather **get started rapidly**, having a first prototype up and running as fast as possible. By doing so, you might even find out soon that this product idea you or somebody else had doesn't work out anyway, so rather stop early.

### Fast Inner Loops

Most of your development happens in inner loops, where developers **code**, **run**, and **test** in **fast iterations**. However, at least in mediocre cloud-based development approaches, this is slowed down drastically, for example, by the need to always be connected to platform services, up to the need to always deploy to the cloud to see and test the effects of recent changes.

![inner-loop](assets/inner-loop.png){.zoom75}

CAP applications are [**agnostic by design**](best-practices#agnostic-by-design), which allows you to use local mock variants as stand-ins for many platform services and features. This eliminates the need to always connect or deploy to the cloud: developers can stay in fast inner loops without a cloud connection, aka ***airplane*** mode development. Only when necessary, they can test in ***hybrid*** mode or do ad-hoc deployments to the cloud.

CAP provides mocked variants for several platform services out of the box, which are used automatically through default configuration presets in the ***development*** profile, while the real services are automatically used in the ***production*** profile. Examples are:

| Platform Service | Development | Production |
| ---------------- | -------------------- | -------------------------------- |
| Database | SQLite, H2 in-memory | SAP HANA, PostgreSQL |
| Authentication | Mocked Auth | SAP Identity Services |
| App Gateway | None | SAP App Router |
| Messaging | File-based Queues | SAP Event Hub, Kafka, Redis, ... |

> [!tip]
>
> CAP's agnostic design, in combination with the local mock variants provided out of the box, not only retains **fast turnarounds** in inner loops, it also **reduces complexity** and makes development **resilient** against unavailable platform services → thus promoting **maximized speed** at **minimized costs**.

### Agnostic Microservices

CAP's thorough [agnostic design](best-practices#agnostic-by-design) not only allows you to swap in local mock variants as stand-ins for productive platform services, it also allows you to do the same for your own application services.

Assume you plan for a microservices architecture. The team developing microservice `A` would always have a dependency on the availability of microservice `B`, which they need to connect to, at least in a hybrid setup; worst case, they'd even end up needing to always have both deployed to the cloud. With CAP, you can (and should) instead just run both services in the same local process during development, basically by using `B` as a plain old library in `A`, and only deploy them as separate microservices in production, **without having to touch your models or code** (given `A` uses `B` through public APIs, which should always be the case anyway).

![modulith](assets/modulith.png){.zoom66} ![late-cut-microservices](assets/late-cut-microservices.png){.zoom66}

If services `A` and `B` are developed in different runtimes, for example, Node.js and Java, you can't run them in the same process. But even then you can (and should) leverage CAP's ability to serve a service generically, based on its service definition in CDS. During development, `A` would then use a mocked variant of `B`, served automatically by CAP's generic providers.
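
For instance, consuming a remote service boils down to a `cds.requires` entry in *package.json*. With the service's CDS model available locally (for example, via `cds import`), `cds watch` (which runs `cds serve all --with-mocks`) can serve a mocked stand-in of it during development. This is a sketch; the service name `OrdersService` and the model path are illustrative:

```json
{
  "cds": {
    "requires": {
      "OrdersService": {
        "kind": "odata",
        "model": "srv/external/OrdersService"
      }
    }
  }
}
```

In production, the same entry would instead be resolved against a real service binding, without changes to your models or code.
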
### Late-cut Microservices

You can (and should) also leverage the offered options to have CAP services co-deployed in a single *modulithic* process, to delay the decision of whether and how to cut your application into microservices to a later phase of your project, when you know more about where to actually make the right cuts in the right way.

In general, we always propose this approach:

1. **Avoid** premature cuts into microservices → they end up in lots of pain without gains.
2. **Go for** a *modulith* approach instead → with CAP services for modularization.
3. Cut into separate microservices **later on** → only when you really need to.

> [!tip]
>
> - **CAP services** are your primary means for modularization.
> - **Microservices** are **deployment units**.
> - **Valid** reasons for microservices are:
>   1. the need to scale things differently
>   2. different runtimes, for example, Node.js vs Java
>   3. loosely coupled, coarse-grained subsystems with separate lifecycles
> - **False** reasons are: distributed development, modularization, isolation, ... → there are well-established and proven better ways to address these things, without the pain that comes with microservices.

[See also the anti pattern of *Microservices Mania*](bad-practices#microservices-mania) {.learn-more}

### Parallelized Workflows

As shown in the [*Bookshop by capire*](../get-started/in-a-nutshell) walkthrough, a simple service definition in CDS is all that is required to get a full-fledged REST, OData, or GraphQL service **served out of the box** by generic providers. This also opens up options to parallelize workflows: projects could spawn two teams working in parallel, one working on the frontend using the automatically served backend, while the other one works on the actual implementations of the backend part.
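
Such a service definition can be as small as the following self-contained sketch (entity and service names are illustrative). Saving it as, say, _srv/catalog.cds_ and running `cds watch` yields a complete OData service, without a single line of implementation code:

```cds
service CatalogService {
  @readonly entity Books {
    key ID : Integer;
    title  : String;
    stock  : Integer;
  }
}
```

Frontend teams can develop against such a served stub from day one, while backend teams evolve the actual domain model and custom logic behind it.
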
## Proven Best Practices

### Served Out Of The Box

The CAP runtimes in Node.js and Java provide many generic implementations for recurring tasks and best practices, distilled from proven SAP applications. This is a list of the most common tasks covered by the core frameworks:

- [Serving CRUD Requests](../guides/providing-services#generic-providers)
- [Serving Nested Documents](../guides/providing-services#deep-reads-and-writes)
- [Serving Variable Data](../releases/oct24#basic-support-for-cds-map)
- [Serving (Fiori) Drafts](../advanced/fiori#draft-support)
- [Serving Media Data](../guides/providing-services#serving-media-data)
- [Searching Data](../guides/providing-services#searching-data)
- [Pagination](../guides/providing-services#implicit-pagination)
- [Sorting](../guides/providing-services#implicit-sorting)
- [Authentication](../node.js/authentication)
- [Authorization](../guides/security/authorization)
- [Localization / i18n](../guides/i18n)
- [Basic Input Validation](../guides/providing-services#input-validation)
- [Auto-generated Keys](../guides/providing-services#auto-generated-keys)
- [Concurrency Control](../guides/providing-services#concurrency-control)

> [!tip]
>
> This set of automatically served requests and covered requirements means that CAP's generic providers serve the vast majority, if not all, of the requests showing up in your applications, without you having to code anything for that, except for true custom domain logic.

[See also the *Features Overview*](./features) {.learn-more}

### Enterprise Best Practices

On top of the common request-serving tasks handled by CAP's generic providers, we provide out-of-the-box solutions for these higher-level topic fields:

- [Common Reuse Types & Aspects](../cds/common)
- [Managed Data](../guides/domain-modeling#managed-data)
- [Localized Data](../guides/localized-data)
- [Temporal Data](../guides/temporal-data)
- [Data Federation](https://github.com/SAP-samples/teched2022-AD265/wiki) → hands-on tutorial; capire guide in the making...
- [Verticalization & Extensibility](../guides/extensibility/)
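
As an example of managed data, the pre-built `managed` aspect from [`@sap/cds/common`](../cds/common) adds `createdAt`/`createdBy` and `modifiedAt`/`modifiedBy` elements, which the runtimes fill in automatically on writes. The `Orders` entity below is illustrative:

```cds
using { cuid, managed } from '@sap/cds/common';

entity Orders : cuid, managed {
  descr : String;
}
```

The `cuid` aspect likewise contributes an auto-generated UUID key, so the entity definition stays focused on the actual domain fields.
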
> [!tip]
>
> These best practice solutions mostly stem from close collaborations with, and contributions by, real, successful projects and SAP products, as well as from ABAP. That means they've been proven in many years of adoption and real business use.

### The 'Calesi' Effect

'**Calesi**' stands for "**CA**P-**le**vel **S**ervice **I**ntegrations", as well as for an initiative we started in late 2023 by rolling out the *CAP Plugins* technique, which promotes plugin and add-on contributions not only by the CAP team, but also by:

- **SAP BTP technology units** and service teams (beyond the CAP team)
- **SAP application teams**
- **Partners** & **Customers**, as well as
- **Contributors** from the CAP community

The initiative has been successful: it gave a boost to a steadily **growing ecosystem** around CAP, with an active **inner source** and **open source** community on the one hand, and an impressive collection of production-level add-ons on the other. Here are some highlights **maintained by SAP teams**:

- [GraphQL Adapter](../plugins/#graphql-adapter)
- [OData V2 Adapter](../plugins/#odata-v2-proxy)
- [WebSockets Adapter](../plugins/#websocket)
- [UI5 Dev Server](../plugins/#ui5-dev-server)
- [Open Telemetry → SAP Cloud Logging, Dynatrace, ...](../plugins/#telemetry)
- [Attachments → SAP Object Store /S3](../plugins/#attachments)
- [Attachments → SAP Document Management Service](../plugins/#@cap-js/sdm)
- [Messaging → SAP Cloud Application Event Hub](../plugins/#event-broker-plugin)
- [Change Tracking](../plugins/#change-tracking)
- [Notifications](../plugins/#notifications)
- [Audit Logging → SAP Audit Logging](../plugins/#audit-logging)
- [Personal Data Management → SAP DPI Services](../guides/data-privacy/)
- [Open Resource Discovery (ORD)](../plugins/#ord-open-resource-discovery)

> [!tip]
>
> This is just a subset and a snapshot of the growing number of plugins → find more on the [***CAP Plugins***](../plugins/) page, as well as in the [***CAP Community***](../resources/community-sap) spaces.

### Intrinsic Extensibility

SaaS customers, verticalization partners, or your own teams can...

- Add/override annotations, translations, initial data
- Add extension fields, entities, relationships
- Add custom logic → in-app + side-by-side
- Bundle and share that as reuse extension packages
- Feature-toggle such pre-built extension packages per tenant

All of these tasks are done in [the same way as in your own projects](best-practices.md#intrinsic-extensibility):

- Using the same techniques of CDS Aspects and Event Handlers
- Including adaptation and extension of reuse types/models
- Including extensions to framework-provided services

And all of that is available out of the box, that is, without you having to create extension points. You would want to restrict who can extend what, though.

### Cloud-Native by Design

CAP's [service-centric paradigm](best-practices#services) is designed from the ground up for cloud-scale enterprise applications. Its core design principles of flyweight, stateless services processing passive, immutable data, complemented by an intrinsic, ubiquitous [events-based](best-practices#events) processing model, greatly promote scalability and resilience.

On top of that, several built-in facilities address many of the things to care about in cloud-based apps out of the box, such as:

- **Multitenancy** → tenant *isolation* at runtime; *deploy*, *subscribe*, *update* handled by MTX
- **Extensibility** → for customers to tailor SaaS apps to their needs → [see Intrinsic...](#intrinsic-extensibility)
- **Security** → CAP + plugins handle authentication, certificates, mTLS, ...
- **Scalability** → by stateless services, passive data, messaging, ...
- **Resilience** → by messaging, tx outbox, outboxed audit logging, ...
- **Observability** → by logging + telemetry integrated with BTP services

> [!tip]
>
> Application developers don't have to and **should not have to care** about these complex non-functional requirements. Instead, they should [focus on domain](#focus-on-domain), that is, their functional requirements, as much as possible.

> [!caution]
>
> Many of these crucial cloud qualities are complex and critical in nature, for example, **multitenancy**, **isolation**, and **security**; and scalability and resilience aren't easy to get right either. It's a **high risk** to assume that each application developer in each project does everything in the right way.

### Open _and_ Opinionated

That might sound like a contradiction, but it isn't: While CAP certainly gives *opinionated* guidance, we do so without sacrificing openness and flexibility. At the end of the day, you stay in control of which tools or technologies to choose, or which architecture patterns to follow, as depicted in the following table.

| CAP is *Opinionated* in... | CAP is *Open* as... |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| **Platform-agnostic APIs** to avoid lock-ins to low-level stuff. | All abstractions follow a glass-box pattern that allows unrestricted access to lower-level things, if necessary. |
| **Best practices**, served out of the box by generic providers. | You're free to do things your way in [custom handlers](../guides/providing-services#custom-logic), ... while CAP simply tries to get the tedious tasks out of your way. |
| **Out-of-the-box support** for **[SAP Fiori](https://developers.sap.com/topics/ui-development.html)** and **[SAP HANA](https://developers.sap.com/topics/hana.html)**. | You can also choose other UI technologies, like [Vue.js](../get-started/in-a-nutshell#vue). Other databases are supported as well. |
| **Tools support** in [SAP Build Code](../tools/cds-editors#bas) or [VS Code](../tools/cds-editors#vscode). | Everything in CAP can be done using the [`@sap/cds-dk`](../tools/cds-cli) CLI and any editor or IDE of your choice. |
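
Conceptually, this glass-box layering means custom handlers sit in front of the generic providers and can delegate to them. The following plain-JavaScript sketch illustrates the idea only; it is *not* CAP's actual API (see the custom handlers guide for that):

```javascript
// Minimal sketch of handler layering: each event keeps a chain of handlers;
// later registrations are put in front, and a handler may delegate to the
// rest of the chain (ultimately the generic default) via `next`.
class Service {
  constructor() { this.handlers = {} }
  on(event, handler) { (this.handlers[event] ??= []).unshift(handler) }
  async emit(event, req) {
    const chain = this.handlers[event] ?? []
    let i = 0
    const next = () => (i < chain.length ? chain[i++](req, next) : undefined)
    return next()
  }
}

const srv = new Service()
// "generic provider", registered first → runs last
srv.on('READ', () => [{ ID: 1, title: 'Wuthering Heights' }])
// "custom handler": post-processes the generic result
srv.on('READ', async (req, next) => {
  const rows = await next()
  return rows.map(r => ({ ...r, title: r.title.toUpperCase() }))
})

srv.emit('READ', {}).then(rows => console.log(rows[0].title)) // prints: WUTHERING HEIGHTS
```

The point is the *opt-in* nature: if no custom handler is registered, the generic default serves the request unchanged; registering one lets you wrap, replace, or augment it.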
> [!tip]
>
> And most important: As CAP itself is designed as an open framework, everything that's not covered by CAP today can be solved in application projects, in specific custom code, by [generic handlers](best-practices.md#extensible-framework), ... or by [plugins](../plugins/index.md) that you could build and contribute.

⇒ **Contributions *are* welcome!**

## Focus on Domain

CAP places **primary focus on domain**, by capturing _domain knowledge_ and _intent_ instead of imperative coding — that means, _What, not How_ — which promotes the following:

- Close collaboration of _developers_ and _domain experts_ in domain modeling.
- _Out-of-the-box_ implementations for _best practices_ and recurring tasks.
- _Platform-agnostic_ approach to _avoid lock-ins_, hence _protecting investments_.

### Conceptual Modeling by CDS

### Domain-Driven Design

### Rapid Development

### Minimal Distraction

## Avoid Technical Debt

There are several definitions of technical debt found in the media, which all boil down to:

Technical debt arises when speed of delivery is prioritized over quality. The results must later be revised, thoroughly refactored, or completely rebuilt. {.quote}

So, how could CAP help to avoid, or reduce, the risks of piling up technical debt?

### Less Code → Less Mistakes

Every line of code not written is free of errors. {.quote}

Moreover:

- Relieving app dev teams from overly technical disciplines not only saves effort and time, it also avoids **severe mistakes** that can be made in these fields, for example, in tenant isolation and security.
- **Best practices** reproduce proven solution patterns for recurring tasks, found and refined in successful application projects by your peers.
- Having them **served out of the box** paves the path for their adoption, and hence reduces the likelihood of picking anti patterns instead.

### Single Points to Fix

Of course, we also make mistakes and errors in CAP, but ...

- We can fix them centrally, and all CAP users benefit from that immediately.
- Those bugs are frequently found and fixed by your peers in crime, before you even encounter them yourselves.
- This effect increases with the steadily growing adoption of CAP that we see, ...
- And with the open culture we established successfully, for example, **open issue reports** in GitHub (the standard out there) instead of private support tickets, a relic of the past.

> Note that all of this is in contrast to code generators, where you can't fix code generated in the past → see also [*Avoid Code Generators*](bad-practices#code-generators) in the anti patterns guide.

### Minimized Lock-Ins

Keeping pace with a rapidly changing world of volatile cloud technologies and platforms is a major challenge, as today's technologies might soon become obsolete. CAP avoids such lock-ins and shields application developers from low-level things like:

- **Authentication** and **Authorization**, incl. things like certificates, mTLS, OAuth, ...
- **Service Bindings** like K8s secrets, VCAP_SERVICES, ...
- **Multitenancy**-related things, especially w.r.t. tenant isolation
- **Messaging** protocols or brokers such as AMQP, MQTT, Webhooks, Kafka, Redis, ...
- **Networking** protocols such as HTTP, gRPC, OData, GraphQL, SOAP, RFC, ...
- **Audit Logging** → use the *Calesi* variant, which provides ultimate resilience
- **Logs**, **Traces**, **Metrics** → CAP does that behind the scenes + provides *Calesi* variants
- **Transaction Management** → CAP manages all transactions → don't mess with that!

> [!tip]
>
> CAP not only abstracts these things at scale, but also does most of them automatically in the background. In addition, it allows us to provide various implementations that encourage *Evolution w/o Disruption*, as well as fully functional mocks used in development.

> [!caution]
>
> Things get dangerous when application developers have to deal with low-level security-related things like authentication, certificates, tenant isolation, and so on. Whenever this happens, it's a clear sign that something is seriously wrong.

## What about AI?

- AI provides tremendous boosts to productivity → for example:
  - **Coding Assists** → for example, by [Copilot](https://en.wikipedia.org/wiki/Microsoft_Copilot) in `.cds`, `.js`, even `.md` sources
  - **Code Analysis** → detecting [bad practices](bad-practices) → guiding to [best practices](best-practices)
  - **Code Generation** → for example, for tests, test data, ...
  - **Project Scaffolding** → for quick head starts
  - **Search & Learning Assists** → like Maui, ...
- But this doesn't replace the need for **Human Intelligence**!
- There's a difference between a GPT-generated one-off thesis and long-lived enterprise software, which needs to adapt and scale to new requirements.
- **CAP itself** is a major contribution to AI → its simple, clear concepts, uniform ways to implement and consume services, capire, its openness and visibility in the public world, ...
# Getting Started in a Nutshell

Build Your First App with CAP {.subtitle}

## Jumpstart a Project {#jumpstart}

After you've completed the [*Initial Setup*](./), you jumpstart a project as follows:

- Create a new project using `cds init`:

  ::: code-group
  ```sh [Node.js]
  cds init bookshop
  ```
  ```sh [Java]
  cds init bookshop --java --java:mvn -DgroupId=com.sap.capire
  ```
  :::

- Open the project in VS Code:

  ```sh
  code bookshop
  ```

  [Assumes you activated the `code` command on macOS as documented](/tools/cds-editors#vscode) {.learn-more}
  For Java development in VS Code you need to [install extensions](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack). {.learn-more .java}

- Run the following command in an [*Integrated Terminal*](https://code.visualstudio.com/docs/terminal/basics):

  ::: code-group
  ```sh [Node.js]
  cds watch
  ```
  ```sh [Java]
  cd srv && mvn cds:watch
  ```
  :::

::: details `cds watch` is waiting for things to come...

```log
[dev] cds w
cds serve all --with-mocks --in-memory?
live reload enabled for browsers

      ___________________________

  No models found in db/,srv/,app/,schema,services. // [!code focus]
  Waiting for some to arrive... // [!code focus]
```

So, let's go on feeding it...

:::

::: details Optionally clone sample from GitHub ...

The sections below describe a hands-on walkthrough, in which you create a new project and fill it with content step by step. Alternatively, you can get the final sample content from GitHub as follows:

::: code-group
```sh [Node.js]
git clone https://github.com/sap-samples/cloud-cap-samples samples
cd samples
npm install
```
```sh [Java]
git clone https://github.com/sap-samples/cloud-cap-samples-java bookshop
```
:::

Note: When comparing the code from *cap/samples* on GitHub to the snippets given in the sections below, you will recognize additions showcasing enhanced features. So, what you find in there is a superset of what we describe in this getting started guide.

:::

## Capture Domain Models

Let's feed our project by adding a simple domain model. Start by creating a file named _db/schema.cds_ and copy the following definitions into it:

::: code-group

```cds [db/schema.cds]
using { Currency, managed, sap } from '@sap/cds/common';
namespace sap.capire.bookshop; // [!code focus]

entity Books : managed { // [!code focus]
  key ID : Integer;
  title    : localized String(111);
  descr    : localized String(1111);
  author   : Association to Authors;
  genre    : Association to Genres;
  stock    : Integer;
  price    : Decimal(9,2);
  currency : Currency;
}

entity Authors : managed { // [!code focus]
  key ID : Integer;
  name   : String(111);
  books  : Association to many Books on books.author = $self;
}

/** Hierarchically organized Code List for Genres */
entity Genres : sap.common.CodeList { // [!code focus]
  key ID   : Integer;
  parent   : Association to Genres;
  children : Composition of many Genres on children.parent = $self;
}
```

:::

_Find this source also in `cap/samples` [for Node.js](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/db/schema.cds), and [for Java](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/db/books.cds)_{ .learn-more}
[Learn more about **Domain Modeling**.](../guides/domain-modeling){ .learn-more}
[Learn more about **CDS Modeling Languages**.](../cds/){ .learn-more}

### Deployed to Databases {#deployed-in-memory}

As soon as you save the *schema.cds* file, the still-running `cds watch` reacts immediately with new output like this: {.impl .node}

```log
[cds] - connect to db > sqlite { database: ':memory:' }
/> successfully deployed to in-memory database.
```

This means that `cds watch` detected the changes in _db/schema.cds_ and automatically bootstrapped an in-memory _SQLite_ database when restarting the server process. {.impl .node}

As soon as you save your CDS file, the still-running `mvn cds:watch` command reacts immediately with a CDS compilation and reload of the CAP Java application. The embedded database of the started application will reflect the schema defined in your CDS file. {.impl .java}
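
Optionally, you can also see actual data in place: the runtimes automatically deploy CSV files found in `db/data/`, provided they're named after the entities' fully qualified names. The following minimal file is an illustrative sample (a subset of the entity's columns is fine; associations map to `_`-suffixed foreign key columns):

```csv [db/data/sap.capire.bookshop-Books.csv]
ID,title,author_ID,stock
201,Wuthering Heights,101,12
207,Jane Eyre,107,11
```

On the next restart, the in-memory database is re-deployed with this data included.
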
### Compiling Models {#cli} We can optionally test-compile models individually to check for validity and produce a parsed output in [CSN format](../cds/csn). For example, run this command in a new terminal: ```sh cds db/schema.cds ``` This dumps the compiled CSN model as a plain JavaScript object to stdout.
Add `--to <target>` (shortcut `-2`) to produce other outputs, for example:

```sh
cds db/schema.cds -2 json
cds db/schema.cds -2 yml
cds db/schema.cds -2 sql
```

[Learn more about the command line interface by executing `cds help`.](../tools/cds-cli#cds-help){.learn-more}

## Providing Services {#services}

After the recent changes, `cds watch` also prints this message:

```log
No service definitions found in loaded models.
Waiting for some to arrive...
```

After the recent changes, the running CAP Java application is still not exposing any service endpoints. So, let's feed it two service definitions for different use cases:

- An `AdminService` for administrators to maintain `Books` and `Authors`.
- A `CatalogService` for end users to browse and order `Books` under path `/browse`.

To do so, create the following two files in folder _./srv_ and fill them with this content:

::: code-group
```cds [srv/admin-service.cds]
using { sap.capire.bookshop as my } from '../db/schema';
service AdminService @(requires:'authenticated-user') { // [!code focus]
  entity Books as projection on my.Books;
  entity Authors as projection on my.Authors;
}
```
```cds [srv/cat-service.cds]
using { sap.capire.bookshop as my } from '../db/schema';
service CatalogService @(path:'/browse') { // [!code focus]
  @readonly entity Books as select from my.Books {
    *, author.name as author
  } excluding { createdBy, modifiedBy };
  @requires: 'authenticated-user'
  action submitOrder (book: Books:ID, quantity: Integer);
}
```
:::

*Find this source also on GitHub [for Node.js](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/srv), and [for Java](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv)*{.learn-more}
[Learn more about **Defining Services**.](../guides/providing-services){ .learn-more}

### Served via OData

This time `cds watch` reacted with additional output like this:

```log
[cds] - serving AdminService { at: '/odata/v4/admin' }
[cds] - serving CatalogService { at: '/browse' }
[cds] - server listening on { url: 'http://localhost:4004' }
```

As you can see, the two service definitions have been compiled and generic service providers have been constructed to serve requests on the listed endpoints _/odata/v4/admin_ and _/browse_.

If the CDS service definitions compile correctly, the Spring Boot runtime is reloaded automatically and outputs log lines like these:

```log
c.s.c.services.impl.ServiceCatalogImpl : Registered service AdminService
c.s.c.services.impl.ServiceCatalogImpl : Registered service CatalogService
```

As you can see in the log output, the two service definitions have been compiled and generic service providers have been constructed to serve requests on the listed endpoints _/odata/v4/AdminService_ and _/odata/v4/browse_.

::: warning Add the dependency to spring-boot-security-starter
Both services defined above contain security annotations that restrict access to certain endpoints. Add the spring-boot-security-starter dependency to _srv/pom.xml_ in order to activate mock user and authentication support:
```sh
mvn com.sap.cds:cds-maven-plugin:add -Dfeature=SECURITY
```
:::

> [!tip]
>
> CAP-based services are full-fledged OData services out of the box. Without adding any provider implementation code, they translate OData requests into corresponding database requests, and return the results as OData responses.

[Learn more about **Generic Providers**.](../guides/providing-services){.learn-more}

### Generating APIs

We can optionally also compile service definitions explicitly, for example to [OData EDMX metadata documents](https://docs.oasis-open.org/odata/odata/v4.0/odata-v4.0-part3-csdl.html):

```sh
cds srv/cat-service.cds -2 edmx
```

Essentially, this invokes from the command line what happened automatically behind the scenes in the previous steps. While such explicit compile steps aren't really needed, you can use them to test correctness on the model level, for example.
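To get an intuition for what such compile steps do, here is a toy rendering of a simplified CSN entity into SQL DDL — our own sketch in the spirit of `-2 sql`, not the actual compiler, which handles far more (associations, views, annotations, dialects):

```js
// Toy CSN-to-DDL rendering — our own sketch, not the actual cds compiler.
const typeMap = { 'cds.UUID': 'NVARCHAR(36)', 'cds.String': 'NVARCHAR(255)', 'cds.Integer': 'INTEGER' }

function toCreateTable (name, entity) {
  const columns = Object.entries(entity.elements).map(([col, def]) =>
    `  ${col} ${typeMap[def.type]}${def.key ? ' PRIMARY KEY' : ''}`)
  // Dots in qualified names become underscores in table names
  return `CREATE TABLE ${name.replace(/\./g, '_')} (\n${columns.join(',\n')}\n);`
}

const ddl = toCreateTable('sap.capire.bookshop.Books', {
  elements: {
    ID:    { key: true, type: 'cds.UUID' },
    title: { type: 'cds.String' },
    stock: { type: 'cds.Integer' }
  }
})
console.log(ddl)
```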
### Generic *index.html*

Open _<http://localhost:4004>_ in your browser and see the generic _index.html_ page:

![Generic welcome page generated by CAP that lists all endpoints. Eases jumpstarting development and is not meant for productive use.](assets/welcome.png){}

> Note: User `alice` is a [default user with admin privileges](../node.js/authentication#mocked). Use it to access the _/admin_ service. You don't need to enter a password.

Open _<http://localhost:8080>_ in your browser and see the generic _index.html_ page:

![Generic welcome page generated by CAP that lists all endpoints. Eases jumpstarting development and is not meant for productive use.](./assets/welcome-java.png)

> Note: User `authenticated` is a [prepared mock user](../java/security#mock-users) which will be authenticated by default. Use it to access the _/admin_ service. You don't need to enter a password.

## Using Databases {#databases}

### SQLite In-Memory {.impl .node}

As [previously shown](#deployed-in-memory), `cds watch` automatically bootstraps an SQLite in-process and in-memory database by default — that is, unless told otherwise. While this **isn't meant for productive use**, it drastically speeds up development turn-around times, essentially by mocking your target database, for example, SAP HANA. {.impl .node}

### H2 In-Memory {.impl .java}

As [previously shown](#deployed-in-memory), `mvn cds:watch` automatically bootstraps an H2 in-process and in-memory database by default — that is, unless told otherwise. While this **isn't meant for productive use**, it drastically speeds up turn-around times in local development and furthermore allows self-contained testing.
{.impl .java} ### Adding Initial Data Now, let's fill your database with initial data by adding a few plain CSV files under _db/data_ like this: ::: code-group ```csvc [db/data/sap.capire.bookshop-Books.csv] ID,title,author_ID,stock 201,Wuthering Heights,101,12 207,Jane Eyre,107,11 251,The Raven,150,333 252,Eleonora,150,555 271,Catweazle,170,22 ``` ```csvc [db/data/sap.capire.bookshop-Authors.csv] ID,name 101,Emily Brontë 107,Charlotte Brontë 150,Edgar Allen Poe 170,Richard Carpenter ``` ::: [Find a full set of `.csv` files in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/db/data){ .learn-more target="_blank"} After you've added these files, `cds watch` restarts the server with output, telling us that the files have been detected and their content has been loaded into the database automatically: ```log [cds] - connect to db { database: ':memory:' } > filling sap.capire.bookshop.Authors from bookshop/db/data/sap.capire.bookshop-Authors.csv > filling sap.capire.bookshop.Books from bookshop/db/data/sap.capire.bookshop-Books.csv > filling sap.capire.bookshop.Books_texts from bookshop/db/data/sap.capire.bookshop-Books_texts.csv > filling sap.capire.bookshop.Genres from bookshop/db/data/sap.capire.bookshop-Genres.csv > filling sap.common.Currencies from common/data/sap.common-Currencies.csv > filling sap.common.Currencies_texts from common/data/sap.common-Currencies_texts.csv /> successfully deployed to in-memory database. ``` > Note: This is the output when you're using the [samples](https://github.com/sap-samples/cloud-cap-samples). It's less if you've followed the manual steps here. 
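The file names follow the convention `<namespace>-<Entity>.csv`, and the header row lists the entity's element names. A minimal sketch (ours — the actual loader also handles quoting, type conversion, and more) of how such content maps to records:

```js
// Minimal CSV-to-records sketch for the naming/header convention shown above.
// Illustrative only — the real CSV loader handles quoting, types, and more.
function parseCsv (text) {
  const [header, ...rows] = text.trim().split('\n').map(line => line.split(','))
  return rows.map(row => Object.fromEntries(header.map((h, i) => [h, row[i]])))
}

const authors = parseCsv(`ID,name
101,Emily Brontë
107,Charlotte Brontë`)
console.log(authors) // two records with ID and name fields
```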
After you've added these files, `mvn cds:watch` restarts the server with output, telling us that the files have been detected and their content has been loaded into the database automatically:

```log
c.s.c.s.impl.persistence.CsvDataLoader : Filling sap.capire.bookshop.Authors from db/data/sap.capire.bookshop-Authors.csv
c.s.c.s.impl.persistence.CsvDataLoader : Filling sap.capire.bookshop.Books from db/data/sap.capire.bookshop-Books.csv
```

[Learn more about **Using Databases**.](../guides/databases){.learn-more}

### Querying via OData

Now that we have a connected, fully capable SQL database, filled with some initial data, we can send complex OData queries, served by the built-in generic providers:

- _[…/Books?$select=ID,title](http://localhost:4004/browse/Books?$select=ID,title)_ {.impl .node}
- _[…/Authors?$search=Bro](http://localhost:4004/odata/v4/admin/Authors?$search=Bro)_ {.impl .node}
- _[…/Authors?$expand=books($select=ID,title)](http://localhost:4004/odata/v4/admin/Authors?$expand=books($select=ID,title))_ {.impl .node}
- _[…/Books?$select=ID,title](http://localhost:8080/odata/v4/browse/Books?$select=ID,title)_ {.impl .java}
- _[…/Authors?$search=Bro](http://localhost:8080/odata/v4/AdminService/Authors?$search=Bro)_ {.impl .java}
- _[…/Authors?$expand=books($select=ID,title)](http://localhost:8080/odata/v4/AdminService/Authors?$expand=books($select=ID,title))_ {.impl .java}

> Note: Use [_alice_](../node.js/authentication#mocked) as user to query the `admin` service. You don't need to enter a password. {.impl .node}

> Note: Use [_authenticated_](../java/security#mock-users) to query the `admin` service. You don't need to enter a password. {.impl .java}

[Learn more about **Generic Providers**.](../guides/providing-services){.learn-more}
[Learn more about **OData's Query Options**.](../advanced/odata){.learn-more}

### Persistent Databases {.impl .node}

Instead of in-memory databases, we can also use persistent ones.
For example, still with SQLite, add the following configuration: ::: code-group ```json [package.json] { "cds": { "requires": { "db": { "kind": "sqlite", "credentials": { "url": "db.sqlite" } // [!code focus] } } } } ``` ::: Then deploy: ```sh cds deploy ``` The difference from the automatically provided in-memory database is that we now get a persistent database stored in the local file _./db.sqlite_. This is also recorded in the _package.json_. ::: details To see what that did, use the `sqlite3` CLI with the newly created database. ```sh sqlite3 db.sqlite .dump sqlite3 db.sqlite .tables ``` ::: [Learn how to install SQLite on Windows.](troubleshooting#how-do-i-install-sqlite-on-windows){.learn-more} :::details You could also deploy to a provisioned SAP HANA database using this variant. ```sh cds deploy --to hana ``` ::: [Learn more about deploying to SAP HANA.](../guides/databases){.learn-more .impl .node} ## Serving UIs {#uis} You can consume the provided services, for example, from UI frontends, using standard AJAX requests. Simply add an _index.html_ file into the _app/_ folder, to replace the generic index page. ### SAP Fiori UIs {#fiori} CAP provides out-of-the-box support for SAP Fiori UIs, for example, with respect to SAP Fiori annotations and advanced features such as search, value helps and SAP Fiori Draft. ![Shows the famous bookshop catalog service in an SAP Fiori UI.](assets/fiori-app.png) [Learn more about **Serving Fiori UIs**.](../advanced/fiori){.learn-more} ### Vue.js UIs {#vue .impl .node} Besides Fiori UIs, CAP services can be consumed from any UI frontends using standard AJAX requests. For example, you can [find a simple Vue.js app in **cap/samples**](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/app/vue), which demonstrates browsing and ordering books using OData requests to [the `CatalogService` API we defined above](#services). 
{.impl .node}

![Shows the famous bookshop catalog service in a simple Vue.js UI.](assets/vue-app.png){ .impl .node .adapt}

## Adding Custom Logic

While the generic providers serve most CRUD requests out of the box, you can add custom code to deal with the specific domain logic of your application.

### Service Implementations

In Node.js, the easiest way to provide implementations for services is through equally named _.js_ files placed next to a service definition's _.cds_ file: {.impl .node}

```zsh
bookshop/
├─ srv/
│  ├─ ...
│  ├─ cat-service.cds # [!code focus]
│  └─ cat-service.js # [!code focus]
└─ ...
```

[See these files also in **cap/samples**/bookshop/srv folder.](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/srv){.learn-more}
[Learn more about providing service implementations **in Node.js**.](../node.js/core-services#implementing-services){.learn-more .impl .node}
[Learn also **how to do that in Java** using Event Handler Classes.](../java/event-handlers/#handlerclasses){.learn-more .impl .java}

You can have this _.js_ file created automatically with [`cds add handler`](../tools/cds-cli#handler). {.learn-more}

In CAP Java, you can add custom handlers for your service as so-called event handlers. As CAP Java integrates with Spring Boot, you provide your custom code in classes annotated with `@Component`, for example. Use your favorite Java IDE to add a class like the following to the `srv/src/main/java/` folder of your application. {.impl .java}

::: code-group
```java [srv/src/main/java/com/sap/capire/bookshop/handlers/CatalogServiceHandler.java]
@Component
@ServiceName(CatalogService_.CDS_NAME)
public class CatalogServiceHandler implements EventHandler {
  // your custom code will go here
}
```
:::

::: tip
Place the code in your package of choice and use your IDE to generate the needed `import` statements.
:::

### Adding Event Handlers

Service implementations essentially consist of one or more event handlers.
Copy this into _srv/cat-service.js_ to add custom event handlers: ::: code-group ```js [srv/cat-service.js] const cds = require('@sap/cds') class CatalogService extends cds.ApplicationService { init() { const { Books } = cds.entities('CatalogService') // Register your event handlers in here, for example: // [!code focus] this.after ('each', Books, book => { // [!code focus] if (book.stock > 111) { // [!code focus] book.title += ` -- 11% discount!` // [!code focus] } // [!code focus] }) // [!code focus] return super.init() }} module.exports = CatalogService ``` ::: [Learn more about adding **event handlers** using `.on/before/after`.](../node.js/core-services#srv-on-before-after){.learn-more} Now that you have created the classes for your custom handlers it's time to add the actual logic. You can achieve this by adding methods annotated with CAP's `@Before`, `@On`, or `@After` to your new class. The annotation takes two arguments: the event that shall be handled and the entity name for which the event is handled. 
::: code-group
```java [srv/src/main/java/com/sap/capire/bookshop/handlers/CatalogServiceHandler.java]
@After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public void addDiscountIfApplicable(List<Books> books) {
  for (Books book : books) {
    if (book.getStock() != null && book.getStock() > 111) {
      book.setTitle(book.getTitle() + " -- 11% discount!");
    }
  }
}
```
:::

:::details Code including imports
::: code-group
```java [srv/src/main/java/com/sap/capire/bookshop/handlers/CatalogServiceHandler.java]
package com.sap.capire.bookshop.handlers;

import java.util.List;
import org.springframework.stereotype.Component;

import com.sap.cds.services.cds.CqnService;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.After;
import com.sap.cds.services.handler.annotations.ServiceName;

import cds.gen.catalogservice.Books;
import cds.gen.catalogservice.Books_;
import cds.gen.catalogservice.CatalogService_;

@Component
@ServiceName(CatalogService_.CDS_NAME)
public class CatalogServiceHandler implements EventHandler {

  @After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
  public void addDiscountIfApplicable(List<Books> books) {
    for (Books book : books) {
      if (book.getStock() != null && book.getStock() > 111) {
        book.setTitle(book.getTitle() + " -- 11% discount!");
      }
    }
  }
}
```
:::

[Learn more about **event handlers** in the CAP Java documentation.](../java/event-handlers/#handlerclasses){.learn-more}

### Consuming Other Services

Quite frequently, event handler implementations consume other services, sending requests and queries, as in the completed example below.
::: code-group
```js [srv/cat-service.js]
const cds = require('@sap/cds')
class CatalogService extends cds.ApplicationService { async init() {

  const db = await cds.connect.to('db') // connect to database service
  const { Books } = db.entities         // get reflected definitions

  // Reduce stock of ordered books if available stock suffices
  this.on ('submitOrder', async req => {
    const {book,quantity} = req.data
    const n = await UPDATE (Books, book)
      .with ({ stock: {'-=': quantity }})
      .where ({ stock: {'>=': quantity }})
    n > 0 || req.error (409,`${quantity} exceeds stock for book #${book}`)
  })

  // Add some discount for overstocked books
  this.after ('each','Books', book => {
    if (book.stock > 111) book.title += ` -- 11% discount!`
  })

  return super.init()
}}
module.exports = CatalogService
```
:::

::: code-group
```java [srv/src/main/java/com/sap/capire/bookshop/handlers/SubmitOrderHandler.java]
@Component
@ServiceName(CatalogService_.CDS_NAME)
public class SubmitOrderHandler implements EventHandler {

  private final PersistenceService persistenceService;

  public SubmitOrderHandler(PersistenceService persistenceService) {
    this.persistenceService = persistenceService;
  }

  @On
  public void onSubmitOrder(SubmitOrderContext context) {
    Select byId = Select.from(cds.gen.catalogservice.Books_.class).byId(context.getBook());
    Books book = persistenceService.run(byId).single().as(Books.class);
    if (context.getQuantity() > book.getStock())
      throw new IllegalArgumentException(context.getQuantity() + " exceeds stock for book #" + book.getTitle());
    book.setStock(book.getStock() - context.getQuantity());
    persistenceService.run(Update.entity(Books_.CDS_NAME).data(book));
    context.setCompleted();
  }
}
```
:::

:::details Code including imports
::: code-group
```java [srv/src/main/java/com/sap/capire/bookshop/handlers/SubmitOrderHandler.java]
package com.sap.capire.bookshop.handlers;

import org.springframework.stereotype.Component;

import com.sap.cds.ql.Select;
import com.sap.cds.ql.Update;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.On;
import com.sap.cds.services.handler.annotations.ServiceName;
import com.sap.cds.services.persistence.PersistenceService;

import cds.gen.catalogservice.Books;
import cds.gen.catalogservice.Books_;
import cds.gen.catalogservice.CatalogService_;
import cds.gen.catalogservice.SubmitOrderContext;

@Component
@ServiceName(CatalogService_.CDS_NAME)
public class SubmitOrderHandler implements EventHandler {

  private final PersistenceService persistenceService;

  public SubmitOrderHandler(PersistenceService persistenceService) {
    this.persistenceService = persistenceService;
  }

  @On
  public void onSubmitOrder(SubmitOrderContext context) {
    Select byId = Select.from(cds.gen.catalogservice.Books_.class).byId(context.getBook());
    Books book = persistenceService.run(byId).single().as(Books.class);
    if (context.getQuantity() > book.getStock())
      throw new IllegalArgumentException(context.getQuantity() + " exceeds stock for book #" + book.getTitle());
    book.setStock(book.getStock() - context.getQuantity());
    persistenceService.run(Update.entity(Books_.CDS_NAME).data(book));
    context.setCompleted();
  }
}
```
:::

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/srv/cat-service.js){ .learn-more .impl .node target="_blank"}
[Find this source also in **cap/samples**.](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/main/java/my/bookshop/handlers/CatalogServiceHandler.java#L166){ .impl .java .learn-more target="_blank"}
[Learn more about **connecting to services** using `cds.connect`.](../node.js/cds-connect){ .learn-more .impl .node}
[Learn more about **connecting to services** using `@Autowired`, `com.sap.cds.ql`, etc.](../java/services){.learn-more .impl .java}
[Learn more about **reading and writing data** using `cds.ql`.](../node.js/cds-ql){ .learn-more .impl .node}
[Learn more about **reading and writing data** using
`cds.ql`.](../java/working-with-cql/query-api){ .learn-more .impl .java} [Learn more about **using reflection APIs** using `.entities`.](../node.js/core-services#entities){ .learn-more .impl .node} [Learn more about **typed access to data** using the CAP Java SDK.](../java/cds-data#typed-access){ .learn-more .impl .java} **Test this implementation**, [for example using the Vue.js app](#vue), and see how discounts are displayed in some book titles. {.impl .node} ### Sample HTTP Requests Test the implementation by submitting orders until you see the error messages. Create a file called _test.http_ and copy the request into it. ::: code-group ```http [test.http] ### Submit Order POST http://localhost:4004/browse/submitOrder Content-Type: application/json Authorization: Basic alice: { "book": 201, "quantity": 2 } ``` ::: ::: code-group ```http [test.http] ### Submit Order POST http://localhost:8080/odata/v4/browse/submitOrder Content-Type: application/json Authorization: Basic authenticated: { "book": 201, "quantity": 2 } ``` ::: ## Summary With this getting started guide we introduced many of the basics of CAP, such as: - [Domain Modeling](../guides/domain-modeling) - [Providing Services](../guides/providing-services) - [Consuming Services](../guides/using-services) - [Using Databases](../guides/databases) - [Serving UIs](../advanced/fiori) Visit the [***Cookbook***](../guides/) for deep dive guides on these topics and more. Also see the reference documentations for [***CDS***](../cds/), as well as [***Node.js***](../node.js/) and [***Java***](../java/) Service SDKs and runtimes. # Best Practices by CAP Key Concepts & Rationales {.subtitle} ## Introduction ### Primary Building Blocks The CAP framework features a mix of proven and broadly adopted open-source and SAP technologies. The following figure depicts CAP's place and focus in a stack architecture. ![Vertically CAP services are placed between database and UI. 
Horizontally, CDS fuels CAP services and is closer to the core than, for example, toolkits and IDEs. Also shown horizontally is the integration into various platform services.](assets/architecture.drawio.svg){}

The major building blocks are as follows:

- [**Core Data Services** (CDS)](../cds/) — CAP's universal modeling language, and the very backbone of everything; used to capture domain knowledge, generate database schemas, translate to and from various API languages, and, most important, fuel generic runtimes to automatically serve requests out of the box.
- [**Service Runtimes**](../guides/providing-services.md) for [Node.js](../node.js/) and [Java](../java/) — providing the core frameworks for services, generic providers to serve requests automatically, database support for SAP HANA, SQLite, and PostgreSQL, and protocol adaptors for REST, OData, GraphQL, ...
- [**Platform Integrations**](../plugins/) — providing CAP-level service interfaces (*'[Calesi](#the-calesi-pattern)'*) to cloud platform services in platform-agnostic ways, as much as possible. Some of these are provided out of the box, others as plugins.
- [**Command-Line Interface** (CLI)](../tools/) — the Swiss army knife on the tools and development kit front, complemented by integrations and support in [*SAP Build Code*](https://www.sap.com/germany/products/technology-platform/developer-tools.html), *Visual Studio Code*, *IntelliJ*, and *Eclipse*.

In addition, there's a fast-growing number of [plugins](../plugins/) contributed by open-source and inner-source [communities](/resources/#public-resources) that enhance CAP in various ways, and integrate with additional tools and environments; the [*Calesi* plugins](./index.md#the-calesi-effect) are among them.

### Models fuel Runtimes

CDS models play a prevalent role in CAP applications. They're ultimately used to fuel generic runtimes to automatically serve requests, without requiring any coding for custom implementations.
![Models fuel Generic Services](assets/fueling-services.drawio.svg){} CAP runtimes bootstrap *Generic Service Providers* for services defined in service models. They use the information at runtime to translate incoming requests from a querying protocol, such as OData, into SQL queries sent to the database. :::tip Models fuel Runtimes CAP uses the captured declarative information about data and services to **automatically serve requests**, including complex deep queries, with expands, where clauses and order by, aggregations, and so forth... ::: ### Concepts Overview The following sections provide an overview of the core concepts and design principles of CAP. The following illustration is an attempt to show all concepts, how they relate to each other, and to introduce the terminology. ![Service models declare service interfaces, events, facades, and services. Service interfaces are published as APIs and are consumed by clients. Clients send requests which trigger events. Services are implemented in service providers, react on events, and act as facades. Facades are inferred to service interfaces and are views on domain models. Service providers are implemented through event handlers which handle events. Also, service providers read/write data which has been declared in domain models.](assets/key-concepts.drawio.svg){} Start reading the diagram from the _Service Models_ bubble in the middle, then follow the arrows to the other concepts. We dive into each of these concepts in the following sections, starting with _Domain Models_, the other grey bubble in the previous illustration. ## Domain Models [CDS](../cds/) is CAP's universal modeling language to declaratively capture knowledge about an application's domain. 
Data models capture the *static* aspects of a domain, using the widely used technique of [*entity-relationship modeling*](https://en.wikipedia.org/wiki/Entity–relationship_model#:~:text=An%20entity–relationship%20model%20(or,instances%20of%20those%20entity%20types).). For example, a simple domain model as illustrated in this ER diagram: ![bookshop-erm.drawio](assets/bookshop-erm.drawio.svg) In a first iteration, it would look like this in CDS, with some fields added: ::: code-group ```cds [Domain Data Model] using { Country, cuid, managed } from '@sap/cds/common'; entity Books : cuid, managed { title : localized String; author : Association to Authors; } entity Authors : cuid, managed { name : String; books : Association to many Books on books.author = $self; country : Country; } ``` ::: [Type `Country` is declared to be an association to `sap.common.Countries`.](../cds/common#type-country) {.learn-more} ### Definition Language (CDL) We use CDS's [*Conceptual Definition Language (CDL)*](../cds/cdl) as a *human-readable* way to express CDS models. Think of it as a *concise*, and more *expressive* derivate of [SQL DDL](https://wikipedia.org/wiki/Data_definition_language). For processing at runtime CDS models are compiled into a *machine-readable* plain object notation, called *CSN*, which stands for [*Core Schema Notation (CSN)*](../cds/csn). For deployment to databases, CSN models are translated into native SQL DDL. Supported databases are [*SQLite*](../guides/databases-sqlite.md) and *[H2](../guides/databases-h2.md)* for development, and [_SAP HANA_](../guides/databases-hana.md) and [_PostgreSQL_](../guides/databases-postgres.md) for production. ![cdl-csn.drawio](assets/cdl-csn.drawio.svg) See also *[On the Nature of Models](../cds/models)* in the CDS reference docs. 
{.learn-more}

### Associations

Approached from an SQL angle, CDS adds the concepts of (managed) *[Associations](../cds/cdl#associations)*, and [path expressions](../cds/cql#path-expressions) linked to that, which greatly increase the expressiveness of domain data models. For example, we can write queries, and hence declare views, like this:

```cds [Using Associations]
entity EnglishBooks as select from Books
where author.country.code = 'GB';
```

This is an even more compact version, using *[infix filters](../cds/cql#with-infix-filters)* and *navigation*:

```cds
entity EnglishBooks as select from Authors[country.code='GB']:books;
```

::: details See how that would look in SQL...

From a plain SQL perspective, think of *Associations* as the like of 'forward-declared joins', as becomes apparent in the following SQL equivalents of the preceding view definitions.

Path expressions in `where` clauses become *INNER JOINs*:

```sql
CREATE VIEW EnglishBooks AS SELECT * FROM Books
-- for Association Books:author:
INNER JOIN Authors as author ON author.ID = Books.author_ID
-- for Association Authors:country:
INNER JOIN Countries as country ON country.code = author.country_code
-- the actual filter condition:
WHERE country.code = 'GB';
```

Path expressions in *infix filters* become *SEMI JOINs*, e.g. using `IN`:

```sql
CREATE VIEW EnglishBooks AS SELECT * FROM Books
-- for Association Books:author:
WHERE Books.author_ID IN (SELECT ID from Authors as author
  -- for Association Authors:country:
  WHERE author.country_code IN (SELECT code from Countries as country
    -- the actual filter condition:
    WHERE country.code = 'GB'
  )
);
```

...
same with `EXISTS`, which is faster with some databases:

```sql
CREATE VIEW EnglishBooks AS SELECT * FROM Books
-- for Association Books:author:
WHERE EXISTS (SELECT 1 from Authors as author WHERE author.ID = Books.author_ID
  -- for Association Authors:country:
  AND EXISTS (SELECT 1 from Countries as country WHERE country.code = author.country_code
    -- the actual filter condition:
    AND country.code = 'GB'
  )
);
```

:::

### Aspects

A distinctive feature of CDS is its intrinsic support for [_Aspect-oriented Modeling_](../cds/aspects), which allows you to factor out separate concerns into separate files. It also allows everyone to adapt and extend everything anytime, including reused definitions you don't own but have imported into your models.

::: code-group
```cds [Separation of Concerns]
// All authorization rules go in here, the domain models are kept clean
using { Books } from './my/clean/schema.cds';
annotate Books with @restrict: [{ grant:'WRITE', to:'admin' }];
```
```cds [Verticalization]
// Everyone can extend any definitions, also ones they don't own:
using { sap.common.Countries } from '@sap/cds/common';
extend Countries with { county: String } // for UK, ...
```
```cds [Customization]
// SaaS customers can do the same for their private usage:
using { Books } from '@capire/bookshop';
extend Books with { ISBN: String }
```
:::
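Conceptually, `extend` merges additional elements into an existing definition. A toy illustration in plain JavaScript — ours only; the real CDS compiler works on CSN documents and does much more, such as rewriting views and annotations:

```js
// Toy illustration of aspect-oriented extension as a merge of definitions.
// Ours only — not how the cds compiler is implemented.
const Countries = { elements: { code: 'String', name: 'String' } }

function extend (definition, newElements) {
  return { ...definition, elements: { ...definition.elements, ...newElements } }
}

const ExtendedCountries = extend(Countries, { county: 'String' }) // for UK, ...
console.log(Object.keys(ExtendedCountries.elements)) // [ 'code', 'name', 'county' ]
```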
:::tip Key features & qualities
CDS greatly promotes **Focus on Domain** by a *concise* and *comprehensible* language. Intrinsic support for *aspect-oriented modeling* fosters **Separation of Concerns**, as well as **Extensibility** in customization, verticalization, and composition scenarios.
:::

## Services

Services are the most central concept in CAP when it comes to an application's behavior. They're declared in CDS, frequently as views on underlying data, and implemented by service providers in the CAP runtimes. This ultimately establishes a **Service-centric Paradigm** which manifests in these **key design principles**:

- **Every** active thing is a **service** → _yours, and framework-provided ones_{.grey}
- Services establish **interfaces** → *declared in service models*{.grey}
- Services react to **events** → *in sync and async ones*{.grey}
- Services run **queries** → *pushed down to database*{.grey}
- Services are **agnostic** → *platforms and protocols*{.grey}
- Services are **stateless** → *process passive data*{.grey}

![Key Design Principles](assets/paradigm.drawio.svg)

:::tip Design principles and benefits
The design principles - and adherence to them - are crucial for the key features & benefits.
:::

### Services as Interfaces

Service models capture the *behavioral* aspects of an application. In its simplest form, a service definition focusing on the *interface* only could look like this:

::: code-group
```cds [Service Definition in CDS]
service BookshopService {
  entity Books : cuid { title: String; author: Association to Authors }
  entity Authors : cuid { name: String; }
  action submitOrder ( book: UUID, quantity: Integer );
}
```
:::

### Services as Facades

Most frequently, services expose denormalized views of underlying domain models. They act as facades to an application's core domain data. The service interface results from the _inferred_ element structures of the given projections.
For example, if we take the [*bookshop* domain model](../get-started/in-a-nutshell#capture-domain-models) as a basis, we could define a service that exposes a flattened view on books with authors' names as follows (note and click on the *⇒ Inferred Interface* tab):

::: code-group
```cds [Service as Facade]
using { sap.capire.bookshop as underlying } from '../db/schema';
service CatalogService {
  @readonly entity ListOfBooks as projection on underlying.Books {
    ID, title, author.name as author // flattened
  }
}
```
```cds [⇒   Inferred Interface]
service CatalogService {
  @readonly entity ListOfBooks {
    key ID : UUID;
    title  : String;
    author : String; // flattened author.name
  }
}
```
[Learn more about `as projection on` in the **Querying** section below](#querying). {.learn-more}
:::

::: tip **Single-purposed Services**
The previous example follows the recommended best practice of a *[single-purposed service](../guides/providing-services#single-purposed-services)* which is specialized on *one* specific use case and group of users. Learn more about that in the [Providing Services](../guides/providing-services) guide.
:::

### Service Providers

As we'll learn in the next chapter, service providers — that is, the implementations of services — react to events, such as a request from a client, by registering respective event handlers. At the end of the day, a service implementation is **the sum of all event handlers** registered with this service.

[More about service implementations through *Event Handlers* in the next chapter](#events) {.learn-more}

### Not Microservices

Don't confuse CAP services with Microservices:

- **CAP services** are modular software components, while ...
- **Microservices** are deployment units.

CAP services are important for how you *design* and *implement* your applications in clean and modularized ways on a fine-granular, use case-oriented level.
The primary focus of Microservices is on how to cut your whole application into independent, coarse-grained(!) deployment units, to release and scale them independently.

[Learn more about that in the *Anti Patterns* section on Microservices](bad-practices#microservices-mania) {.learn-more}

## Events

While services are the most important concept in models, events are equally important, if not more so, at runtime. CAP has a *ubiquitous* notion of events: they show up everywhere, and everything happening at runtime is in reaction to events. We complement our [*Service-centric Paradigm*](#services) by these additional **design principles**:

- **Everything** happening at runtime is triggered by / in reaction to **events**
- **Providers** subscribe to, and *handle* events, as their implementations
- **Observers** subscribe to, and *listen* to events 'from the outside'
- Events can be of ***local*** or ***remote*** origin, and be...
- Delivered via ***synchronous*** or ***asynchronous*** channels

### Event Handlers

Services react to events by registering *event handlers*.

![event-handlers.drawio](assets/event-handlers.drawio.svg)

This is an example of that in Node.js:

```js
class BookshopService extends cds.ApplicationService { init() {
  const { Books } = this.entities
  this.before ('UPDATE', Books, req => validate (req.data))
  this.after ('READ', Books, books => ... )
  this.on ('SubmitOrder', req => this.emit ('BookOrdered', req.data))
}}
```

You can also register *generic* handlers, acting on classes of similar events:

```js
this.before ('READ','*', ...)   // for READ requests to all entities
this.before ('*','Books', ...)  // for all requests to Books
this.before ('*', ...)          // for all requests served by this srv
```

::: info What constitutes a service implementation?
The service's implementation consists of all event handlers registered with it.
:::

### Event Listeners

The way we register event handlers that *implement* a service looks similar to how we register handlers for the purpose of just *listening* to what happens with other services. At the end of the day, the difference is only to *whom* we register event listeners.

::: code-group

```js [Service Provider]
class SomeServiceProvider { async init() {
  this.on ('SomeEvent', req => { ... })
}}
```

```js [Observer]
class Observer { async init() {
  const that = await cds.connect.to ('SomeService')
  that.on ('SomeEvent', req => { ... })
}}
```

:::

::: info Service provider and observer
Everyone/everything can register event handlers with a given service. This is not limited to the service itself, as its implementation, but also includes *observers* or *interceptors* listening to events 'from the outside'.
:::

### Sync / Async

From an event handler's perspective, there's close to no difference between *synchronous requests* received from clients like UIs, and *asynchronous event messages* coming in from respective message queues. The arrival of either at the service's interface is an event, to which we can subscribe and react in the same uniform way, thus blurring the lines between the synchronous and the asynchronous world.

![events.drawio](assets/events.drawio.svg)

Handling synchronous requests vs asynchronous event messages:

::: code-group

```js [Handling sync Requests]
class CatalogService { async init() {
  this.on ('SubmitOrder', req => {      // sync action request
    const { book, quantity } = req.data
    // process it...
  })
}}
```

```js [Handling async Events]
class AnotherService { async init() {
  const cats = await cds.connect.to ('CatalogService')
  cats.on ('BookOrdered', msg => {      // async event message
    const { book, quantity } = msg.data
    // process it...
  })
}}
```

:::

The same applies to whether we *send* a request or *emit* an asynchronous event:

```js
await cats.send ('SubmitOrder', { book:201, quantity:1 })
await this.emit ('BookOrdered', { book:201, quantity:1 })
```

### Local / Remote

Services can not only be used and called remotely, but also locally, within the same process. The way we connect to and interact with *local* services is the same as for *remote* ones, via whatever protocol:

```js
const local_or_remote = await cds.connect.to('SomeService')
await local_or_remote.send ('SomeRequest', {...data})
await local_or_remote.read ('SomeEntity').where({ID:4711})
```

The same applies to the way we subscribe and react to incoming events:

```js
this.on ('SomeRequest', req => {/* process req.data */})
this.on ('READ','SomeEntity', req => {/* process req.query */})
```

> [!note]
>
> The way we *connect* to and *consume* services, as well as the way we *listen* and *react* to events, and hence *implement* services, are *agnostic* to whether we deal with *local* or *remote* services, as well as to whatever *protocols* are used.
→ see also [*Agnostic by Design*](#agnostic-by-design)

## Data

All data processed and served by CAP services is *passive*, and represented by *plain simple* data structures as much as possible. In Node.js, these are plain JavaScript record objects; in Java, they are hash maps. This is of utmost importance for the reasons set out in the following sections.

![passive-data.drawio](assets/passive-data.drawio.svg)

### Extensible Data

Extensibility, in particular in a SaaS context, allows customers to tailor a SaaS application to their needs by adding extension fields. These fields are not known at design time but need to be served by your services, potentially through all interfaces. With CAP's combination of dynamic querying and passive data, this is intrinsically covered, and extension fields look and feel no different from predefined fields. For example, an extension like this can automatically be served by CAP:

```cds
extend Books with {
  some_extension_field : String;
}
```

> [!warning]
>
> In contrast to that, common *DAOs*, *DTOs*, *Repositories*, or *Active Records* approaches, which use static classes, can't transport such extension data, which is not known at the time these classes are defined. Additional means would be required to do so, which is not the case for CAP.

### Queried Data

As detailed in the next chapter, querying allows service clients to ask for exactly the data they need, instead of always reading full data records, only to display a list of book titles. For example, querying allows this:

```js
let books = await GET `Books { ID, title, author.name as author }`
```

While a static DAO/DTO-based approach would look like this:

```js
let books = await GET `Books` // always read in a SELECT * fashion
```

In effect, when querying is used, the shape of records in result sets varies greatly, even in denormalized ways, which is hardly possible to achieve with static access or transfer objects.
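Both points, extension fields passing through and query-shaped result sets, can be illustrated in a few lines of plain Node.js. This is an illustrative sketch only, with no CAP APIs involved, and the `BookDTO` class is hypothetical, added just for contrast:

```javascript
// Illustrative sketch, plain Node.js only, no CAP APIs involved.
// A passive record is just a plain object, so a customer-added
// extension field simply travels along, e.g. through serialization:
const row = { ID: 201, title: 'Wuthering Heights', some_extension_field: 'xyz' }
const served = JSON.parse(JSON.stringify(row)) // stand-in for a wire round trip
console.log(served.some_extension_field)       // prints: xyz, nothing lost

// A hypothetical static DTO, in contrast, only knows its design-time fields:
class BookDTO {
  constructor ({ ID, title }) { this.ID = ID; this.title = title }
}
const dto = new BookDTO(row)
console.log(dto.some_extension_field)          // prints: undefined, extension dropped
```

The plain object survives any number of serialization hops with its extension field intact; the static class silently drops whatever it wasn't compiled to know about.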
### Passive Data

As, for the previously mentioned reasons, we can't use static classes to represent data at runtime, there's also no reasonable way to add behavior to data objects. In consequence, all data has to be passive, and hence all logic, such as for validations or field control 'determinations', has to go somewhere else → into event handlers.

> [!tip]
>
> Adhering to the principle of passive data also has other positive effects. For example:
>
> **(1)** Passive data can be easily cached in content delivery networks.   **(2)** Passive data is more lightweight than active objects.   **(3)** Passive data is *immutable*, which allows applying parallelization as known from functional programming.

## Querying

As a matter of fact, business applications tend to be *data-centric*. That is, the majority of operations deal with the discipline of reading and writing data in various ways. Over the decades, querying, as known from SQL as well as from web protocols like OData or GraphQL, became the prevalent and most successful approach to this discipline.

### Query Language (CQL)

As already introduced in the [*Domain Models*](#domain-models) section, CAP uses queries in CDS models, for example, to declare service interfaces by projections on underlying entities. Here's an excerpt of what was shown earlier:

```cds
entity ListOfBooks as projection on underlying.Books {
  ID, title, author.name as author
}
```

We use [CDS's *Conceptual Query Language (CQL)*](../cds/cql) to write queries in a human-readable way. For reasons of familiarity, CQL is designed as a derivative of SQL, but it is used in CAP independently of SQL and databases, for example, to derive new types as projections on others, or to send OData or GraphQL queries to remote services.
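The idea of one query, captured as data, being rendered to different targets can be sketched in plain Node.js. Note this is a deliberately simplified illustration: the query shape and the `toSQL`/`toOData` helpers are made up for this sketch and are not CAP's actual CQN or APIs:

```javascript
// Simplified illustration, not CAP's actual CQN or APIs:
// a query captured as plain data can be rendered to different targets.
const query = { entity: 'Books', columns: ['ID', 'title'] }

const toSQL   = q => `SELECT ${q.columns.join(', ')} FROM ${q.entity}`
const toOData = q => `GET ${q.entity}?$select=${q.columns.join(',')}`

console.log(toSQL(query))   // prints: SELECT ID, title FROM Books
console.log(toOData(query)) // prints: GET Books?$select=ID,title
```

Because the query is just data, the same object can be handed to whichever renderer the target requires.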
Here's a rough comparison of [CQL](../cds/cql.md) with [GraphQL](http://graphql.org), [OData](https://www.odata.org), and [SQL](https://en.wikipedia.org/wiki/SQL):

| Feature            |   CQL   |  GraphQL  |  OData  |   SQL   |
| ------------------ | :-----: | :-------: | :-----: | :-----: |
| CRUD               |    ✓    |     ✓     |    ✓    |    ✓    |
| Flat Projections   |    ✓    |     ✓     |    ✓    |    ✓    |
| Nested Projections |    ✓    |     ✓     |    ✓    |         |
| Navigation         |    ✓    |    (✓)    |    ✓    |         |
| Filtering          |    ✓    |           |    ✓    |    ✓    |
| Sorting            |    ✓    |           |    ✓    |    ✓    |
| Pagination         |    ✓    |           |    ✓    |    ✓    |
| Aggregation        |    ✓    |           |    ✓    |    ✓    |
| Denormalization    |    ✓    |           |         |    ✓    |
| Native SQL         |    ✓    |           |         |    ✓    |

As apparent from this comparison, we can regard CQL as a superset of the other query languages, which enables us to translate from and to all of them.

### Queries at Runtime

CAP also uses queries at runtime: an OData or GraphQL request is essentially a query which arrives at a service interface. Respective protocol adapters translate these into *machine-readable* runtime representations of CAP queries (→ see [*Core Query Notation, CQN*](../cds/cqn)), which are then forwarded to and processed by target services. Here's an example, including CQL over http:

::: code-group

```sql [CQL]
SELECT from Books { ID, title, author { name }}
```

```graphql [CQL /http]
GET Books { ID, title, author { name }}
```

```graphql [GraphQL]
POST query {
  Books { ID, title, author { name } }
}
```

```http [OData]
GET Books?$select=ID,title&$expand=author($select=name)
```

```js [⇒  CAP Query (in CQN)]
{ SELECT: { from: {ref:['Books']}, columns: [
  'ID', 'title', {ref:['author'], expand:['name']}
]}}
```

:::

Queries can also be created programmatically at runtime, for example, to send queries to a database. For that, we use *human-readable* language bindings, which in turn create CQN objects behind the scenes.
For example, like this in Node.js (both variants create the same CQN object as shown earlier):

::: code-group

```js [Using Tagged Template Literals]
let books = await SELECT `from Books { ID, title, author { name } }`
```

```js [Using Fluent API]
let books = await SELECT.from (Books, b => {
  b.ID, b.title, b.author (a => a.name)
})
```

:::

### Push-Down to Databases

The CAP runtimes automatically translate incoming queries from the protocol-specific query language to CQN and then to native SQL, which is finally sent to the underlying databases. The idea is to push down queries to where the data is, and execute them there with the best query optimization and late materialization.

![cql-cqn.drawio](assets/cql-cqn.drawio.svg)

CAP queries are **first-class** objects with **late materialization**: they're captured in CQN, kept in standard program variables, passed along as method arguments, transformed and combined with other queries, translated to other target query languages, and finally sent to their targets for execution. This is similar to the role of functions as first-class objects in functional programming languages.

## Agnostic by Design

In [Introduction - What is CAP](../about/index) we learned that your domain models, as well as your services and their implementations, are **agnostic to protocols**, as well as to whether they're connected to, and consume, other services **locally or remotely**. In this chapter, we complement this by CAP-level integration of platform services and vendor-independent database support. So, in total, and in effect, we learn:

> [!tip] Your domain models and application logic stay...
> - Agnostic to *Local vs Remote*
> - Agnostic to *Protocols*
> - Agnostic to *Databases*
> - Agnostic to *Platform Services* and low-level *Technologies*
>
> **This is *the* key enabling quality** for several major benefits and value propositions of CAP, such as [*Fast Inner Loops*](./index#fast-inner-loops), [*Agnostic Services*](./index#agnostic-microservices), [*Late-cut Microservices*](./index.md#late-cut-microservices), and several more...

### Hexagonal Architecture

The *[Hexagonal Architecture](https://alistair.cockburn.us/hexagonal-architecture/)* (aka *Ports and Adapters Architecture/Pattern*), first proposed by Alistair Cockburn in 2005, is quite famous and fancied these days (rightly so). As he introduces it, its intent is to:

*"Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases"* {.indent}

... and he illustrated that like this:

![Hexagonal architecture basic.gif](https://alistair.cockburn.us/wp-content/uploads/2018/02/Hexagonal-architecture-basic-1.gif)

In a nutshell, this introduction to the objectives of hexagonal architecture translates as follows to our world of cloud-based business applications:

> [!tip] Objectives of Hexagonal Architecture
>
> - Your *Application* (→ the inner hexagon) should stay ***agnostic*** to *"the outside"*
> - Thereby allowing you to replace *"the outside"* used in production with *mocked* variants
> - To reduce complexity and speed up turnaround times in *development* and *tests*
>
→ [*'Airplane Mode' Development & Tests*](index.md#fast-inner-loops)
>
> **In contrast to that**, if you (think you) are doing Hexagonal Architecture, but still find yourself trapped in a slow and expensive always-connected development experience, you might have missed a point... → the *Why* and *What*, not the *How*.

#### CAP as an implementation of Hexagonal Architecture

CAP's [agnostic design principles](#agnostic-by-design) are very much in line with the goals of Hexagonal Architecture, and actually give you exactly what these are aiming for: as your applications largely stay *agnostic* to protocols and other low-level details, which could lock them into one specific execution environment, they can be "*developed and tested in isolation*", which in fact is one of CAP's [key value propositions](./index#fast-inner-loops). Moreover, they become [*resilient* to disrupting changes](./index#minimized-lock-ins) in "the outside".

Not only do we address the very same goals, we can also identify several symmetries in how we achieve them:

| Hexagonal Architecture | CAP |
| ---------------------- | ------------------------------------------------------------ |
| "The Outside" | Remote *Clients* of Services (inbound)<br>Databases, Platform Services (outbound) |
| Adapters | Protocol ***Adapters*** (inbound + outbound),<br>Framework Services (outbound) |
| Ports | Service ***Interfaces*** + Events (inbound + outbound) |
| Application Model | Use-case ***Services*** + Event Handlers |
| Domain Model | Domain ***Entities*** (w/ essential invariants) |
> [!tip]
>
> CAP is very much in line with both the intent and goals of Hexagonal Architecture and its fundamental concepts. Actually, CAP *is an implementation* of Hexagonal Architecture, in particular with respect to the [*Adapters*](#protocol-adapters) in the outer hexagon, but also regarding [*Application Models*](#application-domain) and [*(Core) Domain Models*](#application-domain) in the inner hexagon.

[Also take notice of the *Squared Hexagons* section in the Anti Patterns guide](bad-practices#squared-hexagons) {.learn-more}

### Application Domain

Looking at the things in the inner hexagon, many protagonists distinguish between an *application model* and a *domain model* living in there. In his initial post about [*Hexagonal Architecture*](https://wiki.c2.com/?HexagonalArchitecture) in the [*c2 wiki*](https://wiki.c2.com), Cockburn already highlighted that as follows in plain text:

*OUTSIDE <-> transformer <--> ( **application** <-> **domain** )* {}

::: details Background from MVC and *Four Layers Architecture* ...

That distinction didn't come as a surprise to the patterns community in c2, as Cockburn introduced his proposal as a *"symmetric"* evolution of the [*Four Layers Architecture*](https://wiki.c2.com/?FourLayerArchitecture) by Kyle Brown, which in turn is an evolution of the [*Model View Controller*](https://wiki.c2.com/?ModelViewController) pattern, invented by Trygve Reenskaug et al. at Xerox PARC. The first MVC implementations in [*Smalltalk-80*](https://en.wikipedia.org/wiki/Smalltalk) already introduced the notion of an *[Application Model](https://wiki.c2.com/?ApplicationModel)*, which acts as a *mediator* between use case-oriented application logic and the core [*Domain Model*](https://wiki.c2.com/?DomainModel) classes, which primarily represent an application's data objects, with only the most central invariants carved in stone.
Yet, **both are agnostic** to wire protocols or ['UI widgetry'](https://wiki.c2.com/?FourLayerArchitecture) → the latter being covered and abstracted from by *Views* and *Controllers* in MVC. ::: #### See Also... - The [*Model Model View Controller*](https://wiki.c2.com/?ModelModelViewController) pattern in c2 wiki, in which Randy Stafford points out the need for such twofold models: *"... there have always been two kinds of model: [DomainModel](https://wiki.c2.com/?DomainModel), and [ApplicationModel](https://wiki.c2.com/?ApplicationModel)."* {.indent} - [*Hexagonal Architecture and DDD (Domain Driven Design)*](https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture/#hexagonal-architecture-and-ddd-domain-driven-design) by Sven Woltmann, a great end-to-end introduction to the topic, which probably has the best, and most correct illustrations, like this one: ![Hexagonal architecture and DDD (Domain Driven Design)](https://www.happycoders.eu/wp-content/uploads/2023/01/hexagonal-architecture-ddd-domain-driven-design-600x484.png){.zoom75} #### Entities ⇒ Core Domain Model {#core-domain-model} Your core domain model is largely covered by CDS-declared entities, enriched with invariant assertions, which are deployed to databases and automatically served by generic service providers out of the box. Even enterprise aspects like common code lists, localized data, or temporal data are simple to add and served out of the box as well. #### Services ⇒ Application Model Your application models are your services, also served automatically by generic providers, complemented with your domain-specific application logic you added in custom event handlers. The services are completely agnostic to inbound and outbound protocols: they react on events in agnostic ways, and use other services in equally agnostic ways — including framework-provided ones, like database services or messaging services. 
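To make this layering tangible, here's a minimal sketch in plain Node.js. It is not the actual CAP runtime, and all class and method names are made up: a generic provider dispatches events to registered handlers, and the application service contributes its use case-specific logic as just another handler:

```javascript
// Minimal sketch, not the actual CAP runtime: a generic provider
// dispatches events to handlers; the application model adds
// use case-specific handlers on top.
class GenericProvider {
  constructor () { this.handlers = [] }
  on (event, handler) { this.handlers.push({ event, handler }) }
  async dispatch (event, data) {
    let result = { event, data } // stand-in for generic out-of-the-box handling
    for (const { event: e, handler } of this.handlers) {
      if (e === event) result = (await handler(result)) ?? result
    }
    return result
  }
}

// The application model: a service declared on top of the generic provider,
// complemented with domain-specific custom logic.
class CatalogService extends GenericProvider {
  init () {
    this.on('SubmitOrder', req => ({ ...req, status: 'ordered' }))
    return this
  }
}

const srv = new CatalogService().init()
srv.dispatch('SubmitOrder', { book: 201 })
   .then(res => console.log(res.status)) // prints: ordered
```

In CAP, the generic handling (CRUD served from the database, input validation, and so on) comes from the framework's generic providers, so only the last, use case-specific part remains as your code.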
> [!tip]
>
> Your ***Core Domain Model*** is largely captured in respective [CDS data models](#domain-models), including annotations for invariants, and served automatically by CAP's generic providers.
>
> Your ***Application Model*** consists of CAP services, which are [also declared in CDS](#services) and served by generic providers, complemented with your domain-specific [**custom event handlers**](#event-handlers).

### Protocol Adapters

Behind the scenes, that is, in the **outer hexagon** containing the stuff that you, as an application developer, should not see, the CAP runtime employs Protocol Adapters, which translate requests from (and to) low-level protocols like HTTP, REST, OData, GraphQL, ... to protocol-agnostic CAP requests and queries:

- for ***inbound*** communication → i.e., requests your application *receives*, as well as...
- for ***outbound*** communication → i.e., requests your application *sends* to other services.

In effect, your service implementations stay agnostic to (wire) protocols, which allows us to exchange protocols, replace targets by mocks, do fast inner-loop development in airplane mode, ... even change topologies from a monolith to microservices, and vice versa, late in time.

![protocol-adapters.drawio](assets/protocol-adapters.drawio.svg)

The inbound and outbound adapters (and the framework services) effectively provide your inner core with the ***ports*** to the outside world, which always provide the same, hence *agnostic*, style of API (indicated by the green arrows in the previous graphic), as already introduced in [Local / Remote](#local-remote).
Inbound:

```js
this.on ('SomeEvent', msg => {/* process msg.data */})
this.on ('SomeRequest', req => {/* process req.data */})
this.on ('READ','SomeEntity', req => {/* process req.query */})
```

Outbound:

```js
const any = await cds.connect.to('SomeService')
await any.emit ('SomeEvent', {...data})
await any.send ('SomeRequest', {...data})
await any.read ('SomeEntity').where({ID:4711})
```

> In the latter, `any` can be any service your application needs to talk to: local application services, remote services, CAP-based and non-CAP-based ones, as well as framework-provided services, such as database services or messaging services → more on that in the next section...

### Framework Services

In the figure above, we see boxes for *Framework Services* and *Database Services*. Both are CAP framework-provided services, which — following our [guiding principle](#services) of *"Every active thing in CAP is a CAP service"* — are implemented as CAP services themselves, and hence are also consumed via the same agnostic API style as any other CAP service. Overall, this is the class hierarchy implemented in the CAP runtimes:

![service-classes.drawio](assets/service-classes.drawio.svg)

The *RemoteService* box at the bottom is a CAP service proxy for remote services, which in turn uses the outbound *Protocol Adapters* behind the scenes to translate outgoing requests to the target wire protocols. The *DatabaseService* subclasses provide implementations for the different databases, thereby trying to provide consistent, portable usage without falling into a common-denominator syndrome pit. The same goes for the *MessagingServices*.

## Intrinsic Extensibility

SaaS customers of CAP applications use the very same techniques as any developer to adapt given models or service implementations to their needs. That applies to both models and service implementations.
### Extending Models

Everyone can extend every model definition: SaaS customers can add extension fields or new entities to the respective definitions of a SaaS application's models. In the same way, you can extend any reuse definition that you might consume from reuse packages, including the reuse models shipped with CAP itself. For example:

```cds
using { User, managed } from '@sap/cds/common';
extend managed with {
  ChangeNotes : Composition of many {
    key timestamp : DateTime;
    author : User;
    note : String(1000);
  }
}
```

This would extend the common reuse type `managed`, obtained from `@sap/cds/common`, to capture not only the latest modifications but a history of commented changes, with all entities inheriting from that aspect, own or reused ones, receiving this enhancement automatically.

> [!tip]
>
> Not only can your SaaS customers extend *your* definitions, but you can also extend any definitions that you *reuse*, to adapt them to your needs. Adapting widely used reuse definitions, as in this example, has the advantage that you reach many existing usages.

[Learn more about these options in the CDS guide about *Aspect-oriented Modeling*](../cds/aspects). {.learn-more}

### Extension Logic

As introduced in the section on [*Event Listeners*](#event-listeners) above, everyone can add event handlers to every service. Similar to aspect-oriented modeling, this allows extending reuse services. For example, assume you're using a reuse package that provides a service to manage reviews, as showcased in the [*cap/samples* *reviews*](https://github.com/sap-samples/cloud-cap-samples/tree/main/reviews) package, and whenever a new review is added, you want to do something in addition. To accomplish this, simply add a respective event handler to the reuse service like so:

```js
const ReviewsService = await cds.connect.to('ReviewsService')
ReviewsService.after ('CREATE', 'Reviews', req => {
  // do something in addition...
})
```

As a service provider, you can also introduce explicitly defined, business-level extension points, instead of letting your clients hook into your technical events. For example, as the owner of the reviews service, you could add an event like this to your service definition:

```cds
service ReviewsService { ...
  event ReviewAdded {
    subject  : ReviewedSubject;
    title    : String;
    message  : String;
    reviewer : User;
  }
}
```

And in your implementation, you would emit such events like so:

```js
class ReviewsService { init() {
  this.after ('CREATE','Reviews', req => this.emit('ReviewAdded', req.data))
}}
```

With that, your clients can hook into that extension point like this:

```js
const ReviewsService = await cds.connect.to('ReviewsService')
ReviewsService.on ('ReviewAdded', msg => {
  // do something in addition...
})
```

### Extensible Framework

As stated in the introduction: "*Every active thing is a Service*". This also applies to all framework features and services, like databases, messaging, remote proxies, MTX services, and so on. And as everyone can add event handlers to every service, you can also add event handlers to framework services, and thereby extend the core framework. For example, you could extend CAP's primary **database service** like this:

```js
cds.db .before ('*', req => {
  console.log (req.event, req.target.name)
})
```

In the same way, you could add handlers to **remote service proxies**:

```js
const proxy = await cds.connect.to ('SomeRemoteService')
proxy.on ('READ', 'Something', req => {
  // handle that remote call yourself
})
proxy.before ('READ', '*', req => {
  // modify requests before they go out
})
proxy.after ('READ', '*', result => {
  // post-process received responses
})
```

## The Calesi Pattern

'Calesi' stands for *CAP-level Service Interfaces*, and refers to the increasing number of BTP platform services which offer a CAP-level client library. These drastically reduce the boilerplate code applications would otherwise have to write.
For example, adding attachments used to require thousands of lines of code, taking care of the UI, streaming of large data, size limiting, malware scanning, multitenancy, and so forth... After we provided the [Attachments plugin](../plugins/#attachments), all an application needs to do now is add this line to an entity:

```cds
entity Foo { //...
  attachments : Composition of many Attachments; // [!code focus]
}
```

Whenever you have to integrate external services, you should follow the Calesi pattern. For example, let's take an audit logging use case: data privacy regulations require writing audit logs whenever personal data is modified.

1. **Declare the service interface** — provide a CAP service that encapsulates outbound communication with the audit log service. Start by defining the respective service interface in CDS:

   ```cds
   service AuditLogService {
     event PersonalDataModified : LogEntry {
       subject   : DataSubject;
       changes   : many { field : String; old : String; new : String; };
       tenant    : Tenant;
       user      : User;
       timestamp : DateTime;
     }
   }
   ```

2. **Implement a mock variant** — add a first service implementation: one for mocked usage during development:

   ```js
   class AuditLogService { init() {
     this.on('PersonalDataModified', msg => {
       console.log('Received audit log message', msg.data)
     })
   }}
   ```

   > [!tip]
   >
   > With that, you already fulfilled a few goals and guidelines from Hexagonal Architecture: the interface offered to your clients is agnostic and follows CAP's uniform service API style. Your consumers can use this mock implementation during development to speed up their [inner loop development](./#fast-inner-loops) phases.

3. **Provide the real implementation** — start working on the 'real' implementation that translates received audit log messages into outbound calls to the real audit log service.

   > [!note]
   >
   > You bought yourself some time for doing that, as your clients already got a working mock solution, which they can use for their development.

4.
**Plug and play** — add profile-aware configuration presets, so your consumers don't need to do any configuration at all:

   ```js
   { cds: { requires: {
     'audit-log': {
       "[development]": { impl: ".../audit-log-mock.js" },
       "[production]": { impl: ".../the-real-audit-log-srv.js" },
     }
   }}}
   ```

5. **Served automatically?** — check if you could automate things even more, instead of having your consumers use your service programmatically. For example, we could introduce an annotation *@PersonalData*, and write audit log entries automatically whenever an entity or element is tagged with it:

   ```js :line-numbers
   cds.on('served', async services => {
     const auditlog = await cds.connect.to('AuditLog')
     for (let each of services) {
       for (let e of each.entities) if (e['@PersonalData']) {
         each.on('UPDATE', e, req => auditlog.emit('PersonalDataModified', {...}))
       }
     }
   })
   ```

That example was an *outbound* communication use case. Basically, we encapsulate outbound channels with CAP services, as done in CAP for messaging service interfaces and database services. For *inbound* integrations, we would create an adapter, that is, a service endpoint which translates incoming messages into CAP event messages, which it forwards to CAP services. With that, the actual service provider implementation is again a protocol-agnostic CAP service, which could as well be called locally, for example, in development and tests.

> [!tip]
>
> Essentially, the 'Calesi' pattern is about encapsulating any external communication within a CAP-service-based interface, so that the actual consumption and/or implementation benefits from the related advantages, such as agnostic consumption, intrinsic extensibility, automatic mocking, and so on.

# Bad Practices

## Questionable Prior Arts

### DAOs, DTOs, Active Records

- → see [Best Practices / Passive Data](best-practices#data)

### Object-Relational Mappers

- → see [Best Practices / Querying](best-practices#querying)

### BO-centric Frameworks

...
which bypass or are in conflict with CAP's [key design principles](bad-practices.md), for example:

- ORM techniques like Spring repositories
- Active Records, DAOs

These would be in conflict with CAP's focus on stateless services processing passive data, as well as with the querying-based approach to reading and writing data.

### Determinations & Validations

- This might be a special thing if you come from a background where these terms were prominently positioned, accompanied by corresponding frameworks.
- Quite likely that is an SAP background, as we didn't find the term "determination" used in that context outside of these SAP circles.
- CAP is actually an offspring of a performance firefighting task force project, which identified such frameworks, and their overly fragmented and fine-granular element-level approach, as one of a few root causes of framework-induced performance overheads.
- Hence, CAP intentionally does not offer an element-level, call-level validation or determination framework, and strongly discourages combining your use of CAP with such.
- CAP does provide declarative element-level validations, though → these are advisable, as we can optimize the implementations behind the scenes, which is just not possible with imperative call-level frameworks.

### Sticking to DIY (or NIH)

Such as...

- Low-level http or OData requests
- Low-level integration with message brokers
- Database-specific things without need
- Non-CAP client libraries for BTP services

Doing so would spoil the party, for example regarding rapid local development at minimized costs, fast test pipelines, and late-cut microservices. It would also expose your projects to risks of disruption by changes in those rather volatile technologies.

### Always done it this way

- and CAP is different... for a reason, or more... ;-)

## Abstracting from CAP

- CAP already provides abstractions from the underlying database, the protocols, the deployment target, the client technology, and more.
- CAP is also an implementation of Hexagonal Architecture, which is an abstraction of the same kind.
- So, abstracting from CAP would be abstracting from an abstraction, which is a bad idea in general, and will certainly keep you from benefiting from the full power of CAP.

### Squared Hexagons

- As documented in the best practices guide, CAP is not only very much in line with Hexagonal Architecture, it actually *is an implementation* of it.
- So there's little need to invest in the outer hexagon → focus on the inner one.
- Yet, we saw projects insisting on doing Hexagonal Architecture their own way, or maybe the very way that was discussed in some other paper, done with some other framework ...
- ... Hexagonal Arch ** 2 = ?

### Same for DDD...

- Focus on domain is exactly what domain-driven design is also striving for... and there are so many commonalities in concepts and approaches.
- Yet, we saw projects insisting on doing DDD a very specific way, for example using Active Records, Spring repositories, etc.... → things [we list as bad practices above](#daos-dtos-active-records)

## Code Generators

### The Swagger Textbook

Alternative frameworks or toolsets follow code generation approaches. Swagger does so, for example: One writes OpenAPI documents in YAML in the [Swagger Editor](https://editor.swagger.io), and has a server package generated, for example for Node.js, which, as the included readme tells us, *"... leverages the mega-awesome [swagger-tools](https://github.com/apigee-127/swagger-tools) middleware which does most all the work."* → here's how that compares to CAP:

| Feature                              | Swagger              | CAP                   |
|--------------------------------------|:--------------------:|:---------------------:|
| Lines of code for service definition | **~555**{.h3}{.red}  | **~11**{.h3} {.green} |
| Lines of code for implementation     | **~500**{.h3} {.red} | **0**{.h3} {.green}   |
| Size of framework library            | 16 MB {.red}         | 10 MB {.green}        |
| CRUDQ served on DB, including...     |                      | ✓                     |
| Deep Reads & Writes                  |                      | ✓                     |
| Deep Hierarchies                     |                      | ✓                     |
| Aggregations                         |                      | ✓                     |
| Pagination                           |                      | ✓                     |
| Sorting                              |                      | ✓                     |
| Search                               |                      | ✓                     |
| Filtering                            |                      | ✓                     |
| Primary Keys                         |                      | ✓                     |
| Access Control                       |                      | ✓                     |
| Localized Data                       |                      | ✓                     |
| Managed Data                         |                      | ✓                     |
| Media Data                           |                      | ✓                     |
| Temporal Data                        |                      | ✓                     |
| Fiori Draft Handling                 |                      | ✓                     |
| Exclusive Locking                    |                      | ✓                     |
| Conflict Detection (via ETags)       |                      | ✓                     |
| Data Replication (upcoming)          |                      | ✓                     |
| Data Privacy                         |                      | ✓                     |
| ...                                  |                      | ✓                     |

While code generators also have you writing less code yourself, the code is still there (to cover all that CAP covers, we could extrapolate the 500 lines of code to end up at ~5,000, maybe 50,000 ...?). To mention only the most critical consequence of this: **No single points to fix**, as you simply can't fix code generated in the past.

::: details CDS-based service definitions vs OpenAPI documents ...
Even if we'd ignore all the other things, there still remains the difference between writing ~11 lines of concise and comprehensible CDS declarations, and ~333 lines of YAML. While the former allows you to involve and closely collaborate with domain experts, the latter certainly doesn't. (And technocratic approaches like Hexagonal Architecture or Domain-Driven Design, the way they're frequently done by developers, don't really help either.)
:::

### Code-Generating AI

- Don't confuse "[*Generative AI*](https://en.wikipedia.org/wiki/Generative_artificial_intelligence)" with '*Code-generating AI*' ...
- Even though it's AI-generated, the usual drawbacks of generated code apply:
  - **No single points to fix** all that code that was generated last year
  - One-off approach → doesn't help much in evolutionary, iterative development
  - ...
- There's a difference between a GPT-generated one-off thesis and long-lived enterprise software, which needs to adapt and scale to new requirements.
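To make the "~11 lines" figure above concrete, a complete CDS service definition of roughly that size could look like the following sketch (entity and service names are invented for illustration, not taken from the Swagger sample):

```cds
// Hypothetical bookshop-style model plus a service facade:
using { cuid, managed } from '@sap/cds/common';

entity Books : cuid, managed {
  title  : String(111);
  author : Association to Authors;
}
entity Authors : cuid {
  name  : String;
  books : Association to many Books on books.author = $self;
}
service CatalogService {
  entity ListOfBooks as projection on Books;
}
```

From a definition like this, CAP's generic providers serve the features listed in the table (CRUDQ, pagination, sorting, and so on) without any hand-written or generated implementation code.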
## Overly Generic Approaches

### The 'ODatabase' Anti Pattern

- Assume you have a domain model with 123 entities
- Then the easiest thing is to add a single service with 123 1:1 projections...?
- As all the rest can be done by CAP's and OData's powerful query languages, right?
- → that service is the exact opposite of a use case-oriented facade
- if you want that, don't use CAP, don't use any layered architecture at all ...
- just connect your client directly to a SQL database in a two-tier model ;-)

### Tons of Glue Code

- As stated, CAP cares about the vast majority of non-functional requirements, qualities, wire protocols, low-level stuff... so that you, as an application developer, can put primary focus on your domain.
- If you still find yourself lost in a high ratio of glue code, something has certainly gone wrong.

## Microservices Mania

Avoid eager fragmentation into microservices. Instead, start with a monolith and cut out microservices later, when you really need them. This is what we call "late-cut microservices". See also...

- [Microservices Mania: Are Moduliths the Saner Path to Scalable Architecture?](https://blog.payara.fish/microservices-mania-are-moduliths-the-saner-path-to-scalable-architecture)
- [Mainstream Microservices Mania Challenges Increasing with Adoption](https://www.f5.com/de_de/company/blog/mainstream-microservices-mania-challenges-increasing-with-adoption)
- [What is Better: Modular Monolith vs. Microservices](https://medium.com/codex/what-is-better-modular-monolith-vs-microservices-994e1ec70994)
- [Architecture Style: Modulith vs. Microservices](https://dzone.com/articles/architecture-style-modulith-vs-microservices)
- [Death by a Thousand Microservices](https://www.reddit.com/r/programming/comments/18crnmz/death_by_a_thousand_microservices/).

## Ignorance

When writing these guides, we frequently wonder whether it is worth the effort, because we likely have to understand and to accept that we're living in times of ...
- Too long; didn't read (TL;DR)
- Too busy (→ an [anti pattern on its own](assets/too-busy) \;-)
- Not required, as we have AI now
- I don't need to read that, as I already know (better) ...

If against all odds you are indeed just reading these lines, please leave a trace about that in [blue sky](https://bsky.app) with this content (including link):

*[I read it! ☺️](#ignorance)
#sapcap* {.indent}

... to let the others out there know that there's hope, and some *hi* (human intelligence), left... \:-)

And in case you are just reading these lines because of these posts, we strongly encourage you to read these new guides, even if (you think) you already know CAP:

- *[Introduction – What is CAP?](./index) → Value Propositions*
- *[Best Practices](best-practices) → Key Concepts & Rationales*
- *[Anti Patterns](bad-practices) → Do's and **don'ts***

And after you did that, it would be great if you'd leave another trace about that in [blue sky](https://bsky.app) with this content (including link):

*[I really read it! 🤓](#ignorance)
#sapcap* {.indent}

... as a motivation for us to keep on writing, and a sign that it is worth the effort.

# Learning Sources

## This Documentation

This documentation — named _'capire'_, Italian for _understand_ — is the primary source of information for the SAP Cloud Application Programming Model. It's organized as follows:

| Section | Description |
|---------|-------------|
| [Getting Started](./) <br> [Cookbook](../guides/) <br> [Advanced](../advanced/) | **Task-oriented guides** that walk you through the most common tasks and advanced topics in CAP-based development. |
| [CDS](../cds/) <br> [Java](../java/) <br> [Node](../node.js/) <br> [Tools](../tools/) | **Reference docs** for respective areas. |
| [Plugins](../plugins/) | **Curated list of plugins** that extend the capabilities of the CAP framework. |
| [Releases](../releases/) | The place where you can stay up to date with the most recent information about new features and changes in CAP. |

### Node/Java Toggles

### Feature Status Badges

Within the docs, you find badges that indicate the status of a feature or API. Here's a list of the badges and their meanings:

| Badge | Description |
|-------|-------------|
| | The marked feature is available with the given version or higher |
| | Alpha features are experimental. They may never be generally available. If released subsequently, the APIs and behavior might change |
| | Beta features are planned to be generally available in subsequent releases, however, APIs and their behavior are not final and may change in the general release |
| | Concept features are ideas for potential future enhancements and an opportunity for you to give feedback. This is not a commitment to implement the feature though |
| | SAP specific features, processes, or infrastructure. Examples are _Deploy with Confidence_, _SAP product standards_, or _xMake_ |

### CAP Notebooks Integration

## Sample Projects

Here, we collected several interesting sample projects for you. Not all of them are maintained by the CAP team, and not all of them cover CAP in its entirety, but they are well-prepared sources we can recommend for your learning. The short description we provide for each resource should help you tell whether it fits your current needs.
### Bookshop by capire {.github}

> [![]()](https://github.com/sap-samples/cloud-cap-samples-java){.java}
> [![]()](https://github.com/sap-samples/cloud-cap-samples){.node}

The bookshop sample is our original sample provided by the CAP team and featured in the [getting started guides](../get-started/in-a-nutshell). It's available in both Node.js and Java. The Node.js variant contains additional samples besides bookshop that demonstrate various features of CAP.

### Incidents Mgmt {.github}

> [![]()](https://github.com/cap-js/incidents-app){.node}

A reference sample application for CAP and the SAP BTP Developer Guide.

### CAP SFlight {.github}

> [![]()](https://github.com/sap-samples/cap-sflight){.java}
> [![]()](https://github.com/sap-samples/cap-sflight){.node}

This sample is a CAP adaptation of the popular [SFLIGHT](https://blog.sap-press.com/what-is-sflight-and-the-flight-and-booking-data-model-for-abap) sample app in ABAP. It's a great source for learning how to add SAP **Fiori** applications to a CAP project, including adding UI test suites on various stacks.

### Star Wars App {.github}

> [![]()](https://github.com/SAP-samples/cloud-cap-hana-swapi){.node}

SWAPI - the Star Wars API. This sample is based upon the sample at [swapi.dev](https://swapi.dev), which in turn was based upon [swapi.co](https://swapi.dev/about). The original source can be found at https://github.com/Juriy/swapi. The projects described previously have fallen out of maintenance but still offer the opportunity for a fun yet challenging learning experience with a non-trivial data model. The many bi-directional, many-to-many relationships within the data provide a good basis for an SAP Cloud Application Programming Model and Fiori Draft UI sample.
{.indent}

### BTP SusaaS App {.github}

> [![]()](https://github.com/SAP-samples/btp-cap-multitenant-saas){.node}

The Sustainable SaaS (SusaaS) sample application has been built in a partner collaboration to help interested developers, partners, and customers develop multitenant Software as a Service applications using CAP and deploy them to the SAP Business Technology Platform (SAP BTP).

### Partner Reference App {.github}

> [![]()](https://github.com/SAP-samples/partner-reference-application){.node}

The Partner Reference Application repository provides you with a “golden path” to becoming a SaaS provider of multitenant applications based on the SAP Business Technology Platform (SAP BTP). The guidance covers building, running, and integrating scalable full-stack cloud applications. It includes an ERP-agnostic design that lets you deliver your application as a side-by-side extension to consumers using any SAP solution, such as SAP S/4HANA Cloud, SAP Business One, and SAP Business ByDesign.

By using BTP services and the SAP Cloud Application Programming Model (CAP), your application meets SAP standards for enterprise-class business solutions. It offers a harmonized user experience and seamless integration, including:

- centralized identity and access management,
- a common launchpad,
- cross-application front-end navigation,
- and secure back-channel integration.

The repository includes the “Poetry Slam Manager” application as a ready-to-run example. It also provides tutorials on how to build the application from scratch using an incremental development approach. Based on this sample application, you'll find the bill of materials and a sizing example. This addresses the question "Which BTP resources do I need to subscribe to, and in what quantities?" and serves as a basis for cost calculation.

## Open Source Projects

- Plugins by SAP + CAP Teams
- Plugins by Community
- ...
## Learning Journeys

- [Getting started with SAP Cloud Application Programming Model](https://learning.sap.com/learning-journeys/getting-started-with-sap-cloud-application-programming-model) (Beginner)
- [Building side-by-side extensions on SAP BTP](https://learning.sap.com/learning-journeys/build-side-by-side-extensions-on-sap-btp) (Intermediate)

## SAP Discovery Center Missions

- [Develop a Full-Stack CAP Application Following the SAP BTP Developer's Guide](https://discovery-center.cloud.sap/missiondetail/4327/4608/)
- [Develop a Side-by-Side CAP-Based Extension Application Following the SAP BTP Developer's Guide](https://discovery-center.cloud.sap/missiondetail/4426/4712/)
- [Implement Observability in a Full-Stack CAP Application Following SAP BTP Developer's Guide](https://discovery-center.cloud.sap/missiondetail/4432/4718/)

## Tutorials

- [TechEd 2023 Hands-On Session AD264 – Build Extensions with CAP](https://github.com/SAP-samples/teched2023-AD264/)
- [Build a Business Application Using CAP for Node.js](https://developers.sap.com/mission.cp-starter-extensions-cap.html)
- [Build a Business Application Using CAP for Java](https://developers.sap.com/mission.cap-java-app.html)
- [CAP Service Integration CodeJam](https://github.com/sap-samples/cap-service-integration-codejam) by DJ Adams

## Videos

- [Back to basics: CAP Node.js](https://www.youtube.com/playlist?list=PL6RpkC85SLQBHPdfHQ0Ry2TMdsT-muECx) by DJ Adams
- [Hybrid Testing and Alternative DBs](https://youtu.be/vqub4vJbZX8?si=j5ZkPR6vPb59iBBy) by Thomas Jung
- [Consume External Services](https://youtu.be/rWQFbXFEr1M) by Thomas Jung
- [Building a CAP app in 60 min](https://youtu.be/zoJ7umKZKB4) by Martin Stenzig
- [Integrating an external API into a CAP service](https://youtu.be/T_rjax3VY2E) by DJ Adams

## Blogs

- [Surviving and Thriving with the SAP Cloud Application Programming Model](https://community.sap.com/t5/tag/CAPTricks/tg-p/board-id/technology-blog-sap) by Max Streifeneder (2023)
- [Multitenant SaaS applications on SAP BTP using CAP? Tried-and-True!](https://community.sap.com/t5/technology-blogs-by-sap/multitenant-saas-applications-on-sap-btp-using-cap-tried-and-true/ba-p/13541907)
by Martin Frick (2022)

# Features Overview

Following is an index of the features currently covered by CAP, with status and availability information. In addition, we also list features that are planned or already in development, but not yet generally available, to give you an idea about our roadmap.

#### Legend

| Tag   | Explanation                                        |
|:-----:|----------------------------------------------------|
| | generally and publicly available today |
| | not applicable for this combination |
| | in progress; likely to become available near-term |
| | we might pick that up for development soon |
| | not scheduled for development by us so far |
| | already active contribution |

### CLI & Tools Support

| CLI commands | |
|----------------------------------------------------------------------------|----------------------------|
| [Jump-start cds-based projects](../get-started/) | `cds init ` |
| [Add a feature to an existing project](../tools/cds-cli#cds-add) | `cds add ` |
| [Add models from external sources](../guides/using-services#local-mocking) | `cds import ` |
| [Compile cds models to different outputs](../node.js/cds-compile) | `cds compile ` |
| [Run your services in local server](../node.js/cds-serve) | `cds serve ` |
| [Run and restart on file changes](../get-started/in-a-nutshell) | `cds watch` |
| [Read-eval-print loop](../node.js/cds-env#cli) | `cds repl` |
| Inspect effective configuration | `cds env` |
| Prepare for deployment | `cds build` |
| Deploy to databases or cloud | `cds deploy` |
| Login to multitenant SaaS application | `cds login ` |
| Upgrade SaaS tenant(s) to latest versions | `cds upgrade` |
| Logout from multitenant SaaS application | `cds logout` |
| Subscribe a tenant to a SaaS application | `cds subscribe ` |
| Unsubscribe a tenant from a SaaS application | `cds unsubscribe ` |
| Pull the base model for a SaaS extension | `cds pull` |
| Push a SaaS extension | `cds push` |

> Run `cds help ` to find details about an individual command.
Use `cds version` to check the version that you've installed. To find out the latest version, see the [Release Notes](../releases/) for CAP.
| Editors/IDE Support      | Application Studio | VS Code |
|--------------------------|:------------------:|:-------:|
| CDS Syntax Highlighting | | |
| CDS Code Completion | | |
| CDS Prettifier | | |
| Advanced Debug/Run Tools | | |
| Project Explorer | | |
| ... | | |

### CDS Language & Compiler

| | CDS |
|-------------------------------------------------------------------------------------------------------------------|:----:|
| [Entity-Relationship Modeling](../cds/cdl#entities) | |
| [Custom-defined Types](../cds/cdl#types) | |
| [Views / Projections](../cds/cdl#views) | |
| [Associations & Compositions](../cds/cdl#associations) | |
| [Annotations](../cds/cdl#annotations) → [Common](../cds/annotations), [OData](../advanced/odata#annotations) | |
| [Aspects](../guides/domain-modeling#aspects) | |
| [Services...](../cds/cdl#services) | |
| [— w/ Redirected Associations](../cds/cdl#auto-redirect) | |
| [— w/ Auto-exposed Targets](../cds/cdl#auto-expose) | |
| [— w/ Actions & Functions](../cds/cdl#actions) | |
| [— w/ Events](../cds/cdl#events) | |
| [Managed Compositions of Aspects](../cds/cdl#managed-compositions) | |
| [Structured Elements](../cds/cdl#structured-types) | |
| Nested Projections | |
| [Calculated Elements](../cds/cdl#calculated-elements) | |
| Managed _n:m_ Associations | |
| Pluggable CDS Linter | |
| [CDS Linter](../tools/cds-lint/) | |

### Providing Services

| Core Framework Features | CDS | Node.js | Java |
|--------------------------------------------------------------------------------------------|:-----:|:-------:|:----:|
| [Automatically Serving CRUD Requests](../guides/providing-services#generic-providers) | | | |
| [Deep-Read/Write Structured Documents](../guides/providing-services#deep-reads-and-writes) | | | |
| [Automatic Input Validation](../guides/providing-services#input-validation) | | | |
| [Auto-filled Primary Keys](../guides/domain-modeling#prefer-uuids-for-keys) | | | |
| [Implicit Paging](../guides/providing-services#implicit-pagination) | | | |
| [Implicit Sorting](../guides/providing-services#implicit-sorting) | | | |
| [Access Control](../guides/security/authorization) | | | |
| [Arrayed Elements](../cds/cdl#arrayed-types) | | | |
| [Streaming & Media Types](../guides/providing-services#serving-media-data) | | | |
| [Conflict Detection through _ETags_](../guides/providing-services#etag) | | | |
| [Authentication via JWT](../guides/security/authorization#prerequisite-authentication) | | | |
| [Basic Authentication](../guides/security/authorization#prerequisite-authentication) | | | |
| Enterprise Features | CDS | Node.js | Java |
|--------------------------------------------------------------------------------------------------------------------|:-----:|:-------:|:----:|
| [Authorization](../guides/security/authorization) | | | |
| [Analytics in Fiori](../advanced/odata#data-aggregation) | | | |
| [Localization/i18n](../guides/i18n) | | | |
| [Localized Data](../guides/localized-data) | | | |
| [Temporal Data](../guides/temporal-data) | | | |
| [Managed Data](../guides/domain-modeling#managed-data) | | | |
| [Dynamic Extensibility](../guides/extensibility/) | | | |
| Monitoring / Logging [[Node.js](../node.js/cds-log)\|[Java](../java/operating-applications/observability#logging)] | | | |
| Audit Logging [[Node.js](../guides/data-privacy/audit-logging)\|[Java](../java/auditlog)] | | | |
| Inbound Protocol Support | CDS 1 | Node.js | Java |
|-------------------------------------------------------|:----------------:|:-----------------:|:-----------------:|
| [REST/OpenAPI](/advanced/publishing-apis/openapi) | | | |
| [OData V2](../advanced/odata#v2-support) 2 | | 3 | |
| OData V4 | | | |
| OData V4 for APIs | | | |
| GraphQL 4 | | 5 | 6 |
> 1 Export CDS models to ...
> 2 To support customers with existing OData V2 UIs
> 3 Through [V2 proxy](../advanced/odata#odata-v2-adapter-node)
> 4 Could be a good case for 3rd-party contribution
> 5 For Node.js try out the [GraphQL Adapter](/plugins/#graphql-adapter)
> 6 For Java try out the provided [sample code](https://github.com/SAP-samples/cloud-cap-samples-java/commit/16dc5d9a1f103eb1336405ee601dc7004f70538f).
### Consuming Services

| [Service Consumption APIs](../guides/using-services) | Node.js | Java |
|------------------------------------------------------|:-------:|:----:|
| Uniform Consumption APIs → Hexagonal Architecture | | |
| Dynamic Querying | | |
| Programmatic Delegation | | |
| Generic Delegation | | |
| Resilience (retry, circuit breaking, ...) | | |
| Outbound Protocol Support | CDS 1 | Node.js | Java |
|------------------------------------------------------------------|:----------------:|:-------:|:----:|
| [REST/OpenAPI](../tools/apis/cds-import#cds-import-from-openapi) | | | |
| OData V2 | | | |
| OData V4 | | | |
| GraphQL 2 | | | |

> 1 Import API to CSN
> 2 Could be a good case for 3rd-party contribution
[Learn more about supported features for consuming services.](../guides/using-services){.learn-more}

### Events / Messaging

| | CDS | Node.js | Java |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:------------:|:----:|
| [Declared Events in CDS](../cds/cdl#events) | | | |
| Mock Broker (to speed up local dev) [[Node.js](../node.js/messaging#file-based)\|[Java](../java/messaging#local-testing)] | | | |
| SAP Event Mesh (For single-tenant apps) [[Node.js](../node.js/messaging#event-mesh-shared)\|[Java](../java/messaging#configuring-sap-event-mesh-support)] | | | |
| SAP Cloud Application Event Hub (For single-tenant apps) [[Node.js](../node.js/messaging#event-broker)] | | beta | |
| Composite Messaging (routing by configuration) [[Node.js](../node.js/messaging#composite-messaging)\|[Java](../java/messaging#composite-messaging-service)] | | | |
| Import AsyncAPI | | | |
| Export AsyncAPI | | | |

### Database Support

| | CDS/deploy | Node.js | Java |
|-----------------------------------------------------------------|:----------:|:-------:|:----:|
| [SAP HANA](../guides/databases) | | | |
| [SAP HANA Cloud](../guides/databases-hana) | | | |
| [PostgreSQL](../guides/databases-postgres) | | | |
| [SQLite](../guides/databases-sqlite) 1 | | | |
| [H2](../java/cqn-services/persistence-services#h2) 1 | | | |
| [MongoDB](../guides/databases) out of the box | | | |
| Pluggable drivers architecture | | | |
| Out-of-the-box support for other databases? | | | |

> 1 To speed up development. Not for productive use!
> Note: You can already integrate your database of choice on a project or contribution level. The last two are meant to further facilitate this with out-of-the-box features in CAP.

### UIs/Frontend Support

| | CDS | Node.js | Java |
|---------------------------------------------------------------------------------------------------------|:----:|:-------:|:----:|
| [Serving Fiori UIs](../advanced/fiori) | | | |
| [Fiori Annotations in CDS](../advanced/fiori#fiori-annotations) | | | |
| [Advanced Value Help](../advanced/fiori#value-helps) | | | |
| [Draft Support](../advanced/fiori#draft-support) | | | |
| [Draft for Localized Data](../advanced/fiori#draft-for-localized-data) | | | |
| [Support for Fiori Analytics](../advanced/analytics) | | | |
| [Support for other UI technologies, for example Vue.js](../get-started/in-a-nutshell#vue) 1 | | | |

> 1 through standard REST/AJAX

### Platform Support & Integration

| | Node.js | Java |
|--------------------------------------------------------------------------------|:-------:|:----:|
| [Deploy to/run on _SAP BTP, Cloud Foundry environment_](../guides/deployment/) | | |
| Deploy to/run on _Kubernetes_ 1 | | |
| [Deploy to/run on _Kyma_](../guides/deployment/to-kyma) | | |
| [SaaS on-/offboarding](../guides/multitenancy/) | | |
| [Multitenancy](../guides/multitenancy/) | | |
| [Health checks](/guides/deployment/health-checks) | | |

> 1 Available on plain Kubernetes level → see [blog post by Thomas Jung](https://blogs.sap.com/2019/07/16/running-sap-cloud-application-programming-model-with-connection-to-hana-on-kubernetes/)
### Extensibility

| | |
|------------------------------------------------------------------------------------------|:----:|
| [Tenant-Specific Extensions](../guides/extensibility/) | |
| [Adding Extension Fields](../guides/extensibility/customization#about-extension-models) | |
| [Adding new Entities](../guides/extensibility/customization#about-extension-models) | |
| [Adding new Relationships](../guides/extensibility/customization#about-extension-models) | |
| [Adding/Overriding Annotations](../guides/extensibility/customization) | |
| Adding Events | |
| [Extension Namespaces](../guides/extensibility/customization) | |
| [Extension Templates](../guides/extensibility/customization#templates) | |
| Custom Governance Checks | |
| [Generic Input Validations](../guides/providing-services#input-validation) | |
| Declarative Constraints | |
| Execute Sandboxed Code | |
| Runtime API for In-App Extensibility | |
| Propagating Extensions across (µ) Services | |

# Troubleshooting

Find here common solutions to frequently occurring issues.

## Setup {#setup}

### Can't start VS Code from Command Line on macOS {#vscode-macos}

In order to start VS Code via the `code` CLI, users on macOS must first run a command (*Shell Command: Install 'code' command in PATH*) to add the VS Code executable to the `PATH` environment variable. Read VS Code's [macOS setup guide](https://code.visualstudio.com/docs/setup/mac) for help.

### Check the Node.js version { #node-version}

Make sure you run the latest long-term support (LTS) version of Node.js with an even number like `20`. Refrain from using odd versions, for which some modules with native parts will have no support and thus might even fail to install. Check your version with:

```sh
node -v
```

Should you see an error like "_Node.js v1... or higher is required for `@sap/cds ...`._" on server startup, upgrade to the indicated version at the minimum, or even better, the most recent LTS version.
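One common way to make the required Node.js version explicit for your own project, and for the Cloud Foundry buildpack, is an `engines` entry in _package.json_; a minimal sketch (the exact version range is up to you):

```json
{
  "engines": {
    "node": ">=20"
  }
}
```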
For [Cloud Foundry](https://docs.cloudfoundry.org/buildpacks/node/index.html#runtime), use the `engines` field in _package.json_.

[Learn more about the release schedule of **Node.js**.](https://github.com/nodejs/release#release-schedule/){.learn-more}
[Learn about ways to install **Node.js**.](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm){.learn-more}

### Check access permissions on macOS or Linux

In case you get error messages like `Error: EACCES: permission denied, mkdir '/usr/local/...'` when installing a global module like `@sap/cds-dk`, configure `npm` to use a different directory for global modules:

```sh
mkdir ~/.npm-global ; npm set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
```

Also add the last line to your user profile, for example, `~/.profile`, so that future shell sessions have the changed `PATH` as well.

[Learn more about other ways to handle this **error**.](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally){.learn-more}

### Check if your environment variables are properly set on Windows

Global npm installations are stored in a user-specific directory on your machine. On Windows, this directory usually is:

```sh
C:\Users\\AppData\Roaming\npm
```

Make sure that your `PATH` environment variable contains this path. In addition, set the variable `NODE_PATH` to:
``C:\Users\\AppData\Roaming\npm\node_modules``

### How Do I Consume a New Version of CDS? { #cds-versions}

* Design time tools like `cds init`: Install and update `@sap/cds-dk` globally using `npm i -g @sap/cds-dk`.
* Node.js runtime: Maintain the version of `@sap/cds` in the top-level _package.json_ of your application in the `dependencies` section.
  [Learn more about recommendations on how to manage **Node.js dependencies**.](../node.js/best-practices#dependencies){.learn-more}
* CAP Java SDK: Maintain the version in the _pom.xml_ of your Java module, which is located in the root folder. In this file, modify the property `cds.services.version`.

## Node.js

### How can I start Node.js apps on different ports?

By default, Node.js apps started with `cds run` or `cds watch` use port 4004, which might be occupied if other app instances are still running. In this case, `cds watch` now asks you if it should pick a different port.

```log{5}
$ cds watch
...
[cds] - serving CatalogService ...
EADDRINUSE - port 4004 is already in use. Restart with new port? (Y/n)
> y
restart
...
[cds] - server listening on { url: 'http://localhost:4005' }
```

Ports can be explicitly set with the `PORT` environment variable or the `--port` argument. See `cds help run` for more.

### Why do I lose registered event handlers?

Node.js allows extending existing services, for example in mashup scenarios. This is commonly done at bootstrap time in `cds.on('served', ...)` handlers like so:

#### DO:{.good}

```js
cds.on('served', ()=>{
  const { db } = cds.services
  db.on('before',(req)=> console.log(req.event, req.path))
})
```

It is important to note that in Node.js, `emit` calls are synchronous operations, so **avoid _any_ `await` operations** in there, as they might lead to race conditions. In particular, when registering additional event handlers with a service, as shown in the snippet above, this could lead to very hard to detect and resolve issues with handler registrations.
So, for example, don't do this:

#### DON'T:{.bad}

```js
cds.on('served', async ()=>{
  const db = await cds.connect.to('db') // DANGER: will cause race condition !!!
  db.on('before',(req)=> console.log(req.event, req.path))
})
```

### My app isn't showing up in Dynatrace

Make sure that:

- Your app's start script is `cds-serve` instead of `npx cds run`.
- You have the dependency `@dynatrace/oneagent-sdk` in your _package.json_.

### Why are requests occasionally rejected with "Acquiring client from pool timed out" or "ResourceRequest timed out"?

This error indicates that the database client pool settings don't match the application's requirements. There are two possible root causes:

| | Explanation |
| --- | ---- |
| _Root Cause 1_ | The maximum number of database clients in the pool is reached and additional requests wait too long for the next client. |
| _Root Cause 2_ | The creation of a new connection to the database takes too long. |
| _Solution_ | Adapt `max` or `acquireTimeoutMillis` with more appropriate values, according to the [documentation](../node.js/databases#databaseservice-configuration). |

Always make sure that database transactions are either committed or rolled back. This can work in two ways:

1. Couple it to your request (this happens automatically): Once the request has succeeded, the database service commits the transaction. If there was an error in one of the handlers, the database service performs a rollback.
2. For manual transactions (for example, by writing `const tx = cds.tx()`), you need to perform the commit/rollback yourself: `await tx.commit()`/`await tx.rollback()`.
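The commit-or-rollback discipline for manual transactions can be sketched as follows. Note that the `tx` object below is a hand-rolled stand-in used purely for illustration; in a real CAP app you would obtain a transaction via `cds.tx()` and run actual CQN queries:

```javascript
// Generic helper enforcing the rule: commit on success, roll back on error.
async function withTransaction (tx, work) {
  try {
    const result = await work(tx)
    await tx.commit()       // success → commit
    return result
  } catch (err) {
    await tx.rollback()     // any error in the handlers → roll back
    throw err
  }
}

// Stand-in transaction object that just records what happened:
function stubTx () {
  const log = []
  return {
    log,
    run: async q => { log.push('run'); if (q === 'BOOM') throw new Error('db error') },
    commit: async () => { log.push('commit') },
    rollback: async () => { log.push('rollback') }
  }
}

// Happy path commits, failing path rolls back:
async function demo () {
  const ok = stubTx()
  await withTransaction(ok, tx => tx.run('SELECT 1'))
  const bad = stubTx()
  await withTransaction(bad, tx => tx.run('BOOM')).catch(() => {})
  return { ok: ok.log, bad: bad.log }
}

demo().then(r => console.log(r)) // logs which operations ran on each transaction
```

Wrapping all work in such a single try/catch around the transaction ensures no code path leaves a transaction open, which is exactly what exhausts the client pool over time.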
If you're using [@sap/hana-client](https://www.npmjs.com/package/@sap/hana-client), make sure to adjust the environment variable [`HDB_NODEJS_THREADPOOL_SIZE`](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/31a8c93a574b4f8fb6a8366d2c758f21.html?version=2.11), which specifies the number of workers that concurrently execute asynchronous method calls for different connections.

### Why are requests rejected with status `502` and don't seem to reach the application?

If you have long-running requests, you may experience intermittent `502` errors that are logged by the platform's router, but not by your CAP application. In most cases, this behavior is caused by the server having just closed the TCP connection without waiting for acknowledgement, so that the platform's load balancer still considers it open and uses it to forward the request. The issue is discussed in detail in this [blog post](https://adamcrowder.net/posts/node-express-api-and-aws-alb-502/#the-502-problem) by Adam Crowder. One solution is to increase the server's `keepAliveTimeout` to above that of the respective load balancer.

The following example shows how to set `keepAliveTimeout` on the [http.Server](https://nodejs.org/api/http.html#class-httpserver) created by CAP.

```js
const cds = require('@sap/cds')
cds.once('listening', ({ server }) => {
  server.keepAliveTimeout = 3 * 60 * 1000 // > 3 mins
})
module.exports = cds.server
```

[Watch the video to learn more about **Best Practices for CAP Node.js Apps**.](https://www.youtube.com/watch?v=WTOOse-Flj8&t=87s){.learn-more}

### Why are long-running requests rejected with status `504` after 30 seconds even though the application continues processing the request?

| | Explanation |
| --- | ---- |
| _Root Cause_ | Most probably, this error is caused by the destination timeout of the App Router. |
| _Solution_ | Set your own `timeout` configuration of [@sap/approuter](https://www.npmjs.com/package/@sap/approuter#destinations). |

### Why does the server crash with `No service definition found for `?

| | Explanation |
| --- | ---- |
| _Root Cause_ | Most probably, the service name in the `requires` section doesn't match the served service definition. |
| _Solution_ | Set the `.service` property in the respective `requires` entry. See [cds.connect()](../node.js/cds-connect#cds-requires-srv-service) for more details. |

### Why is the destination of a remote service not correctly retrieved by SAP Cloud SDK, returning status code 404?

| | Explanation |
| --- | ---- |
| _Root Cause_ | If the application has a service binding with the same name as the requested destination, the SAP Cloud SDK prioritizes the service binding. This service, of course, has different endpoints than the originally targeted remote service. For more information, refer to the [SAP Cloud SDK documentation](https://sap.github.io/cloud-sdk/docs/js/features/connectivity/destinations#referencing-destinations-by-name). |
| _Solution_ | Use different names for the service binding and the destination. |

### Why does my remote service call not work?

| | Explanation |
| --- | ---- |
| _Root Cause_ | The destination, the remote system, or the request details are not configured correctly. |
| _Solution_ | To further troubleshoot the root cause, you can enable logging with the environment variables `SAP_CLOUD_SDK_LOG_LEVEL=silly` and `DEBUG=remote`. |

## TypeScript

### Type definitions for `@sap/cds` not found or incomplete

| | Explanation |
| --- | ---- |
| _Root Cause 1_ | The package `@cap-js/cds-types` is not installed. |
| _Solution 1_ | Install the package as a dev dependency. |
| _Root Cause 2_ | Symlink is missing. |
| _Solution 2_ | Try `npm rebuild` or add `@cap-js/cds-types` to your _tsconfig.json_. |

#### Install package as dev dependency

The type definitions for `@sap/cds` are maintained in a separate package `@cap-js/cds-types` and have to be explicitly installed as a dev dependency. This can be done by adding the `typescript` facet:

::: code-group
```sh [facet]
cds add typescript
```
```sh [manually]
npm i -D @cap-js/cds-types
```
:::

#### Fix missing symlink

Installing `@cap-js/cds-types` leverages VS Code's automatic type resolution mechanism by symlinking the package in `node_modules/@types/sap__cds` in a postinstall script. If you find that this symlink is missing, try `npm rebuild` to trigger the postinstall script again. If the symlink still doesn't persist, you can explicitly point the type resolution mechanism to `@cap-js/cds-types` in your _tsconfig.json_:

::: code-group
```json [tsconfig.json]
{
  "compilerOptions": {
    "types": ["@cap-js/cds-types"]
  }
}
```
:::

If you find that the types are still incomplete, open a bug report in [the `@cap-js/cds-types` repository](https://github.com/cap-js/cds-types/issues/new/choose).

## Java

### How can I make sure that a user passes all authorization checks?

A new option `privilegedUser()` can be leveraged when [defining](../java/event-handlers/request-contexts#defining-requestcontext) your own `RequestContext`. Adding this introduces a user that passes all authorization restrictions. This is useful for scenarios where a restricted service should be called through the [local service consumption API](../java/services), either in a request thread regardless of the original user's authorizations or in a background thread.

### Why do I get a "User should not exist" error during build time?

| | Explanation |
| --- | ---- |
| _Root Cause_ | You've [explicitly configured a mock](../java/security#explicitly-defined-mock-users) user with a name that is already used by a [preconfigured mock user](../java/security#preconfigured-mock-users). |
| _Solution_ | Rename the mock user and build your project again. |
### Why do I get an "Error on server start"?

There could be a mismatch between your locally installed Node.js version and the version that is used by the `cds-maven-plugin`. The result is an error similar to the following:

```sh
❗️ ERROR on server start: ❗️
Error: The module '/home/user/....node'
was compiled against a different Node.js version using
```

To fix this, either switch the Node.js version using a Node version manager, or add the Node version to your _pom.xml_ as follows:

```xml
<properties>
  <cds.install-node.nodeVersion>v20.11.0</cds.install-node.nodeVersion>
</properties>
```

[Learn more about the install-node goal.](../java/assets/cds-maven-plugin-site/install-node-mojo.html){.learn-more target="_blank"}

### How can I expose custom REST APIs with CAP?

From time to time you might want to expose additional REST APIs in your CAP application that aren't covered by CAP's existing protocol adapters (for example, OData V4). A common example is a CSV file upload or another type of custom REST endpoint. In that case, you can leverage the powerful capabilities of Spring Web MVC by implementing your own RestController. From within your RestController implementation, you can fully leverage all CAP Java APIs. Most commonly, you'll be interacting with your services and the database through the [local service consumption API](../java/services). To learn more about Spring Web MVC, see the [Spring docs](https://docs.spring.io/spring-framework/docs/current/reference/html/web.html#mvc), [Spring Boot docs](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-spring-mvc), and this [tutorial](https://spring.io/guides/gs/serving-web-content/).

### How can I build a CAP Java application without SQL database?

The project skeleton generated by the CAP Java archetype adds the relevant Spring Boot and CAP Java dependencies, so that an SQL database is supported by default. However, using an SQL database in CAP Java is fully optional. You can also develop CAP applications that don't use persistence at all.
To remove the SQL database support, you need to exclude the JDBC-related dependencies of Spring Boot and CAP Java. This means that CAP Java won't create a Persistence Service instance.

::: tip Default Application Service event handlers delegate to Persistence Service
You need to implement your own custom handlers in case you remove the SQL database support.
:::

You can exclude those dependencies from the `cds-starter-spring-boot` dependency in the `srv/pom.xml`:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.sap.cds</groupId>
      <artifactId>cds-feature-jdbc</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-jdbc</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

In addition, you might want to remove the H2 dependency, which is included in the `srv/pom.xml` by default as well.

If you don't want to exclude dependencies completely, but want to make sure that an in-memory H2 database **isn't** used, you can disable Spring Boot's `DataSource` auto-configuration by annotating the `Application.java` class with `@SpringBootApplication(exclude = org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration.class)`. In that mode, CAP Java can still react to explicit data source configurations or database bindings.

### What to Do About Maven-Related Errors in Eclipse's Problems View?

- In the _Problems_ view, execute _Quick fix_ from the context menu if available. If Eclipse asks you to install additional Maven Eclipse plug-ins to overcome the error, do so.
- Errors like _'Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:exec-maven-plugin'_ can be ignored. Do so in _Problems_ view > _Quick fix_ context menu > _Mark goal as ignored in Eclipse preferences_.
- In case there are still errors in the project, use _Maven > Update Project..._ from the project's context menu.

## OData

### How Do I Generate an OData Response in Node.js for Error 404?
If your application's endpoints are served with OData and you want to change the standard HTML response to an OData response, adapt the following snippet to your needs and add it in your [custom _server.js_ file](../node.js/cds-serve#custom-server-js).

```js
let app
cds.on('bootstrap', a => { app = a })
cds.on('served', () => {
  app.use((req, res, next) => {
    // > unhandled request
    res.status(404).json({ message: 'Not Found' })
  })
})
```

### Why do some requests fail if I set `@odata.draft.enabled` on my entity?

The annotation `@odata.draft.enabled` is very specific to SAP Fiori elements; only some requests are allowed. For example, it's forbidden to freely add `IsActiveEntity` to `$filter`, `$orderby`, and other query options. The technical reason is that active instances and drafts are stored in two different database tables. Mixing them together is not trivial, therefore only some special cases are supported.

## SQLite { #sqlite}

### How Do I Install SQLite on Windows?

* From the [SQLite page](https://sqlite.org/download.html), download the precompiled binaries for Windows `sqlite-tools-win*.zip`.
* Create a folder _C:\sqlite_ and unzip the downloaded file in this folder to get the file `sqlite3.exe`.
* Start using SQLite directly by opening `sqlite3.exe` from the folder _sqlite_ or from a command line window opened in _C:\sqlite_.
* _Optional_: Add _C:\sqlite_ to your PATH environment variable. As soon as the configuration is active, you can start using SQLite from every location on your Windows installation.
* Use the command _sqlite3_ to connect to the in-memory database:

```sh
C:\sqlite>sqlite3
SQLite version ...
Enter ".help" for instructions
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite>
```

If you want to test further, use the _.help_ command to see all available commands in _sqlite3_.
In case you want a visual interface tool to work with SQLite, you can use [SQLTools](https://marketplace.visualstudio.com/items?itemName=mtxr.sqltools). It's available as an extension for VS Code and integrated in SAP Business Application Studio.

## SAP HANA { #hana}

### How to Get an SAP HANA Cloud Instance for SAP BTP, Cloud Foundry environment? { #get-hana}

To configure this service in the SAP BTP cockpit on trial, refer to the [SAP HANA Cloud Onboarding Guide](https://www.sap.com/documents/2021/09/7476f8c4-f77d-0010-bca6-c68f7e60039b.html). See the [SAP HANA Cloud](https://help.sap.com/docs/HANA_CLOUD) documentation or visit the [SAP HANA Cloud community](https://pages.community.sap.com/topics/hana/cloud) for more details.

::: warning HANA needs to be restarted on trial accounts
On trial, your SAP HANA Cloud instance is automatically stopped overnight, according to the server region's time zone. That means you need to restart your instance every day before you start working with your trial.
:::

[Learn more about SAP HANA Cloud by trying out the tutorials in the Tutorial Navigator.](https://developers.sap.com/mission.hana-cloud-database-get-started.html){.learn-more}

### I removed sample data (_.csv_ file) from my project. Still, the data is deployed and overwrites existing data. { #hana-csv}

| | Explanation |
| --- | ---- |
| _Root Cause_ | SAP HANA still claims exclusive ownership of the data that was once deployed through `hdbtabledata` artifacts, even though the CSV files are now deleted in your project. |
| _Solution_ | Add an _undeploy.json_ file to the root of your database module (the _db_ folder by default). This file defines the files **and data** to be deleted. See section [HDI Delta Deployment and Undeploy Allow List](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c2b99f19e9264c4d9ae9221b22f6f589/ebb0a1d1d41e4ab0a06ea951717e7d3d.html) for more details. |
::: tip
If you want to keep the data from _.csv_ files and data you've already added, see [SAP Note 2922271](https://me.sap.com/notes/2922271) for more details.
:::

You can also apply this solution when using the `cds-mtxs` library. You can either set the options via the environment variable [`HDI_DEPLOY_OPTIONS`](https://help.sap.com/docs/SAP_HANA_PLATFORM/4505d0bdaf4948449b7f7379d24d0f0d/a4bbc2dd8a20442387dc7b706e8d3070.html), the CDS configuration, or you can add them to the model update request as `hdi` parameter:

CDS configuration for the [Deployment Service](../guides/multitenancy/mtxs#deployment-config):

```json
"cds.xt.DeploymentService": {
  "hdi": {
    "deploy": {
      "undeploy": [
        "src/gen/data/my.bookshop-Books.hdbtabledata"
      ],
      "path_parameter": {
        "src/gen/data/my.bookshop-Books.hdbtabledata:skip_data_deletion": "true"
      }
    },
    ...
  }
}
```

Options in the [SaaS Provisioning Service upgrade API](../guides/multitenancy/mtxs#example-usage-1) call payload:

```json
{
  "tenants": ["*"],
  "_": {
    "hdi": {
      "deploy": {
        "undeploy": [
          "src/gen/data/my.bookshop-Books.hdbtabledata"
        ],
        "path_parameter": {
          "src/gen/data/my.bookshop-Books.hdbtabledata:skip_data_deletion": "true"
        }
      }
    }
  }
}
```

### How Do I Resolve Deployment Errors?

#### Deployment fails — _Cyclic dependencies found_ or _Cycle between files_

| | Explanation |
| --- | ---- |
| _Root Cause_ | This is a known issue with older HDI/HANA versions, which are offered on trial landscapes. |
| _Solution_ | Apply the workaround of adding `--treat-unmodified-as-modified` as argument to the `hdi-deploy` command in _db/package.json_. This option redeploys files, even if they haven't changed. If you're the owner of the SAP HANA installation, ask for an upgrade of the SAP HANA instance. |

#### Deployment fails — _Version incompatibility_

| | Explanation |
| --- | ---- |
| _Root Cause_ | An error like `Version incompatibility for the ... build plugin: "2.0.x" (installed) is incompatible with "2.0.y" (requested)` indicates that your project demands a higher version of SAP HANA than what is available in your org/space on SAP BTP, Cloud Foundry environment. The error might not occur on other landscapes for the same project. |
| _Solution_ | Lower the version in file `db/src/.hdiconfig` to the one given in the error message. If you're the owner of the SAP HANA installation, ask for an upgrade of the SAP HANA instance. |

#### Deployment fails — _Cannot create certificate store_ {#cannot-create-certificate-store}

| | Explanation |
| --- | ---- |
| _Root Cause_ | If you deploy to SAP HANA from a local Windows machine, this error might occur if the SAP CommonCryptoLib isn't installed on this machine. |
| _Solution_ | To install it, follow these [instructions](https://help.sap.com/docs/SAP_DATA_SERVICES/e54136ab6a4a43e6a370265bf0a2d744/c049e28431ee4e8280cd6f5d1a8937d8.html). If this doesn't solve the problem, also set the environment variables as [described here](https://help.sap.com/docs/SAP_HANA_PLATFORM/e7e79e15f5284474b965872bf0fa3d63/463d3ceeb7404eca8762dfe74e9cff62.html). |

#### Deployment fails — _Failed to get connection for database_ / _Connection failed (RTE:[300015] SSL certificate validation failed_ / _Cannot create SSL engine: Received invalid SSL Record Header_

| | Explanation |
| --- | ---- |
| _Root Cause_ | Your SAP HANA Cloud instance is stopped. |
| _Solution_ | [Start your SAP HANA Cloud instance.](https://help.sap.com/docs/HANA_CLOUD/9ae9104a46f74a6583ce5182e7fb20cb/fe8cbc3a13b4425990880bac3a5d50d9.html) |

#### Deployment fails — _SSL certificate validation failed: error code: 337047686_

| | Explanation |
| --- | ---- |
| _Root Cause_ | The `@sap/hana-client` can't verify the certificate because of missing system toolchain dependencies. |
| _Solution_ | Make sure [`ca-certificates`](https://packages.ubuntu.com/focal/ca-certificates) is installed on your Docker container. |
#### Deployment fails — _Cannot create SSL engine: Received invalid SSL Record Header_ | | Explanation | | --- | ---- | | _Root Cause_ | Your SAP HANA Cloud instance is stopped. | | _Solution_ | [Start your SAP HANA Cloud instance.](https://help.sap.com/docs/HANA_CLOUD/9ae9104a46f74a6583ce5182e7fb20cb/fe8cbc3a13b4425990880bac3a5d50d9.html) #### Deployment fails — _Error: HDI make failed_ | | Explanation | | --- | ---- | | _Root Cause_ | Your configuration isn't properly set. | | _Solution_ | Configure your project as described in [Using Databases](../guides/databases). #### Deployment fails — _Connection failed (RTE:[89008] Socket closed by peer_ {#connection-failed-89008} | | Explanation | | --- | ---- | | _Root Cause_ | Your IP isn't part of the filtering you configured when you created an SAP HANA Cloud instance. This error can also happen if you exceed the [maximum number of simultaneous connections to SAP HANA Cloud (1000)](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/20a760537519101497e3cfe07b348f3c.html). | | _Solution_ | Configure your SAP HANA Cloud instance [to accept your IP](https://help.sap.com/docs/HANA_SERVICE_CF/cc53ad464a57404b8d453bbadbc81ceb/71eb651f84274a0cb2f2b4380df91724.html). If configured correctly, check if the number of database connections are exceeded. Make sure your [pool configuration](../node.js/databases#pool) does not allow more than 1000 connections.
#### Deployment fails — _... build plugin for file suffix "hdbmigrationtable" [8210015]_ {#missingPlugin}

| | Explanation |
| --- | ---- |
| _Root Cause_ | Your project is missing some configuration in your _.hdiconfig_ file. |
| _Solution_ | Use `cds add hana` to add the needed configuration to your project. Or maintain the _hdbmigrationtable_ plugin in your _.hdiconfig_ file manually: `"hdbmigrationtable": { "plugin_name": "com.sap.hana.di.table.migration" }` |

#### Deployment fails — _In USING declarations only main artifacts can be accessed, not sub artifacts of \_

This error occurs if all of the following applies:

+ You [added native SAP HANA objects](../advanced/hana#add-native-objects) to your CAP model.
+ You used deploy format `hdbcds`.
+ You didn't use the default naming mode `plain`.

| | Explanation |
| --- | ---- |
| _Root Cause_ | The name/prefix of the native SAP HANA object collides with a name/prefix in the CAP CDS model. |
| _Solution_ | Change the name of the native SAP HANA object so that it doesn't start with the name given in the error message and doesn't start with any other prefix that occurs in the CAP CDS model. If you can't change the name of the SAP HANA object, because it already exists, define a synonym for the object. The name of the synonym must follow the naming rule to avoid collisions (root cause). |

### How do I pass additional HDI deployment options to the multitenancy tenant deployment of the `cds-mtx` library?

You can add a subset of the [HDI deploy options](https://help.sap.com/docs/SAP_HANA_PLATFORM/4505d0bdaf4948449b7f7379d24d0f0d/a4bbc2dd8a20442387dc7b706e8d3070.html) using the environment variable `HDI_DEPLOY_OPTIONS`.\
When making use of these parameters, for example `exclude_filter`, always check that the parameters are consistent with your CDS build configuration to avoid deployment problems. For example, make sure not to exclude generated SAP HANA tables that are needed by generated views.
### How can a table function access the logged-in user?

The _cds runtime_ sets the session variable `APPLICATIONUSER`. This should always reflect the logged-in user. Do not use an `XS_` prefix.

## MTXS

### I get a 401 error when logging in to MTXS through App Router { #mtxs-sidecar-approuter-401}

See [How to configure your App Router](../guides/extensibility/customization#app-router) to verify your setup. Also check the [documentation about `cds login`](../guides/extensibility/customization#cds-login).

### When running a tenant upgrade, I get the message 'Extensions exist, but extensibility is disabled.'

This message indicates that extensions exist, but the application is not configured for extensibility. To avoid accidental data loss by removing existing extensions from the database, the upgrade is blocked in that case. Check the [configuration for extensibility](../guides/extensibility/customization#_1-enable-extensibility).

::: danger
If data loss is intended, you can disable the check by adding `cds.requires.cds.xt.DeploymentService.upgrade.skipExtensionCheck = true` to the configuration.
:::

## MTA { #mta}

### Why Does My MTA Build Fail?

- Make sure to use the latest version of the [Cloud MTA Build Tool (MBT)](https://sap.github.io/cloud-mta-build-tool/).
- Consult the [Cloud MTA Build Tool documentation](https://sap.github.io/cloud-mta-build-tool/usage/) for further information, for example, on the available tool options.

### How Can I Define the Build Order Between MTA Modules?

By default, the Cloud MTA Build Tool executes module builds in parallel. If you want to enforce a specific build order, for example, because one module build relies on the outcome of another one, check the [Configuring build order](https://sap.github.io/cloud-mta-build-tool/configuration/) section in the tool documentation.

### How Do I Undeploy an MTA?

`cf undeploy <mta-id>` deletes an MTA (use `cf mtas` to find the MTA ID).
Use the optional `--delete-services` parameter to also wipe service instances.
**Caution:** This deletes the HDI containers with the application data.

### MTA Build Complains About _package-lock.json_

If the MTA build fails with `The 'npm ci' command can only install with an existing package-lock.json`, this means that such a file is missing in your project.

- Check with `cds --version` that you have `@sap/cds` >= 5.7.0.
- Create the _package-lock.json_ file with a regular [`npm update`](https://docs.npmjs.com/cli/v8/commands/npm-update) command.
- If the file was not created, make sure to enable it with `npm config set package-lock true` and repeat the previous command.
- _package-lock.json_ should also be added to version control, so make sure that _.gitignore_ does __not__ contain it.

The purpose of _package-lock.json_ is to pin your project's dependencies to allow for reproducible builds.

[Learn more about dependency management in Node.js.](../node.js/best-practices#dependencies){.learn-more}

### How Can I Reduce the MTA Archive Size During Development? { #reduce-mta-size}

You can reduce MTA archive sizes, and thereby speed up deployments, by omitting `node_modules` folders. First, add a file `less.mtaext` with the following content:

::: code-group
```yaml [less.mtaext]
_schema-version: '3.1'
ID: bookshop-small
extends: capire.bookshop
modules:
  - name: bookshop-srv
    build-parameters:
      ignore: ["node_modules/"]
```
:::

Now you can build the archive with:

```sh
mbt build -t gen --mtar mta.tar -e less.mtaext
```

::: warning This approach is only recommended
- For test deployments during _development_. For _production_ deployments, self-contained archives are preferable.
- If all your dependencies are available in _public_ registries like npmjs.org or Maven Central. Dependencies from _corporate_ registries are not resolvable in this mode.
:::
## CAP on Cloud Foundry ### How Do I Get Started with SAP Business Technology Platform, Cloud Foundry environment? For a start, create your [Trial Account](https://account.hanatrial.ondemand.com/).
### How Do I Resolve Errors with `cf` Executable? { #cf-cli} #### Installation fails — _mkdir ... The system cannot find the path specified_ This is a known [issue](https://github.com/cloudfoundry/docs-cf-cli/issues/57) on Windows. The fix is to set the `HOMEDRIVE` environment variable to `C:`. In any `cmd` shell session, you can do so with `SET HOMEDRIVE=C:`
Also, make sure to persist the variable for future sessions in the system preferences. See [How do I set my system variables in Windows](https://superuser.com/questions/949560/how-do-i-set-system-environment-variables-in-windows-10) for more details.

#### `cf` commands fail — _Error writing config_

This is the same issue as with the installation error above.

### Why Can't My _xs-security.json_ File Be Used to Create an XSUAA Service Instance? { #pws-encoding}

| | Explanation |
| --- | ---- |
| _Root Cause_ | Your file isn't UTF-8 encoded. If you executed `cds compile` with Windows PowerShell, the encoding of your _xs-security.json_ file is wrong. |
| _Solution_ | Make sure you execute `cds compile` in a command prompt that encodes in UTF-8 when piping output into a file. |

[You can find related information on **Stack Overflow**.](https://stackoverflow.com/questions/40098771/changing-powershells-default-output-encoding-to-utf-8){.learn-more}

### How Can I Connect to a Backing Service Container like SAP HANA from My Local Machine? { #cf-connect}

Depending on whether the container host is reachable and whether there's a proxy between your machine and the cloud, one of the following options applies:

* CF SSH

  The most convenient way is the `cf ssh` capability of the Cloud Foundry CLI. You can open an SSH tunnel to the target Cloud Foundry container if these prerequisites are met:

  - There's **no HTTP proxy** in the way. Those only let HTTP traffic through.
  - SSH access is enabled for the CF landscape and your space (in _Canary_ this is true, otherwise check with `cf ssh-enabled`).

  Use it like this:

  ```sh
  cf ssh <app> -L localhost:<local-port>:<remote-host>:<remote-port>
  ```

  where `<app>` has to be a running application that is bound to the service.

  Example: Connect to an SAP HANA service running on remote host 10.10.10.10, port 30010.

  ```sh
  cf ssh <app> -L localhost:30010:10.10.10.10:30010
  ```

  From then on, use `localhost:30010` instead of the remote address.
[Learn more about **cf ssh**.](https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html){ .learn-more} * Chisel In all other cases, for example, if there's an HTTP proxy between you and the cloud, you can resort to a TCP proxy tool, called _Chisel_. This also applies if the target host isn't reachable on a network level. You need to install _Chisel_ in your target space and that will tunnel TCP traffic over HTTP from your local host to the target (and vice versa). Find [step-by-step instructions here](https://github.com/jpillora/chisel). For example, to connect to an SAP HANA service running on remote host 10.10.10.10, port 30010: ```sh bin/chisel_... client --auth secrets https:// localhost:30010:10.10.10.10:30010 ``` From then on, use `localhost:30010` instead of the remote address. [Learn more about **Chisel**.](https://github.com/jpillora/chisel){ .learn-more} ### Aborted Deployment With the _Create-Service-Push_ Plugin If you're using _manifest.yml_ features that are part of the new Cloud Foundry API, for example, the `buildpacks` property, the `cf create-service-push` command will abort after service creation without pushing the applications to Cloud Foundry. Use `cf create-service-push --push-as-subprocess` to execute `cf push` in a sub-process. [See `cf create-service-push --help` for further CLI details or visit the Create-Service-Push GitHub repository.](https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin){.learn-more} ### Deployment Crashes With "No space left on device" Error If on deployment to Cloud Foundry, a module crashes with the error message `Cannot mkdir: No space left on device` then the solution is to adjust the space available to that module in the `mta.yaml` file. Adjust the `disk-quota` parameter. 
```yaml
parameters:
  disk-quota: 512M
  memory: 256M
```

[Learn more about this error in KBA 3310683](https://userapps.support.sap.com/sap/support/knowledge/en/3310683){.learn-more}

### How Can I Get Logs From My Application in Cloud Foundry? { #cflogs-recent}

The SAP BTP cockpit is not meant to analyze a large amount of logs. Use the Cloud Foundry CLI instead.

```sh
cf logs <app-name> --recent
```

::: tip
If you omit the option `--recent`, you can run this command in parallel to your deployment and see the logs as they come in.
:::

### Why do I get "404 Not Found: Requested route does not exist"?

In order to send a request to an app, it must be associated with a route. See [Cloud Foundry Documentation → Routes](https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#routes) for details. As this is done automatically by default, the process is mostly transparent for developers. If you receive an error response `404 Not Found: Requested route ('') does not exist`, this can have two reasons:

1. The route really does not exist or is not bound to an app. You can check this in the SAP BTP cockpit, either in the app details view or in the list of routes in the Cloud Foundry space.
2. The app (or all app instances, in case of horizontal scale-out) failed the readiness check. See [Health Checks](../guides/deployment/health-checks.md) and [Using Cloud Foundry health checks](https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html) for details on how to set up the check.

::: details Troubleshoot using the Cloud Foundry CLI

```sh
cf apps                              # -> list all apps
cf app <app-name>                    # -> get details on your app, incl. state and routes
cf app <app-name> --guid             # -> get your app's guid
cf curl "/v3/processes/<guid>/stats" # -> list of processes (one per app instance) with property "routable"
                                     #    indicating whether the most recent readiness check was successful
```

See [cf curl](https://cli.cloudfoundry.org/en-US/v7/curl.html) and [The process stats object](https://v3-apidocs.cloudfoundry.org/version/3.184.0/index.html#the-process-stats-object) for details on how to use the CLI.
:::

## CAP on Kyma

### Pack Command Fails with Error `package.json and package-lock.json aren't in sync`

To fix this error, run `npm i --package-lock-only` to update your `package-lock.json` file and run the pack command again.

> Note: After updating _package-lock.json_, specific dependency versions may change; go through the changes and verify them.

::: tip
For SAP HANA deployment errors, see [the SAP HANA section](#how-do-i-resolve-deployment-errors).
:::

## CAP on Windows

Note that Git Bash on Windows, despite offering a Unix-like environment, may encounter interoperability issues with specific scripts or tools due to its hybrid nature between Windows and Unix systems. On Windows, we recommend testing and verifying all functionality in the native Windows Command Prompt (cmd.exe) or PowerShell for optimal interoperability. Otherwise, problems can occur when building the mtxs extension on Windows, locally, or in the cloud.
# The CAP Cookbook

Guides and Recipes for Common Tasks { .subtitle}

The following figure illustrates a walkthrough of the most prominent tasks within CAP's universe of discourse (aka scope). The guides in this section provide details and instructions about each.

![The graphic groups topics into three phases: Development, Deploy, Use. The development phase covers topics like domain modeling, consuming and providing services, databases, and frontends. The deploy phase covers the deployment as well as CI/CD, monitoring, and publishing APIs and packages for reuse. The use phase is about the subscription flow of multitenant applications and about customizing and extending those applications.](assets/cookbook-overview.drawio.svg)
# Providing Services This guide introduces how to define and implement services, leveraging generic implementations provided by the CAP runtimes, complemented by domain-specific custom logic. ## Intro: Core Concepts {#introduction} The following sections give a brief overview of CAP's core concepts. ### Service-Centric Paradigm A CAP application commonly provides services defined in CDS models and served by the CAP runtimes. Every active thing in CAP is a service. They embody the behavioral aspects of a domain in terms of exposed entities, actions, and events. ![This graphic is explained in the accompanying text.](assets/providing-services/service-centric-paradigm.drawio.svg) ### Ubiquitous Events At runtime, everything happening is in response to events. CAP features a ubiquitous notion of events, which represent both *requests* coming in through **synchronous** APIs and **asynchronous** *event messages*, blurring the line between both worlds. ![This graphic shows that consumers send events to services and that there are hooks, so that event handlers can react to those events.](assets/providing-services/services-events.drawio.svg) ### Event Handlers Service providers basically react to events in event handlers, plugged into respective hooks provided by the core service runtimes. ## Service Definitions ### Services as APIs In its most basic form, a service definition simply declares the data entities and operations it serves. For example: ```cds service BookshopService { entity Books { key ID : UUID; title : String; author : Association to Authors; } entity Authors { key ID : UUID; name : String; books : Association to many Books on books.author = $self; } action submitOrder (book : Books:ID, quantity : Integer); } ``` This definition effectively defines the API served by `BookshopService`. 
![This graphic is explained in the accompanying text.](assets/providing-services/service-apis.drawio.svg) Simple service definitions like that are all we need to run full-fledged servers out of the box, served by CAP's generic runtimes, without any implementation coding required. ### Services as Facades In contrast to the all-in-one definition above, services usually expose views, aka projections, on underlying domain model entities: ```cds using { sap.capire.bookshop as my } from '../db/schema'; service BookshopService { entity Books as projection on my.Books; entity Authors as projection on my.Authors; action submitOrder (book : Books:ID, quantity : Integer); } ``` This way, services become facades to encapsulated domain data, exposing different aspects tailored to respective use cases. ![This graphic is explained in the accompanying text.](assets/providing-services/service-as-facades.drawio.svg) ### Denormalized Views Instead of exposing access to underlying data in a 1:1 fashion, services frequently expose denormalized views, tailored to specific use cases. 
For example, the following service definition hides information about maintainers from end users and also [marks the entities as `@readonly`](#readonly): ```cds using { sap.capire.bookshop as my } from '../db/schema'; /** For serving end users */ service CatalogService @(path:'/browse') { /** For displaying lists of Books */ @readonly entity ListOfBooks as projection on Books excluding { descr }; /** For display in details pages */ @readonly entity Books as projection on my.Books { *, author.name as author } excluding { createdBy, modifiedBy }; } ``` [Learn more about **CQL**, the language used for `projections`.](../cds/cql){.learn-more} [See also: Prefer Single-Purposed Services!](#single-purposed-services){.learn-more} [Find the above sources in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/srv/cat-service.cds){ .learn-more} ### Auto-Exposed Entities Annotate entities with `@cds.autoexpose` to automatically include them in services containing entities with Associations referencing them. For example, this is commonly done for code list entities in order to serve value list dropdowns on UIs: ```cds service Zoo { entity Foo { //... code : Association to SomeCodeList; } } @cds.autoexpose entity SomeCodeList {...} ``` [Learn more about Auto-Exposed Entities in the CDS reference docs.](../cds/cdl#auto-expose){.learn-more} ### Redirected Associations When exposing related entities, associations are automatically redirected. This ensures that clients can navigate between projected entities as expected. 
For example: ```cds service AdminService { entity Books as projection on my.Books; entity Authors as projection on my.Authors; //> AdminService.Authors.books refers to AdminService.Books } ``` [Learn more about Redirected Associations in the CDS reference docs.](../cds/cdl#auto-redirect){.learn-more} ## Generic Providers The CAP runtimes for [Node.js](../node.js/) and [Java](../java/) provide a wealth of generic implementations, which serve most requests automatically, with out-of-the-box solutions to recurring tasks such as search, pagination, or input validation — the majority of this guide focuses on these generic features. In effect, a service definition [as introduced above](#service-definitions) is all we need to run a full-fledged server out of the box. The need for coding reduces to real custom logic specific to a project's domain → section [Custom Logic](#custom-logic) picks that up. ### Serving CRUD Requests {#serving-crud} The CAP runtimes for [Node.js](../node.js/) and [Java](../java/) provide generic handlers, which automatically serve all CRUD requests to entities for CDS-modelled services on top of a default [primary database](databases). This comprises read and write operations like that: * `GET /Books/201` → reading single data entities * `GET /Books?...` → reading data entity sets with advanced query options * `POST /Books {....}` → creating new data entities * `PUT/PATCH /Books/201 {...}` → updating data entities * `DELETE /Books/201` → deleting data entities
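As a thought model, the dispatch from such CRUD requests to canonical queries can be sketched in a few lines of plain JavaScript. This is a purely illustrative helper (`toQuery` is hypothetical, not part of the CAP runtime), mapping method and path to CQN-like query objects:

```javascript
// Hypothetical sketch of how a generic provider maps CRUD requests to
// canonical queries (CQN-like objects) — not actual CAP runtime code.
function toQuery (method, path, data) {
  const [, entity, key] = path.match(/^\/(\w+)(?:\/(\w+))?/)
  const where = key ? { ID: key } : undefined
  switch (method) {
    case 'GET':    return { SELECT: { from: entity, ...(where && { where }) } }
    case 'POST':   return { INSERT: { into: entity, entries: [data] } }
    case 'PUT':
    case 'PATCH':  return { UPDATE: { entity, where, data } }
    case 'DELETE': return { DELETE: { from: entity, where } }
  }
}

console.log(toQuery('GET', '/Books/201'))
// → { SELECT: { from: 'Books', where: { ID: '201' } } }
console.log(toQuery('POST', '/Books', { ID: 252, title: 'Eleonora' }))
```

The actual runtimes construct richer queries, of course, for example applying `$top`, `$expand`, or `$search` options, but the essence is the same: each request becomes a query executed against the primary database.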
::: warning No filtering and sorting for virtual elements CAP runtimes delegate filtering and sorting to the database. Therefore, filtering and sorting are not available for `virtual` elements. ::: ### Deep Reads and Writes CDS and the runtimes have advanced support for modeling and serving document-oriented data. The runtimes provide generic handlers for serving deeply nested document structures out of the box as documented here. #### Deep `READ` You can read deeply nested documents by *expanding* along associations or compositions. For example, like this in OData: :::code-group ```http GET .../Orders?$expand=header($expand=items) ``` ```js[cds.ql] SELECT.from ('Orders', o => { o.ID, o.title, o.header (h => { h.ID, h.status, h.items('*') }) }) ``` [Learn more about `cds.ql`](../node.js/cds-ql){.learn-more} ::: Both would return an array of nested structures as follows: ```js [{ ID:1, title: 'first order', header: { // to-one ID:2, status: 'open', items: [{ // to-many ID:3, description: 'first order item' },{ ID:4, description: 'second order item' }] } }, ... 
] ``` #### Deep `INSERT` Create a parent entity along with child entities in a single operation, for example, like that: :::code-group ```http POST .../Orders { ID:1, title: 'new order', header: { // to-one ID:2, status: 'open', items: [{ // to-many ID:3, description: 'child of child entity' },{ ID:4, description: 'another child of child entity' }] } } ``` ::: Note that Associations and Compositions are handled differently in (deep) inserts and updates: - Compositions → runtime **deeply creates or updates** entries in target entities - Associations → runtime **fills in foreign keys** to *existing* target entries For example, the following request would create a new `Book` with a *reference* to an existing `Author`, with `{ID:12}` being the foreign key value filled in for association `author`: ```http POST .../Books { ID:121, title: 'Jane Eyre', author: {ID:12} } ``` #### Deep `UPDATE` Deep `UPDATE` of deeply nested documents looks very similar to deep `INSERT`: :::code-group ```http PUT .../Orders/1 { title: 'changed title of existing order', header: { ID:2, items: [{ ID:3, description: 'modified child of child entity' },{ ID:5, description: 'new child of child entity' }] } } ``` ::: Depending on existing data, child entities will be created, updated, or deleted as follows: - entries existing on the database, but not in the payload, are deleted → for example, `ID:4` - entries existing on the database and in the payload are updated → for example, `ID:3` - entries not existing on the database are created → for example, `ID:5` **`PUT` vs `PATCH`** — Omitted fields get reset to `default` values or `null` in case of `PUT` requests; they are left untouched for `PATCH` requests. Omitted compositions have no effect, whether during `PATCH` or during `PUT`. That is, to delete all children, the payload must specify `null` or `[]`, respectively, for the to-one or to-many composition. 
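The created/updated/deleted semantics for composition children described above boil down to a diff by key between database state and payload. Here is an illustrative sketch of that behavior in plain JavaScript (`diffChildren` is a model of the behavior, not CAP's actual implementation):

```javascript
// Illustrative sketch of deep-update semantics for composition children:
// compare the entries stored in the database with those in the payload, by key.
function diffChildren (existing, payload) {
  const payloadIDs = new Set(payload.map(e => e.ID))
  const existingIDs = new Set(existing.map(e => e.ID))
  return {
    deleted: existing.filter(e => !payloadIDs.has(e.ID)).map(e => e.ID), // in db only
    updated: payload.filter(e => existingIDs.has(e.ID)).map(e => e.ID),  // in both
    created: payload.filter(e => !existingIDs.has(e.ID)).map(e => e.ID), // in payload only
  }
}

const inDb = [ { ID: 3 }, { ID: 4 } ]       // items currently stored
const inPayload = [ { ID: 3 }, { ID: 5 } ]  // items sent in the PUT request
console.log(diffChildren(inDb, inPayload))
// → { deleted: [ 4 ], updated: [ 3 ], created: [ 5 ] }
```

The sample IDs match the deep `UPDATE` example above: `ID:4` is deleted, `ID:3` is updated, and `ID:5` is created.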
#### Deep `DELETE` Deleting a root of a composition hierarchy results in a cascaded delete of all nested children. :::code-group ```sql DELETE .../Orders/1 -- would also delete all headers and items ``` ::: ### Auto-Generated Keys On `CREATE` operations, `key` elements of type `UUID` are filled in automatically. In addition, on deep inserts and upserts, respective foreign keys of newly created nested objects are filled in accordingly. For example, given a model like that: ```cds entity Orders { key ID : UUID; title : String; Items : Composition of many OrderItems on Items.order = $self; } entity OrderItems { key order : Association to Orders; key pos : Integer; descr: String; } ``` When creating a new `Order` with nested `OrderItems` like that: ```http POST .../Orders { title: 'Order #1', Items: [ { pos:1, descr: 'Item #1' }, { pos:2, descr: 'Item #2' } ] } ``` CAP runtimes will automatically fill in `Orders.ID` with a new UUID, as well as the nested `OrderItems.order.ID` referring to the parent. ### Searching Data CAP runtimes provide out-of-the-box support for advanced search of a given text in all textual elements of an entity including nested entities along composition hierarchies. A typical search request looks like that: ```http GET .../Books?$search=Heights ``` That would basically search for occurrences of `"Heights"` in all text fields of Books, that is, in `title` and `descr` using database-specific `contains` operations (for example, using `like '%Heights%'` in standard SQL). #### The `@cds.search` Annotation {#cds-search} By default search is limited to the elements of type `String` of an entity that aren't [calculated](../cds/cdl#calculated-elements) or [virtual](../cds/cdl#virtual-elements). Yet, sometimes you may want to deviate from this default and specify a different set of searchable elements, or to extend the search to associated entities. Use the `@cds.search` annotation to do so. 
The general usage is: ```cds @cds.search: { element1, // included element2 : true, // included element3 : false, // excluded assoc1, // extend to searchable elements in target entity assoc2.elementA // extend to a specific element in target entity } entity E { } ``` [Learn more about the syntax of annotations.](../cds/cdl#annotations){.learn-more} #### Including Fields ```cds @cds.search: { title } entity Books { ... } ``` Searches the `title` element only. ##### Extend Search to *Associated* Entities ::: warning Node.js: Only w/ streamlined database services For Node.js projects, this feature is only available with the [streamlined `@cap-js/` database services](../releases/archive/2024/jun24#new-database-services-ga) (default with `@sap/cds` >= 8) ::: ```cds @cds.search: { author } entity Books { ... } @cds.search: { biography: false } entity Authors { ... } ``` Searches all elements of the `Books` entity, as well as all searchable elements of the associated `Authors` entity. Which elements of the associated entity are searchable is determined by the `@cds.search` annotation on the associated entity. So, from `Authors`, all elements of type `String` are searched but `biography` is excluded. ##### Extend to Individual Elements in Associated Entities ```cds @cds.search: { author.name } entity Books { ... } ``` Searches only in the element `name` of the associated `Authors` entity. #### Excluding Fields ```cds @cds.search: { isbn: false } entity Books { ... } ``` Searches all elements of type `String` excluding the element `isbn`, which leaves the `title` and `descr` elements to be searched. ::: tip You can explicitly annotate calculated elements to make them searchable, even though they aren't searchable by default. The virtual elements won't be searchable even if they're explicitly annotated. 
::: #### Fuzzy Search on SAP HANA Cloud {#fuzzy-search} > Prerequisite: For CAP Java, you need to run in [`HEX` optimization mode](../java/cqn-services/persistence-services#sql-optimization-mode) on SAP HANA Cloud and enable `cds.sql.hana.search.fuzzy = true`. Fuzzy search is a fault-tolerant search feature of SAP HANA Cloud, which returns records even if the search term contains additional characters, is missing characters, or has typographical errors. You can configure the fuzziness in the range [0.0, 1.0]. The value 1.0 enforces exact search. - Java: `cds.sql.hana.search.fuzzinessThreshold = 0.8` - Node.js: `cds.hana.fuzzy = 0.7` Override the fuzziness for elements, using the `@Search.fuzzinessThreshold` annotation: ```cds entity Books { @Search.fuzzinessThreshold: 0.7 title : String; } ``` The relevance of a search match depends on the weight of the element causing the match. By default, all [searchable elements](#cds-search) have equal weight. To adjust the weight of an element, use the `@Search.ranking` annotation. Allowed values are `HIGH`, `MEDIUM` (default), and `LOW`: ```cds entity Books { @Search.ranking: HIGH title : String; @Search.ranking: LOW publisherName : String; } ``` ::: tip Wildcards in search terms When using wildcards in search terms, an *exact pattern search* is performed. Supported wildcards are '*' matching zero or more characters and '?' matching a single character. You can escape wildcards using '\\'. ::: ### Pagination & Sorting #### Implicit Pagination By default, the generic handlers for READ requests automatically **truncate** result sets to a size of 1,000 records max. If there are more entries available, a link is added to the response allowing clients to fetch the next page of records. The OData response body for truncated result sets contains a `nextLink` as follows: ```http GET .../Books >{ value: [ {... first record ...}, {... second record ...}, ... 
], @odata.nextLink: "Books?$skiptoken=1000" } ``` To retrieve the next page of records from the server, the client would use this `nextLink` in a follow-up request, like so: ```http GET .../Books?$skiptoken=1000 ``` On firing this query, you get the second set of 1,000 records with a link to the next page, and so on, until the last page is returned, with the response not containing a `nextLink`. ::: warning Per OData specification for [Server Side Paging](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_ServerDrivenPaging), the value of the `nextLink` returned by the server must not be interpreted or changed by the clients. ::: #### Reliable Pagination > Note: This feature is available only for OData V4 endpoints. Using a numeric skip token based on the values of `$skip` and `$top` can result in duplicate or missing rows if the entity set is modified between the calls. _Reliable Pagination_ avoids this inconsistency by generating a skip token based on the values of the last row of a page. Reliable pagination is available with the following limitations: - Results of functions or arithmetic expressions can't be used in the `$orderby` option (explicit ordering). - The elements used in the `$orderby` of the request must be of simple type. - All elements used in `$orderby` must also be included in the `$select` option, if it's set. - Complex [concatenations](../advanced/odata#concat) of result sets aren't supported. ::: warning Don't use reliable pagination if an entity set is sorted by elements that contain sensitive information, as the skip token could reveal the values of these elements. 
::: The feature can be enabled with the following [configuration options](../node.js/cds-env#project-settings) set to `true`: - Java: `cds.query.limit.reliablePaging.enabled: true` - Node.js: `cds.query.limit.reliablePaging: true` #### Paging Limits You can configure default and maximum page size limits in your [project configuration](../node.js/cds-env#project-settings) as follows: ```json "cds": { "query": { "limit": { "default": 20, //> no default "max": 100 //> default 1000 } } } ``` - The **maximum limit** defines the maximum number of items that can be retrieved, regardless of `$top`. - The **default limit** defines the number of items that are retrieved if no `$top` was specified. ##### Annotation `@cds.query.limit` {#annotation-cds-query-limit} You can override the defaults by applying the `@cds.query.limit` annotation on the service or entity level, as follows: ```cds @cds.query.limit: { default?, max? } | Number ``` The limit definitions for `CatalogService` and `AdminService` in the following example are equivalent. ```cds @cds.query.limit.default: 20 @cds.query.limit.max: 100 service CatalogService { // ... } @cds.query.limit: { default: 20, max: 100 } service AdminService { // ... } ``` `@cds.query.limit` can be used as shorthand if no default limit needs to be specified at the same level. ```cds @cds.query.limit: 100 service CatalogService { entity Books as projection on my.Books; //> pages at 100 @cds.query.limit: 20 entity Authors as projection on my.Authors; //> pages at 20 } service AdminService { entity Books as projection on my.Books; //> pages at 1000 (default) } ``` ##### Precedence The closest limit applies, that is, an entity-level limit overrides that of its service, and a service-level limit overrides the global setting. The value `0` disables the respective limit at the respective level. 
```cds @cds.query.limit.default: 20 service CatalogService { @cds.query.limit.max: 100 entity Books as projection on my.Books; //> default = 20 (from CatalogService), max = 100 @cds.query.limit: 0 entity Authors as projection on my.Authors; //> no default, max = 1,000 (from environment) } ``` #### Implicit Sorting Paging requires implied sorting, otherwise records might be skipped accidentally when reading follow-up pages. By default the entity's primary key is used as a sort criterion. For example, given a service definition like this: ```cds service CatalogService { entity Books as projection on my.Books; } ``` The SQL query executed in response to incoming requests to Books will be enhanced with an additional order-by clause as follows: ```sql SELECT ... from my_Books ORDER BY ID; -- default: order by the entity's primary key ``` If the request specifies a sort order, for example, `GET .../Books?$orderby=author`, both are applied as follows: ```sql SELECT ... from my_Books ORDER BY author, -- request-specific order has precedence ID; -- default order still applied in addition ``` We can also define a default order when serving books as follows: ```cds service CatalogService { entity Books as projection on my.Books order by title asc; } ``` Now, the resulting order by clauses are as follows for `GET .../Books`: ```sql SELECT ... from my_Books ORDER BY title asc, -- from entity definition ID; -- default order still applied in addition ``` ... and for `GET .../Books?$orderby=author`: ```sql SELECT ... from my_Books ORDER BY author, -- request-specific order has precedence title asc, -- from entity definition ID; -- default order still applied in addition ``` ### Concurrency Control CAP runtimes support different ways to avoid lost-update situations as documented in the following. Use _optimistic locking_ to _detect_ concurrent modification of data _across requests_. The implementation relies on [ETags](#etag). 
Use _pessimistic locking_ to _protect_ data from concurrent modification by concurrent _transactions_. CAP leverages database locks for [pessimistic locking](#select-for-update). #### Conflict Detection Using ETags {#etag} The CAP runtimes support optimistic concurrency control and caching techniques using ETags. An ETag identifies a specific version of a resource found at a URL. Enable ETags by adding the `@odata.etag` annotation to an element to be used to calculate an ETag value as follows: ```cds using { managed } from '@sap/cds/common'; entity Foo : managed {...} annotate Foo with { modifiedAt @odata.etag } ``` > The value of an ETag element should uniquely change with each update per row. > The `modifiedAt` element from the [pre-defined `managed` aspect](../cds/common#aspect-managed) is a good candidate, as this is automatically updated. > You could also use update counters or UUIDs, which are recalculated on each update. You use ETags when updating, deleting, or invoking the action bound to an entity by using the ETag value in an `If-Match` or `If-None-Match` header. The following examples represent typical requests and responses: ```http POST Employees { ID:111, name:'Name' } > 201 Created {'@odata.etag': 'W/"2000-01-01T01:10:10.100Z"',...} //> Got new ETag to be used for subsequent requests... ``` ```http GET Employees/111 If-None-Match: "2000-01-01T01:10:10.100Z" > 304 Not Modified // Record was not changed ``` ```http GET Employees/111 If-Match: "2000-01-01T01:10:10.100Z" > 412 Precondition Failed // Record was changed by another user ``` ```http UPDATE Employees/111 If-Match: "2000-01-01T01:10:10.100Z" > 200 Ok {'@odata.etag': 'W/"2000-02-02T02:20:20.200Z"',...} //> Got new ETag to be used for subsequent requests... 
``` ```http UPDATE Employees/111 If-Match: "2000-02-02T02:20:20.200Z" > 412 Precondition Failed // Record was modified by another user ``` ```http DELETE Employees/111 If-Match: "2000-02-02T02:20:20.200Z" > 412 Precondition Failed // Record was modified by another user ``` If the ETag validation detects a conflict, the request typically needs to be retried by the client. Hence, optimistic concurrency should be used if conflicts occur rarely. #### Pessimistic Locking {#select-for-update} _Pessimistic locking_ allows you to lock the selected records so that other transactions are blocked from changing the records in any way. Use _exclusive_ locks when reading entity data with the _intention to update_ it in the same transaction and you want to prevent the data from being read or updated in a concurrent transaction. Use _shared_ locks if you only need to prevent the entity data from being updated in a concurrent transaction, but don't want to block concurrent read operations. The records are locked until the end of the transaction by a commit or rollback statement. [Learn more about using the `SELECT ... FOR UPDATE` statement in the Node.js runtime.](../node.js/cds-ql#forupdate){.learn-more} [Learn more about using the `Select.lock()` method in the Java runtime.](../java/working-with-cql/query-api#write-lock){.learn-more} ::: warning Pessimistic locking is not supported by SQLite. H2 supports exclusive locks only. ::: ## Input Validation CAP runtimes automatically validate user input, controlled by the following annotations. ### `@readonly` Elements annotated with `@readonly`, as well as [_calculated elements_](../cds/cdl#calculated-elements), are protected against write operations. That is, if a CREATE or UPDATE operation specifies values for such fields, these values are **silently ignored**. By default, [`virtual` elements](../cds/cdl#virtual-elements) are also _calculated_. 
::: tip The same applies for fields with the [OData Annotations](../advanced/odata#annotations) `@FieldControl.ReadOnly` (static), `@Core.Computed`, or `@Core.Immutable` (the latter only on UPDATEs). ::: ### `@mandatory` Elements marked with `@mandatory` are checked for nonempty input: `null` and (trimmed) empty strings are rejected. ```cds service Sue { entity Books { key ID : UUID; title : String @mandatory; } } ``` In addition to server-side input validation as introduced above, this adds a corresponding `@FieldControl` annotation to the EDMX so that OData / Fiori clients would enforce a valid entry, thereby avoiding unnecessary request roundtrips: ```xml <Annotations Target="Sue.Books/title"> <Annotation Term="Common.FieldControl" EnumMember="Common.FieldControlType/Mandatory"/> </Annotations> ``` ### `@Common.FieldControl` {#common-fieldcontrol} The input validation for `@Common.FieldControl: #Mandatory` and `@Common.FieldControl: #ReadOnly` is done by the CAP runtimes automatically. ::: warning Custom validations are required when using static or dynamic numeric values, for example, `@Common.FieldControl: 1` or `@Common.FieldControl: integer_field`. ::: ### `@assert.unique` Annotate an entity with `@assert.unique.<name>`, specifying one or more element combinations to enforce uniqueness checks on all CREATE and UPDATE operations. For example: ```cds @assert.unique: { locale: [ parent, locale ], timeslice: [ parent, validFrom ], } entity LocalizedTemporalData { key record_ID : UUID; // technical primary key parent : Association to Data; locale : String; validFrom : Date; validTo : Date; } ``` {.indent} This annotation is applicable only to entities that result in tables in SQL databases. The value of the annotation is an array of paths referring to elements in the entity. These elements may be of a scalar type, structs, or managed associations. Individual foreign keys or unmanaged associations are not supported. If structured elements are specified, the unique constraint will contain all columns stemming from it. 
If the path points to a managed association, the unique constraint will contain all foreign key columns stemming from it. ::: tip You don't need to specify `@assert.unique` constraints for the primary key elements of an entity as these are automatically secured by a SQL `PRIMARY KEY` constraint. ::: ### `@assert.target` Annotate a [managed to-one association](../cds/cdl#managed-associations) of a CDS model entity definition with the `@assert.target` annotation to check whether the target entity referenced by the association (the reference's target) exists. In other words, use this annotation to check whether a non-null foreign key input in a table has a corresponding primary key in the associated/referenced target table. You can check whether multiple targets exist in the same transaction. For example, in the `Books` entity, you could annotate one or more managed to-one associations with the `@assert.target` annotation. However, it is assumed that dependent values were inserted before the current transaction. For example, in a deep create scenario, when creating a book, checking whether an associated author exists that was created as part of the same deep create transaction isn't supported; in this case, you will get an error. The `@assert.target` check constraint is meant to **validate user input** and not to ensure referential integrity. Therefore, only `CREATE` and `UPDATE` events are supported (`DELETE` events are not supported). To ensure that every non-null foreign key in a table has a corresponding primary key in the associated/referenced target table (ensure referential integrity), the [`@assert.integrity`](databases#database-constraints) constraint must be used instead. If the reference's target doesn't exist, an HTTP response (error message) is provided to HTTP client applications and logged to stdout in debug mode. 
The HTTP response body's content adheres to the standard OData specification for an error [response body](https://docs.oasis-open.org/odata/odata-json-format/v4.01/cs01/odata-json-format-v4.01-cs01.html#sec_ErrorResponse). #### Example Add the `@assert.target` annotation to the service definition as previously mentioned: ```cds entity Books { key ID : UUID; title : String; author : Association to Authors @assert.target; } entity Authors { key ID : UUID; name : String; books : Association to many Books on books.author = $self; } ``` **HTTP Request** — *assume that an author with the ID `"796e274a-c3de-4584-9de2-3ffd7d42d646"` doesn't exist in the database* ```http POST Books HTTP/1.1 Accept: application/json;odata.metadata=minimal Prefer: return=minimal Content-Type: application/json;charset=UTF-8 {"author_ID": "796e274a-c3de-4584-9de2-3ffd7d42d646"} ``` **HTTP Response** ```http HTTP/1.1 400 Bad Request odata-version: 4.0 content-type: application/json;odata.metadata=minimal {"error": { "@Common.numericSeverity": 4, "code": "400", "message": "Value doesn't exist", "target": "author_ID" }} ``` ::: tip In contrast to the `@assert.integrity` constraint, whose check is performed on the underlying database layer, the `@assert.target` check constraint is performed on the application service layer before the custom application handlers are called. ::: ::: warning Cross-service checks are not supported. It is expected that the associated entities are defined in the same service. ::: ::: warning The `@assert.target` check constraint relies on database locks to ensure accurate results in concurrent scenarios. However, locking is a database-specific feature, and some databases don't permit locking certain kinds of objects. On SAP HANA, for example, views with joins or unions can't be locked. Do not use `@assert.target` on such artifacts/entities. 
::: ### `@assert.format` Allows you to specify a regular expression string (in ECMA 262 format in CAP Node.js and `java.util.regex.Pattern` format in CAP Java) that all string input must match. ```cds entity Foo { bar : String @assert.format: '[a-z]ear'; } ``` ### `@assert.range` Allows you to specify `[ min, max ]` ranges for elements with ordinal types — that is, numeric or date/time types. For `enum` elements, `true` can be specified to restrict all input to the defined enum values. ```cds entity Foo { bar : Integer @assert.range: [ 0, 3 ]; boo : Decimal @assert.range: [ 2.1, 10.25 ]; car : DateTime @assert.range: ['2018-10-31', '2019-01-15']; zoo : String @assert.range enum { high; medium; low; }; } ``` #### ... with open intervals By default, specified `[min,max]` ranges are interpreted as closed intervals, that is, the performed checks are `min ≤ input ≤ max`. You can also specify open intervals by wrapping the *min* and/or *max* values in parentheses, like this: ```cds @assert.range: [(0),100] // 0 < input ≤ 100 @assert.range: [0,(100)] // 0 ≤ input < 100 @assert.range: [(0),(100)] // 0 < input < 100 ``` In addition, you can use an underscore `_` to represent *Infinity*, like this: ```cds @assert.range: [(0),_] // positive numbers only, _ means +Infinity here @assert.range: [_,(0)] // negative numbers only, _ means -Infinity here ``` > Basically, values wrapped in parentheses _`(x)`_ can be read as _excluding `x`_ for *min* or *max*. Note that the underscore `_` doesn't have to be wrapped in parentheses, as by definition no number can be equal to *Infinity*. ::: warning Support in latest runtimes Support for open intervals and infinity has been added to CAP Node.js, that is, `@sap/cds` version **8.5**. Support in CAP Java is **not yet available** but will follow soon. ::: ### `@assert.notNull` Annotate a property with `@assert.notNull: false` to have it ignored during the generic not-null check, for example, if your persistence fills it automatically. 
```cds entity Foo { bar : String not null @assert.notNull: false; } ``` ## Custom Logic As most standard tasks and use cases are covered by [generic service providers](#generic-providers), the need to add service implementation code, and hence the amount of individual boilerplate coding, is greatly reduced. The remaining cases that need custom handlers reduce to real custom logic, specific to your domain and application, such as: - Domain-specific programmatic [Validations](#input-validation) - Augmenting result sets, for example to add computed fields for frontends - Programmatic [Authorization Enforcements](/guides/security/authorization#enforcement) - Triggering follow-up actions, for example calling other services or emitting outbound events in response to inbound events - And more... In general, all the things not (yet) covered by generic handlers. **In Node.js**, the easiest way to add custom implementations for services is through equally named _.js_ files placed next to a service definition's _.cds_ file: ```sh ./srv - cat-service.cds # service definitions - cat-service.js # service implementation ... 
``` [Learn more about providing service implementations in Node.js.](../node.js/core-services#implementing-services){.learn-more} **In Java**, you'd assign `EventHandler` classes using dependency injection as follows: ```Java @Component @ServiceName("org.acme.Foo") public class FooServiceImpl implements EventHandler {...} ``` [Learn more about Event Handler classes in Java.](../java/event-handlers/#handlerclasses){.learn-more} ### Custom Event Handlers Within your custom implementations, you can register event handlers like that: ::: code-group ```js [Node.js] module.exports = function (){ this.on ('submitOrder', (req)=>{...}) //> custom actions this.on ('CREATE',`Books`, (req)=>{...}) this.before ('UPDATE',`*`, (req)=>{...}) this.after ('READ',`Books`, (books)=>{...}) } ``` ```Java @Component @ServiceName("BookshopService") public class BookshopServiceImpl implements EventHandler { @On(event="submitOrder") public void onSubmitOrder (EventContext req) {...} @On(event="CREATE", entity="Books") public void onCreateBooks (EventContext req) {...} @Before(event="UPDATE", entity="*") public void onUpdate (EventContext req) {...} @After(event="READ", entity="Books") public void onReadBooks (EventContext req) {...} } ``` ::: [Learn more about **adding event handlers in Node.js**.](../node.js/core-services#srv-on-before-after){.learn-more} [Learn more about **adding event handlers in Java**.](../java/event-handlers/#handlerclasses){.learn-more} ### Hooks: `on`, `before`, `after` In essence, event handlers are functions/method registered to be called when a certain event occurs, with the event being a custom operation, like `submitOrder`, or a CRUD operation on a certain entity, like `READ Books`; in general following this scheme: - `` , `` , `[]` → handler function CAP allows to plug in event handlers to these different hooks, that is phases during processing a certain event: - `on` handlers run _instead of_ the generic/default handlers. 
- `before` handlers run _before_ the `on` handlers
- `after` handlers run _after_ the `on` handlers, and get the result set as input

`on` handlers form an *interceptor* stack, with the topmost handler being called by the framework. The implementation of this handler is in control of whether to delegate to the default handlers down the stack or not.

`before` and `after` handlers are *listeners*: all registered listeners are invoked in parallel. If one vetoes, that is, throws an error, the request fails.

### Within Event Handlers {#handler-impls}

Event handlers all get a uniform _Request_/_Event Message_ context object as their primary argument, which, among others, provides access to the following information:

- The `event` name — that is, a CRUD method name, or a custom-defined one
- The `target` entity, if any
- The `query` in [CQN](../cds/cqn) format, for CRUD requests
- The `data` payload
- The `user`, if identified/authenticated
- The `tenant` using your SaaS application, if enabled

[Learn more about **implementing event handlers in Node.js**.](../node.js/events#cds-request){.learn-more}
[Learn more about **implementing event handlers in Java**.](../java/event-handlers/#eventcontext){.learn-more}

## Actions & Functions

In addition to common CRUD operations, you can declare domain-specific custom operations as shown below. These custom operations always need custom implementations in corresponding event handlers.

You can define actions and functions in CDS models like this:

```cds
service Sue {
  // unbound actions & functions
  function sum (x:Integer, y:Integer) returns Integer;
  function stock (id : Foo:ID) returns Integer;
  action add (x:Integer, to: Integer) returns Integer;
  // bound actions & functions
  entity Foo { key ID:Integer } actions {
    function getStock() returns Integer;
    action order (x:Integer) returns Integer;
    // bound to the collection and not a specific instance of Foo
    action customCreate (in: many $self, x: String) returns Foo;
  }
}
```

[Learn more about modeling actions and functions in CDS.](../cds/cdl#actions){.learn-more}

The differentiation between *Actions* and *Functions* as well as *bound* and *unbound* stems from the OData specifications, and in essence is as follows:

- **Actions** modify data in the server
- **Functions** retrieve data
- **Unbound** actions/functions are like plain unbound functions in JavaScript.
- **Bound** actions/functions always receive the bound entity's primary key as implicit first argument, similar to `this` pointers in Java or JavaScript. The exception is bound actions to collections, which are bound against the collection and not against a specific instance of the entity. An example use case is custom create actions for the SAP Fiori elements UI.

### Implementing Actions / Functions

In general, implement actions or functions like this:

```js
module.exports = function Sue(){
  this.on('sum', ({data:{x,y}}) => x+y)
  this.on('add', ({data:{x,to}}) => stocks[to] += x)
  this.on('stock', ({data:{id}}) => stocks[id])
  this.on('getStock','Foo', ({params:[id]}) => stocks[id])
  this.on('order','Foo', ({params:[id],data:{x}}) => stocks[id] -= x)
}
```

Event handlers for actions or functions are very similar to those for CRUD events, with the name of the action/function replacing the name of the CRUD operation. No entity is specified for unbound actions/functions.
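The way such handlers receive their arguments can be illustrated with a plain-JavaScript stand-in. This is *not* the CAP runtime: `on` and `dispatch` are hypothetical helpers that only mimic how registered handlers get the request's `data` payload and `params` key values destructured:

```js
// Illustrative stand-in for CAP's handler dispatch (not the cds runtime).
// `on` and `dispatch` are hypothetical helpers mimicking how registered
// handlers receive the request's `data` payload and `params` key values.
const stocks = { 2: 10 }
const handlers = {}
const on = (event, handler) => { handlers[event] = handler }

on('sum',   ({ data: { x, y } }) => x + y)
on('order', ({ params: [id], data: { x } }) => stocks[id] -= x)

const dispatch = (event, msg) => handlers[event](msg)

console.log(dispatch('sum',   { data: { x: 1, y: 2 } }))        // 3
console.log(dispatch('order', { params: [2], data: { x: 4 } })) // 6
```

The destructuring patterns in the handler signatures match what the real `this.on` handlers above use.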
**Method-style implementations:** In Node.js, you can alternatively implement actions and functions using conventional JavaScript methods with subclasses of `cds.Service`:

```js
module.exports = class Sue extends cds.Service {
  sum(x,y) { return x+y }
  add(x,to) { return stocks[to] += x }
  stock(id) { return stocks[id] }
  getStock(Foo,id) { return stocks[id] }
  order(Foo,id,x) { return stocks[id] -= x }
}
```

### Calling Actions / Functions

**HTTP requests** to call the actions/functions declared above look like this:

```http
GET .../sue/sum(x=1,y=2)              // unbound function
GET .../sue/stock(id=2)               // unbound function
POST .../sue/add {"x":1,"to":2}       // unbound action
GET .../sue/Foo(2)/Sue.getStock()     // bound function
POST .../sue/Foo(2)/Sue.order {"x":1} // bound action
```

> Note: You always need to add the `()` for functions, even if no arguments are required.

The OData standard specifies that bound actions/functions need to be prefixed with the service's name. In the previous example, entity `Foo` has a bound action `order`. That action must be called via `/Foo(2)/Sue.order` instead of simply `/Foo(2)/order`.

> For convenience, the CAP Node.js runtime also allows the following:
> - Call bound actions/functions without prefixing them with the service name.
> - Omit the `()` if no parameter is required.
> - Use query options to provide function parameters like `sue/sum?x=1&y=2`
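The request patterns above follow a fixed scheme, which a few lines of plain JavaScript can make explicit. This is a hypothetical helper for illustration only, not part of CAP or any library; the `service`, `entity`, `key`, and `serviceName` parameter names are our own:

```js
// Hypothetical helper composing OData request paths for actions/functions,
// following the patterns shown above (not part of CAP or any library).
function operationPath({ service, name, entity, key, serviceName }) {
  if (!entity) return `/${service}/${name}`  // unbound operation
  // bound: key in parentheses, operation prefixed with the service's name
  return `/${service}/${entity}(${key})/${serviceName}.${name}`
}

console.log(operationPath({ service: 'sue', name: 'add' }))
// "/sue/add"
console.log(operationPath({ service: 'sue', entity: 'Foo', key: 2,
                            serviceName: 'Sue', name: 'order' }))
// "/sue/Foo(2)/Sue.order"
```

Note that this sketch omits the inline `(x=1,y=2)` parameter syntax used for `GET` function calls.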
**Programmatic** usage via **generic APIs** would look like this for Node.js:

```js
const srv = await cds.connect.to('Sue')
// unbound actions/functions
await srv.send('sum',{x:1,y:2})
await srv.send('add',{x:11,to:2})
await srv.send('stock',{id:2})
// bound actions/functions
await srv.send('getStock','Foo',{id:2})
// for passing the params property, use this syntax
await srv.send({ event: 'order', entity: 'Foo', data: {x:3}, params: [2] })
```

> Note: Always pass the target entity name as second argument for bound actions/functions.

**Programmatic** usage via **typed API** — Node.js automatically equips generated service instances with specific methods matching the definitions of actions/functions found in the services' model. This allows convenient usage like this:

```js
const srv = await cds.connect.to(Sue)
// unbound actions/functions
srv.sum(1,2)
srv.add(11,2)
srv.stock(2)
// bound actions/functions
srv.getStock('Foo',2)
srv.order('Foo',2,3)
```

> Note: Even with these typed APIs, always pass the target entity name as second argument for bound actions/functions.
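The idea behind such typed methods can be sketched in plain JavaScript: derive one method per modeled operation, delegating to a generic `send` API. This is only an illustration for unbound operations, with names of our own choosing; CAP generates the real methods from the service's CDS model:

```js
// Sketch: derive typed methods from operation signatures, delegating to a
// generic `send` API. Illustrative only; CAP derives the real methods
// from the CDS model of the service.
function addTypedMethods(srv, operations) {
  for (const { name, params } of operations) {
    srv[name] = (...args) =>
      srv.send(name, Object.fromEntries(params.map((p, i) => [p, args[i]])))
  }
  return srv
}

// usage with a stub service that records what `send` receives
const calls = []
const srv = addTypedMethods(
  { send: (event, data) => { calls.push({ event, data }); return data } },
  [ { name: 'sum', params: ['x', 'y'] } ]
)
srv.sum(1, 2) // records { event: 'sum', data: { x: 1, y: 2 } }
```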

## Serving Media Data

CAP provides out-of-the-box support for serving media and other binary data.

### Annotating Media Elements

You can use the following annotations in the service model to indicate that an element in an entity contains media data.

`@Core.MediaType`
: Indicates that the element contains media data (directly or using a redirect). The value of this annotation is either a string with the contained MIME type (as shown in the first example), or a path to the element that contains the MIME type (as shown in the second example).

`@Core.IsMediaType`
: Indicates that the element contains a MIME type. The `@Core.MediaType` annotation of another element can reference this element.

`@Core.IsURL @Core.MediaType`
: Indicates that the element contains a URL pointing to the media data (redirect scenario).

`@Core.ContentDisposition.Filename`
: Indicates that the element is expected to be displayed as an attachment, that is, downloaded and saved locally. The value of this annotation is a path to the element that contains the filename (as shown in the fourth example).

`@Core.ContentDisposition.Type`
: Can be used to instruct the browser to display the element inline, even if `@Core.ContentDisposition.Filename` is specified, by setting it to `inline` (see the fifth example). If omitted, the behavior is `@Core.ContentDisposition.Type: 'attachment'`.

[Learn more about how to enable stream support in SAP Fiori elements.](https://ui5.sap.com/#/topic/b236d32d48b74304887b3dd5163548c1){.learn-more}

The following examples show these annotations in action:

1. Media data is stored in a database with a fixed media type `image/png`:

   ```cds
   entity Books { //...
     image : LargeBinary @Core.MediaType: 'image/png';
   }
   ```

2. Media data is stored in a database with a _variable_ media type:

   ```cds
   entity Books { //...
     image : LargeBinary @Core.MediaType: imageType;
     imageType : String @Core.IsMediaType;
   }
   ```

3. Media data is stored in an external repository:

   ```cds
   entity Books { //...
     imageUrl : String @Core.IsURL @Core.MediaType: imageType;
     imageType : String @Core.IsMediaType;
   }
   ```

4. Content disposition data is stored in a database with a _variable_ disposition:

   ```cds
   entity Authors { //...
     image : LargeBinary @Core.MediaType: imageType @Core.ContentDisposition.Filename: fileName;
     fileName : String;
   }
   ```

5. The image shall have the suggested file name but be displayed inline nevertheless:

   ```cds
   entity Authors { //...
     image : LargeBinary @Core.MediaType: imageType @Core.ContentDisposition.Filename: fileName @Core.ContentDisposition.Type: 'inline';
     fileName : String;
   }
   ```

[Learn more about the syntax of annotations.](../cds/cdl#annotations){.learn-more}

::: warning
If you rename the properties holding the media type or content disposition information in a projection, you need to update the annotation's value as well.
:::

### Reading Media Resources

Read media data using `GET` requests of the form `/Entity(<key>)/mediaProperty`:

```http
GET ../Books(201)/image
> Content-Type: application/octet-stream
```

> The response's `Content-Type` header is typically `application/octet-stream`.

> Although allowed by [RFC 2231](https://datatracker.ietf.org/doc/html/rfc2231), Node.js does not support line breaks in HTTP headers. Hence, make sure you remove any line breaks from your `@Core.IsMediaType` content.

Read media data with `@Core.ContentDisposition.Filename` in the model:

```http
GET ../Authors(201)/image
> Content-Disposition: 'attachment; filename="foo.jpg"'
```

> The media data is streamed automatically.

[Learn more about returning a custom streaming object (Node.js - beta).](../node.js/best-practices#custom-streaming-beta){.learn-more}

### Creating a Media Resource

As a first step, create an entity without media data using a POST request to the entity. After creating the entity, you can insert a media property using the PUT method. The MIME type is passed in the `Content-Type` header.
Here are some sample requests:

```http
POST ../Books
Content-Type: application/json

{ }
```

```http
PUT ../Books(201)/image
Content-Type: image/png
```

> The media data is streamed automatically.

### Updating Media Resources

The media data for an entity can be updated using the PUT method:

```http
PUT ../Books(201)/image
Content-Type: image/png
```

> The media data is streamed automatically.

### Deleting Media Resources

One option is to delete the complete entity, including all media data:

```http
DELETE ../Books(201)
```

Alternatively, you can delete a media data element individually:

```http
DELETE ../Books(201)/image
```

### Using External Resources

The following are requests and responses for the entity containing redirected media data from the third example, "Media data is stored in an external repository".

> This format is used by OData-Version: 4.0. To be changed in OData-Version: 4.01.

```http
GET ../Books(201)
> {
    ...
    image@odata.mediaReadLink: "http://other-server/image.jpeg",
    image@odata.mediaContentType: "image/jpeg",
    imageType: "image/jpeg"
  }
```

### Conventions & Limitations

#### General Conventions

- Binary data in payloads must be a Base64-encoded string.
- Binary data in URLs must have the format `binary'<Base64-encoded value>'`. For example:

  ```http
  GET $filter=ID eq binary'Q0FQIE5vZGUuanM='
  ```

#### Node.js Runtime Conventions and Limitations

- The usage of binary data in some advanced constructs like the `$apply` query option and `/any()` might be limited.
- On SQLite, binary strings are stored as plain strings, whereas a buffer is stored as binary data. As a result, if a binary string is used in a CDS query against data stored as binary, this doesn't work.
- Note that SQLite doesn't support streaming. This means that `LargeBinary` fields are read as a whole (not in chunks) and stored in memory, which can impact performance.
- The SAP HANA Database Client for Node.js (HDB) and the SAP HANA Client for Node.js (`@sap/hana-client`) packages handle binary data differently. For example, HDB automatically converts binary strings into binary data, whereas the SAP HANA Client doesn't.
- In the Node.js runtime, all binary strings are converted into binary data according to SAP HANA property types. To disable this default behavior, you can set `cds.hana.base64_to_buffer: false` in your configuration.

# Best Practices

## Single-Purposed Services {.best-practice}

We strongly recommend designing your services for single use cases. Services in CAP are cheap, so there's no need to save on them.

#### **DON'T:**{.bad} Single Services Exposing All Entities 1:1

The anti-pattern to this are single services exposing all underlying entities of your app in a 1:1 fashion. While that may save you some thought in the beginning, it's likely to result in lots of headaches in the long run:

* They open huge entry doors to your clients, with only a few restrictions
* Individual use cases aren't reflected in your API design
* You have to add numerous checks on a per-request basis...
* ... which have to reflect the actual use cases in complex and expensive evaluations

#### **DO:**{.good} One Service Per Use Case

For example, let's assume that we have a domain model defining *Books* and *Authors* more or less as above, and then we add *Orders*.
We could define the following services:

```cds
using { my.domain as my } from './db/schema';
```

```cds
/** Serves end users browsing books and placing orders */
service CatalogService {
  @readonly entity Books as select from my.Books {
    ID, title, author.name as author
  };
  @requires: 'authenticated-user'
  @insertonly entity Orders as projection on my.Orders;
}
```

```cds
/** Serves registered users managing their account and their orders */
@requires: 'authenticated-user'
service UsersService {
  @restrict: [{ grant: 'READ', where: 'buyer = $user' }] // limit to own ones
  @readonly entity Orders as projection on my.Orders;
  action cancelOrder ( ID:Orders.ID, reason:String );
}
```

```cds
/** Serves administrators managing everything */
@requires: 'authenticated-user'
service AdminService {
  entity Books as projection on my.Books;
  entity Authors as projection on my.Authors;
  entity Orders as projection on my.Orders;
}
```

These services serve different use cases and are tailored for each. Note, for example, that we intentionally don't expose the `Authors` entity to end users.

## Late-Cut Microservices {.best-practice}

Compared to microservices, CAP services are 'nano'. As shown in the previous sections, you should design your application as a set of loosely coupled, single-purposed services, which can all be served embedded in a single server process at first (that is, a monolith).

Yet, given such loosely coupled services, and enabled by CAP's uniform way to define and consume services, you can decide later on to separate, deploy, and run your services as separate microservices, even without changing your models or code.

This flexibility allows you to, again, focus on solving your domain problem first, and avoid the efforts and costs of premature microservice design and DevOps overhead, at least in the early phases of development.
# Consuming Services

## Introduction

If you want to use data from other services, or if you want to split your application into multiple microservices, you need a connection between those services. We call them **remote services**. As everything in CAP is a service, remote services are modeled the same way as internal services — using CDS.

CAP supports service consumption with dedicated APIs to [import](#import-api) service definitions, [query](#execute-queries) remote services, [mash up](#building-mashups) services, and [work locally](#local-mocking) as much as possible.

### Feature Overview

For outbound remote service consumption, the following features are supported:

+ OData V4
+ OData V2 (deprecated)
+ [Querying API](#querying-api-features)
+ [Projections on remote services](#supported-projection-features)

### Tutorials and Examples

| Example | Description |
| ------- | ----------- |
| [Capire Bookshop (Fiori)](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori) | Example, Node.js, CAP-to-CAP |
| [Example Application (Node.js)](https://github.com/SAP-samples/cloud-cap-risk-management/tree/ext-service-s4hc-suppliers-ui) | Complete application from the end-to-end tutorial |
| [Example Application (Java)](https://github.com/SAP-samples/cloud-cap-risk-management/tree/ext-service-s4hc-suppliers-ui-java) | Complete application from the end-to-end tutorial |

### Define Scenario

Before you start your implementation, you should define your scenario. Answering the following questions gets you started:

+ What services (remote/CAP) are involved?
+ How do they interact?
+ What needs to be displayed on the UI?
Once you have all your answers and know your scenario, read on about [external service APIs](#external-service-api), getting an API definition from [the SAP Business Accelerator Hub](#from-api-hub) or [from a CAP project](#from-cap-service), and [importing an API definition](#import-api) to your project.

#### Sample Scenario from End-to-End Tutorial

![A graphic showing the flow for one possible scenario. A user can either view risks or view the suppliers. The suppliers master data is already available from a system and is consumed in an application that enables the user to add the risks. From the maintained risks the user can get information about the supplier connected to a risk. From the supplier view, it's also possible to get details about a risk that is associated with a supplier. The user can block/unblock suppliers from the risk view.](./assets/using-services/risk-mgmt.drawio.svg){}

::: info _User Story_
A company wants to ensure that goods are only sourced from suppliers with acceptable risks. There shall be a software system that allows a clerk to maintain risks for suppliers and their mitigations. The system shall block the supplier used if risks can't be mitigated.
:::

The application is an extension for SAP S/4HANA. It deals with _risks_ and _mitigations_ that are local entities in the application, and _suppliers_ that are stored in SAP S/4HANA Cloud. The application helps to reduce risks associated with suppliers by automatically blocking suppliers with a high risk using a [remote API call](#execute-queries).

##### Integrate

The user picks a supplier from the list. That list is coming [from the remote system and is exposed by the CAP application](#expose-remote-services). Then the user does a risk assessment. Additional supplier data, like name and blocked status, should be displayed on the UI as well, by [integrating the remote supplier service into the local risk service](#integrate-remote-into-local-services).

##### Extend

It should also be possible to search for suppliers and show the associated risks, by extending the remote supplier service [with the local risk service](#extend-a-remote-by-a-local-service) and its risks.

## Get and Import an External Service API { #external-service-api }

To communicate with remote services, CAP needs to know their definitions. Having the definitions in your project allows you to mock them during design time. These definitions are usually made available by the service provider. As they aren't defined within your application but imported from outside, they're called *external* service APIs in CAP. Service APIs can be provided in different formats. Currently, *EDMX* files for OData V2 and V4 are supported.

### From SAP Business Accelerator Hub { #from-api-hub}

The [SAP Business Accelerator Hub](https://api.sap.com/) provides many relevant APIs from SAP. You can download API specifications in different formats. If available, use the EDMX format, which describes OData interfaces.

To download the [Business Partner API (A2X) from SAP S/4HANA Cloud](https://api.sap.com/api/API_BUSINESS_PARTNER/overview), go to section **API Resources**, select **API Specification**, and download the **EDMX** file.

[Get more details in the end-to-end tutorial.](https://developers.sap.com/tutorials/btp-app-ext-service-add-consumption.html#07f89fdd-82b2-4987-aa86-070f1d836156){.learn-more}

### For a Remote CAP Service { #from-cap-service}

We recommend using EDMX as the exchange format.
Export a service API to EDMX:

::: code-group

```sh [Mac/Linux]
cds compile srv -s OrdersService -2 edmx > OrdersService.edmx
```

```cmd [Windows]
cds compile srv -s OrdersService -2 edmx > OrdersService.edmx
```

```powershell [Powershell]
cds compile srv -s OrdersService -2 edmx -o dest/
```

:::

[You can try it with the orders sample in cap/samples.](https://github.com/SAP-samples/cloud-cap-samples/tree/master/orders){.learn-more}

By default, CAP works with OData V4, and the EDMX export is in this protocol version as well. The `cds compile` command offers options for other OData versions and flavors; call `cds help compile` for more information.

::: warning Don't just copy the CDS file for a remote CAP service
Simply copying CDS files from a different application comes with the following issues:
- The effective service API depends on the used protocol.
- CDS files often use includes, which can't be resolved anymore.
- CAP creates unneeded database tables and views for all entities in the file.
:::

### Import API Definition { #import-api}

Import the API to your project using `cds import`:

```sh
cds import <input file> --as cds
```

> `<input file>` can be an EDMX (OData V2, OData V4), OpenAPI, or AsyncAPI file.

| Option | Description |
| ------ | ----------- |
| `--as cds` | The import creates a CDS file (for example _API_BUSINESS_PARTNER.cds_) instead of a CSN file. |

This adds the API in CDS format to the _srv/external_ folder and also copies the input file into that folder.

Further, it adds the API as an external service to your _package.json_. You use this declaration later to connect to the remote service [using a destination](#use-destinations-with-node-js).

```json
"cds": {
  "requires": {
    "API_BUSINESS_PARTNER": {
      "kind": "odata-v2",
      "model": "srv/external/API_BUSINESS_PARTNER"
    }
  }
}
```

::: details Options and flags in _.cdsrc.json_
Alternatively, you can set the options and flags for `cds import` in your _.cdsrc.json_:

```json
{
  "import": {
    "as": "cds",
    "force": true,
    "include_namespaces": "sap,c4c"
  }
}
```

Now run `cds import <input file>`.

- `--as` only supports these formats: "csn", "cds", and "json"
- `--force` is applicable only in combination with the `--as` option. By default, the `--force` flag is set to false.

> If set to true, existing CSN/CDS files from previous imports are overwritten.
:::

When importing the specification files, the `kind` is set according to the following mapping:

| Imported Format | Used `kind` |
|-----------------|--------------------------------|
| OData V2 | `odata-v2` |
| OData V4 | `odata` (alias for `odata-v4`) |
| OpenAPI | `rest` |
| AsyncAPI | `odata` |

[Learn more about type mappings from OData to CDS and vice versa.](../tools/apis/cds-import#odata-type-mappings){.learn-more}

::: tip
Always use OData V4 (`odata`) when calling another CAP service.
:::

::: warning Limitations
Not all features of OData, OpenAPI, or AsyncAPI are supported in CAP, which may lead to the rejection of the imported model by the CDS compiler, or may result in a different API when rendered by CAP. Known limitations are cyclic type references and inheritance.
:::
You need to configure remote services in Spring Boot's _application.yaml_:

::: code-group

```yaml [srv/src/main/resources/application.yaml]
spring:
  config.activate.on-profile: cloud
cds:
  remote.services:
    API_BUSINESS_PARTNER:
      type: "odata-v2"
```

:::

To work with remote services, add the following dependency to your Maven project:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-remote-odata</artifactId>
  <scope>runtime</scope>
</dependency>
```

[Learn about all `cds.remote.services` configuration possibilities.](../java/developing-applications/properties#cds-remote-services){.learn-more}
## Local Mocking

When developing your application, you can mock the remote service.

### Add Mock Data

As for any other CAP service, you can add mock data. The CSV file needs to be added to the _srv/external/data_ folder. {.node}

The CSV file needs to be added to the _db/data_ folder. {.java}

::: code-group

```csv [API_BUSINESS_PARTNER-A_BusinessPartner.csv]
BusinessPartner;BusinessPartnerFullName;BusinessPartnerIsBlocked
1004155;Williams Electric Drives;false
1004161;Smith Batteries Ltd;false
1004100;Johnson Automotive Supplies;true
```

:::

For Java, make sure to add the `--with-mocks` option to the `cds deploy` command used to generate the `schema.sql` in `srv/pom.xml`. This ensures that tables for the mocked remote entities are created in the database. {.java}

[Find this source in the end-to-end tutorial.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui-java/srv/external/data/API_BUSINESS_PARTNER-A_BusinessPartner.csv){.learn-more}
[Get more details in the end-to-end tutorial.](https://developers.sap.com/tutorials/btp-app-ext-service-add-consumption.html#12ff20a2-e988-465f-a508-f527c7fc0c29){.learn-more}

### Run Local with Mocks

Start your project with the imported service definition:

```sh
cds watch
```

The service is automatically mocked, as you can see in the log output on server start:

```log{17}
...
[cds] - model loaded from 8 file(s):
  ...
  ./srv/external/API_BUSINESS_PARTNER.cds
  ...
[cds] - connect using bindings from: { registry: '~/.cds-services.json' }
[cds] - connect to db > sqlite { database: ':memory:' }
 > filling sap.ui.riskmanagement.Mitigations from ./db/data/sap.ui.riskmanagement-Mitigations.csv
 > filling sap.ui.riskmanagement.Risks from ./db/data/sap.ui.riskmanagement-Risks.csv
 > filling API_BUSINESS_PARTNER.A_BusinessPartner from ./srv/external/data/API_BUSINESS_PARTNER-A_BusinessPartner.csv
/> successfully deployed to sqlite in-memory db

[cds] - serving RiskService { at: '/service/risk', impl: './srv/risk-service.js' }
[cds] - mocking API_BUSINESS_PARTNER { at: '/api-business-partner' } // [!code focus]

[cds] - launched in: 1.104s
[cds] - server listening on { url: 'http://localhost:4004' }
[ terminate with ^C ]
```

```sh
mvn spring-boot:run
```
### Mock Associations

You can't get data from associations of a mocked service out of the box. The associations of imported services lack the information on how to look up the associated records. This missing relation is expressed with an empty key definition at the end of the association declaration in the CDS model (`{ }`).

::: code-group

```cds{9} [srv/external/API_BUSINESS_PARTNER.cds]
entity API_BUSINESS_PARTNER.A_BusinessPartner {
  key BusinessPartner : LargeString;
  BusinessPartnerFullName : LargeString;
  BusinessPartnerType : LargeString;
  ...
  to_BusinessPartnerAddress : Association to many API_BUSINESS_PARTNER.A_BusinessPartnerAddress { }; // [!code focus]
};
entity API_BUSINESS_PARTNER.A_BusinessPartnerAddress {
  key BusinessPartner : String(10);
  key AddressID : String(10);
  ...
};
```

:::

To mock an association, you have to modify [the imported file](#import-api). Before doing any modifications, create a local copy and add it to your source code management system:

```sh
cp srv/external/API_BUSINESS_PARTNER.cds srv/external/API_BUSINESS_PARTNER-orig.cds
git add srv/external/API_BUSINESS_PARTNER-orig.cds
...
```

Import the CDS file again, just using a different name:

```sh
cds import ~/Downloads/API_BUSINESS_PARTNER.edmx --keep-namespace \
  --as cds --out srv/external/API_BUSINESS_PARTNER-new.cds
```

Add an `on` condition to express the relation:

::: code-group

```cds [srv/external/API_BUSINESS_PARTNER-new.cds]
entity API_BUSINESS_PARTNER.A_BusinessPartner {
  // ...
  to_BusinessPartnerAddress : Association to many API_BUSINESS_PARTNER.A_BusinessPartnerAddress
    on to_BusinessPartnerAddress.BusinessPartner = BusinessPartner;
};
```

:::

Don't add any keys or remove empty keys, as that would change it to a managed association. Added fields aren't known in the service and lead to runtime errors.

Use a 3-way merge tool to take over your modifications, check the result, and overwrite the previous unmodified file with the newly imported file:

```sh
git merge-file API_BUSINESS_PARTNER.cds \
  API_BUSINESS_PARTNER-orig.cds \
  API_BUSINESS_PARTNER-new.cds
mv API_BUSINESS_PARTNER-new.cds API_BUSINESS_PARTNER-orig.cds
```

To prevent accidental loss of modifications, the `cds import --as cds` command refuses to overwrite modified files, based on a "checksum" that is included in the file.

### Mock Remote Service as OData Service (Node.js) {.node}

As shown previously, you can run one process including a mocked external service. However, this mock doesn't behave like a real external service: the communication happens in-process and doesn't use HTTP or OData. For more realistic testing, let the mocked service run in a separate process.

First, install the required packages:

```sh
npm add @sap-cloud-sdk/http-client@3.x @sap-cloud-sdk/connectivity@3.x @sap-cloud-sdk/resilience@3.x
```

Then start the CAP application with the mocked remote service only:

```sh
cds mock API_BUSINESS_PARTNER
```

Once the startup has completed, run `cds watch` in the same project from a **different** terminal:

```sh
cds watch
```

CAP tracks locally running services. The mocked service `API_BUSINESS_PARTNER` is registered in the file _~/.cds-services.json_. `cds watch` searches for running services in that file and connects to them.

CAP Node.js only supports the *OData V4* protocol, and so does the mocked service. There might still be some differences from the real remote service if that uses a different protocol, but it's much closer to it than using only one instance. In the console output, you can also easily see how the communication between the two processes happens.

### Mock Remote Service as OData Service (Java) {.java}

You configure CAP to do OData and HTTP requests for a mocked service instead of doing it in-process.
Configure a new Spring Boot profile (for example `mocked`): ::: code-group ```yaml [srv/src/main/resources/application.yaml] spring: config.activate.on-profile: mocked cds: application.services: - name: API_BUSINESS_PARTNER-mocked model: API_BUSINESS_PARTNER serve.path: API_BUSINESS_PARTNER remote.services: API_BUSINESS_PARTNER: destination: name: "s4-business-partner-api-mocked" ``` ::: The profile exposes the mocked service as OData service and defines a destination to access the service. The destination just points to the CAP application itself. You need to implement some Java code for this: ::: code-group ```java [DestinationConfiguration.java] @EventListener void applicationReady(ApplicationReadyEvent ready) { int port = Integer.valueOf(environment.getProperty("local.server.port")); DefaultHttpDestination mockDestination = DefaultHttpDestination .builder("http://localhost:" + port) .name("s4-business-partner-api-mocked").build(); DefaultDestinationLoader loader = new DefaultDestinationLoader(); loader.registerDestination(mockDestination); DestinationAccessor.prependDestinationLoader(loader); } ``` ::: Now, you just need to run the application with the new profile: ```sh mvn spring-boot:run -Dspring-boot.run.profiles=default,mocked ``` When sending a request to your CAP application, for example the `Suppliers` entity, it is transformed to the request for the mocked remote service and requested from itself as a OData request. Therefore, you'll see two HTTP requests in your CAP application's log. For example: [http://localhost:8080/service/risk/Suppliers](http://localhost:8080/service/risk/Suppliers) ```log 2021-09-21 15:18:44.870 DEBUG 34645 — [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : GET "/service/risk/Suppliers", parameters={} ... 
2021-09-21 15:18:45.292 DEBUG 34645 — [nio-8080-exec-2] o.s.web.servlet.DispatcherServlet : GET "/API_BUSINESS_PARTNER/A_BusinessPartner?$select=BusinessPartner,BusinessPartnerFullName,BusinessPartnerIsBlocked&$top=1000&$skip=0&$orderby=BusinessPartner%20asc&sap-language=de&sap-valid-at=2021-09-21T13:18:45.211722Z", parameters={masked} ... 2021-09-21 15:18:45.474 DEBUG 34645 — [nio-8080-exec-2] o.s.web.servlet.DispatcherServlet : Completed 200 OK 2021-09-21 15:18:45.519 DEBUG 34645 — [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed 200 OK ``` [Try out the example application.](https://github.com/SAP-samples/cloud-cap-risk-management/tree/ext-service-s4hc-suppliers-ui-java){.learn-more} ## Execute Queries {#execute-queries} You can send requests to remote services using CAP's powerful querying API. ### Execute Queries with Node.js {.node} Connect to the service before sending a request, as usual in CAP: ```js const bupa = await cds.connect.to('API_BUSINESS_PARTNER'); ``` Then execute your queries using the [Querying API](../node.js/core-services#srv-run-query): ```js const { A_BusinessPartner } = bupa.entities; const result = await bupa.run(SELECT(A_BusinessPartner).limit(100)); ``` We recommend limiting the result set to avoid downloading large data sets in a single request. You can `limit` the result as in the example: `.limit(100)`. Many features of the querying API are supported for OData services.
For example, you can resolve associations like this: ```js const { A_BusinessPartner } = bupa.entities; const result = await bupa.run(SELECT.from(A_BusinessPartner, bp => { bp('BusinessPartner'), bp.to_BusinessPartnerAddress(addresses => { addresses('*') }) }).limit(100)); ``` [Learn more about querying API examples.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui/test/odata-examples.js){.learn-more} [Learn more about supported querying API features.](#querying-api-features){.learn-more} ### Execute Queries with Java {.java} You can use dependency injection to get access to the remote service: ```java @Autowired @Qualifier(ApiBusinessPartner_.CDS_NAME) CqnService bupa; ``` Then execute your queries using the [Querying API](../java/working-with-cql/query-execution): ```java CqnSelect select = Select.from(ABusinessPartner_.class).limit(100); List<ABusinessPartner> businessPartner = bupa.run(select).listOf(ABusinessPartner.class); ``` [Learn more about querying API examples.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui/test/odata-examples.js){.learn-more} [Learn more about supported querying API features.](#querying-api-features){.learn-more} ### Model Projections External service definitions, like [generated CDS or CSN files during import](#import-api), can be used like any other CDS definition, but they **don't** generate database tables and views unless they are mocked. It's best practice to use your own "interface" to the external service and define the relevant fields in a projection in your namespace. Your implementation is then independent of the remote service implementation, and you request only the information that you require.
```cds using { API_BUSINESS_PARTNER as bupa } from '../srv/external/API_BUSINESS_PARTNER'; entity Suppliers as projection on bupa.A_BusinessPartner { key BusinessPartner as ID, BusinessPartnerFullName as fullName, BusinessPartnerIsBlocked as isBlocked, } ``` As the example shows, you can use field aliases as well. [Learn more about supported features for projections.](#supported-projection-features){.learn-more} ### Execute Queries on Projections to a Remote Service {.node} Connect to the service before sending a request, as usual in CAP: ```js const bupa = await cds.connect.to('API_BUSINESS_PARTNER'); ``` Then execute your queries: ```js const suppliers = await bupa.run(SELECT(Suppliers).where({ID})); ``` CAP resolves projections and does the required mapping, just as it does for databases. A brief explanation of what CAP does in the previous query: + It resolves the `Suppliers` projection to the external service interface `API_BUSINESS_PARTNER.A_BusinessPartner`. + It maps the **where** condition for field `ID` to the `BusinessPartner` field of `A_BusinessPartner`. + It maps the result back to the `Suppliers` projection, so that values of the `BusinessPartner` field are returned as `ID`. This makes it convenient to work with external services. ### Building Custom Requests with Node.js {.node} If you can't use the querying API, you can craft your own HTTP requests using `send`: ```js bupa.send({ method: 'PATCH', path: A_BusinessPartner, data: { BusinessPartner: 1004155, BusinessPartnerIsBlocked: true } }) ``` [Learn more about the `send` API.](../node.js/core-services#srv-send-request){.learn-more} ### Building Custom Requests with Java {.java} For Java, you can use the `HttpClient` API to implement your custom requests. The API is enhanced by the SAP Cloud SDK to support destinations.
[Learn more about using the HttpClient Accessor.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/sdk-connectivity-http-client){.learn-more} [Learn more about using destinations.](#use-destinations-with-java){.learn-more} ## Integrate and Extend {#integrate-and-extend} By creating projections on remote service entities and using associations, you can create services that combine data from your local service and remote services. What you need to do depends on [the scenarios](#sample-scenario-from-end-to-end-tutorial) and on how your remote services should be integrated into, as well as extended by, your local services. ### Expose Remote Services To expose a remote service entity, you add a projection on it to your CAP service: ```cds using { API_BUSINESS_PARTNER as bupa } from '../srv/external/API_BUSINESS_PARTNER'; extend service RiskService with { entity BusinessPartners as projection on bupa.A_BusinessPartner; } ``` CAP automatically tries to delegate queries to a database entity, which doesn't exist, as you're pointing to an external service. That behavior would produce an error like this: ```log 500 SQLITE_ERROR: no such table: RiskService_BusinessPartners in: SELECT BusinessPartner, Customer, Supplier, AcademicTitle, AuthorizationGroup, BusinessPartnerCategory, BusinessPartnerFullName, BusinessPartnerGrouping, BusinessPartnerName, BusinessPartnerUUID, CorrespondenceLanguage, CreatedByUser, CreationDate, (...) FROM RiskService_BusinessPartners ALIAS_1 ORDER BY BusinessPartner COLLATE NOCASE ASC LIMIT 11 ``` To avoid this error, you need to handle such projections yourself: write a handler function that delegates the incoming query to the remote service and runs it there.
::: code-group ```js [Node.js] module.exports = cds.service.impl(async function() { const bupa = await cds.connect.to('API_BUSINESS_PARTNER'); this.on('READ', 'BusinessPartners', req => { return bupa.run(req.query); }); }); ``` ```java [Java] @Component @ServiceName(RiskService_.CDS_NAME) public class RiskServiceHandler implements EventHandler { @Autowired @Qualifier(ApiBusinessPartner_.CDS_NAME) CqnService bupa; @On(entity = BusinessPartners.CDS_NAME) Result readSuppliers(CdsReadEventContext context) { return bupa.run(context.getCqn()); } } ``` ::: [For Node.js, get more details in the end-to-end tutorial.](https://developers.sap.com/tutorials/btp-app-ext-service-add-consumption.html#0a5ed8cc-d0fa-4a52-bb56-9c864cd66e71){.learn-more} ::: warning If you receive `404` errors, check whether the request contains fields that don't exist in the service and that start with the name of an association. `cds import` adds an empty keys declaration (`{ }`) to each association. Without this declaration, foreign keys that don't exist in the real service are generated in the runtime model. To solve this problem, you need to reimport the external service definition using `cds import`. ::: This works when accessing the entity directly. Additional work is required to support [navigation](#handle-navigations-across-local-and-remote-entities) and [expands](#handle-expands-across-local-and-remote-entities) from or to a remote entity. Instead of exposing the remote service's entity unchanged, you can [model your own projection](#model-projections). For example, you can define a subset of fields and change their names. ::: tip CAP does the magic that maps the incoming query, according to your projections, to the remote service and maps the result back.
::: ```cds using { API_BUSINESS_PARTNER as bupa } from '../srv/external/API_BUSINESS_PARTNER'; extend service RiskService with { entity Suppliers as projection on bupa.A_BusinessPartner { key BusinessPartner as ID, BusinessPartnerFullName as fullName, BusinessPartnerIsBlocked as isBlocked } } ``` ```js module.exports = cds.service.impl(async function() { const bupa = await cds.connect.to('API_BUSINESS_PARTNER'); this.on('READ', 'Suppliers', req => { return bupa.run(req.query); }); }); ``` [Learn more about queries on projections to remote services.](#execute-queries-on-projections-to-a-remote-service){.learn-more} ### Expose Remote Services with Associations It's possible to expose associations of a remote service entity. You can adjust the [projection for the association target](#model-projections) and change the name of the association: ```cds using { API_BUSINESS_PARTNER as bupa } from '../srv/external/API_BUSINESS_PARTNER'; extend service RiskService with { entity Suppliers as projection on bupa.A_BusinessPartner { key BusinessPartner as ID, BusinessPartnerFullName as fullName, BusinessPartnerIsBlocked as isBlocked, to_BusinessPartnerAddress as addresses: redirected to SupplierAddresses } entity SupplierAddresses as projection on bupa.A_BusinessPartnerAddress { BusinessPartner as bupaID, AddressID as ID, CityName as city, StreetName as street, County as county } } ``` As long as the association is only resolved using expands (for example `.../risk/Suppliers?$expand=addresses`), a handler for the __source entity__ is sufficient: ```js this.on('READ', 'Suppliers', req => { return bupa.run(req.query); }); ``` If you need to resolve the association using navigation or request it independently of the source entity, add a handler for the __target entity__ as well: ```js this.on('READ', 'SupplierAddresses', req => { return bupa.run(req.query); }); ``` As usual, you can combine the two handlers into one that matches both entities: ```js this.on('READ', ['Suppliers',
'SupplierAddresses'], req => { return bupa.run(req.query); }); ``` ### Mashing up with Remote Services You can combine local and remote services using associations. These associations need manual handling because of their different data sources. #### Integrate Remote into Local Services Use managed associations from local entities to remote entities: ```cds @path: 'service/risk' service RiskService { entity Risks : managed { key ID : UUID @(Core.Computed : true); title : String(100); prio : String(5); supplier : Association to Suppliers; } entity Suppliers as projection on bupa.A_BusinessPartner { key BusinessPartner as ID, BusinessPartnerFullName as fullName, BusinessPartnerIsBlocked as isBlocked, }; } ``` #### Extend a Remote by a Local Service { #extend-a-remote-by-a-local-service} You can augment a projection with a new association if the required fields for the `on` condition are present in the remote service. The use of managed associations isn't possible, because this would require creating new fields in the remote service. ```cds entity Suppliers as projection on bupa.A_BusinessPartner { key BusinessPartner as ID, BusinessPartnerFullName as fullName, BusinessPartnerIsBlocked as isBlocked, risks : Association to many Risks on risks.supplier.ID = ID, }; ``` ### Handle Mashups with Remote Services { #building-mashups} Depending on how the service is accessed, you need to support direct requests, navigation, or expands. CAP resolves those three request types only for service entities that are served from the database. When crossing the boundary between database-sourced and remote-sourced entities, you need to take care of those requests yourself. The list of [required implementations for mashups](#required-implementations-for-mashups) explains the different combinations. #### Handle Expands Across Local and Remote Entities Expands add data from associated entities to the response.
For example, for a risk, you want to display the supplier's name instead of just the technical ID. But this property is part of the (remote) supplier and not part of the (local) risk. To handle expands, you need to add a handler for the main entity: 1. Check if a relevant `$expand` column is present. 2. Remove the `$expand` column from the request. 3. Get the data for the request. 4. Execute a new request for the expand. 5. Add the expand data to the returned data from the request. Example of a CQN request with an expand: ```json { "from": { "ref": [ "RiskService.Suppliers" ] }, "columns": [ { "ref": [ "ID" ] }, { "ref": [ "fullName" ] }, { "ref": [ "isBlocked" ] }, { "ref": [ "risks" ], "expand": [ { "ref": [ "ID" ] }, { "ref": [ "title" ] }, { "ref": [ "descr" ] }, { "ref": [ "supplier_ID" ] } ] } ] } ``` [See an example of how to handle expands in Node.js.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui/srv/risk-service.js){.node .learn-more} [See an example of how to handle expands in Java.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui-java/srv/src/main/java/com/sap/cap/riskmanagement/handler/RiskServiceHandler.java){.java .learn-more} Expands across local and remote entities can cause stability and performance issues. For a list of items, you need to collect all IDs and send them to the database or the remote system. This list can become long and may exceed the limits of a URL string in the case of OData. Do you really need expands for a list of items? ```http GET /service/risk/Risks?$expand=supplier ``` Or is it sufficient for single items? ```http GET /service/risk/Risks(545A3CF9-84CF-46C8-93DC-E29F0F2BC6BE)?$expand=supplier ``` ::: warning Keep performance in mind Consider rejecting expands if they're requested on a list of items.
::: #### Handle Navigations Across Local and Remote Entities Navigations allow you to address items via an association from a different entity: ```http GET /service/risks/Risks(20466922-7d57-4e76-b14c-e53fd97dcb11)/supplier ``` The CQN consists of a `from` condition with two entries in `ref`. The first entry selects the record of the source entity of the navigation. The second entry names the association to navigate to the target entity. ```json { "from": { "ref": [ { "id": "RiskService.Risks", "where": [ { "ref": [ "ID" ] }, "=", { "val": "20466922-7d57-4e76-b14c-e53fd97dcb11" } ]}, "supplier" ] }, "columns": [ { "ref": [ "ID" ] }, { "ref": [ "fullName" ] }, { "ref": [ "isBlocked" ] } ], "one": true } ``` To handle navigations, you need to check in your code if the `from.ref` object contains two elements. Be aware that for navigations, the handler of the **target** entity is called. If the association's `on` condition matches the key of the source entity, you can directly select the target entity using the key's value. You find the value in the `where` block of the first `from.ref` entry. Otherwise, you need to select the source item using that `where` block and take the required fields for the association's `on` condition from that result. [See an example of how to handle navigations in Node.js.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui/srv/risk-service.js){.learn-more .node} [See an example of how to handle navigations in Java.](https://github.com/SAP-samples/cloud-cap-risk-management/blob/ext-service-s4hc-suppliers-ui-java/srv/src/main/java/com/sap/cap/riskmanagement/handler/RiskServiceHandler.java){.learn-more .java} ### Limitations and Feature Matrix #### Required Implementations for Mashups { #required-implementations-for-mashups} You need additional logic if remote entities are involved. The following table shows what is required. "Local" is a database entity or a projection on a database entity.
| **Request** | **Example** | **Implementation** |
| --------------------------------------------------------------------- | ---------------------------------------- | ----------------------------------------------------------------- |
| Local (including navigations and expands) | `/service/risks/Risks` | Handled by CAP |
| Local: Expand remote | `/service/risks/Risks?$expand=supplier` | Delegate query w/o expand to local service and implement expand. |
| Local: Navigate to remote | `/service/risks/Risks(...)/supplier` | Implement navigation and delegate query for target to remote service. |
| Remote (including navigations and expands to the same remote service) | `/service/risks/Suppliers` | Delegate query to remote service. |
| Remote: Expand local | `/service/risks/Suppliers?$expand=risks` | Delegate query w/o expand to remote service and implement expand. |
| Remote: Navigate to local | `/service/risks/Suppliers(...)/risks` | Implement navigation and delegate query for target to local service. |

#### Transient Access vs. Replication

> This chapter shows only techniques for transient access.

The following matrix can help you find the best approach for your scenario:

| **Feature** | **Transient Access** | **Replication** |
|-------------------------------------------------------|-----------------------|-----------------------------------|
| Filtering on local **or** remote fields <sup>1</sup> | Possible | Possible |
| Filtering on local **and** remote fields <sup>2</sup> | Not possible | Possible |
| Relationship: Uni-/Bidirectional associations | Possible | Possible |
| Relationship: Flatten | Not possible | Possible |
| Evaluate user permissions in remote system | Possible | Requires workarounds <sup>3</sup> |
| Data freshness | Live data | Outdated until replicated |
| Performance | Degraded <sup>4</sup> | Best |
> <sup>1</sup> It's **not required** to filter on both local and remote fields in the same request.
> <sup>2</sup> It's **required** to filter on both local and remote fields in the same request.
> <sup>3</sup> Because replicated data is accessed, the user permission checks of the remote system aren't evaluated.
> <sup>4</sup> Depends on the connectivity and performance of the remote system.
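The five expand-handling steps from _Handle Expands Across Local and Remote Entities_ can be sketched framework-free. In this hypothetical sketch, `runLocal` stands in for the local data source (the handler's `next()` call in CAP) and `runRemote` for a query against the remote service; the `supplier`/`supplier_ID` names follow the examples in this guide:

```javascript
// Hedged sketch of the five expand-handling steps; not CAP's actual API.
// `query` is a CQN-like object, runLocal/runRemote are async stand-ins.
async function readRisksWithSupplier(query, runLocal, runRemote) {
  const columns = query.SELECT.columns ?? []
  // 1. Check if a relevant $expand column is present
  const i = columns.findIndex(c => c.expand && c.ref?.[0] === 'supplier')
  if (i < 0) return runLocal(query)
  // 2. Remove the $expand column from the request
  columns.splice(i, 1)
  // Keep the foreign key so the results can be joined later
  if (!columns.some(c => c.ref?.[0] === 'supplier_ID'))
    columns.push({ ref: ['supplier_ID'] })
  // 3. Get the data for the request
  const risks = await runLocal(query)
  // 4. Execute a new request for the expand, restricted to the needed IDs
  const ids = [...new Set(risks.map(r => r.supplier_ID))]
  const suppliers = await runRemote(ids)
  // 5. Add the expand data to the returned data
  const byID = Object.fromEntries(suppliers.map(s => [s.ID, s]))
  return risks.map(r => ({ ...r, supplier: byID[r.supplier_ID] ?? null }))
}
```

In a real handler, step 4 would be a query like `bupa.run(SELECT.from(Suppliers).where({ ID: ids }))`. Note how the ID list grows with the page size of step 3, which is one reason to reject expands on long lists.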
## Connect and Deploy {#connect-and-deploy} ### Using Destinations { #using-destinations} Destinations contain the necessary information to connect to a remote system. They're basically an advanced URL that can carry additional metadata, for example, the authentication information. You can choose to use [SAP BTP destinations](#btp-destinations) or [application defined destinations](#app-defined-destinations). #### Use SAP BTP Destinations { #btp-destinations} CAP leverages the destination capabilities of the SAP Cloud SDK. ##### Create Destinations on SAP BTP Create a destination using one or more of the following options: - **Register a system in your global account:** Check how to [Register an SAP System](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/2ffdaff0f1454acdb046876045321c91.html) in your SAP BTP global account and which systems are supported for registration. Once the system is registered and assigned to your subaccount, you can create a service instance. A destination is automatically created along with the service instance. - **Connect to an on-premise system:** With SAP BTP [Cloud Connector](https://help.sap.com/docs/CP_CONNECTIVITY/cca91383641e40ffbe03bdc78f00f681/e6c7616abb5710148cfcf3e75d96d596.html), you can create a connection from your cloud application to an on-premise system. - **Manually create destinations:** You can create destinations manually in your SAP BTP subaccount. See section [destinations](https://help.sap.com/docs/CP_CONNECTIVITY/cca91383641e40ffbe03bdc78f00f681/5eba6234a0e143fdacd8535f44c315c5.html) in the SAP BTP documentation. - **Create a destination to your application:** If you need a destination to your application, for example, to call it from a different application, then you can automatically create it in the MTA deployment.
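For the last option, the Destination service instance itself can create the destination during MTA deployment via its `init_data` configuration. The following is a rough sketch under assumed names (`cpapp-destination`, `srv-api`, and `srv-url` are placeholders for your project's module and property names), not a drop-in snippet:

```yaml
# Sketch only: resource and property names are hypothetical placeholders.
resources:
  - name: cpapp-destination
    type: org.cloudfoundry.managed-service
    requires:
      - name: srv-api                  # URL provided by your server module
    parameters:
      service: destination
      service-plan: lite
      config:
        init_data:
          instance:
            existing_destinations_policy: update
            destinations:
              - Name: cpapp-srv
                URL: ~{srv-api/srv-url}
                Authentication: NoAuthentication
                ProxyType: Internet
```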
##### Use Destinations with Node.js {.node} In your _package.json_, a configuration for the `API_BUSINESS_PARTNER` looks like this: ```json "cds": { "requires": { "API_BUSINESS_PARTNER": { "kind": "odata", "model": "srv/external/API_BUSINESS_PARTNER" } } } ``` If you've imported the external service definition using `cds import`, an entry for the service in the _package.json_ has been created already. Here you specify the name of the destination in the `credentials` block. In many cases, you also need to specify the `path` prefix to the service, which is added to the destination's URL. For services listed on the SAP Business Accelerator Hub, you can find the path in the linked service documentation. Since you don't want to use the destination for local testing, but only for production, you can profile it by wrapping it into a `[production]` block: ```json "cds": { "requires": { "API_BUSINESS_PARTNER": { "kind": "odata", "model": "srv/external/API_BUSINESS_PARTNER", "[production]": { "credentials": { "destination": "S4HANA", "path": "/sap/opu/odata/sap/API_BUSINESS_PARTNER" } } } } } ``` Additionally, you can provide [destination options](https://sap.github.io/cloud-sdk/api/v3/types/sap_cloud_sdk_connectivity.DestinationOptions.html) inside a `destinationOptions` object: ```jsonc "cds": { "requires": { "API_BUSINESS_PARTNER": { /* ... */ "[production]": { "credentials": { /* ... */ }, "destinationOptions": { "selectionStrategy": "alwaysSubscriber", "useCache": true } } } } } ``` The `selectionStrategy` property controls how a [destination is resolved](#destination-resolution). The `useCache` option controls whether the SAP Cloud SDK caches the destination. It's enabled by default but can be disabled by explicitly setting it to `false`. Read [Destination Cache](https://sap.github.io/cloud-sdk/docs/js/features/connectivity/destination-cache#destination-cache) to learn more about how the cache works. 
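To see how such a `[production]` block takes effect, here's a small framework-free sketch of the overlay idea. The real merging is done by `cds.env`, which also factors in environment variables and multiple active profiles; `effectiveConfig` and `deepMerge` are hypothetical helper names used only for illustration:

```javascript
// Sketch: how a "[production]" block overlays the base service entry.
// This mimics, in simplified form, what cds.env does with profile blocks.
function deepMerge(base, overlay) {
  const out = { ...base }
  for (const [key, value] of Object.entries(overlay)) {
    out[key] = value && typeof value === 'object' && !Array.isArray(value)
      && typeof out[key] === 'object' ? deepMerge(out[key], value) : value
  }
  return out
}

function effectiveConfig(entry, profile) {
  // Profile blocks are keys like "[production]"; drop them from the base...
  const base = Object.fromEntries(
    Object.entries(entry).filter(([key]) => !key.startsWith('['))
  )
  // ...and merge the block for the active profile on top, if present
  const overlay = entry[`[${profile}]`]
  return overlay ? deepMerge(base, overlay) : base
}
```

This is only an illustration; to inspect the real effective configuration of your project, use `cds env`.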
If you want to configure additional headers for the HTTP request to the system behind the destination, for example an Application Interface Register (AIR) header, you can specify such headers in the destination definition itself using the property [_URL.headers.&lt;header-key&gt;_](https://help.sap.com/docs/CP_CONNECTIVITY/cca91383641e40ffbe03bdc78f00f681/4e1d742a3d45472d83b411e141729795.html?q=URL.headers). ##### Use Destinations with Java {.java} Destinations are configured in Spring Boot's _application.yaml_ file: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: remote.services: API_BUSINESS_PARTNER: type: "odata-v2" destination: name: "cpapp-bupa" http: suffix: "/sap/opu/odata/sap" ``` ::: [Learn more about configuring destinations for Java.](../java/cqn-services/remote-services#destination-based-scenarios){.learn-more} #### Use Application Defined Destinations { #app-defined-destinations} If you don't want to use SAP BTP destinations, you can also define destinations, that is, the URL, authentication type, and additional configuration properties, in your application configuration or code. Application defined destinations support a subset of [properties](#destination-properties) and [authentication types](#authentication-types) of the SAP BTP destination service. ##### Configure Application Defined Destinations in Node.js {.node} You specify the destination properties in `credentials` instead of putting the name of a destination there.
This is an example of a destination using basic authentication: ```jsonc "cds": { "requires": { "REVIEWS": { "kind": "odata", "model": "srv/external/REVIEWS", "[production]": { "credentials": { "url": "https://reviews.ondemand.com/reviews", "authentication": "BasicAuthentication", "username": "", "password": "", "headers": { "my-header": "header value" }, "queries": { "my-url-param": "url param value" } } } } } } ``` [Supported destination properties.](#destination-properties){.learn-more} ::: warning You shouldn't put any sensitive information here. ::: Instead, set the properties in the bootstrap code of your CAP application: ```js const cds = require("@sap/cds"); if (cds.env.requires?.REVIEWS?.credentials?.authentication === "BasicAuthentication") { const credentials = /* read your credentials */ cds.env.requires.REVIEWS.credentials.username = credentials.username; cds.env.requires.REVIEWS.credentials.password = credentials.password; } ``` You might also want to set some values in the application deployment. This can be done using environment variables. For this example, the environment variable for the URL would be `cds_requires_REVIEWS_credentials_url`. This variable can be parameterized in the _manifest.yml_ for a `cf push` based deployment: ::: code-group ```yaml [manifest.yml] applications: - name: reviews ... env: cds_requires_REVIEWS_credentials_url: ((reviews_url)) ``` ::: ```sh cf push --var reviews_url=https://reviews.ondemand.com/reviews ``` The same can be done using an _mtaext_ file for MTA deployment. If the URL of the target service is also part of the MTA deployment, you can automatically receive it as shown in this example: ::: code-group ```yaml [mta.yaml] - name: reviews provides: - name: reviews-api properties: reviews-url: ${default-url} - name: bookshop requires: ...
- name: reviews-api properties: cds_requires_REVIEWS_credentials_url: ~{reviews-api/reviews-url} ``` ::: ::: code-group ```properties [.env] cds_requires_REVIEWS_credentials_url=http://localhost:4008/reviews ``` ::: ::: warning For the _configuration path_, you **must** use the underscore ("`_`") character as delimiter. CAP supports dot ("`.`") as well, but Cloud Foundry won't recognize variables using dots. Your _service name_ **mustn't** contain underscores. ::: ##### Implement Application Defined Destinations in Node.js {.node} There is no API to create a destination in Node.js programmatically. However, you can change the properties of a remote service before connecting to it, as shown in the previous example. ##### Configure Application Defined Destinations in Java {.java} Destinations are configured in Spring Boot's _application.yaml_ file. ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: remote.services: REVIEWS: type: "odata-v4" destination: properties: url: https://reviews.ondemand.com/reviews authentication: TokenForwarding http: headers: my-header: "header value" queries: my-url-param: "url param value" ``` ::: [Learn more about supported destination properties.](#destination-properties){.learn-more} ##### Implement Application Defined Destinations in Java {.java} You can use the APIs offered by the SAP Cloud SDK to create destinations programmatically. The destination can then be used by its name in the same way as destinations from the SAP BTP destination service.
::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: remote.services: REVIEWS: type: "odata-v2" destination: name: "reviews-destination" ``` ::: [Learn more about programmatic destination registration.](../java/cqn-services/remote-services#programmatic-destination-registration){.learn-more} [See examples for different authentication types.](../java/cqn-services/remote-services#programmatic-destinations){.learn-more} ### Connect to Remote Services Locally If you use SAP BTP destinations, you can access them locally using [CAP's hybrid testing capabilities](../advanced/hybrid-testing) with the following procedure: #### Bind to Remote Destinations Your local application needs access to an XSUAA and Destination service instance in the same subaccount where the destination is defined: 1. Log in to your Cloud Foundry org and space 2. Create an XSUAA service instance and service key: ```sh cf create-service xsuaa application cpapp-xsuaa cf create-service-key cpapp-xsuaa cpapp-xsuaa-key ``` 3. Create a Destination service instance and service key: ```sh cf create-service destination lite cpapp-destination cf create-service-key cpapp-destination cpapp-destination-key ``` 4. Bind to XSUAA and Destination service: ```sh cds bind -2 cpapp-xsuaa,cpapp-destination ``` [Learn more about `cds bind`.](../advanced/hybrid-testing#services-on-cloud-foundry){.learn-more} #### Run a Node.js Application with a Destination {.node} Add the destination for the remote service to the `hybrid` profile in the _.cdsrc-private.json_ file: ```jsonc { "requires": { "[hybrid]": { "auth": { /* ... */ }, "destinations": { /* ...
*/ }, "API_BUSINESS_PARTNER": { "credentials": { "path": "/sap/opu/odata/sap/API_BUSINESS_PARTNER", "destination": "cpapp-bupa" } } } } } ``` Run your application with the Destination service: ```sh cds watch --profile hybrid ``` ::: tip If you are developing in the Business Application Studio and want to connect to an on-premise system, you will need to do so via Business Application Studio's built-in proxy, for which you need to add configuration in an `.env` file. See [Connecting to External Systems From the Business Application Studio](https://sap.github.io/cloud-sdk/docs/js/guides/bas) for more details. ::: #### Run a Java Application with a Destination {.java} Add a new profile `hybrid` to your _application.yaml_ file that configures the destination for the remote service. ::: code-group ```yaml [srv/src/main/resources/application.yaml] spring: config.activate.on-profile: hybrid sql.init.schema-locations: - "classpath:schema-nomocks.sql" cds: remote.services: - name: API_BUSINESS_PARTNER type: "odata-v2" destination: name: "cpapp-bupa" http: suffix: "/sap/opu/odata/sap" ``` ::: Run your application with the Destination service: ```sh cds bind --exec -- mvn spring-boot:run \ -Dspring-boot.run.profiles=default,hybrid ``` [Learn more about `cds bind --exec`.](../advanced/hybrid-testing#run-arbitrary-commands-with-service-bindings){.learn-more} ::: tip If you are developing in the Business Application Studio and want to connect to an on-premise system, you will need to do so via Business Application Studio's built-in proxy, for which you need to add configuration to your destination environment variable. See [Reach On-Premise Service from the SAP Business Application Studio](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/destination-service#reach-on-premise-service-from-the-sap-business-application-studio) for more details. 
::: ### Connect to an Application Using the Same XSUAA (Forward Authorization Token) {#forward-auth-token} If your application consists of microservices and you use one (or more) as a remote service as described in this guide, you can leverage the same XSUAA instance. In that case, you don't need an SAP BTP destination at all. Assuming that your microservices use the same XSUAA instance, you can just forward the authorization token. The URL of the remote service can be injected into the application in the [MTA or Cloud Foundry deployment](#deployment) using [application defined destinations](#app-defined-destinations). #### Forward Authorization Token with Node.js {.node} To enable the token forwarding, set the `forwardAuthToken` option to `true` in your application defined destination: ```json { "requires": { "OrdersService": { "kind": "odata", "model": "./srv/external/OrdersService", "credentials": { "url": "", "forwardAuthToken": true } } } } ``` #### Forward Authorization Token with Java {.java} For Java, you set the authentication type to `TOKEN_FORWARDING` for the destination. You can implement it in your code: ```java String urlFromConfig = ...; // read from config DefaultHttpDestination destination = DefaultHttpDestination .builder(urlFromConfig) .name("order-service") .authenticationType(AuthenticationType.TOKEN_FORWARDING) .build(); ``` Or declare the destination in your _application.yaml_ file: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: remote.services: order-service: type: "odata-v4" destination: properties: url: "" authentication: TokenForwarding ``` ::: As an alternative to setting the authentication type, you can set the property `forwardAuthToken` to `true`. ### Connect to an Application in Your Kyma Cluster The [Istio](https://istio.io) service mesh provides secure communication between the services in your service mesh.
You can access a service in your applications' namespace by just reaching out to `http://<service-name>` or using the full hostname `http://<service-name>.<namespace>.svc.cluster.local`. Istio sends the requests through an mTLS tunnel. With Istio, you can further secure the communication [by configuring authentication and authorization for your services](https://istio.io/latest/docs/concepts/security). ### Deployment Your microservice needs bindings to the **XSUAA** and **Destination** service to access destinations on SAP BTP. If you want to access an on-premise service using **Cloud Connector**, then you need a binding to the **Connectivity** service as well. [Learn more about deploying CAP applications.](deployment/){.learn-more} [Learn more about deploying an application using the end-to-end tutorial.](https://developers.sap.com/group.btp-app-cap-deploy.html){.learn-more} #### Add Required Services to MTA Deployments The MTA-based deployment is described in [the deployment guide](deployment/). You can follow this guide and make some additional adjustments to the [generated _mta.yaml_](deployment/to-cf#add-mta-yaml) file. ```sh cds add xsuaa,destination,connectivity --for production ``` ::: details Learn what this does in the background... 1. Adds **XSUAA**, **Destination**, and **Connectivity** services to your _mta.yaml_: ::: code-group ```yaml [mta.yaml] - name: cpapp-uaa type: org.cloudfoundry.managed-service parameters: service: xsuaa service-plan: application path: ./xs-security.json - name: cpapp-destination type: org.cloudfoundry.managed-service parameters: service: destination service-plan: lite # Required for on-premise connectivity only - name: cpapp-connectivity type: org.cloudfoundry.managed-service parameters: service: connectivity service-plan: lite ``` ::: 2. Requires the services for your server in the _mta.yaml_: ::: code-group ```yaml [mta.yaml] - name: cpapp-srv ... requires: ...
- name: cpapp-uaa - name: cpapp-destination - name: cpapp-connectivity # Required for on-premise connectivity only ``` ::: ::: Build your application: ```sh mbt build -t gen --mtar mta.tar ``` Now you can deploy it to Cloud Foundry: ```sh cf deploy gen/mta.tar ``` #### Connectivity Service Credentials on Kyma The secret of the connectivity service on Kyma needs to be modified for the Cloud SDK to connect to on-premise destinations. [Support for Connectivity Service Secret in Java](https://github.com/SAP/cloud-sdk/issues/657){.java .learn-more} [Support for Connectivity Service Secret in Node.js](https://github.com/SAP/cloud-sdk-js/issues/2024){.node .learn-more} ### Destinations and Multitenancy With the destination service, you can access destinations in your provider account, the account your application is running in, and destinations in the subscriber accounts of your multitenant-aware application. #### Use Destinations from Subscriber Account Customers want to see business partners from, for example, their SAP S/4HANA system. As provider, you need to define a name for a destination, which enables access to systems of the subscriber of your application. In addition, your multitenant application or service needs a dependency on the destination service. For destinations in an on-premise system, the connectivity service must be bound. The subscriber needs to create a destination with that name in their subscriber account, for example, pointing to their SAP S/4HANA system. #### Destination Resolution The destination is read from the tenant of the request's JWT (authorization) token. If no JWT token is present, the destination is read from the tenant of the application's XSUAA binding.{.java} The destination is read from the tenant of the request's JWT (authorization) token. If no JWT token is present *or the destination isn't found*, the destination is read from the tenant of the application's XSUAA binding.{.node} ::: warning JWT token vs.
XSUAA binding Using the tenant of the request's JWT token means reading from the **subscriber subaccount** for a multitenant application. The tenant of the application's XSUAA binding points to the destination of the **provider subaccount**, the account where the application is deployed to. :::
You can change the destination lookup behavior as follows: ```jsonc "cds": { "requires": { "SERVICE_FOR_PROVIDER": { /* ... */ "credentials": { /* ... */ }, "destinationOptions": { "selectionStrategy": "alwaysProvider", "jwt": null } } } } ``` By setting the [`selectionStrategy`](https://sap.github.io/cloud-sdk/docs/js/features/connectivity/destination#multi-tenancy) property of the [destination options](#use-destinations-with-node-js) to `alwaysProvider`, you ensure that the destination is always read from your provider subaccount. This guarantees that a subscriber can't override your destination. Set the destination option `jwt` to `null` if you don't want to pass the request's JWT to SAP Cloud SDK. Passing the request's JWT to SAP Cloud SDK has implications on, among other things, the effective defaults for selection strategy and isolation level. In rare cases, these defaults are not suitable, for example when the request to the upstream server doesn't depend on the current user. See [Authentication and JSON Web Token (JWT) Retrieval](https://sap.github.io/cloud-sdk/docs/js/features/connectivity/destinations#authentication-and-json-web-token-jwt-retrieval) for more details.
For Java, use the property `retrievalStrategy` in the destination configuration to ensure that the destination is always read from your provider subaccount: ```yaml cds: remote.services: service-for-provider: type: "odata-v4" destination: retrievalStrategy: "AlwaysProvider" ``` Read more in the full reference of all [supported retrieval strategy values](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/sdk-connectivity-destination-service#retrieval-strategy-options). Note that the value must be provided in PascalCase, for example `AlwaysProvider`.
## Add Qualities
### Resilience There are two ways to make your outbound communication resilient: 1. Run your application in a service mesh (such as Istio or Linkerd). For example, [Kyma is provided as a service mesh](#resilience-in-kyma). 2. Implement resilience in your application. For the first option, refer to the documentation of the service mesh of your choice for instructions; no code changes should be required. For the second option, there are libraries that help you implement resilience functions such as retries, circuit breakers, or fallbacks.
You can use the [resilience features](https://sap.github.io/cloud-sdk/docs/java/features/resilience) provided by the SAP Cloud SDK with CAP Java. You need to wrap your remote calls with a call of `ResilienceDecorator.executeSupplier` and a resilience configuration (`ResilienceConfiguration`). Additionally, you can provide a fallback function. ```java ResilienceConfiguration config = ResilienceConfiguration.of(AdminServiceAddressHandler.class) .timeLimiterConfiguration(TimeLimiterConfiguration.of(Duration.ofSeconds(10))); context.setResult(ResilienceDecorator.executeSupplier(() -> { // ..to access the S/4 system in a resilient way.. logger.info("Delegating GET Addresses to S/4 service"); return bupa.run(select); }, config, (t) -> { // ..falling back to the already replicated addresses in our own database logger.warn("Falling back to already replicated Addresses"); return db.run(select); })); ``` [See the full example](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/main/java/my/bookshop/handlers/AdminServiceAddressHandler.java){.learn-more}
There's no resilience library provided out of the box for CAP Node.js. However, you can use packages provided by the Node.js community. Usually, they provide a function to wrap your code that adds the resilience logic.
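The pattern such community packages implement can be sketched in a few lines of plain Node.js: retry the wrapped call a bounded number of times, then fall back if it keeps failing. The helper name `withResilience` and its options are made up for illustration; real packages add richer variants like circuit breakers and timeouts:

```js
// Minimal sketch of the wrap-and-fall-back pattern.
// Illustration only — not a CAP or SAP Cloud SDK API.
async function withResilience (fn, { retries = 2, fallback } = {}) {
  let lastError
  for (let attempt = 0; attempt <= retries; attempt++) {
    try { return await fn() }          // call the (remote) operation
    catch (err) { lastError = err }    // remember failure, try again
  }
  if (fallback) return fallback(lastError)  // e.g. serve replicated data
  throw lastError
}

// Hypothetical usage: try the remote service, fall back to the local database
// const books = await withResilience(() => remote.run(select), {
//   retries: 2, fallback: () => db.run(select)
// })
```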
#### Resilience in Kyma Kyma clusters run an [Istio](https://istio.io/) service mesh. Istio allows you to [configure resilience](https://istio.io/latest/docs/concepts/traffic-management/#network-resilience-and-testing) for the network destinations of your service mesh. ### Tracing CAP adds headers for request correlation to its outbound requests, which allows logging and tracing across microservices. [Learn more about request correlation in Node.js.](../node.js/cds-log#node-observability-correlation){.learn-more .node} [Learn more about request correlation in Java.](../java/operating-applications/observability#correlation-ids){.learn-more .java}
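What such correlation amounts to on the wire can be sketched as follows: reuse an incoming correlation ID if one is present, otherwise start a new correlation chain, and pass the ID on with every outbound request. The helper below is illustrative, not a CAP API; `x-correlation-id` is one of the headers the Node.js runtime recognizes for correlation:

```js
const { randomUUID } = require('crypto')

// Illustrative sketch — not a CAP API. Build outbound headers that
// continue an incoming correlation chain, or start a fresh one.
function correlationHeaders (incomingHeaders = {}) {
  const id = incomingHeaders['x-correlation-id'] || randomUUID()
  return { 'x-correlation-id': id }
}
```

With CAP itself, none of this is needed in application code; the runtime adds and forwards the headers for you.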
## Feature Details ### Legend | Tag | Explanation | |:----:|---------------| | | supported | | | not supported | ### Supported Protocols | Protocol | Java | Node.js | |----------|:----:|:-------:| | odata-v2 | | | | odata-v4 | | | | rest | | | ::: tip The Node.js runtime supports `odata` as an alias for `odata-v4` as well. ::: ### Querying API Features | Feature | Java | Node.js | |------------------------------------|:----:|:-------:| | READ | | | | INSERT/UPDATE/DELETE | | | | Actions | | | | `columns` | | | | `where` | | | | `orderby` | | | | `limit` (top & skip) | | | | `$apply` (aggregate, groupby, ...) | | | | `$search` (OData v4) | | | | `search` (SAP OData v2 extension) | | | ### Supported Projection Features | Feature | Java | Node.js | |-----------------------------------------------------------|:----:|:-------:| | Resolve projections to remote services | | | | Resolve multiple levels of projections to remote services | | | | Aliases for fields | | | | `excluding` | | | | Resolve associations (within the same remote service) | | | | Redirected associations | | | | Flatten associations | | | | `where` conditions | | | | `order by` | | | | Infix filter for associations | | | | Model Associations with mixins | | | ### Supported Features for Application Defined Destinations The following properties and authentication types are supported for *[application defined destinations](#app-defined-destinations)*: #### Properties { #destination-properties} These destination properties are fully supported by both, the Java and the Node.js runtime. ::: tip This list specifies the properties for application defined destinations. 
::: | Properties | Description | | -------------------------- | ----------------------------------------- | | `url` | | | `authentication` | Authentication type | | `username` | User name for BasicAuthentication | | `password` | Password for BasicAuthentication | | `headers` | Map of HTTP headers | | `queries` | Map of URL parameters | | `forwardAuthToken` | [Forward auth token](#forward-auth-token) | [Destination Type in SAP Cloud SDK for JavaScript](https://sap.github.io/cloud-sdk/api/v3/interfaces/sap_cloud_sdk_connectivity.Destination.html){.learn-more .node} [HttpDestination Type in SAP Cloud SDK for Java](https://help.sap.com/doc/82a32040212742019ce79dda40f789b9/1.0/en-US/index.html){.learn-more .java} #### Authentication Types | Authentication Types | Java | Node.js | |-------------------------|:-----------------------------------------------------------------------:|:------------------------------:| | NoAuthentication | | | | BasicAuthentication | | | | TokenForwarding | |
Use `forwardAuthToken` | | OAuth2ClientCredentials | [code only](../java/cqn-services/remote-services#programmatic-destinations) | | | UserTokenAuthentication | [code only](../java/cqn-services/remote-services#programmatic-destinations) | | # Events and Messaging CAP provides intrinsic support for emitting and receiving events. This is complemented by Messaging Services connecting to message brokers to exchange event messages across remote services. ## Ubiquitous Events in CAP {#intro} We're starting with an introduction to the core concepts in CAP. If you want to skip the introduction, you can fast-forward to the samples part starting at [Books Reviews Sample](#books-reviews-sample). ### Intrinsic Eventing in CAP Core As introduced in [About CAP](../../about/best-practices#events), everything happening at runtime is in response to events, and all service implementations take place in [event handlers](../providing-services#event-handlers). All CAP services intrinsically support emitting and reacting to events, as shown in this simple code snippet (you can copy & run it in `cds repl`): ```js let srv = new cds.Service // Receiving Events srv.on ('some event', msg => console.log('1st listener received:', msg)) srv.on ('some event', msg => console.log('2nd listener received:', msg)) // Emitting Events await srv.emit ('some event', { foo:11, bar:'12' }) ``` ::: tip Intrinsic support for events The core of CAP's processing model: all services are event emitters. Events can be sent to them, emitted by them, and event handlers register with them to react to such events. ::: ### Typical Emitter and Receiver Roles In contrast to the previous code sample, emitters and receivers of events are decoupled, in different services and processes. And as all active things in CAP are services, so are usually emitters and receivers of events. 
Typical patterns look like this: ```js class Emitter extends cds.Service { async someMethod() { // inform unknown receivers that something happened await this.emit ('some event', { some:'payload' }) }} ``` ```js class Receiver extends cds.Service { async init() { // connect to and register for events from Emitter const Emitter = await cds.connect.to('Emitter') Emitter.on ('some event', msg => {...}) }} ``` ::: tip Emitters vs Receivers **Emitters** usually emit messages to *themselves* to inform *potential* listeners about certain events. **Receivers** connect to *Emitters* to register handlers to such emitted events. ::: ### Ubiquitous Notion of Events A *Request* in CAP is actually a specialization of an *Event Message*. The same intrinsic mechanisms of sending and reacting to events are used for asynchronous communication, just in inverse order. A typical flow: ![Clients send requests to services which are handled in event handlers.](assets/sync.drawio.svg) Asynchronous communication looks similar, just with reversed roles: ![Services emit events. Receivers subscribe to events, which are handled in event handlers.](assets/async.drawio.svg) ::: tip Event Listeners vs Interceptors Requests are handled the same way as events, with one major difference: While `on` handlers for events are *listeners* (all are called), handlers for synchronous requests are *interceptors* (only the topmost is called by the framework). An interceptor then decides whether to pass down control to `next` handlers or not. ::: ### Asynchronous & Synchronous APIs To sum up, handling events in CAP is done in the same way as you would handle requests in a service provider. Also, emitting event messages is similar to sending requests. The major difference is that the initiative is inverted: While *Consumers* connect to *Services* in synchronous communications, the *Receivers* connect to _Emitters_ in asynchronous ones; _Emitters_ in turn don't know _Receivers_.
![This graphic is explained in the accompanying text.](assets/sync-async.drawio.svg) ::: tip Blurring the line between synchronous and asynchronous API In essence, services receive events. The emitting service itself or other services can register handlers for those events in order to implement the logic of how to react to these events. ::: ### Why Use Messaging? Using messaging has two major advantages: ::: tip Resilience If a receiving service goes offline for a while, event messages are safely stored and guaranteed to be delivered to the receiver as soon as it goes online again. ::: ::: tip Decoupling Emitters of event messages are decoupled from the receivers and don't need to know them at the time of sending. This way a service is able to emit events that other services can register on in the future, for example, to implement **extension** points. ::: ## Books Reviews Sample The following explanations walk us through a books review example from cap/samples: * **[@capire/bookshop](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop)** provides the well-known basic bookshop app. * **[@capire/reviews](https://github.com/sap-samples/cloud-cap-samples/tree/main/reviews)** provides an independent service to manage reviews. * **[@capire/bookstore](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookstore)** combines both into a composite application. ![This graphic is explained in the accompanying text.](assets/cap-samples.drawio.svg) ::: tip Follow the instructions in [*cap/samples/readme*](https://github.com/SAP-samples/cloud-cap-samples#readme) for getting the samples and exercising the following steps.
::: ### Declaring Events in CDS Package `@capire/reviews` essentially provides a `ReviewsService`, [declared as follows](https://github.com/sap-samples/cloud-cap-samples/blob/main/reviews/srv/reviews-service.cds): ```cds service ReviewsService { // Sync API entity Reviews as projection on my.Reviews excluding { likes } action like (review: Reviews:ID); action unlike (review: Reviews:ID); // Async API event reviewed : { subject : Reviews:subject; count : Integer; rating : Decimal; // new avg rating } } ``` [Learn more about declaring events in CDS.](../../cds/cdl#events){.learn-more} As you can read from the definitions, the service's synchronous API allows you to create, read, update, and delete user `Reviews` for arbitrary review subjects. In addition, the service's asynchronous API declares the `reviewed` event that shall be emitted whenever a subject's average rating changes. ::: tip **Services in CAP** combine **synchronous** *and* **asynchronous** APIs. Events are declared on a conceptual level, focusing on the domain instead of low-level wire protocols. ::: ### Emitting Events Find the code to emit events in *[@capire/reviews/srv/reviews-service.js](https://github.com/SAP-samples/cloud-cap-samples/blob/139d9574950d1a5ead475c7b47deb174418500e4/reviews/srv/reviews-service.js#L12-L20)*: ```js class ReviewsService extends cds.ApplicationService { async init() { // Emit a `reviewed` event whenever a subject's avg rating changes this.after (['CREATE','UPDATE','DELETE'], 'Reviews', (req) => { let { subject } = req.data, count, rating //= ... return this.emit ('reviewed', { subject, count, rating }) }) }} ``` [Learn more about `srv.emit()` in Node.js.](../../node.js/core-services#srv-emit-event){.learn-more} [Learn more about `srv.emit()` in Java.](../../java/services#an-event-based-api){.learn-more} Method `srv.emit()` is used to emit event messages.
As you can see, emitters usually emit messages to themselves, that is, `this`, to inform potential listeners about certain events. Emitters don't know the receivers of the events they emit. There might be none, there might be local ones in the same process, or remote ones in separate processes. ::: tip Messaging on Conceptual Level Simply use `srv.emit()` to emit events, and let the CAP framework take care of wire protocols like CloudEvents, transports via message brokers, multitenancy handling, and so forth. ::: ### Receiving Events Find the code to receive events in *[@capire/bookstore/srv/mashup.js](https://github.com/SAP-samples/cloud-cap-samples/blob/30764b261b6bf95854df59f54a8818a4ceedd462/bookstore/srv/mashup.js#L39-L47)* (the basic bookshop app enhanced by reviews, hence the integration with `ReviewsService`): ```js // Update Books' average ratings when reviews are updated ReviewsService.on ('reviewed', (msg) => { const { subject, count, rating } = msg.data // ... }) ``` [Learn more about registering event handlers in Node.js.](../../node.js/core-services#srv-on-before-after){.learn-more} [Learn more about registering event handlers in Java.](../../java/event-handlers/#introduction-to-event-handlers){.learn-more} The message payload is in the `data` property of the inbound `msg` object. ::: tip To have more control over imported service definitions, you can set the `model` configuration of your external service to a cds file where you define the external service and only use the imported definitions your app needs. This way, plugins like [Open Resource Discovery (ORD)](../../plugins/#ord-open-resource-discovery) know which parts of the external service you actually use in your application. ::: ## In-Process Eventing As emitting and handling events is an intrinsic feature of the CAP core runtimes, there's nothing else required when emitters and receivers live in the same process.
![This graphic is explained in the accompanying text.](assets/local.drawio.svg) Let's see that in action... ### 1. Start a Single Server Process {#start-single-server} Run the following command to start a reviews-enhanced bookshop as an all-in-one server process: ```sh cds watch bookstore ``` It produces a trace output like this: ```log [cds] - mocking ReviewsService { path: '/reviews', impl: '../reviews/srv/reviews-service.js' } [cds] - mocking OrdersService { path: '/orders', impl: '../orders/srv/orders-service.js' } [cds] - serving CatalogService { path: '/browse', impl: '../bookshop/srv/cat-service.js' } [cds] - serving AdminService { path: '/admin', impl: '../bookshop/srv/admin-service.js' } [cds] - server listening on { url: 'http://localhost:4004' } [cds] - launched at 5/25/2023, 4:53:46 PM, version: 7.0.0, in: 991.573ms ``` As apparent from the output, both the bookshop services `CatalogService` and `AdminService` as well as our new `ReviewsService` are served in the same process (mocked, as the `ReviewsService` is configured as a required service in _bookstore/package.json_). ### 2. Add or Update Reviews {#add-or-update-reviews} Now, open [http://localhost:4004/reviews](http://localhost:4004/reviews) to display the Vue.js UI that is provided with the reviews service sample: ![A vue.js UI, showing the bookshop sample with the add-review functionality](assets/capire-reviews.png) - Choose one of the reviews. - Change the 5-star rating with the dropdown. - Choose *Submit*. - Enter *bob* to authenticate. → In the terminal window you should see a server reaction like this: ```log [cds] - PATCH /reviews/Reviews/148ddf2b-c16a-4d52-b8aa-7d581460b431 < emitting: reviewed { subject: '201', count: 2, rating: 4.5 } > received: reviewed { subject: '201', count: 2, rating: 4.5 } ``` This means the `ReviewsService` emitted a `reviewed` message that was received by the enhanced `CatalogService`. ### 3.
Check Ratings in Bookshop App Open [http://localhost:4004/bookshop](http://localhost:4004/bookshop) to see the list of books served by `CatalogService` and refresh to see the updated average rating and reviews count: ![A vue.js UI showing the pure bookshop sample without additional features.](assets/capire-books.png) ## Using Message Channels When emitters and receivers live in separate processes, you need to add a message channel to forward event messages. CAP provides messaging services, which take care of the message channel behind the scenes, as illustrated in the following graphic: ![The reviews service and the catalog service, each in a separate process, are connected to the messaging service which holds the messaging channel behind the scenes.](assets/remote.drawio.svg) ::: tip Uniform, Agnostic Messaging CAP provides messaging services, which transport messages behind the scenes using different messaging channels and brokers. All of this happens without the need to touch your code, which stays on a conceptual level. ::: ### 1. Use `file-based-messaging` in Development For quick tests during development, CAP provides a simple file-based messaging service implementation. Configure that as follows for the `[development]` profile: ```jsonc "cds": { "requires": { "messaging": { "[development]": { "kind": "file-based-messaging" } }, } } ``` [Learn more about `cds.env` profiles.](../../node.js/cds-env#profiles){.learn-more} In our samples, you find that in [@capire/reviews/package.json](https://github.com/SAP-samples/cloud-cap-samples/blob/main/reviews/package.json) as well as [@capire/bookstore/package.json](https://github.com/SAP-samples/cloud-cap-samples/blob/main/bookstore/package.json), which you'll run in the next step as separate processes. ### 2.
Start the `reviews` Service and `bookstore` Separately First start the `reviews` service separately: ```sh cds watch reviews ``` The trace output should contain these lines, confirming that you're using `file-based-messaging`, and that the `ReviewsService` is served by that process at port 4005: ```log [cds] - connect to messaging > file-based-messaging { file: '~/.cds-msg-box' } [cds] - serving ReviewsService { path: '/reviews', impl: '../reviews/srv/reviews-service.js' } [cds] - server listening on { url: 'http://localhost:4005' } [cds] - launched at 5/25/2023, 4:53:46 PM, version: 7.0.0, in: 593.274ms ``` Then, in a separate terminal, start the `bookstore` server as before: ```sh cds watch bookstore ``` This time the trace output is different from [when you started all in a single server](#start-single-server). The output confirms that you're using `file-based-messaging`, and that you now *connected* to the separately started `ReviewsService` at port 4005: ```log [cds] - connect to messaging > file-based-messaging { file: '~/.cds-msg-box' } [cds] - mocking OrdersService { path: '/orders', impl: '../orders/srv/orders-service.js' } [cds] - serving CatalogService { path: '/browse', impl: '../bookshop/srv/cat-service.js' } [cds] - serving AdminService { path: '/admin', impl: '../bookshop/srv/admin-service.js' } [cds] - connect to ReviewsService > odata { url: 'http://localhost:4005/reviews' } [cds] - server listening on { url: 'http://localhost:4004' } [cds] - launched at 5/25/2023, 4:55:46 PM, version: 7.0.0, in: 1.053s ``` ### 3. Add or Update Reviews {#add-or-update-reviews-2} Similar to before, open [http://localhost:4005/vue/index.html](http://localhost:4005/vue/index.html) to add or update reviews.
→ In the terminal window for the `reviews` server you should see this: ```log [cds] - PATCH /reviews/Reviews/74191a20-f197-4829-bd47-c4676710e04a < emitting: reviewed { subject: '251', count: 1, rating: 3 } ``` → In the terminal window for the `bookstore` server you should see this: ```log > received: reviewed { subject: '251', count: 1, rating: 3 } ``` ::: tip **Agnostic Messaging APIs** Without touching any code, the event emitted from the `ReviewsService` was transported via the `file-based-messaging` channel behind the scenes and received in the `bookstore` as before, when you used in-process eventing → which was to be shown (*QED*). ::: ### 4. Shut Down and Restart Receiver → Resilience by Design You can simulate a server outage to demonstrate the value of messaging for resilience as follows: 1. Terminate the `bookstore` server with Ctrl + C in the respective terminal. 2. Add or update more reviews as described before. 3. Restart the receiver with `cds watch bookstore`. → You should see some trace output like this: ```log [cds] - server listening on { url: 'http://localhost:4004' } [cds] - launched at 5/25/2023, 10:45:42 PM, version: 7.0.0, in: 1.023s [cds] - [ terminate with ^C ] > received: reviewed { subject: '207', count: 1, rating: 2 } > received: reviewed { subject: '207', count: 1, rating: 2 } > received: reviewed { subject: '207', count: 1, rating: 2 } ``` ::: tip **Resilience by Design** All messages emitted while the receiver was down stay in the message queue and are delivered when the server is back. ::: ### Have a Look Into _~/.cds-msg-box_ You can watch the messages flowing through the message queue by opening _~/.cds-msg-box_ in a text editor.
When the receiver is down and messages therefore aren't consumed yet, you can see the event messages emitted by the `ReviewsService` in entries like this: ```json ReviewsService.reviewed {"data":{"subject":"201","count":4,"rating":5}, "headers": {...}} ``` ## Using Multiple Channels By default, CAP uses a single message channel for all messages. For example, if you consume messages from SAP S/4HANA in an enhanced version of `bookstore`, as well as emit messages a customer could subscribe and react to in a customer extension, the overall topology would look like this: ![The reviews service, bookstore, and the SAP S/4HANA system send events to a common message bus. The bookstore also receives events and customer extensions as well.](assets/composite1.drawio.svg) ### Using Separate Channels Now, sometimes you want to use separate channels for different emitters or receivers. Let's assume you want to have a dedicated channel for all events from SAP S/4HANA, and yet another separate one for all outgoing events, to which customer extensions can subscribe too. This situation is illustrated in this graphic: ![The graphic shows separate message channels for each event emitter and its subscribers.](assets/composite2.drawio.svg) This is possible when using [low-level messaging](#low-level-messaging), but comes at the price of losing all advantages of conceptual-level messaging, as explained in the following. ### Using `composite-messaging` Implementation To avoid falling back to low-level messaging, CAP provides the `composite-messaging` implementation, which basically acts like a transparent dispatcher for both inbound and outbound messages. The resulting topology would look like this: ![Each emitter and subscriber has its own message channel.
In addition there's a composite message channel that dispatches to/from each of those separate channels.](assets/composite3.drawio.svg) ::: tip **Transparent Topologies** The `composite-messaging` implementation allows you to flexibly change topologies of message channels at deployment time, without touching source code or models. ::: ### Configuring Individual Channels and Routes You would configure this in `bookstore`'s _package.json_ as follows: ```jsonc "cds": { "requires": { "messaging": { "kind": "composite-messaging", "routes": { "ChannelA": ["**/ReviewsService/*"], "ChannelB": ["**/sap/s4/**"], "ChannelC": ["**/bookshop/**"] } }, "ChannelA": { "kind": "enterprise-messaging", ... }, "ChannelB": { "kind": "enterprise-messaging", ... }, "ChannelC": { "kind": "enterprise-messaging", ... } } } ``` In essence, you first configure a messaging service for each channel. In addition, you would configure the default `messaging` service to be of kind `composite-messaging`. In the `routes`, you can use glob patterns to define filters for event names: - `**` will match any number of characters. - `*` will match any number of characters except `/` and `.`. - `?` will match a single character. ::: tip You can also refer to events declared in CDS models by using their fully qualified event name (unless annotation `@topic` is used on them). ::: ## Low-Level Messaging The previous sections documented how CAP promotes messaging on a conceptual level, staying agnostic to topologies and message brokers. While CAP strongly recommends staying on that level, CAP also offers lower-level messaging, which loses some of the advantages but still stays independent of specific message brokers. ::: tip Messaging as Just Another CAP Service All messaging implementations are provided through class `cds.MessagingService` and broker-specific subclasses of that.
This class is in turn a standard CAP service, derived from `cds.Service`, hence it's consumed as any other CAP service, and can also be extended by adding event handlers as usual. ::: #### Configure Messaging Services As with all other CAP services, add an entry to `cds.requires` in your _package.json_ or _.cdsrc.json_ like this: ```jsonc "cds": { "requires": { "messaging": { "kind": // ... }, } } ``` [Learn more about `cds.env` and `cds.requires`.](../../node.js/cds-env#services){.learn-more} You're free to choose the name of your messaging service. It could be `messaging`, as in the previous example, or any other name. You can also configure multiple messaging services with different names. #### Connect to the Messaging Service Instead of connecting to an emitter service, connect to the messaging service: ```js const messaging = await cds.connect.to('messaging') ``` #### Emit Events to Messaging Service Instead of emitter services emitting to themselves, emit to the messaging service: ```js await messaging.emit ('ReviewsService.reviewed', { ... }) ``` #### Receive Events from Messaging Service Instead of registering event handlers with a concrete emitter service, register handlers on the messaging service: ```js messaging.on ('ReviewsService.reviewed', msg => console.log(msg)) ```
#### Declared Events and `@topic` Names

When declaring events in CDS models, be aware that the fully qualified name of the event is used as the topic name when emitting to message brokers. Based on the following model, the resulting topic name is `my.namespace.SomeEventEmitter.SomeEvent`.

```cds
namespace my.namespace;

service SomeEventEmitter {
  event SomeEvent { ... }
}
```

If you want to define the topic manually, use the `@topic` annotation:

```cds
//...
@topic: 'some.very.different.topic-name'
event SomeEvent { ... }
```

#### Conceptual vs. Low-Level Messaging

Looking at the previous code samples, you see that, in contrast to conceptual messaging, you now need to provide fully qualified event names. This is just one of the advantages you lose. The following advantages of conceptual messaging are lost with low-level messaging:

- Service-local event names (as already mentioned)
- Event declarations (as they go with individual services)
- Generated typed API classes for declared events
- Run in-process without any messaging service

::: tip Always prefer conceptual-level API over low-level API variants.
Besides the things listed above, this allows you to flexibly change topologies, such as starting with co-located services in a single process, and moving single services out to separate micro services later on.
:::

## CloudEvents Standard {#cloudevents}

CAP messaging has built-in support for formatting event data compliant to the [CloudEvents](https://cloudevents.io/) standard. Enable this using the `format` config option as follows:

```json
"cds": {
  "requires": {
    "messaging": { "format": "cloudevents" }
  }
}
```

With this setting, all mandatory and some additional basic header fields, such as `type`, `source`, `id`, `datacontenttype`, `specversion`, and `time`, are filled in automatically. The event name is used as the `type`. The message payload goes into the `data` property, as always.

::: tip CloudEvents is a wire protocol specification.
Application developers shouldn't have to care about such technical details. CAP ensures that for you, by filling in the respective fields behind the scenes.
:::

## [Using SAP Event Mesh](./event-mesh) {#sap-event-mesh}

CAP has out-of-the-box support for SAP Event Mesh. As an application developer, all you need to do is configure CAP to use `enterprise-messaging`, usually in combination with the `cloudevents` format, as in this excerpt from a _package.json_:

```jsonc
"cds": {
  "requires": {
    "messaging": {
      "[production]": {
        "kind": "enterprise-messaging",
        "format": "cloudevents"
      }
    }
  }
}
```

[Learn more about `cds.env` profiles](../../node.js/cds-env#profiles){.learn-more}

::: tip Read the guide
Find additional information about deploying SAP Event Mesh on SAP BTP in this guide: [→ **_Using SAP Event Mesh in BTP_**](./event-mesh)
:::

## [Using SAP Cloud Application Event Hub](./event-broker) {#sap-event-broker}

CAP has growing out-of-the-box support for SAP Cloud Application Event Hub. As an application developer, all you need to do is configure CAP to use `event-broker`, as in this excerpt from a _package.json_:

```jsonc
"cds": {
  "requires": {
    "messaging": {
      "[production]": {
        "kind": "event-broker"
      }
    }
  }
}
```

[Learn more about `cds.env` profiles](../../node.js/cds-env#profiles){.learn-more}

::: tip Read the guide
Find additional information about deploying SAP Cloud Application Event Hub on SAP BTP in this guide: [→ **_Using SAP Cloud Application Event Hub in BTP_**](./event-broker)
:::

## [Events from SAP S/4HANA](./s4)

SAP S/4HANA integrates SAP Event Mesh as well as SAP Cloud Application Event Hub for messaging. That makes it relatively easy for CAP-based applications to receive events from SAP S/4HANA systems. In contrast to CAP, the asynchronous APIs of SAP S/4HANA are separate from the synchronous ones (OData, REST). So, the effort on the CAP side is to fill this gap.
You can achieve that, for example, for an already imported SAP S/4HANA BusinessPartner API like this:

```cds
// filling in missing events as found on SAP Business Accelerator Hub
using { API_BUSINESS_PARTNER as S4 } from './API_BUSINESS_PARTNER';
extend service S4 with {
  event BusinessPartner.Created @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Created.v1') {
    BusinessPartner : String
  }
  event BusinessPartner.Changed @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1') {
    BusinessPartner : String
  }
}
```

[Learn more about importing SAP S/4HANA service APIs.](../using-services#external-service-api){.learn-more}

With that gap filled, we can easily receive events from SAP S/4HANA the same way as from CAP services, as explained in this guide. For example:

```js
const S4Bupa = await cds.connect.to ('API_BUSINESS_PARTNER')
S4Bupa.on ('BusinessPartner.Changed', msg => {...})
```

::: tip Read the guide
Find more detailed information specific to receiving events from SAP S/4HANA in this separate guide: [→ **_Receiving Events from SAP S/4HANA_**](./s4)
:::

# Using SAP Event Mesh in Cloud Foundry

CAP provides out-of-the-box support for [SAP Event Mesh](https://help.sap.com/docs/event-mesh), and automatically handles many things behind the scenes, so that application coding stays agnostic and focused on conceptual messaging.

::: warning
The following guide is based on a productive (paid) account on SAP BTP. The trial offering of SAP Event Mesh isn't supported.
:::

## Prerequisite: Create an Instance of SAP Event Mesh

- [Follow this tutorial](https://developers.sap.com/group.cp-enterprisemessaging-get-started.html) to create an instance of SAP Event Mesh with plan `default`.
- Alternatively, follow [one of the guides in SAP Help Portal](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/3ef34ffcbbe94d3e8fff0f9ea2d5911d.html).
::: tip **Important:**
You don't need to manually create queues or queue subscriptions, as CAP takes care of that automatically, based on declared events and subscriptions.
:::

## Use `enterprise-messaging`

Add the following to your _package.json_ to use SAP Event Mesh:

```jsonc
"cds": {
  "requires": {
    "messaging": {
      "[production]": { "kind": "enterprise-messaging" }
    }
  }
}
```

[Learn more about `cds.env` profiles](../../node.js/cds-env#profiles){.learn-more}
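The `[production]` block is a `cds.env` profile: its options only take effect when that profile is active, for example when `NODE_ENV=production`. A simplified sketch of how such profile keys are merged (illustration only; the real `cds.env` also handles nested blocks and multiple profiles):

```javascript
// Sketch of cds.env profile merging, not the actual implementation.
function resolveProfiles(config, activeProfiles) {
  const result = {}
  for (const [key, value] of Object.entries(config)) {
    const match = key.match(/^\[(.+)\]$/)
    if (match) {
      // profile block: merge its contents only if that profile is active
      if (activeProfiles.includes(match[1])) Object.assign(result, value)
    } else {
      result[key] = value
    }
  }
  return result
}

const messaging = resolveProfiles(
  { '[production]': { kind: 'enterprise-messaging' } },
  ['production']
)
console.log(messaging.kind)   // → 'enterprise-messaging'
```

Without the `production` profile active, the block is ignored entirely, which is why local development stays unaffected by such settings.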
**Behind the Scenes**, the `enterprise-messaging` implementation handles these things automatically and transparently: - Creation of queues & subscriptions for event receivers - Handling all broker-specific handshaking and acknowledgments - Constructing topic names as expected by the broker - Wire protocol envelopes, that is, CloudEvents
### Optional: Add `namespace` Prefixing Rules

The SAP Event Mesh documentation recommends prefixing all event names with the service instance's configured `namespace`, both when emitting and when subscribing to events. If you follow this recommendation, add corresponding rules to your configuration in _package.json_ to have CAP's messaging service implementations enforce them automatically:

```json
"cds": {
  "requires": {
    "messaging": {
      "publishPrefix": "$namespace/",
      "subscribePrefix": "$namespace/"
    }
  }
}
```

The variable `$namespace` is resolved from your SAP Event Mesh service instance's configured `namespace` property.

## Run Tests in `hybrid` Setup

Before [deploying to the cloud](#deploy-to-the-cloud-with-mta), you may want to run some ad-hoc tests with a hybrid setup, that is, keep running the CAP services locally, but use the SAP Event Mesh instance from the cloud. Do that as follows:

1. Configure CAP to use the `enterprise-messaging-shared` implementation in the `reviews` and `bookstore` sample:

   ```jsonc
   "cds": {
     "requires": {
       "messaging": {
         "[hybrid]": { "kind": "enterprise-messaging-shared" }
       }
     }
   }
   ```

   > The `enterprise-messaging-shared` variant is for single-tenant usage and uses AMQP by default. Thus, it requires much less setup for local tests compared to the production variant, which uses HTTP-based protocols by default.

2. Add `@sap/xb-msg-amqp-v100` as a dependency to `reviews` and `bookstore`:

   ```sh
   npm add @sap/xb-msg-amqp-v100
   ```

   [Learn more about SAP Event Mesh (Shared).](../../node.js/messaging#event-mesh-shared){.learn-more}

3. Create a service key for your Event Mesh instance [→ see help.sap.com](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/4514a14ab6424d9f84f1b8650df609ce.html)

4. Bind to your Event Mesh instance's service key from `reviews` and `bookstore`:

   ```sh
   cds bind -2 :
   ```

   [Learn more about `cds bind` and hybrid testing.](../../advanced/hybrid-testing){.learn-more}

5. Run your services in separate terminal shells with the `hybrid` profile:

   ```sh
   cds watch reviews --profile hybrid
   ```

   ```sh
   cds watch bookstore --profile hybrid
   ```

   [Learn more about `cds.env` profiles.](../../node.js/cds-env#profiles){.learn-more}

6. Test your app [as described in the messaging guide](./#add-or-update-reviews).

### CAP Automatically Creates Queues and Subscriptions

When you run the services with a bound instance of SAP Event Mesh as documented in the previous section, CAP messaging service implementations automatically create a queue for each receiver process. The queue name is chosen automatically, and the receiver's subscriptions are added.

### Optional: Configure Queue Names

If you want to manage queues yourself, use the config option `queue.name` as follows:

```jsonc
"cds": {
  "requires": {
    "messaging": {
      // ...
      "queue": { "name": "$namespace/my/own/queue" }
    }
  }
}
```

In both cases (automatically chosen queue names or explicitly configured ones), if the queue already exists it's reused; otherwise it's created.

[Learn more about queue configuration options.](../../node.js/messaging#message-brokers){.learn-more}

## Deploy to the Cloud (with MTA)

A general description of how to deploy CAP applications to SAP BTP's Cloud Foundry can be found in the [Deploy to Cloud guide](../deployment/). As documented there, MTA is frequently used to deploy to SAP BTP. Follow these steps to ensure binding of your deployed application to the SAP Event Mesh instance.

### 1. Specify Binding to SAP Event Mesh Instance

Add the name of the SAP Event Mesh service instance to the `requires` section of your CAP application's module, and a matching entry to the `resources` section, for example:

```yaml
modules:
  - name: bookstore-srv
    requires:
      - name:
resources:
  # SAP Event Mesh
  - name:
    type: org.cloudfoundry.managed-service
    parameters:
      service: enterprise-messaging
      service-plan:
```

[Learn more about using MTA.](../deployment/){.learn-more}

::: warning
Make sure to use the exact `name` and `service-plan` used when creating the service instance you want to use.
:::

### 2. Optional: Auto-Create SAP Event Mesh Instances

MTA can also create the service instance automatically. To do so, you need to additionally provide a service descriptor file and reference it through the `path` parameter in the `resources` section:

```yaml
resources:
  # SAP Event Mesh as above...
    parameters:
      path: ./
```

[Learn more about Service Descriptors for SAP Event Mesh.](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/5696828fd5724aa5b26412db09163530.html){.learn-more}

# Using SAP Cloud Application Event Hub in Cloud Foundry

[SAP Cloud Application Event Hub](https://help.sap.com/docs/event-broker) is the new default offering for messaging in SAP Business Technology Platform (SAP BTP). CAP provides out-of-the-box support for SAP Cloud Application Event Hub, and automatically handles many things behind the scenes, so that application coding stays agnostic and focused on conceptual messaging.

::: warning
The following guide is based on a productive (paid) account on SAP BTP.
:::

## Consuming Events in a Stand-alone App { #consume-standalone }

This guide describes the end-to-end process of developing a stand-alone (or "single tenant") CAP application that consumes messages via SAP Cloud Application Event Hub.
The guide uses SAP S/4HANA as the event emitter, but this is a stand-in for any system that is able to publish CloudEvents via SAP Cloud Application Event Hub.

Sample app: [@capire/incidents with Customers based on S/4's Business Partners](https://github.com/cap-js/incidents-app/tree/event-broker)

### Prerequisite: Events & Messaging in CAP

From the perspective of a CAP developer, SAP Cloud Application Event Hub is yet another messaging broker. That is to say, CAP developers focus on [modeling their domain](../domain-modeling) and [implementing their domain-specific custom logic](../providing-services#custom-logic). Differences between the various event transport technologies are kept as transparent as possible.

Hence, before diving into this guide, you should be familiar with the general guide for [Events & Messaging in CAP](../messaging/), as it already covers the majority of the content.

Since SAP Cloud Application Event Hub is based on the [CloudEvents](https://cloudevents.io/) standard, the `@topic` annotation for events in your CDS model is interpreted as the CloudEvents `type` attribute.

### Add Events and Handlers

There are two options for adding the events to be consumed to your model and subsequently registering event handlers for them.

#### 1. Import and Augment

This approach is described in [Events from SAP S/4HANA](../messaging/#events-from-sap-s-4hana), [Receiving Events from SAP S/4HANA Cloud Systems](../messaging/s4), and specifically [Consume Events Agnostically](../messaging/s4#consume-events-agnostically) regarding handler registration.

#### 2. Using Low-Level Messaging

As a second option, you can skip the modeling part and simply use [Low-Level Messaging](../messaging/s4#using-low-level-messaging). However, note that future [Open Resource Discovery (ORD)](https://sap.github.io/open-resource-discovery/) integration will most likely benefit from modeled approaches.
### Use `event-broker`

Configure your application to use the messaging service kind `event-broker` (derived from SAP Cloud Application Event Hub's technical name).

[Learn more about configuring SAP Cloud Application Event Hub in CAP Node.js](../../node.js/messaging#event-broker){.learn-more}
[Learn more about `cds.env` profiles](../../node.js/cds-env#profiles){.learn-more}

::: tip Local Testing
Since SAP Cloud Application Event Hub sends events via HTTP, you won't be able to receive events on your local machine unless you use a tunneling service. Therefore, we recommend using a messaging service of kind [`local-messaging`](../../node.js/messaging#local-messaging) for local testing.
:::

### Deploy to the Cloud (with MTA)

See [Deploy to Cloud Foundry](../deployment/to-cf) regarding deployment with MTA, as well as the deployment section from [SAP Cloud Application Event Hub in CAP Node.js](../../node.js/messaging#event-broker).

### Connecting it All Together

In SAP BTP System Landscape, add a new system of type `SAP BTP Application` for your CAP application including its integration dependencies, connect all involved systems (incl. SAP Cloud Application Event Hub) into a formation, and enable the event subscription. For more details, refer to the guide [CAP Application as a Consumer](https://help.sap.com/docs/event-broker/event-broker-service-guide/cap-application-as-subscriber) in the official documentation of SAP Cloud Application Event Hub.

::: tip Test Events
For testing purposes, SAP S/4HANA can send technical test events of type `sap.eee.iwxbe.testproducer.v1.Event.Created.v1`, which your app can subscribe to. You can trigger such events with _Enterprise Event Enablement - Event Monitor_.
:::

# Receiving Events from SAP S/4HANA Cloud Systems

SAP S/4HANA integrates SAP Event Mesh as well as SAP Cloud Application Event Hub for messaging. Hence, it is relatively easy for CAP-based applications to receive events from SAP S/4HANA systems.
This guide provides detailed information on that.

## Find & Import APIs

As documented in the [Service Consumption guide](../using-services#external-service-api), get and `cds import` the API specification of the SAP S/4HANA service you want to receive events from. For example, for "BusinessPartner" using [SAP Business Accelerator Hub](https://api.sap.com/):

1. Find and open the [Business Partner (A2X) API](https://api.sap.com/api/API_BUSINESS_PARTNER).
2. Choose the *"API Specification"* button.
3. Download the EDMX spec from this list:

   ![Showing all available specifications on the SAP Business Accelerator Hub.](./assets/api-specification.png){ }

4. Import it as a CDS model:

   ```sh
   cds import
   ```

[Learn more about importing SAP S/4HANA service APIs.](../using-services#external-service-api){.learn-more}

## Find Information About Events

For example, using [SAP Business Accelerator Hub](https://api.sap.com/):

1. [Find the BusinessPartner Events page.](https://api.sap.com/event/SAPS4HANABusinessEvents_BusinessPartnerEvents/overview)
2. Choose _View Event Reference_.
3. Expand the _POST_ request shown.
4. Choose the _Schema_ tab.
5. Expand the `data` property.

![Shows the event reference page on the SAP Business Accelerator Hub, highlighting the data property.](assets/business-partner-events.png){.mute-dark}

The expanded part, highlighted in red, tells you all you need to know:

- the event name: `sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1`
- the payload's schema → in `{...}`

> All the other information on this page can be ignored, as it's about standard CloudEvents wire format attributes, which are always the same and handled automatically by CAP behind the scenes.

## Add Missing Event Declarations

In contrast to CAP, the asynchronous APIs of SAP S/4HANA are separate from the synchronous APIs (that is, OData, REST). On the CAP side, you need to fill this gap.
For example, for an already imported SAP S/4HANA BusinessPartner API: ```cds // filling in missing events as found on SAP Business Accelerator Hub using { API_BUSINESS_PARTNER as S4 } from './API_BUSINESS_PARTNER'; extend service S4 with { event BusinessPartner.Created @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Created.v1') { BusinessPartner : String } event BusinessPartner.Changed @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1') { BusinessPartner : String } } ``` ::: tip If using SAP Event Mesh, please see [CloudEvents Standard](./index.md#cloudevents) and [Node - Messaging - CloudEvents Protocol](../../node.js/messaging.md#cloudevents-protocol) to learn about `format: 'cloudevents'`, `publishPrefix` and `subscribePrefix`. :::
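The `publishPrefix` and `subscribePrefix` settings mentioned in the tip are, in effect, simple string substitutions, with `$namespace` resolved from the service instance. A sketch of that resolution (function name and namespace value are made up; this is not the actual implementation):

```javascript
// Sketch: resolve a configured prefix against the instance namespace
// and prepend it to the declared event/topic name.
function toTechnicalTopic(event, prefix, namespace) {
  return prefix.replace('$namespace', namespace) + event
}

const ns = 'company/bookstore/dev'   // hypothetical Event Mesh namespace
const topic = toTechnicalTopic(
  'sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1',
  '$namespace/ce/', ns)

console.log(topic)
// → 'company/bookstore/dev/ce/sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1'
```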
## Consume Events Agnostically

With agnostic consumption, you can easily receive events from SAP S/4HANA the same way as from CAP services, as already explained in this guide. For example:

```js
const S4Bupa = await cds.connect.to ('API_BUSINESS_PARTNER')
S4Bupa.on ('BusinessPartner.Changed', msg => {...})
```

## Configure CAP

To ease the pain of the aforementioned topic rewriting effects, CAP has built-in support for [SAP Event Mesh](./event-mesh) as well as [SAP Cloud Application Event Hub](./event-broker). Configure the messaging service as follows to let it automatically create the correct technical topics to subscribe to SAP S/4HANA events:

For SAP Event Mesh:

```jsonc
"cds": {
  "requires": {
    "messaging": {
      "kind": "enterprise-messaging-shared",
      "format": "cloudevents",
      // implicitly applied default prefixes
      "publishPrefix": "$namespace/ce/",
      "subscribePrefix": "+/+/+/ce/"
    }
  }
}
```

**Note:** In contrast to the default configuration recommended in the [SAP Event Mesh documentation](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/5499e2e74e674c69b057072272c80d4f.html), ensure you configure your service instance to allow the pattern `+/+/+/ce/*` for subscriptions. That is, **do not** restrict `subscribeFilter`s to `${namespace}`!

For SAP Cloud Application Event Hub:

```json
"cds": {
  "requires": {
    "messaging": {
      "kind": "event-broker"
    }
  }
}
```

With that, your developers can enter event names as they're found on SAP Business Accelerator Hub.
And our CDS extensions, as previously described, simplify to this definition:

```cds
// filling in missing events as found on SAP Business Accelerator Hub
using { API_BUSINESS_PARTNER as S4 } from './API_BUSINESS_PARTNER';
extend service S4 with {
  event BusinessPartner.Created @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Created.v1') {
    BusinessPartner : String
  }
  event BusinessPartner.Changed @(topic:'sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1') {
    BusinessPartner : String
  }
}
```

## Configure SAP S/4HANA

As a prerequisite for consuming SAP S/4HANA events, the SAP S/4HANA system itself needs to be configured to send out specific event messages to a specific SAP Event Mesh or SAP Cloud Application Event Hub service instance. How to create the necessary service instances and use them with a CAP application was already described in the previous sections [SAP Event Mesh](./event-mesh) and [SAP Cloud Application Event Hub](./event-broker), respectively.

A description of how to configure an SAP S/4HANA system to send out specific events is out of scope for this documentation. See [this documentation](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/82e97d5329044732af1efd996bfdc2ab.html) for more details.

## Using Low-Level Messaging

Instead of adding events found on [SAP Business Accelerator Hub](https://api.sap.com/content-type/Events/events/packages) to a CDS service model, it's also possible to use a messaging service directly to consume events from SAP S/4HANA. You have to bind the `messaging` service directly to the SAP Event Mesh or SAP Cloud Application Event Hub service instance that the SAP S/4HANA system sends the event messages to.
Then you can consume the event by registering a handler on the `type` of the event that should be received (`sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1` in the example): ```js const messaging = await cds.connect.to ('messaging') messaging.on ('sap.s4.beh.businesspartner.v1.BusinessPartner.Changed.v1', (msg) => { const { BusinessPartner } = msg.data console.log('--> Event received: BusinessPartner changed (ID="'+BusinessPartner+'")') }) ``` All the complex processes, like determining the correct technical topic to subscribe to and adding this subscription to a queue, will be done automatically in the background. # Publishing APIs How to publish APIs in different formats
# Serving OData APIs ## Feature Overview { #overview} OData is an OASIS standard, which essentially enhances plain REST with standardized system query options like `$select`, `$expand`, `$filter`, etc. Find a rough overview of the feature coverage in the following table: | Query Options | Remarks | Node.js | Java | |----------------|-------------------------------------------|------------|---------| | `$search` | Search in multiple/all text elements(1)| | | | `$value` | Retrieves single rows/values | | | | `$top`,`$skip` | Requests paginated results | | | | `$filter` | Like SQL where clause | | | | `$select` | Like SQL select clause | | | | `$orderby` | Like SQL order by clause | | | | `$count` | Gets number of rows for paged results | | | | `$apply` | For [data aggregation](#data-aggregation) | | | | `$expand` | Deep-read associated entities | | | | [Lambda Operators](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#_Toc31361024) | Boolean expressions on a collection | | (2) | | [Parameters Aliases](https://docs.oasis-open.org/odata/odata/v4.01/os/part1-protocol/odata-v4.01-os-part1-protocol.html#sec_ParameterAliases) | Replace literal value in URL with parameter alias | | (3) | - (1) The elements to be searched are specified with the [`@cds.search` annotation](../guides/providing-services#searching-data). - (2) The navigation path identifying the collection can only contain one segment. - (3) Supported for key values and for parameters of functions only. 
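The query options above can be combined freely in a single request URL, including nested options inside `$expand`. A sketch of composing such a URL (the `Books` entity set and service path are made up for illustration):

```javascript
// Sketch: composing OData system query options into a request URL.
// URLSearchParams percent-encodes the option names and values.
const params = new URLSearchParams({
  $select: 'title,price',
  $filter: 'price lt 20',
  $expand: 'author($select=name)',   // nested system query option
  $orderby: 'price desc',
  $top: '3',
  $count: 'true'
})

const url = `/odata/v4/catalog/Books?${params}`
console.log(url)
```

This would read the three cheapest books under 20, with their total count and each book's author name expanded inline.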
System query options can also be applied to an [expanded navigation property](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#_Toc31361039) (nested within `$expand`): | Query Options | Remarks | Node.js | Java | |----------------|-------------------------------------------|----------|--------| | `$select` | Select properties of associated entities | | | | `$filter` | Filter associated entities | | | | `$expand` | Nested expand | | | | `$orderby` | Sort associated entities | | | | `$top`,`$skip` | Paginate associated entities | | | | `$count` | Count associated entities | | | | `$search` | Search associated entities | | | [Learn more in the **Getting Started guide on odata.org**.](https://www.odata.org/getting-started/){.learn-more} [Learn more in the tutorials **Take a Deep Dive into OData**.](https://developers.sap.com/mission.scp-3-odata.html){.learn-more} | Data Modification | Remarks | Node.js | Java | |-------------------|-------------------------------------------|------------|---------| | [Create an Entity](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_CreateanEntity) | `POST` request on Entity collection | | | | [Update an Entity](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_UpdateanEntity) | `PATCH` or `PUT` request on Entity | | | [ETags](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_UseofETagsforAvoidingUpdateConflicts) | For avoiding update conflicts | | | | [Delete an Entity](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_DeleteanEntity) | `DELETE` request on Entity | | | | [Delta Payloads](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_DeltaPayloads) | For nested entity collections in [deep updates](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_UpdateRelatedEntitiesWhenUpdatinganE) | | | | [Patch 
Collection](#odata-patch-collection) | Update Entity collection with [delta](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_DeltaPayloads) | | |

## PATCH Entity Collection with Mass Data (Java) { #odata-patch-collection }

With OData v4, you can [update a collection of entities](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_UpdateaCollectionofEntities) with a _single_ PATCH request. The resource path of the request targets the entity collection, and the body of the request is given as a [delta payload](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_DeltaPayloads):

```http
PATCH /CatalogService/Books
Content-Type: application/json

{
  "@context": "#$delta",
  "value": [
    { "ID": 17, "title": "CAP - what's new in 2023", "price": 29.99, "author_ID": 999 },
    { "ID": 85, "price": 9.99 },
    { "ID": 42, "@removed": { "reason": "deleted" } }
  ]
}
```

PATCH requests with delta payload are executed using batch delete and [upsert](../java/working-with-cql/query-api#bulk-upsert) statements, and are more efficient than OData [batch requests](https://docs.oasis-open.org/odata/odata/v4.01/csprd02/part1-protocol/odata-v4.01-csprd02-part1-protocol.html#sec_BatchRequests). Use PATCH on entity collections for uploading mass data with a dedicated service, secured using [role-based authorization](../java/security#role-based-auth). Delta updates must be explicitly enabled by annotating the entity with

```cds
@Capabilities.UpdateRestrictions.DeltaUpdateSupported
```

Limitations:

* Conflict detection via [ETags](../guides/providing-services#etag) is not supported.
* [Draft flow](../java/fiori-drafts#bypassing-draft-flow) is bypassed; `IsActiveEntity` has to be `true`.
* [Draft locks](../java/fiori-drafts#draft-lock) are ignored; active entities are updated or deleted without canceling drafts.
* [Added and deleted links](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_IteminaDeltaPayloadResponse) are not supported. * The header `Prefer=representation` is not yet supported. * The `continue-on-error` preference is not yet supported. * The generic CAP handler support for [upsert](../java/working-with-cql/query-api#upsert) is limited, for example, audit logging is not supported. ## Mapping of CDS Types { #type-mapping} The table below lists [CDS's built-in types](../cds/types) and their mapping to the OData EDM type system. | CDS Type | OData V4 | | -------------- | --------------------------------------- | | `UUID` | _Edm.Guid_ (1) | | `Boolean` | _Edm.Boolean_ | | `UInt8 ` | _Edm.Byte_ | | `Int16` | _Edm.Int16_ | | `Int32` | _Edm.Int32_ | | `Integer` | _Edm.Int32_ | | `Int64` | _Edm.Int64_ | | `Integer64` | _Edm.Int64_ | | `Decimal` | _Edm.Decimal_ | | `Double` | _Edm.Double_ | | `Date` | _Edm.Date_ | | `Time` | _Edm.TimeOfDay_ | | `DateTime` | _Edm.DateTimeOffset_ | | `Timestamp` | _Edm.DateTimeOffset_ with Precision="7" | | `String` | _Edm.String_ | | `Binary` | _Edm.Binary_ | | `LargeBinary` | _Edm.Binary_ | | `LargeString` | _Edm.String_ | | `Map` | represented as empty, open complex type | | `Vector` | not supported (2) | > (1) Mapping can be changed with, for example, `@odata.Type='Edm.String'` > (2) Type `cds.Vector` must not appear in an OData service OData V2 has the following differences: | CDS Type | OData V2 | | ------------ | ----------------------------------------------- | | `Date` | _Edm.DateTime_ with `sap:display-format="Date"` | | `Time` | _Edm.Time_ | | `Map` | not supported | ### Overriding Type Mapping { #override-type-mapping} Override standard type mappings using the annotation `@odata.Type` first, and then additionally define `@odata {MaxLength, Precision, Scale, SRID}`. 
`@odata.Type` is effective on scalar CDS types only, and the value must be a valid OData (EDM) primitive type for the specified protocol version. Unknown types and non-matching facets are silently ignored. No further value constraint checks are applied. This allows you, for example, to produce additional OData EDM types that aren't available in the standard type mapping. This is done during the import of external service APIs, see [Using Services](../guides/using-services#external-service-api).

```cds
entity Foo {
  // ...
  @odata: { Type: 'Edm.GeometryPolygon', SRID: 0 }
  geoCollection : LargeBinary;
};
```

Another prominent use case is the CDS type `UUID`, which maps to `Edm.Guid` by default. However, the OData standard imposes restrictive rules for _Edm.Guid_ values (for example, only hyphenated strings are allowed), which can conflict with existing data. Therefore, you can override the default mapping as follows:

```cds
entity Books {
  key ID : UUID @odata.Type:'Edm.String';
  // ...
}
```

::: warning
This annotation affects the client-facing API only. There's no automatic data modification of any kind behind the scenes, like rounding, truncation, or conversion. It's your responsibility to perform all required modifications on the data stream such that the values match their type in the API. Without such conversions, you effectively "cast" any scalar CDS type into an incompatible EDM type:

```cds
entity Foo {
  // ...
  @odata: { Type: 'Edm.Decimal', Scale: 'floating' }
  str: String(17) default '17.4';
}
```

This translates into the following OData API contract:

```xml
```

The client can now rightfully expect that float numbers are transmitted, but in reality the values are still strings.
:::

## OData Annotations { #annotations}

The following sections explain how to add OData annotations to CDS models and how they're mapped to EDMX outputs.
Only annotations defined in the vocabularies mentioned in section [Annotation Vocabularies](#vocabularies) are considered in the translation.

### Terms and Properties

OData defines a strict two-fold key structure composed of `@<Vocabulary>.<Term>`, and all annotations are always specified as a _Term_ with either a primitive value, a record value, or collection values. The properties themselves may, in turn, be primitives, records, or collections.

#### Example

```cds
@Common.Label: 'Customer'
@UI.HeaderInfo: {
  TypeName       : 'Customer',
  TypeNamePlural : 'Customers',
  Title          : { Value : name }
}
entity Customers { /* ... */ }
```

This is represented in CSN as follows:

```jsonc
{"definitions":{
  "Customers":{
    "kind": "entity",
    "@Common.Label": "Customer",
    "@UI.HeaderInfo.TypeName": "Customer",
    "@UI.HeaderInfo.TypeNamePlural": "Customers",
    "@UI.HeaderInfo.Title.Value": {"=": "name"},
    /* ... */
  }
}}
```

And would render to EDMX as follows:

```xml
```

::: tip
The value for `@UI.HeaderInfo` is flattened to individual key-value pairs in CSN and 'restructured' to a record for OData exposure in EDMX.
:::

For each annotated target definition in CSN, the rules for restructuring from CSN sources are:

1. Annotations with a single-identifier key are skipped (as OData annotations always have a `@Vocabulary.Term...` key signature).
2. All individual annotations with the same `@<Vocabulary>.<Term>` prefix are collected.
3. If there is only one annotation without a suffix → that one is a scalar or array value of an OData term.
4. If there are more annotations with suffix key parts → it's a record value for the OData term.

### Qualified Annotations

OData foresees [qualified annotations](https://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part3-csdl/odata-v4.0-errata03-os-part3-csdl-complete.html#_Toc453752511), which essentially allow specifying different values for a given property.
CDS syntax for annotations was extended to also allow appending OData-style qualifiers after a `#` sign to an annotation key, but only as the last component of a key. For example, this is supported:

```cds
@Common.Label: 'Customer'
@Common.Label#Legal: 'Client'
@Common.Label#Healthcare: 'Patient'
@Common.ValueList: { Label: 'Customers', CollectionPath:'Customers' }
@Common.ValueList#Legal: { Label: 'Clients', CollectionPath:'Clients' }
```

and would render as follows in CSN:

```json
{
  "@Common.Label": "Customer",
  "@Common.Label#Legal": "Client",
  "@Common.Label#Healthcare": "Patient",
  "@Common.ValueList.Label": "Customers",
  "@Common.ValueList.CollectionPath": "Customers",
  "@Common.ValueList#Legal.Label": "Clients",
  "@Common.ValueList#Legal.CollectionPath": "Clients"
}
```

Note that there's no interpretation and no special handling for these qualifiers in CDS. You have to write and apply them in exactly the same way as your chosen OData vocabularies specify them.

### Primitives

> Note: The `@Some` annotation isn't a valid term definition. The following example illustrates the rendering of primitive values.
Primitive annotation values, that is Strings, Numbers, `true`, and `false`, are mapped to corresponding OData annotations as follows:

```cds
@Some.Boolean: true
@Some.Integer: 1
@Some.Number: 3.14
@Some.String: 'foo'
```

```xml
<Annotation Term="Some.Boolean" Bool="true"/>
<Annotation Term="Some.Integer" Int="1"/>
<Annotation Term="Some.Number" Decimal="3.14"/>
<Annotation Term="Some.String" String="foo"/>
```

#### Null Value { #null-value }

A `null` value can be set either as an [annotation expression](#expression-annotations) or as a [dynamic expression](#dynamic-expressions):

```cds
@Some.NullXpr: (null)                   // annotation expression, short form
@Some.NullFunc: ($Null())               // annotation expression, functional form
@Some.NullDyn: { $edmJson: { $Null } }  // dynamic expression
```

All three expressions result in the following rendering:

```xml
<Annotation Term="Some.NullXpr"><Null/></Annotation>
<Annotation Term="Some.NullFunc"><Null/></Annotation>
<Annotation Term="Some.NullDyn"><Null/></Annotation>
```

[Have a look at our *CAP SFLIGHT* sample, showcasing the usage of OData annotations.](https://github.com/SAP-samples/cap-sflight/blob/main/app/travel_processor/capabilities.cds){.learn-more}

### Records

> Note: The `@Some` annotation isn't a valid term definition. The following example illustrates the rendering of record values.

Record-like source structures are mapped to `<Record>` nodes in EDMX, with primitive types translated analogously to the above:

```cds
@Some.Record: {
  Null: (null),
  Boolean: true,
  Integer: 1,
  Number: 3.14,
  String: 'foo'
}
```

```xml
<Annotation Term="Some.Record">
  <Record>
    <PropertyValue Property="Null"><Null/></PropertyValue>
    <PropertyValue Property="Boolean" Bool="true"/>
    <PropertyValue Property="Integer" Int="1"/>
    <PropertyValue Property="Number" Decimal="3.14"/>
    <PropertyValue Property="String" String="foo"/>
  </Record>
</Annotation>
```

If possible, the type of the record in OData is deduced from the information in the [OData Annotation Vocabularies](#vocabularies):

```cds
@Common.ValueList: {
  CollectionPath: 'Customers'
}
```

```xml
<Annotation Term="Common.ValueList">
  <Record Type="Common.ValueListType">
    <PropertyValue Property="CollectionPath" String="Customers"/>
  </Record>
</Annotation>
```

Frequently, the OData record type can't be determined unambiguously, for example if the type found in the vocabulary is abstract. Then you need to explicitly specify the type by adding a property named `$Type` in the record.
For example:

```cds
@UI.Facets : [{
  $Type : 'UI.CollectionFacet',
  ID    : 'Customers'
}]
```

```xml
<Annotation Term="UI.Facets">
  <Collection>
    <Record Type="UI.CollectionFacet">
      <PropertyValue Property="ID" String="Customers"/>
    </Record>
  </Collection>
</Annotation>
```

There is one exception for a very prominent case: if the deduced [record type is `UI.DataFieldAbstract`](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/UI.md), the compiler by default automatically chooses `UI.DataField`:

```cds
@UI.Identification: [{ Value: deliveryId }]
```

```xml
<Annotation Term="UI.Identification">
  <Collection>
    <Record Type="UI.DataField">
      <PropertyValue Property="Value" Path="deliveryId"/>
    </Record>
  </Collection>
</Annotation>
```

To overwrite the default, use an explicit `$Type` as shown previously.

[Have a look at our *CAP SFLIGHT* sample, showcasing the usage of OData annotations.](https://github.com/SAP-samples/cap-sflight/blob/a7b166b7b9b3d2adb1640b4b68c3f8a26c6961c1/app/travel_processor/value-helps.cds){.learn-more}

### Collections

> Note: The `@Some` annotation isn't a valid term definition. The following example illustrates the rendering of collection values.

Arrays are mapped to `<Collection>` nodes in EDMX. If primitives show up as direct elements of the array, these elements are wrapped into individual primitive child nodes of the resulting collection. The rules for records and collections are applied recursively:

```cds
@Some.Collection: [
  null, true, 1, 3.14, 'foo',
  { $Type:'UI.DataField', Label:'Whatever', Hidden }
]
```

```xml
<Annotation Term="Some.Collection">
  <Collection>
    <Null/>
    <Bool>true</Bool>
    <Int>1</Int>
    <Float>3.14</Float>
    <String>foo</String>
    <Record Type="UI.DataField">
      <PropertyValue Property="Label" String="Whatever"/>
      <PropertyValue Property="Hidden" Bool="true"/>
    </Record>
  </Collection>
</Annotation>
```

### References { #references }

> Note: The `@Some` annotation isn't a valid term definition. The following example illustrates the rendering of reference values.

References in CDS annotations are mapped to `Path` properties or nested `<Path>` elements, respectively:

```cds
@Some.Term: My.Reference
@Some.Record: { Value: My.Reference }
@Some.Collection: [ My.Reference ]
```

```xml
<Annotation Term="Some.Term" Path="My/Reference"/>
<Annotation Term="Some.Record">
  <Record>
    <PropertyValue Property="Value" Path="My/Reference"/>
  </Record>
</Annotation>
<Annotation Term="Some.Collection">
  <Collection>
    <Path>My/Reference</Path>
  </Collection>
</Annotation>
```

As the compiler isn't aware of the semantics of such references, the mapping is very simplistic: each `.` in a path is replaced by a `/`. Use [expression-valued annotations](#expression-annotations) for more convenience.
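To make the simplistic nature of this mapping concrete, here's a small Python sketch (illustrative only, not actual compiler code) of the documented rule: each `.` becomes a `/`, with no semantic interpretation of the path steps.

```python
def map_reference(cds_path: str) -> str:
    """Naive CDS-reference-to-OData-Path mapping, as described above:
    every '.' is replaced by '/', without checking whether a path step
    is an association or a structured element."""
    return cds_path.replace(".", "/")

print(map_reference("My.Reference"))  # My/Reference
# A path into a structured element gets the same naive treatment, which
# is why expression-valued annotations (handled semantically) are preferable:
print(map_reference("f.struc.y"))     # f/struc/y
```

This is exactly why the mapping can produce paths that don't match the flattened OData model.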
Use a [dynamic expression](#dynamic-expressions) if the generic mapping can't produce the desired `<Path>`:

```cds
@Some.Term: {$edmJson: {$Path: '/com.sap.foo.EntityContainer/EntityName/FieldName'}}
```

```xml
<Annotation Term="Some.Term">
  <Path>/com.sap.foo.EntityContainer/EntityName/FieldName</Path>
</Annotation>
```

### Enumeration Values

Enumeration symbols are mapped to corresponding `EnumMember` properties in OData. Here are a couple of examples of enumeration values and the annotations that are generated. The first example is for a term in the [Common vocabulary](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/Common.md):

```cds
@Common.TextFormat: #html
```

```xml
<Annotation Term="Common.TextFormat" EnumMember="Common.TextFormatType/html"/>
```

The second example is for a (record type) term in the [Communication vocabulary](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/Communication.md):

```cds
@Communication.Contact: {
  gender: #F
}
```

```xml
<Annotation Term="Communication.Contact">
  <Record Type="Communication.ContactType">
    <PropertyValue Property="gender" EnumMember="Communication.GenderType/F"/>
  </Record>
</Annotation>
```

### Expressions { #expression-annotations }

If the value of an OData annotation is an [expression](../cds/cdl#expressions-as-annotation-values), the OData backend provides improved handling of references and automatic mapping from CDS expression syntax to OData expression syntax.

#### Flattening

In contrast to [simple references](#references), the references in expression-like annotation values are correctly handled during model transformations, like other references in the model. When the CDS model is flattened for OData, the flattening is consequently also applied to these references, and they are translated to the flat model.

::: tip
Although CAP supports structured types and elements, we recommend using them only if they bring a real benefit. In general, you should keep your models as flat as possible.
:::

Example:

```cds
type Price {
  @Measures.ISOCurrency: (currency)
  amount   : Decimal;
  currency : String(3);
}

service S {
  entity Product {
    key id : Integer;
    name   : String;
    price  : Price;
  }
}
```

Structured element `price` of `S.Product` is unfolded to flat elements `price_amount` and `price_currency`.
Accordingly, the reference in the annotation is rewritten from `currency` to `price_currency`:

```xml
<Annotations Target="S.Product/price_amount">
  <Annotation Term="Measures.ISOCurrency" Path="price_currency"/>
</Annotations>
```

Example:

```cds
service S {
  entity E {
    key id : Integer;
    f      : Association to F;
    @Some.Term: (f.struc.y)
    val    : Integer;
  }
  entity F {
    key id : Integer;
    struc : {
      y : Integer;
    }
  }
}
```

The OData backend is aware of the semantics of a path and distinguishes association path steps from structure access. The CDS path `f.struc.y` is translated to the OData path `f/struc_y`:

```xml
<Annotation Term="Some.Term" Path="f/struc_y"/>
```

::: warning Restrictions concerning the foreign key elements of managed associations
1. Usually an annotation assigned to a managed association is copied to the foreign key elements of the association. This is a workaround for the missing capability to annotate a foreign key element directly. This copy mechanism is _not_ applied for annotations with expression values. So it's currently not possible to use expression-valued annotations for annotating the foreign keys of a managed association.
2. In an expression-valued annotation, it's not possible to reference the foreign key element of a managed association.
:::

#### Expression Translation

If the expression provided as annotation value is more complex than just a reference, the OData backend translates CDS expressions to the corresponding OData expression syntax and rejects those expressions that aren't applicable in an OData API.

::: info
While the flattening of references described in the section above is applied to all annotations, the syntactic translation of expressions is only done for annotations defined in one of the [OData vocabularies](#vocabularies).
:::

The following operators and clauses of CDL are supported:

* `case when ... then ... else ...` and the logical ternary operator `... ? ... : ...`
* Logical: `and`, `or`, `not`
* Relational: `=`, `<>`, `!=`, `<`, `<=`, `>`, `>=`, `in`, `between ...
and ...`
* Unary `+` and `-`
* Arithmetic: `+`, `-`, `*`, `/`
* Concat: `||`
* `cast(...)`

Example:

```cds
@Some.Xpr: ( -(a + b) )
```

```xml
<Annotation Term="Some.Xpr">
  <Neg>
    <Add>
      <Path>a</Path>
      <Path>b</Path>
    </Add>
  </Neg>
</Annotation>
```

Such expressions can, for example, be used for [some Fiori UI annotations](https://ui5.sap.com/#/topic/0e7b890677c240b8ba65f8e8d417c048):

```cds
service S {
  @UI.LineItem: [
    // ...
    {
      Value: (status),
      Criticality: ( status = 'O' ? 2 : ( status = 'A' ? 3 : 0 ) )
    }]
  entity Order {
    key id : Integer;
    // ...
    status : String;
  }
}
```

If you need to access an element of an entity in an annotation for a bound action or function, use a path that navigates via an explicitly defined [binding parameter](../cds/cdl#bound-actions). Example:

```cds
service S {
  entity Order {
    key id : Integer;
    // ...
    status : String;
  } actions {
    @Core.OperationAvailable: ( :in.status <> 'A' )
    action accept (in: $self)
  }
}
```

In addition, the following functions are supported:

* `$Null()` representing the [`null` value](#null-value)
* `Div(...)` (or `$Div(...)`) and `Mod(...)` (or `$Mod(...)`) for integer division and modulo
* [`Has(...)`](https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html#Has) (or `$Has(...)`)
* the functions listed in sections [5.1.1.5](https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html#StringandCollectionFunctions) through [5.1.1.11](https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html#GeoFunctions) of the [OData URL conventions](https://docs.oasis-open.org/odata/odata/v4.02/odata-v4.02-part2-url-conventions.html)
  + See the examples below for the syntax of `cast` and `isof` (section [5.1.1.10](https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html#TypeFunctions))
  + The names of the geo functions (section
[5.1.1.11](https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html#GeoFunctions)) need to be escaped like
`![geo.distance]`
* [`fillUriTemplate(...)`](https://docs.oasis-open.org/odata/odata-csdl-xml/v4.01/odata-csdl-xml-v4.01.html#sec_FunctionodatafillUriTemplate) and [`uriEncode(...)`](https://docs.oasis-open.org/odata/odata-csdl-xml/v4.01/odata-csdl-xml-v4.01.html#sec_FunctionodatauriEncode)
* `Type(...)` (or `$Type(...)`) specifies a type name together with its corresponding type facets, such as `MaxLength(...)`, `Precision(...)`, `Scale(...)`, and `SRID(...)` (or `$MaxLength(...)`, `$Precision(...)`, `$Scale(...)`, `$SRID(...)`)

Example:

```cds
@Some.Func1: ( concat(a, b, c) )
@Some.Func2: ( round(aNumber) )
@Some.Func3: ( $Cast(aValue, $Type('Edm.Decimal', $Precision(38), $Scale(19)) ) )
@Some.Func4: ( $IsOf(aValue, $Type('Edm.Decimal', $Precision(38), $Scale(19)) ) )
@Some.Func5: ( ![geo.distance](a, b) )
@Some.Func6: ( fillUriTemplate(a, b) )
```

If a functional expression starts with a `$`, all inner functions must also be `$` functions, and vice versa. Instead of `[$]Type(...)`, an EDM primitive type name can be used directly as function name, as in CDL.

It's worth mentioning that there are two alternatives for the cast function, one in the EDM and one in the CDS domain:

```cds
@Some.ODataStyleCast: ( Cast(aValue, Decimal(38, 'variable') ) )  // => Edm.Decimal
@Some.ODataStyleCast2: ( Cast(aValue, PrimitiveType()) )          // => Edm.PrimitiveType
@Some.SQLStyleCast: ( cast(aValue as Decimal(38, variable)) )     // => cds.Decimal
@Some.SQLStyleCast2: ( cast(aValue as String) )                   // => cds.String without type facets
```

Both `cast` functions look similar, but there are some differences: the OData-style `Cast` _function_ starts with a capital letter, and the SQL `cast` _operator_ uses the keyword `as` to delimit the element reference from the type specifier.
The OData `Cast` requires an EDM primitive type, used either via `[$]Type()` or as a direct type function, whereas the SQL `cast` requires a scalar CDS type as argument, which is then converted into the corresponding EDM primitive type.

::: info
CAP only provides a syntactic translation. It's up to each client whether an expression value is supported for a particular annotation. See for example [Fiori's list of supported annotations](https://ui5.sap.com/#/topic/0e7b890677c240b8ba65f8e8d417c048).
:::

Use a [dynamic expression](#dynamic-expressions) if the desired EDMX expression can't be obtained via the automatic translation of a CDS expression.

### Annotating Annotations { #annotating-annotations}

In OData, annotations can themselves be annotated. This often occurs in combination with enums like `UI.Importance` and `UI.TextArrangement`. CDS has no corresponding language feature. For OData annotations, nesting can be achieved in the following way:

* To annotate a Record, add an additional element to the CDS source structure. The name of this element is the full name of the annotation, including the `@`. See `@UI.Importance` in the following example.
* To annotate a single value or a Collection, add a parallel annotation that has the nested annotation name appended to the outer annotation name. See `@UI.Criticality` and `@UI.TextArrangement` in the following example.

```cds
@UI.LineItem: [
  {Value: ApplicationName, @UI.Importance: #High},
  {Value: Description},
  {Value: SourceName},
  {Value: ChangedBy},
  {Value: ChangedAt}
]
@UI.LineItem.@UI.Criticality: #Positive

@Common.Text: Text
@Common.Text.@UI.TextArrangement: #TextOnly
```

Alternatively, annotating a single value or a Collection by turning them into a structure with an artificial property `$value` is still possible, but deprecated:

```cds
@UI.LineItem: { $value:[ /* ...
*/ ], @UI.Criticality: #Positive }
@Common.Text: { $value: Text, @UI.TextArrangement: #TextOnly }
```

As `TextArrangement` is common, there's a shortcut for this specific situation:

```cds
...
@Common: { Text: Text, TextArrangement: #TextOnly }
```

In any case, the resulting EDMX is:

```xml
...
<Annotation Term="Common.Text" Path="Text">
  <Annotation Term="UI.TextArrangement" EnumMember="UI.TextArrangementType/TextOnly"/>
</Annotation>
```

### Dynamic Expressions { #dynamic-expressions}

OData supports dynamic expressions in annotations. For OData annotations, you can use the "edm-json inline mechanism" by providing a [dynamic expression](https://docs.oasis-open.org/odata/odata-csdl-json/v4.01/odata-csdl-json-v4.01.html#_Toc38466479) as defined in the [JSON representation of the OData Common Schema Language](https://docs.oasis-open.org/odata/odata-csdl-json/v4.01/odata-csdl-json-v4.01.html), enclosed in `{ $edmJson: { ... }}`.

Note that here the CDS syntax for string literals with single quotes (`'foo'`) applies, and that paths aren't automatically recognized but need to be written as `{$Path: 'fieldName'}`. The CDS compiler translates the expression into the corresponding [XML representation](https://docs.oasis-open.org/odata/odata-csdl-xml/v4.01/odata-csdl-xml-v4.01.html#_Toc38530421).

For example, the CDS annotation:

```cds
@UI.Hidden: {$edmJson: {$Ne: [{$Path: 'status'}, 'visible']}}
```

is translated to:

```xml
<Annotation Term="UI.Hidden">
  <Ne>
    <Path>status</Path>
    <String>visible</String>
  </Ne>
</Annotation>
```

One of the main use cases for such dynamic expressions is SAP Fiori, but note that SAP Fiori supports dynamic expressions only for [specific annotations](https://ui5.sap.com/#/topic/0e7b890677c240b8ba65f8e8d417c048).

::: tip Use expression-like annotation values
Instead of writing annotations directly with EDM JSON syntax, try using [expression-like annotation values](#expression-annotations), which are automatically translated. For the example above you would simply write `@UI.Hidden: (status <> 'visible')`.
:::

### `sap:` Annotations

In general, back ends and SAP Fiori UIs understand or even expect OData V4 annotations.
You should use those rather than the OData V2 SAP extensions.
If necessary, CDS automatically translates OData V4 annotations to OData V2 SAP extensions when invoked with `v2` as the OData version. This means that you shouldn't have to deal with this at all. Nevertheless, in case you need to do so, you can add `sap:...` attribute-style annotations as follows:

```cds
@sap.applicable.path: 'to_eventStatus/EditEnabled'
action EditEvent(...) returns SomeType;
```

This renders to OData EDMX as follows:

```xml
<FunctionImport Name="EditEvent" sap:applicable-path="to_eventStatus/EditEnabled" .../>
```

The rules are:

* Only strings are supported as values.
* The first dot in `@sap.` is replaced by a colon `:`.
* Subsequent dots are replaced by dashes.

### Differences to ABAP

In contrast to ABAP CDS, we apply a **generic, isomorphic approach** where names and positions of annotations are exactly as specified in the [OData Vocabularies](#vocabularies). This has the following advantages:

* Single source of truth — users only need to consult the official OData specs
* Speed — we don't need complex case-by-case mapping logic
* No bottlenecks — we always support the full set of OData annotations
* Bidirectional mapping — we can translate CDS to EDMX and vice versa

Last but not least, it also saves us lots of effort, as we don't have to write derivatives of all the OData vocabulary specs.

## Annotation Vocabularies { #vocabularies}

When translating a CDS model to an OData API, by default only those annotations are considered that are part of the standard OASIS or SAP vocabularies listed below.
You can add further vocabularies to the translation process [using configuration.](#additional-vocabularies) ### [OASIS Vocabularies](https://github.com/oasis-tcs/odata-vocabularies#further-description-of-this-repository) { target="_blank"} | Vocabulary | Description | | ------------------------------------------------------------------ | -------------------------------------------- | | [@Aggregation](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Aggregation.V1.md){target="_blank"} | for describing aggregatable data | | [@Authorization](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Authorization.V1.md){target="_blank"} | for authorization requirements | | [@Capabilities](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Capabilities.V1.md){target="_blank"} | for restricting capabilities of a service | | [@Core](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Core.V1.md){target="_blank"} | for general purpose annotations | | [@JSON](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.JSON.V1.md){target="_blank"} | for JSON properties | | [@Measures](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Measures.V1.md){target="_blank"} | for monetary amounts and measured quantities | | [@Repeatability](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Repeatability.V1.md){target="_blank"} | for repeatable requests | | [@Temporal](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Temporal.V1.md){target="_blank"} | for temporal annotations | | [@Validation](https://github.com/oasis-tcs/odata-vocabularies/tree/main/vocabularies/Org.OData.Validation.V1.md){target="_blank"} | for adding validation rules | ### [SAP Vocabularies](https://github.com/SAP/odata-vocabularies#readme){target="_blank"} | Vocabulary | Description | | 
------------------------------------------------------------- | ------------------------------------------------- |
| [@Analytics](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/Analytics.md){target="_blank"} | for annotating analytical resources |
| [@CodeList](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/CodeList.md){target="_blank"} | for code lists |
| [@Common](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/Common.md){target="_blank"} | common for all SAP vocabularies |
| [@Communication](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/Communication.md){target="_blank"} | for annotating communication-relevant information |
| [@DataIntegration](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/DataIntegration.md){target="_blank"} | for data integration |
| [@PDF](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/PDF.md){target="_blank"} | for PDF |
| [@PersonalData](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/PersonalData.md){target="_blank"} | for annotating personal data |
| [@Session](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/Session.md){target="_blank"} | for sticky sessions for data modification |
| [@UI](https://github.com/SAP/odata-vocabularies/tree/main/vocabularies/UI.md){target="_blank"} | for presenting data in user interfaces |

[Learn more about annotations in CDS and OData and how they work together](https://github.com/SAP-samples/odata-basics-handsonsapdev/blob/annotations/bookshop/README.md){.learn-more}

### Additional Vocabularies

Assuming you have a vocabulary `com.MyCompany.vocabularies.MyVocabulary.v1`, you can set the following configuration option:

::: code-group

```json [package.json]
{
  "cds": {
    "cdsc": {
      "odataVocabularies": {
        "MyVocabulary": {
          "Alias": "MyVocabulary",
          "Namespace": "com.MyCompany.vocabularies.MyVocabulary.v1",
          "Uri": ""
        }
      }
    }
  }
}
```

```json [.cdsrc.json]
{
  "cdsc": {
    "odataVocabularies": {
      "MyVocabulary": {
        "Alias": "MyVocabulary",
        "Namespace": "com.MyCompany.vocabularies.MyVocabulary.v1",
        "Uri": ""
      }
    }
  }
}
```

:::

With this configuration, all annotations prefixed with `MyVocabulary` are considered in the translation.

```cds
service S {
  @MyVocabulary.MyAnno: 'My new Annotation'
  entity E { /*...*/ };
};
```

The annotation is added to the OData API, as well as the mandatory reference to the vocabulary definition:

```xml
<edmx:Reference Uri="">
  <edmx:Include Alias="MyVocabulary" Namespace="com.MyCompany.vocabularies.MyVocabulary.v1"/>
</edmx:Reference>
...
<Annotation Term="MyVocabulary.MyAnno" String="My new Annotation"/>
```

The compiler evaluates neither annotation values nor the URI. It's your responsibility to make the URI accessible if required. Unlike for the standard vocabularies listed above, the compiler has no access to the content of the vocabulary, so the values are translated completely generically.

## Data Aggregation

Data aggregation in OData V4 is centered around the `$apply` system query option, which defines a pipeline of transformations that's applied to the _input set_ specified by the URI. On the _result set_ of the pipeline, the standard system query options come into effect.
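To build intuition for how such a transformation pipeline behaves, the following Python sketch mimics a `filter` / `groupby` + `aggregate` / `top` sequence on plain dictionaries. The data and field names are made up for illustration; this is not how the CAP runtime evaluates `$apply`.

```python
from collections import defaultdict

# Hypothetical input set
books = [
    {"author": "A. Author", "year": 2000, "price": 10.0},
    {"author": "A. Author", "year": 2000, "price": 20.0},
    {"author": "B. Writer", "year": 1999, "price": 30.0},
]

# filter(year eq 2000) - keep only matching rows
filtered = [b for b in books if b["year"] == 2000]

# groupby((author), aggregate(price with average as avg))
groups = defaultdict(list)
for b in filtered:
    groups[b["author"]].append(b["price"])
aggregated = [{"author": a, "avg": sum(p) / len(p)} for a, p in groups.items()]

# orderby/top act like the standard query options on the pipeline's result
result = sorted(aggregated, key=lambda g: g["author"])[:2]
print(result)  # [{'author': 'A. Author', 'avg': 15.0}]
```

Each step consumes the previous step's output, which is exactly the pipeline semantics of `$apply`.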
### Example

```http
GET /Orders(10)/books?
  $apply=filter(year eq 2000)/
         groupby((author/name),aggregate(price with average as avg))/
         orderby(title)/
         top(3)
```

This request operates on the books of the order with ID 10. First, it filters the books, keeping only those from the year 2000, in an intermediate result set. The intermediate result set is then grouped by author name and the price is averaged. Finally, the result set is sorted by title and only the top 3 entries are retained.

::: warning
If the `groupby` transformation only includes a subset of the entity keys, the result order might be unstable.
:::

### Transformations

| Transformation | Description | Node.js | Java |
|-------------------------------|----------------------------------------------|---------|--------|
| `filter` | filter by filter expression | | |
| `search` | filter by search term or expression | | |
| `groupby` | group by dimensions and aggregates values | | |
| `aggregate` | aggregate values | | |
| `compute` | add computed properties to the result set | | |
| `expand` | expand navigation properties | | |
| `concat` | append additional aggregation to the result | (1) | |
| `skip` / `top` | paginate | (1) | |
| `orderby` | sort the input set | (1) | |
| `topcount`/`bottomcount` | retain highest/lowest _n_ values | | |
| `toppercent`/`bottompercent` | retain highest/lowest _p_% values | | |
| `topsum`/`bottomsum` | retain _n_ values limited by sum | | |

- (1) Supported with experimental feature `cds.features.odata_new_parser = true`

#### `concat`

The [`concat` transformation](https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs02/odata-data-aggregation-ext-v4.0-cs02.html#_Toc435016581) applies additional transformation sequences to the input set and concatenates the result:

```http
GET /Books?$apply=
    filter(author/name eq 'Bram Stoker')/
    concat(
      aggregate($count as totalCount),
      groupby((year), aggregate($count as countPerYear)))
```

This request filters all books, keeping only books by Bram
Stoker. From these books, `concat` calculates (1) the total count of books *and* (2) the count of books per year. The result is heterogeneous.

The `concat` transformation must be the last transformation in the `$apply` pipeline. If `concat` is used, `$apply` can't be used in combination with other system query options.

#### `skip`, `top`, and `orderby`

Beyond the standard transformations specified by OData, CAP Java supports the transformations `skip`, `top`, and `orderby` that allow you to sort and paginate an input set:

```http
GET /Order(10)/books?
  $apply=orderby(price desc)/
         top(500)/
         groupby((author/name),aggregate(price with max as maxPrice))
```

This query groups the 500 most expensive books by author name and determines the price of the most expensive book per author.

### Hierarchical Transformations

Hierarchical transformations provide support for hierarchy attribute calculation and navigation, and allow the execution of typical hierarchy operations directly on relational data.

| Transformation | Description | Node.js | Java |
|-----------------------------------------------|--------------------------------------------------------------------|---------|--------------------|
| `com.sap.vocabularies.Hierarchy.v1.TopLevels` | generate a hierarchy based on recursive parent-child source data | | (1) |
| `ancestors` | return all ancestors of a set of start nodes in a hierarchy | | (1) |
| `descendants` | return all descendants of a set of start nodes in a hierarchy | | (1) |

- (1) Beta feature, API may change

::: warning
Generic implementation is supported on SAP HANA only
:::

::: info
The source elements of the entity defining the recursive parent-child relation are identified by a naming convention or aliases `node_id` and `parent_id`.
For more information, refer to the [SAP HANA Hierarchy Developer Guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/4f9859d273254e04af6ab3e9ea3af286/f29c70e984254a6f8df76ad84e78f123.html?locale=en-US&version=2.0.05).
:::

#### `com.sap.vocabularies.Hierarchy.v1.TopLevels`

The [`TopLevels` transformation](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/Hierarchy.xml) produces the hierarchical result based on a recursive parent-child relationship:

```http
GET /SalesOrganizations?$apply=
    com.sap.vocabularies.Hierarchy.v1.TopLevels(..., NodeProperty='ID', Levels=2)
```

#### `ancestors` and `descendants`

The [`ancestors` and `descendants` transformations](https://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs03/odata-data-aggregation-ext-v4.0-cs03.html#Transformationsancestorsanddescendants) compute the subset of a given recursive hierarchy that contains all nodes that are ancestors or descendants of a set of start nodes. The output is the corresponding ancestor or descendant set.

```http
GET SalesOrganizations?$apply=
    descendants(..., ID, filter(ID eq 'US'), keep start)
    /ancestors(..., ID, filter(contains(Name, 'New York')), keep start)
```

### Aggregation Methods

| Aggregation Method | Description | Node.js | Java |
|-------------------------------|--------------------|---------|--------|
| `min` | smallest value | | |
| `max` | largest value | | |
| `sum` | sum of values | | |
| `average` | average of values | | |
| `countdistinct` | count of distinct values | | |
| custom method | custom aggregation method | | |
| `$count` | number of instances in input set | | |

### Custom Aggregates

Instead of explicitly using an expression with an aggregation method in the `aggregate` transformation, the client can use a _custom aggregate_. A custom aggregate can be considered as a virtual property that aggregates the input set. It's calculated on the server side. The client doesn't know _how_ the custom aggregate is calculated.
They can only be used for the special case when a default aggregation method can be specified declaratively on the server side for a measure. A custom aggregate is declared in the CDS model as follows: * The measure must be annotated with an `@Aggregation.default` annotation that specifies the aggregation method. * The CDS entity should be annotated with an `@Aggregation.CustomAggregate` annotation to expose the custom aggregate to the client. ```cds @Aggregation.CustomAggregate#stock : 'Edm.Decimal' entity Books as projection on bookshop.Books { ID, title, @Aggregation.default: #SUM stock }; ``` With this definition, it's now possible to use the custom aggregate `stock` in an `aggregate` transformation: ```http GET /Books?$apply=aggregate(stock) HTTP/1.1 ``` which is equivalent to: ```http GET /Books?$apply=aggregate(stock with sum as stock) HTTP/1.1 ``` #### Currencies and Units of Measure If a property represents a monetary amount, it may have a related property that indicates the amount's *currency code*. Analogously, a property representing a measured quantity can be related to a *unit of measure*. To indicate that a property is a currency code or a unit of measure it can be annotated with the [Semantics Annotations](https://help.sap.com/docs/SAP_NETWEAVER_750/cc0c305d2fab47bd808adcad3ca7ee9d/fbcd3a59a94148f6adad80b9c97304ff.html) `@Semantics.currencyCode` or `@Semantics.unitOfMeasure`. 
```cds
@Aggregation.CustomAggregate#amount   : 'Edm.Decimal'
@Aggregation.CustomAggregate#currency : 'Edm.String'
entity Sales {
  key id    : UUID;
  productId : UUID;
  @Semantics.amount.currencyCode: 'currency'
  amount    : Decimal(10,2);
  @Semantics.currencyCode
  currency  : String(3);
}
```

The CAP Java SDK exposes all properties annotated with `@Semantics.currencyCode` or `@Semantics.unitOfMeasure` as a [custom aggregate](../advanced/odata#custom-aggregates) with the property's name that returns:

* the property's value if it's unique within a group of dimensions
* `null` otherwise

A custom aggregate for a currency code or unit of measure should also be exposed by the `@Aggregation.CustomAggregate` annotation. Moreover, a property for a monetary amount or a measured quantity should be annotated with `@Semantics.amount.currencyCode` or `@Semantics.quantity.unitOfMeasure` to reference the corresponding property that holds the amount's currency code or the quantity's unit of measure, respectively.

### Other Features

| Feature | Node.js | Java |
|-----------------------------------------|---------|--------|
| use path expressions in transformations | | |
| chain transformations | | |
| chain transformations within group by | | |
| `groupby` with `rollup`/`$all` | | |
| `$expand` result set of `$apply` | | |
| `$filter`/`$search` result set | | |
| sort result set with `$orderby` | | |
| paginate result set with `$top`/`$skip` | | |

## Open Types

An entity type or a complex type may be declared as _open_, allowing clients to add properties dynamically to instances of the type by specifying uniquely named property values in the payload used to insert or update an instance of the type.
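Conceptually, an instance of an open type consists of its declared properties plus arbitrary client-supplied ones. The following Python sketch (hypothetical property names, not CAP runtime code) shows the split a server has to make, since dynamic properties must be handled by custom code:

```python
declared = {"id"}  # properties declared in the model

# A client payload for an open type may carry extra, uniquely named properties
payload = {"id": 1, "title": "Tom Sawyer"}

static_part = {k: v for k, v in payload.items() if k in declared}
dynamic_part = {k: v for k, v in payload.items() if k not in declared}

print(static_part)   # {'id': 1}
print(dynamic_part)  # {'title': 'Tom Sawyer'}
```

The static part maps onto the persisted entity as usual; the dynamic part is what your custom handlers have to take care of.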
To indicate that the entity or complex type is open, the corresponding type must be annotated with `@open`:

```cds
service CatalogService {
  @open // [!code focus]
  entity Book { // [!code focus]
    key id : Integer; // [!code focus]
  } // [!code focus]
}
```

The cds build for OData v4 renders the entity type `Book` in `edmx` with the [`OpenType` attribute](https://docs.oasis-open.org/odata/odata-csdl-xml/v4.01/odata-csdl-xml-v4.01.html#sec_OpenEntityType) set to `true`:

```xml
<EntityType Name="Book" OpenType="true"> // [!code focus]
  ...
</EntityType>
```

The entity `Book` is open, allowing the client to enrich the entity with additional properties.

Example 1:

```json
{"id": 1, "title": "Tom Sawyer"}
```

Example 2:

```json
{
  "title": "Tom Sawyer",
  "author": { "name": "Mark Twain", "age": 74 }
}
```

Open types can also be referenced in non-open types and entities. This, however, doesn't make the referencing entity or type open.

```cds
service CatalogService {
  type Order {
    guid: Integer;
    book: Book;
  }

  @open // [!code focus]
  type Book {} // [!code focus]
}
```

The following payload for `Order` is allowed:

`{"guid": 1, "book": {"id": 2, "title": "Tom Sawyer"}}`

Note that type `Order` itself isn't open and thus doesn't allow dynamic properties, in contrast to type `Book`.

::: warning
Dynamic properties aren't persisted in the underlying data source automatically and must be handled completely by custom code.
:::

### Java Type Mapping

#### Simple Types

The simple values of a deserialized JSON payload can be of type `String`, `Boolean`, `Number`, or simply `Object` for `null` values.
|JSON                     | Java Type of the `value`       |
|-------------------------|--------------------------------|
|`{"value": "Tom Sawyer"}`| `java.lang.String`             |
|`{"value": true}`        | `java.lang.Boolean`            |
|`{"value": 42}`          | `java.lang.Number` (Integer)   |
|`{"value": 36.6}`        | `java.lang.Number` (BigDecimal)|
|`{"value": null}`        | `java.lang.Object`             |

#### Structured Types

The complex and structured types are deserialized to `java.util.Map`, whereas collections are deserialized to `java.util.List`.

|JSON | Java Type of the `value` |
|-------------------------------------------------------------------|--------------------------------------|
|`{"value": {"name": "Mark Twain"}}`                                | `java.util.Map<String, Object>`      |
|`{"value": [{"name": "Mark Twain"}, {"name": "Charlotte Bronte"}]}`| `java.util.List<java.util.Map<String, Object>>`|

## Singletons

A singleton is a special one-element entity introduced in OData V4. It can be addressed directly by its name from the service root without specifying the entity's keys.

Annotate an entity with `@odata.singleton` or `@odata.singleton.nullable` to use it as a singleton within a service, for example:

```cds
service Sue {
  @odata.singleton entity MySingleton {
    key id : String; // can be omitted in OData v4.01
    prop : String;
    assoc : Association to myEntity;
  }
}
```

It can also be defined as an ordered `SELECT` from another entity:

```cds
service Sue {
  @odata.singleton entity OldestEmployee as
    select from Employees order by birthyear;
}
```

### Requesting Singletons

As mentioned above, singletons are accessed without specifying keys in the request URL. They can contain navigation properties, and other entities can include singletons as their navigation properties as well. The `$expand` query option is also supported.
```http
GET …/MySingleton
GET …/MySingleton/prop
GET …/MySingleton/assoc
GET …/MySingleton?$expand=assoc
```

### Updating Singletons

The following request updates the `prop` property of the singleton `MySingleton`:

```http
PATCH/PUT …/MySingleton
{"prop": "New value"}
```

### Deleting Singletons

A `DELETE` request to a singleton is possible only if the singleton is annotated with `@odata.singleton.nullable`. An attempt to delete a singleton annotated with `@odata.singleton` will result in an error.

### Creating Singletons

Since singletons represent a one-element entity, a `POST` request is not supported.
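For reference, a singleton like `MySingleton` above shows up in the service's `$metadata` as a `Singleton` element inside the `EntityContainer`. The following is a sketch based on the OData V4 CSDL specification; the exact output of your build may differ:

```xml
<EntityContainer Name="EntityContainer">
  <!-- addressable by name from the service root, without keys -->
  <Singleton Name="MySingleton" Type="Sue.MySingleton">
    <NavigationPropertyBinding Path="assoc" Target="myEntity"/>
  </Singleton>
  <EntitySet Name="myEntity" EntityType="Sue.myEntity"/>
</EntityContainer>
```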
## V2 Support

While CAP defaults to OData V4, the latest protocol version, older projects may need to fall back to OData V2, for example, to keep using existing V2-based UIs.

::: warning
OData V2 is deprecated. Use OData V2 only if you need to support existing UIs or if you need to use specific controls that don't work with V4 **yet**, like tree tables (`sap.ui.table.TreeTable`).
:::

### Enabling OData V2 via CDS OData V2 Adapter in Node.js Apps { #odata-v2-adapter-node}

CAP Node.js supports serving the OData V2 protocol through the [_OData V2 adapter for CDS_](https://www.npmjs.com/package/@cap-js-community/odata-v2-adapter), which translates between the OData V2 and V4 protocols.

For Node.js projects, add the CDS OData V2 adapter as express.js middleware as follows:

1. Add the adapter package to your project:

   ```sh
   npm add @cap-js-community/odata-v2-adapter
   ```

2. Access OData V2 services at [http://localhost:4004/odata/v2/${path}](http://localhost:4004/odata/v2).
3. Access OData V4 services at [http://localhost:4004/odata/v4/${path}](http://localhost:4004/odata/v4) (as before).

Example: Read service metadata for `CatalogService`:

- CDS:
  ```cds
  @path:'/browse'
  service CatalogService { ... }
  ```
- OData V2: `GET http://localhost:4004/odata/v2/browse/$metadata`
- OData V4: `GET http://localhost:4004/odata/v4/browse/$metadata`

[Find detailed instructions at **@cap-js-community/odata-v2-adapter**.](https://www.npmjs.com/package/@cap-js-community/odata-v2-adapter){.learn-more}

### Using OData V2 in Java Apps

In CAP Java, serving the OData V2 protocol is supported natively by the [CDS OData V2 Adapter](../java/migration#v2adapter).

## Miscellaneous

### Omitting Elements from APIs

Add annotation `@cds.api.ignore` to suppress unwanted entity fields (for example, foreign-key fields) in APIs exposed from the CDS model, that is, OData or OpenAPI. For example:

```cds
entity Books { ...
@cds.api.ignore author : Association to Authors; } ``` Please note that `@cds.api.ignore` is effective on regular elements that are rendered as `Edm.Property` only. The annotation doesn't suppress an `Edm.NavigationProperty` which is rendered for associations or compositions. If a managed association is annotated, the annotations are propagated to the (generated) foreign keys. In the previous example, the foreign keys of the managed association `author` are muted in the API. ### Absolute Context URL { #absolute-context-url} In some scenarios, an absolute [context URL](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_ContextURL) is needed. In the Node.js runtime, this can be achieved through configuration `cds.odata.contextAbsoluteUrl`. You can use your own URL (including a protocol and a service path), for example: ```js cds.odata.contextAbsoluteUrl = "https://your.domain.com/yourService" ``` to customize the annotation as follows: ```json { "@odata.context":"https://your.domain.com/yourService/$metadata#Books(title,author,ID)", "value":[ {"ID": 201,"title": "Wuthering Heights","author": "Emily Brontë"}, {"ID": 207,"title": "Jane Eyre","author": "Charlotte Brontë"}, {"ID": 251,"title": "The Raven","author": "Edgar Allen Poe"} ] } ``` If `contextAbsoluteUrl` is set to something truthy that doesn't match `http(s)://*`, an absolute path is constructed based on the environment of the application on a best effort basis. Note that we encourage you to stay with the default relative format, if possible, as it's proxy safe. # Publishing to AsyncAPI You can convert events in CDS models to the [AsyncAPI specification](https://www.asyncapi.com), a widely adopted standard used to describe and document message-driven asynchronous APIs. 
## Usage from CLI { #cli}

Use the following command to convert all services in `srv/` and store the generated AsyncAPI documents in the `docs/` folder:

```sh
cds compile srv --service all -o docs --to asyncapi
```

For each service that is available in the `srv/` files, an AsyncAPI document with the service name is generated in the output folder. If you want to generate one AsyncAPI document for all the services, you can use the `--asyncapi:merged` flag:

```sh
cds compile srv --service all -o docs --to asyncapi --asyncapi:merged
```

[Learn how to programmatically convert the CSN file into an AsyncAPI Document](/node.js/cds-compile#to-asyncapi){.learn-more}

## Presets { #presets}

Use presets to add configuration for the AsyncAPI export tooling.

::: code-group
```json [.cdsrc.json]
{
  "export": {
    "asyncapi": {
      "application_namespace": "sap.example"
      [...]
    }
  }
}
```
:::

| Term | Preset Target | AsyncAPI field | Remarks |
|-------------------------|---------------|-------------------------------|----------------------------------------------------------------------------------------------------|
| `merged.title` | Service | info.title | Mandatory when `--asyncapi:merged` flag is given.<br>`title` from here is used in the generated AsyncAPI document. |
| `merged.version` | Service | info.version | Mandatory when `--asyncapi:merged` flag is given.<br>`version` from here is used in the generated AsyncAPI document. |
| `merged.description` | Service | info.description | Optional when `--asyncapi:merged` flag is given.<br>`description` from here is used in the generated AsyncAPI document. |
| `merged.short_text` | Service | x-sap-shortText | Optional when `--asyncapi:merged` flag is given.<br>The value from here is used in the generated AsyncAPI document. |
| `application_namespace` | Document | x-sap-application-namespace | Mandatory |
| `event_spec_version` | Event | x-sap-event-spec-version | |
| `event_source` | Event | x-sap-event-source | |
| `event_source_params` | Event | x-sap-event-source-parameters | |
| `event_characteristics` | Event | x-sap-event-characteristics | |

## Annotations { #annotations}

Use annotations to add configuration for the AsyncAPI export tooling.

::: tip
Annotations will take precedence over [presets](#presets).
:::

| Term (`@AsyncAPI.`) | Annotation Target | AsyncAPI field | Remarks |
|------------------------|-------------------|-------------------------------|----------------------------------------------------------------------------------------|
| `Title` | Service | info.title | Mandatory |
| `SchemaVersion` | Service | info.version | Mandatory |
| `Description` | Service | info.description | |
| `StateInfo` | Service | x-sap-stateInfo | |
| `ShortText` | Service | x-sap-shortText | |
| `EventSpecVersion` | Event | x-sap-event-spec-version | |
| `EventSource` | Event | x-sap-event-source | |
| `EventSourceParams` | Event | x-sap-event-source-parameters | |
| `EventCharacteristics` | Event | x-sap-event-characteristics | |
| `EventStateInfo` | Event | x-sap-stateInfo | |
| `EventSchemaVersion` | Event | x-sap-event-version | |
| `EventType` | Event | | Optional; the value from this annotation will be used to overwrite the default event type in the AsyncAPI document. |

For example:

```cds
@AsyncAPI.Title        : 'CatalogService Events'
@AsyncAPI.SchemaVersion: '1.0.0'
@AsyncAPI.Description  : 'Events emitted by the CatalogService.'
service CatalogService {
  @AsyncAPI.EventSpecVersion    : '2.0'
  @AsyncAPI.EventCharacteristics: {
    ![state-transfer]: 'full-after-image'
  }
  @AsyncAPI.EventSchemaVersion  : '1.0.0'
  event SampleEntity.Changed.v1 : projection on CatalogService.SampleEntity;
}
```

## Type Mapping { #mapping}

CDS Type to AsyncAPI Mapping

| CDS Type | AsyncAPI Supported Types |
|----------------------------------------|-----------------------------------------------------------------------------------------------------|
| `UUID` | `{ "type": "string", "format": "uuid" }` |
| `Boolean` | `{ "type": "boolean" }` |
| `Integer` | `{ "type": "integer" }` |
| `Integer64` | `{ "type": "string", "format": "int64" }` |
| `Decimal`, `{precision, scale}` | `{ "type": "string", "format": "decimal", "formatPrecision": precision, "formatScale": scale }` |
| `Decimal`, without scale | `{ "type": "string", "format": "decimal", "formatPrecision": precision }` |
| `Decimal`, without precision and scale | `{ "type": "string", "format": "decimal" }` |
| `Double` | `{ "type": "number" }` |
| `Date` | `{ "type": "string", "format": "date" }` |
| `Time` | `{ "type": "string", "format": "partial-time" }` |
| `DateTime` | `{ "type": "string", "format": "date-time" }` |
| `Timestamp` | `{ "type": "string", "format": "date-time" }` |
| `String` | `{ "type": "string", "maxLength": length }` |
| `Binary` | `{ "type": "string", "maxLength": length }` |
| `LargeBinary` | `{ "type": "string" }` |
| `LargeString` | `{ "type": "string" }` |

# Serving Fiori UIs

CAP provides out-of-the-box support for SAP Fiori elements front ends. This guide explains how to add one or more SAP Fiori elements apps to a CAP project, how to add SAP Fiori elements annotations to respective service definitions, and more.
In the following sections, when mentioning Fiori, we always mean SAP Fiori elements. [Learn more about developing SAP Fiori elements and OData V4 (since 1.84.)](https://sapui5.hana.ondemand.com/#/topic/62d3f7c2a9424864921184fd6c7002eb){.learn-more} ## SAP Fiori Preview For entities exposed via OData V4 there is a _Fiori preview_ link on the index page. It dynamically serves an SAP Fiori Elements list page that allows you to quickly see the effect of annotation changes without having to create a UI application first.
::: details Be aware that this is **not meant for production**.

The preview is not meant as a replacement for a proper SAP Fiori elements (UI5) application. It is only active locally where the [development profile](../node.js/cds-env#profiles) is enabled. To also enable it in cloud deployments, for example for test or demo purposes, add the following configuration:

::: code-group
```json [package.json]
{ "cds": { "features": { "fiori_preview": true } } }
```
```json [.cdsrc.json]
{ "features": { "fiori_preview": true } }
```
:::

:::
::: details Be aware that this is **not meant for production**.

The preview is not meant as a replacement for a proper SAP Fiori elements (UI5) application. It is active by default, but disabled automatically in case the [production profile](../java/developing-applications/configuring#production-profile) is enabled. To also enable it in cloud deployments, for example for test or demo purposes, add the following configuration:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  index-page:
    enabled: true
```
:::

:::
## Adding Fiori Apps As showcased in [cap/samples](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app), SAP Fiori apps should be added as sub folders to the `app/` of a CAP project. Each sub folder constitutes an individual SAP Fiori application, with [local annotations](#fiori-annotations), _manifest.json_, etc. So, a typical folder layout would look like this: | Folder/Sub Folder | Description | |----------------------------|--------------------------------------| | `app/` | All SAP Fiori apps should go in here | |     `browse/` | SAP Fiori app for end users | |     `orders/` | SAP Fiori app for order management | |     `admin/` | SAP Fiori app for admins | |     `index.html` | For sandbox tests | | `srv/` | All services | | `db/` | Domain models, and db stuff | ::: tip Links to Fiori applications created in the `app/` folder are automatically added to the index page of your CAP application for local development. ::: ### Using SAP Fiori Tools The SAP Fiori tools provide advanced support for adding SAP Fiori apps to existing CAP projects as well as a wealth of productivity tools, for example for adding SAP Fiori annotations, or graphical modeling and editing. They can be used locally in [Visual Studio Code (VS Code)](https://marketplace.visualstudio.com/items?itemName=SAPSE.sap-ux-fiori-tools-extension-pack) or in [SAP Business Application Studio](https://help.sap.com/docs/SAP_FIORI_tools/17d50220bcd848aa854c9c182d65b699/b0110400b44748d7b844bb5977a657fa.html). [Learn more about **how to install SAP Fiori tools**.](https://help.sap.com/docs/SAP_FIORI_tools/17d50220bcd848aa854c9c182d65b699/2d8b1cb11f6541e5ab16f05461c64201.html){.learn-more} ### From [cap/samples](https://github.com/sap-samples/cloud-cap-samples) For example, you can copy the [SAP Fiori apps from cap/samples](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app) as a template and modify the content as appropriate. 
### From [Incidents Sample](https://github.com/SAP-samples/fiori-elements-incident-management/tree/sampleSolution)

This is a sample to create an incident management app with SAP Fiori elements for OData V4.

## Fiori Annotations

The main content to add is service definitions annotated with information about how to render respective data.

### What Are SAP Fiori Annotations?

SAP Fiori elements apps are generic front ends, which construct and render the pages and controls based on annotated metadata documents. The annotations provide the semantic information used to render such content, for example:

```cds
annotate CatalogService.Books with @(
  UI: {
    SelectionFields: [ ID, price, currency_code ],
    LineItem: [
      {Value: title},
      {Value: author, Label:'{i18n>Author}'},
      {Value: genre.name},
      {Value: price},
      {Value: currency.symbol, Label:' '},
    ]
  }
);
```

[Find this source and many more in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app){.learn-more target="_blank"}
[Learn more about **OData Annotations in CDS**.](./odata#annotations){.learn-more}

### Where to Put Them?

While CDS in principle allows you to add such annotations everywhere in your models, we recommend putting them in separate _.cds_ files placed in your _./app/*_ folders, for example, as follows.

```sh
./app  #> all your Fiori annotations should go here, for example:
   ./admin
      fiori-service.cds #> annotating ../srv/admin-service.cds
   ./browse
      fiori-service.cds #> annotating ../srv/cat-service.cds
   index.cds
./srv  #> all service definitions should stay clean in here:
   admin-service.cds
   cat-service.cds
...
```

[See this also in **cap/samples/fiori**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app){.learn-more}

**Reasoning:** This recommendation essentially follows the best practices and guiding principles of [Conceptual Modeling](../guides/domain-modeling#domain-driven-design) and [Separation of Concerns](../guides/domain-modeling#separation-of-concerns).
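As a minimal sketch of this layout, such a `fiori-service.cds` file typically imports the service it annotates via a `using` directive. The file, service, and element names below follow the layout above and are illustrative; adjust the relative path to your project:

```cds
// app/admin/fiori-service.cds — annotates ../../srv/admin-service.cds
using AdminService from '../../srv/admin-service';

annotate AdminService.Books with @(
  UI.LineItem: [
    {Value: title},
    {Value: author, Label: '{i18n>Author}'},
  ]
);
```

This keeps the service definition in `srv/` free of UI concerns while the app folder owns its presentation annotations.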
### Maintaining Annotations

Maintaining OData annotations in _.cds_ files is accelerated by the SAP Fiori tools - CDS OData Language Server [@sap/ux-cds-odata-language-server-extension](https://www.npmjs.com/package/@sap/ux-cds-odata-language-server-extension) in the [SAP CDS language support plugin](https://marketplace.visualstudio.com/items?itemName=SAPSE.vscode-cds). It helps you add and edit OData annotations in CDS syntax with:

- Code completion
- Validation against the OData vocabularies and project metadata
- Navigation to the referenced annotations
- Quick view of vocabulary information
- Internationalization support

These assisting features are provided for [OData annotations in CDS syntax](../advanced/odata#annotations) and can't be used yet for the [core data services common annotations](../cds/annotations).

The [@sap/ux-cds-odata-language-server-extension](https://www.npmjs.com/package/@sap/ux-cds-odata-language-server-extension) module doesn't require any manual installation. The latest version is fetched by default from [npmjs.com](https://npmjs.com) as indicated in the user preference setting **CDS > Contributions: Registry**.

[Learn more about the **CDS extension for VS Code**.](https://www.youtube.com/watch?v=eY7BTzch8w0){.learn-more}

### Code Completion

The CDS OData Language Server provides a list of context-sensitive suggestions based on the service metadata and OData vocabularies. You can use it to choose OData annotation terms, their properties, and values from the list of suggestions in annotate directives applied to service entities and entity elements. See [annotate directives](../cds/cdl#annotate) for more details.

#### Using Code Completion

To trigger code completion, choose ⌘ + Space (macOS) or Ctrl + Space (other platforms). The list of suggested values is displayed.

> Note: You can filter the list of suggested values by typing more characters.

Navigate to the desired value using the up or down arrows or your mouse.
Accept the highlighted value by pressing Enter or by clicking the mouse.

Use code completion to add and change individual values (word-based completion) and to add small code blocks containing annotation structures along with mandatory properties (micro-snippets). In an active code snippet, you can use the tab key to quickly move to the next tab stop.

##### Example: Annotating Service Entities (cursor position indicated by `|`)

1. Place the cursor in the `annotate` directive for a service entity, for example `annotate Foo.Bar with ;`, and trigger code completion.

2. Type `u` to filter the suggestions and choose `{} UI`. Micro-snippet `@UI : {|}` is inserted: `annotate Foo.Bar with @UI : {|};`

3. Use code completion again to add an annotation term from the UI vocabulary, in this example `SelectionFields`. The micro-snippet for this annotation is added and the cursor is placed directly after the term name, letting you define a qualifier: `annotate Foo.Bar with @UI : {SelectionFields | : []};`

4. Press the tab key to move the cursor to the next tab stop and use code completion again to add values. Because the `UI.SelectionFields` annotation is a collection of entity elements (entity properties), all elements of the annotated entity are suggested.

   ::: tip
   To choose an element of an associated entity, first select the corresponding association from the list and type *.* (period). Elements of the associated entity are suggested.
   Note: You can add multiple values separated by comma.
   :::

   ```cds
   annotate Foo.Bar with @UI : {
     SelectionFields : [
       description, assignedIndividual.lastName|
     ],
   };
   ```

5. Add a new line after `,` (comma) and use code completion again to add another annotation from the UI vocabulary, such as `LineItem`. Line item is a collection of `DataField` records. To add a record, select the record type you need from the completion list.
   ```cds
   annotate Foo.Bar with @UI : {
     SelectionFields : [
       description, assignedIndividual.lastName
     ],
     LineItem : [{
       $Type:'UI.DataField',
       Value : |,
     }],
   };
   ```

   > Note: For each record type, two kinds of micro-snippets are provided: one containing only mandatory properties and one containing all properties defined for this record (full record). Usually you need just a subset of properties. So, you either select a full record and then remove the properties you don't need, or add the record containing only required properties and then add the remaining properties.

6. Use code completion to add values for the annotation properties.

   ```cds
   annotate Foo.Bar with @UI : {
     SelectionFields : [
       description, assignedIndividual.lastName
     ],
     LineItem : [
       {
         $Type:'UI.DataField',
         Value : description,
       },
       {
         $Type:'UI.DataFieldForAnnotation',
         Target : 'assignedIndividual/@Communication.Contact',
       },|
     ]
   };
   ```

   > Note: To add values pointing to annotations defined in another CDS source, you must reference this source with the `using` directive. See [The `using` Directive](../cds/cdl#using) for more details.

##### Example: Annotating Entity Elements (cursor position indicated by `|`)

1. Place the cursor in the `annotate` directive, for example `annotate Foo.Bar with {|};`, add a new line, and trigger code completion. You get the list of entity elements. Choose the one that you want to annotate.

   ```cds
   annotate Foo.Bar with {
     code|
   };
   ```

2. Press the space key, use code completion again, and choose `{} UI`. The `@UI : {|}` micro-snippet is inserted:

   ```cds
   annotate Foo.Bar with {
     code @UI : { | }
   };
   ```

3. Trigger completion again and choose an annotation term from the UI vocabulary, in this example: **Hidden**.

   ```cds
   annotate Foo.Bar with {
     code @UI : {Hidden : |}
   };
   ```

4. Press the tab key to move the cursor to the next tab stop and use code completion again to add the value.
Because the `UI.Hidden` annotation is of Boolean type, the values `true` and `false` are suggested:

   ```cds
   annotate Foo.Bar with {
     code @UI : {Hidden : false }
   };
   ```

### Diagnostics

The CDS OData Language Server validates OData annotations in _.cds_ files against the service metadata and OData vocabularies. It also checks provided string content for language-dependent annotation values and warns you if the format doesn't match the internationalization (i18n) key reference. It shows you that this string is hard coded and won't change based on the language setting in your application. See [Internationalization support](#internationalization-support) for more details.

Validation is performed when you open a _.cds_ file and then is retriggered with each change to the relevant files.

You can view the diagnostic messages by hovering over the highlighted part in the annotation file or by opening the problems panel. Click on the message in the problems panel to navigate to the related place in the annotation file.

> Note: If an annotation value points to the annotation defined in another CDS source, you must reference this source with a `using` directive to avoid warnings. See [The `using` Directive](../cds/cdl#using) for more details.

### Navigation to Referenced Annotations

CDS OData Language Server enables quick navigation to the definition of referenced annotations. For example, if your annotation file contains a `DataFieldForAnnotation` record referencing an `Identification` annotation defined in the service file, you can view which file it's defined in and what fields or labels this annotation contains. You can even update the `Identification` annotation or add comments. You can navigate to the referenced annotation using the [Peek Definition](#peek-definition) and [Go To Definition](#go-to-definition) features.

> Note: If the referenced annotation is defined in another CDS source, you must reference this source with the `using` directive to enable the navigation.
See [The `using` Directive](../cds/cdl#using) for more details.

#### Peek Definition { #peek-definition}

Peek Definition lets you preview and update the referenced annotation without switching away from the code that you're writing. It's triggered when your cursor is inside the referenced annotation value.

- Using a keyboard: choose ⌥ + F12 (macOS) or Alt + F12 (other platforms)
- Using a mouse: right-click and select **Peek Definition**

If an annotation is defined in multiple sources, all these sources are listed. You can select which one you want to view or update. Annotation layering isn't considered.

#### Go to Definition { #go-to-definition}

Go To Definition lets you navigate to the source of the referenced annotation and opens the source file scrolled to the respective place in a new tab. It's triggered when your cursor is inside the referenced annotation value.

Place your cursor inside the path referencing the annotation term segment or translatable string value, and trigger Go to Definition:

- Using a keyboard: choose F12 in VS Code, or Ctrl + F12 in SAP Business Application Studio
- Using a mouse: right-click and select **Go To Definition**
- Using a keyboard and mouse: ⌘ + mouse click (macOS) or Ctrl + mouse click (other platforms)

If an annotation is defined in multiple sources, a Peek definition listing these sources will be shown instead. Annotation layering isn't considered.

### Documentation (Quick Info)

The annotation language server provides quick information for annotation terms, record types, and properties used in the annotation file, or provided as suggestions in code completion lists. This information is retrieved from the respective OData vocabularies and can provide answers to the following questions:

- What is the type and purpose of the annotation term/record type/property?
- What targets can the annotation term apply to?
- Is the annotation term/record type/property experimental? Is it deprecated?
- Is this annotation property mandatory or optional?

> Note: The exact content depends on the availability in OData vocabularies.

To view the quick info for an annotation term, record type, or property used in the annotation file, hover your mouse over it. The accompanying documentation is displayed in a hover window, if provided in the respective OData vocabularies.

To view the quick info for each suggestion in the code completion list, either press ⌘ + Space (macOS) or Ctrl + Space (other platforms), or click the *info* icon. The accompanying documentation for the suggestion expands to the side. The expanded documentation stays open and updates as you navigate the list. You can close it by pressing ⌘ + Space / Ctrl + Space again or by clicking the close icon.

#### Internationalization Support

When you open an annotation file, all language-dependent string values are checked against the _i18n.properties_ file. Each value that doesn't represent a valid reference to an existing text key in the _i18n.properties_ file is indicated with a warning. A Quick Fix action is suggested to generate a text key in the i18n file and substitute your string value with a reference to that entry.

### Prefer `@title` and `@description`

Influenced by the [JSON Schema](https://json-schema.org), CDS supports the [common annotations](../cds/annotations#common-annotations) `@title` and `@description`, which are mapped to corresponding [OData annotations](./odata#annotations) as follows:

| CDS | JSON Schema | OData |
|----------------|---------------|---------------------|
| `@title` | `title` | `@Common.Label` |
| `@description` | `description` | `@Core.Description` |

We recommend preferring these annotations over the OData ones in protocol-agnostic data models and service models, for example:

```cds
annotate my.Books with {
  //...
  title @title: 'Book Title';
  author @title: 'Author ID';
}
```

### Prefer `@readonly`, `@mandatory`, ...
CDS supports `@readonly` as a common annotation, which translates to respective [OData annotations](./odata#annotations) from the `@Capabilities` vocabulary. We recommend using the former for reasons of conciseness and comprehensibility, as shown in this example:

```cds
@readonly entity Foo {   // entity-level
  @readonly foo : String // element-level
}
```

is equivalent to:

```cds
entity Foo @(Capabilities:{ // entity-level
  InsertRestrictions.Insertable: false,
  UpdateRestrictions.Updatable: false,
  DeleteRestrictions.Deletable: false
}) {
  // element-level
  @Core.Computed foo : String
}
```

Similar recommendations apply to `@mandatory` and others → see [Common Annotations](../cds/annotations#common-annotations).

## Draft Support

SAP Fiori supports edit sessions with draft states stored on the server, so users can interrupt and continue later on, possibly from different places and devices. CAP, as well as SAP Fiori elements, provide out-of-the-box support for drafts as outlined in the following sections. **We recommend always using draft** when your application needs data input by end users.
[For details and guidelines, see **SAP Fiori Design Guidelines for Draft**.](https://experience.sap.com/fiori-design-web/draft-handling/){.learn-more}
[Find a working end-to-end version in **cap/samples/fiori**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori){.learn-more}
[For details about the draft flow in SAP Fiori elements, see **SAP Fiori elements > Draft Handling**](https://ui5.sap.com/#/topic/ed9aa41c563a44b18701529c8327db4d){.learn-more}

### Enabling Draft with `@odata.draft.enabled`

To enable draft for an entity exposed by a service, simply annotate it with `@odata.draft.enabled` as in this example:

```cds
annotate AdminService.Books with @odata.draft.enabled;
```

[See it live in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app/admin-books/fiori-service.cds#L51){.learn-more}

::: warning
You can't project from draft-enabled entities, as annotations are propagated. Either _enable_ the draft for the projection and not the original entity, or _disable_ the draft on the projection using `@odata.draft.enabled: null`.
:::

### Difference between Compositions and Associations

Be aware that you must not modify associated entities through drafts. Only compositions will get a "Create" button in SAP Fiori elements UIs because they are stored as part of the same draft entity.

### Enabling Draft for [Localized Data](../guides/localized-data) {#draft-for-localized-data}

Annotate the underlying base entity in the base model with `@fiori.draft.enabled` to also support drafts for [localized data](../guides/localized-data):

```cds
annotate sap.capire.bookshop.Books with @fiori.draft.enabled;
```

::: info Background
SAP Fiori drafts require single keys of type `UUID`, which isn't the case by default for the automatically generated `_texts` entities (→ [see the _Localized Data_ guide for details](../guides/localized-data#behind-the-scenes)).
The `@fiori.draft.enabled` annotation tells the compiler to add such a technical primary key element named `ID_texts`. ::: ::: warning Adding the annotation `@fiori.draft.enabled` won't work if the corresponding `_texts` entities contain any entries, because existing entries don't have a value for the new key field `ID_texts`. ::: ![An SAP Fiori UI showing how a book is edited in the bookshop sample and that the translations tab is used for non-standard languages.](../assets/draft-for-localized-data.png){} [See it live in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/fiori/app/admin-books/fiori-service.cds#L50){.learn-more} If you're editing data in multiple languages, the _General_ tab in the example above is reserved for the default language (often "en"). Any change to other languages has to be done in the _Translations_ tab, where a corresponding language can be chosen from a drop-down menu as illustrated above. This also applies if you use the URL parameter `sap-language` on the draft page. ### Validating Drafts You can add [custom handlers](../guides/providing-services#custom-logic) to add specific validations, as usual. In addition, for a draft, you can register handlers to the `PATCH` events to validate input per field, during the edit session, as follows. ##### ... in Java You can add your validation logic before operation event handlers. Specific events for draft operations exist. See [Java > Fiori Drafts > Editing Drafts](../java/fiori-drafts#draftevents) for more details. ##### ... in Node.js You can add your validation logic before the operation handler for either CRUD or draft-specific events. See [Node.js > Fiori Support > Handlers Registration](../node.js/fiori#draft-support) for more details about handler registration.
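To make the per-field idea concrete, here is a minimal sketch for Node.js. The entity, the `stock` field, the rule, and the registration line are illustrative assumptions, not a fixed API beyond the documented handler-registration mechanism:

```javascript
// Illustrative per-field check, meant to run on each PATCH while a draft
// is being edited. In a CAP service implementation you would register it
// roughly like this (sketch):
//   srv.before('PATCH', 'Books.drafts', req => validateStock(req.data))
function validateStock (data) {
  // hypothetical rule: stock, if sent in the patch, must not be negative
  if ('stock' in data && data.stock < 0) {
    // in a real handler you'd call req.error(400, '...', 'stock') instead
    throw new Error('stock must not be negative')
  }
  return data
}
```

Because the check runs per `PATCH`, the user gets feedback on the single field they just changed, instead of a list of errors when activating the draft.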
### Query Drafts Programmatically To access drafts in code, you can use the [`.drafts` reflection](../node.js/cds-reflect#drafts). ```js SELECT.from(Books.drafts) // returns all drafts of the Books entity ``` [Learn how to query drafts in Java.](../java/fiori-drafts#draftservices){.learn-more} ## Use Roles to Toggle Visibility of UI Elements In addition to adding [restrictions on services, entities, and actions/functions](/guides/security/authorization#restrictions), there are use cases where you only want to hide certain parts of the UI for specific users. This is possible by using the respective UI annotations like `@UI.Hidden` or `@UI.CreateHidden` in conjunction with `$edmJson` pointing to a singleton. First, you define the [singleton](../advanced/odata#singletons) in your service and annotate it with [`@cds.persistency.skip`](../guides/databases#cds-persistence-skip) so that no database artifact is created: ```cds @odata.singleton @cds.persistency.skip entity Configuration { key ID: String; isAdmin : Boolean; } ``` > A key is technically not required, but without it some consumers might run into problems. Then define an `on` handler for serving the request: ```js srv.on('READ', 'Configuration', async req => { req.reply({ isAdmin: req.user.is('admin') // 'admin' is the role, which is also used in the @requires annotation, for example }); }); ``` Finally, refer to the singleton in the annotation by using a [dynamic expression](../advanced/odata#dynamic-expressions): ```cds annotate service.Books with @( UI.CreateHidden : { $edmJson: {$Not: { $Path: '/CatalogService.EntityContainer/Configuration/isAdmin'} } }, UI.UpdateHidden : { $edmJson: {$Not: { $Path: '/CatalogService.EntityContainer/Configuration/isAdmin'} } }, ); ``` The Entity Container is OData-specific: in the `$metadata` of an OData service, all accessible entities are located within the Entity Container.
:::details SAP Fiori elements also allows omitting the Entity Container from the path ```cds annotate service.Books with @( UI.CreateHidden : { $edmJson: {$Not: { $Path: '/Configuration/isAdmin'} } }, UI.UpdateHidden : { $edmJson: {$Not: { $Path: '/Configuration/isAdmin'} } }, ); ``` ::: ## Value Helps In addition to supporting the standard `@Common.ValueList` annotations as defined in the [OData Vocabularies](odata#annotations), CAP provides advanced, convenient support for Value Help as understood and supported by SAP Fiori. ### Convenience Option `@cds.odata.valuelist` Simply add the `@cds.odata.valuelist` annotation to an entity, and all managed associations targeting this entity will automatically receive Value Lists in SAP Fiori clients. For example: ```cds @cds.odata.valuelist entity Currencies {} ``` ```cds service BookshopService { entity Books { //... currency : Association to Currencies; } } ``` ### Pre-Defined Types in `@sap/cds/common` [@sap/cds/common]: ../cds/common The reuse types in [@sap/cds/common] already have this added to base types and entities, so all uses automatically benefit from this. This is an effective excerpt of the respective definitions in `@sap/cds/common`: ```cds type Currency : Association to sap.common.Currencies; ``` ```cds context sap.common { entity Currencies : CodeList {...}; entity CodeList { name : localized String; ... } } ``` ```cds annotate sap.common.CodeList with @( UI.Identification: [name], cds.odata.valuelist, ); ``` ### Usages of `@sap/cds/common` In effect, usages of [@sap/cds/common] stay clean of any pollution, for example: ```cds using { Currency } from '@sap/cds/common'; entity Books { //... currency : Currency; } ``` [Find this also in our **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop/db/schema.cds){.learn-more} Still, all SAP Fiori UIs, on all services exposing `Books`, will automatically receive Value Help for currencies.
You can also benefit from that when [deriving your project-specific code list entities from **sap.common.CodeList**](../cds/common#adding-own-code-lists). ### Resulting Annotations in EDMX Here is an illustrative sketch of how this ends up as OData `Common.ValueList` annotations, following the currency example above (the exact output depends on your service): ```xml <Annotations Target="CatalogService.Books/currency_code"> <Annotation Term="Common.ValueList"> <Record Type="Common.ValueListType"> <PropertyValue Property="CollectionPath" String="Currencies"/> <PropertyValue Property="Parameters"> <Collection> <Record Type="Common.ValueListParameterInOut"> <PropertyValue Property="LocalDataProperty" PropertyPath="currency_code"/> <PropertyValue Property="ValueListProperty" String="code"/> </Record> <Record Type="Common.ValueListParameterDisplayOnly"> <PropertyValue Property="ValueListProperty" String="name"/> </Record> </Collection> </PropertyValue> </Record> </Annotation> </Annotations> ``` ## Actions In our SFLIGHT sample application, we showcase how to use actions covering the definition in your CDS model, the needed custom code and the UI implementation. [Learn more about Custom Actions & Functions.](../guides/providing-services#actions-functions){.learn-more} We're going to look at three things. 1. Define the action in CDS and custom code. 1. Create buttons to bring the action to the UI. 1. Dynamically define the buttons' status on the UI. First you need to define an action, like in the [_travel-service.cds_ file](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/srv/travel-service.cds#L11). ```cds entity Travel as projection on my.Travel actions { action createTravelByTemplate() returns Travel; action rejectTravel(); action acceptTravel(); action deductDiscount( percent: Percentage not null ) returns Travel; }; ``` To define what the action actually does, you need to write some custom code. See the [_travel-service.js_](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/srv/travel-service.js#L126) file for example: ```js this.on('acceptTravel', req => UPDATE(req._target).with({TravelStatus_code:'A'})) ``` > Note: `req._target` is a workaround that has been [introduced in SFlight](https://github.com/SAP-samples/cap-sflight/blob/685867de9e6a91d61276671e4af7354029c70ac8/srv/workarounds.js#L52). In the future, there might be an official API for it. Next, create the buttons to bring this action onto the UI and make it actionable for the user. There are two buttons: one on the overview and one in the detail screen.
Both are defined in the [_layouts.cds_](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/app/travel_processor/layouts.cds) file. For the overview of all travels, use the [`@UI.LineItem` annotation](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/app/travel_processor/layouts.cds#L40-L41). ```cds annotate TravelService.Travel with @UI : { LineItem : [ { $Type : 'UI.DataFieldForAction', Action : 'TravelService.acceptTravel', Label : '{i18n>AcceptTravel}' } ] }; ``` For the detail screen of a travel, use the [`@UI.Identification` annotation](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/app/travel_processor/layouts.cds#L9-L10). ```cds annotate TravelService.Travel with @UI : { Identification : [ { $Type : 'UI.DataFieldForAction', Action : 'TravelService.acceptTravel', Label : '{i18n>AcceptTravel}' } ] }; ``` Now, the buttons are there and connected to the action. The missing piece is to define the availability of the buttons dynamically. Annotate the `Travel` entity in the `TravelService` service accordingly in the [_field-control.cds_](https://github.com/SAP-samples/cap-sflight/blob/8f65dc8b7985bc22584d2a9f94335f110c0450ea/app/travel_processor/field-control.cds#L20-L32) file. ```cds annotate TravelService.Travel with actions { acceptTravel @( Core.OperationAvailable : { $edmJson: { $Ne: [{ $Path: 'in/TravelStatus_code'}, 'A']} }, Common.SideEffects.TargetProperties : ['in/TravelStatus_code'], ) }; ``` This annotation uses [dynamic expressions](../advanced/odata#dynamic-expressions) to control the buttons for each action. And the status of a travel on the UI is updated, triggered by the `@Common.SideEffects.TargetProperties` annotation. 
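Reduced to plain logic, the `$Ne` rule above says: the Accept button is available as long as the travel isn't accepted yet. A minimal sketch:

```js
// Sketch of the Core.OperationAvailable rule above: acceptTravel is
// available unless TravelStatus_code is already 'A' (accepted).
function acceptTravelAvailable (travel) {
  return travel.TravelStatus_code !== 'A'
}
```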
:::info More complex calculation If you need a more complex calculation, the interesting parts in SFLIGHT are [virtual fields in _field-control.cds_](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/app/travel_processor/field-control.cds#L10-L16) (also lines 37-44) and [custom code in _travel-service.js_](https://github.com/SAP-samples/cap-sflight/blob/dfc7827da843ace0ea126f76fc78a6591b325c67/srv/travel-service.js#L13-L22). ::: ## Cache Control CAP provides the option to set a [Cache-Control](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) header with a [max-age](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#max-age) directive to indicate that a response remains fresh until _n_ seconds after it was generated. In the CDS model, this can be done using the `@http.CacheControl: {maxAge: <seconds>}` annotation on stream properties. The header indicates that caches can store the response and reuse it for subsequent requests while it's fresh. The `max-age` (in seconds) specifies the maximum age of the content before it becomes stale. :::info Elapsed time since the response was generated The `max-age` is the elapsed time since the response was generated on the origin server. It's not related to when the response was received. ::: ::: warning Only Java The Cache Control feature is currently supported on the Java runtime only. :::
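For example, such an annotation on a stream property could look as follows (a sketch; the entity and element names are illustrative):

```cds
entity Attachments {
  key ID : UUID;
  @http.CacheControl: { maxAge: 3600 } // response stays fresh for one hour
  content   : LargeBinary @Core.MediaType: mediaType;
  mediaType : String;
}
```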
# Using Databases This guide provides instructions on how to use databases with CAP applications. Out-of-the-box support is provided for SAP HANA, SQLite, H2 (Java only), and PostgreSQL. ## Setup & Configuration
### Adding Database Packages {.node} The following `cds-plugin` packages for the CAP Node.js runtime support the respective databases: | Database | Package | Remarks | | ------------------------------ | ------------------------------------------------------------ | ---------------------------------- | | **[SAP HANA Cloud](databases-hana)** | [`@cap-js/hana`](https://www.npmjs.com/package/@cap-js/hana) | recommended for production | | **[SQLite](databases-sqlite)** | [`@cap-js/sqlite`](https://www.npmjs.com/package/@cap-js/sqlite) | recommended for development | | **[PostgreSQL](databases-postgres)** | [`@cap-js/postgres`](https://www.npmjs.com/package/@cap-js/postgres) | maintained by community + CAP team | > Follow the links above to find specific information for each. In general, all you need to do is install one of the database packages, as follows: Using SQLite for development: ```sh npm add @cap-js/sqlite -D ``` Using SAP HANA for production: ```sh npm add @cap-js/hana ``` ::: details Prefer `cds add hana` ... ... which also does the equivalent of `npm add @cap-js/hana` but additionally takes care of updating `mta.yaml` and other deployment resources as documented in the [deployment guide](deployment/to-cf#_1-using-sap-hana-database). ::: ### Auto-Wired Configuration {.node} The aforementioned packages use `cds-plugin` techniques to automatically configure the primary database with `cds.env`. For example, if you added SQLite and SAP HANA, this effectively results in this auto-wired configuration: ```json {"cds":{ "requires": { "db": { "[development]": { "kind": "sqlite", "impl": "@cap-js/sqlite", "credentials": { "url": "memory" } }, "[production]": { "kind": "hana", "impl": "@cap-js/hana", "deploy-format": "hdbtable" } } } }} ``` ::: details In contrast to pre-CDS 7 setups this means... 1.
You don't need to — and should not — add direct dependencies to driver packages, like [`hdb`](https://www.npmjs.com/package/hdb) or [`sqlite3`](https://www.npmjs.com/package/sqlite3) anymore in your *package.json* files. 2. You don't need to configure `cds.requires.db` anymore, unless you want to override defaults brought with the new packages. ::: ### Custom Configuration {.node} The auto-wired configuration uses configuration presets, which are automatically enabled via `cds-plugin` techniques. You can always use the basic configuration and override individual properties to create a different setup: 1. Install a database driver package, for example: ```sh npm add @cap-js/sqlite ``` > Add option `-D` if you want this for development only. 2. Configure the primary database as a required service through `cds.requires.db`, for example: ```json {"cds":{ "requires": { "db": { "kind": "sqlite", "impl": "@cap-js/sqlite", "credentials": { "url": "db.sqlite" } } } }} ``` The config options are as follows: - `kind` — a name of a preset, like `sql`, `sqlite`, `postgres`, or `hana` - `impl` — the module name of a CAP database service implementation - `credentials` — an object with db-specific configurations, most commonly `url` ::: warning Don't configure credentials Credentials like `username` and `password` should **not** be added here but provided through service bindings, for example, via `cds bind`. ::: ::: tip Use `cds env` to inspect effective configuration For example, running this command: ```sh cds env cds.requires.db ``` → prints: ```sh { kind: 'sqlite', impl: '@cap-js/sqlite', credentials: { url: 'db.sqlite' } } ``` :::
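The profile markers like `[development]` and `[production]` in the configurations above select settings per profile. A much-simplified sketch of that resolution (illustrative only; the real `cds.env` merge is deep and supports multiple active profiles):

```js
// Simplified profile resolution: plain properties are kept, and the block
// matching the active profile overrides them.
function resolveProfile (config, profile) {
  const overrides = config[`[${profile}]`] || {}
  const plain = {}
  for (const [key, value] of Object.entries(config))
    if (!key.startsWith('[')) plain[key] = value
  return { ...plain, ...overrides }
}
```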
### Built-in Database Support {.java} CAP Java has built-in support for different SQL-based databases via JDBC. This section describes the different databases and any differences between them with respect to CAP features. CAP currently provides out-of-the-box support for SAP HANA, as well as for H2 and SQLite. However, it's important to note that H2 and SQLite aren't enterprise-grade databases and are recommended only for non-productive use like local development or CI tests. PostgreSQL is supported in addition, but has various limitations in comparison to SAP HANA, most notably in the area of schema evolution. Database support is enabled by adding a Maven dependency to the JDBC driver, as shown in the following table: | Database | JDBC Driver | Remarks | | ------------------------------ | ------------------------------------------------------------ | ---------------------------------- | | **[SAP HANA Cloud](databases-hana)** | `com.sap.cloud.db.jdbc:ngdbc` | Recommended for productive use | | **[H2](databases-h2)** | `com.h2database:h2` | Recommended for development and CI | | **[SQLite](databases-sqlite)** | `org.xerial:sqlite-jdbc` | Supported for development and CI
Recommended for local MTX | | **[PostgreSQL](databases-postgres)** | `org.postgresql:postgresql` | Supported for productive use | [Learn more about supported databases in CAP Java and their configuration](../java/cqn-services/persistence-services#database-support){ .learn-more} ## Providing Initial Data You can use CSV files to fill your database with initial data - see [Location of CSV Files](#location-of-csv-files).
For example, in our [*cap/samples/bookshop*](https://github.com/SAP-samples/cloud-cap-samples/tree/main/bookshop/db/data) application, we do so for *Books*, *Authors*, and *Genres* as follows: ```zsh bookshop/ ├─ db/ │ ├─ data/ # place your .csv files here │ │ ├─ sap.capire.bookshop-Authors.csv │ │ ├─ sap.capire.bookshop-Books.csv │ │ ├─ sap.capire.bookshop-Books.texts.csv │ │ └─ sap.capire.bookshop-Genres.csv │ └─ schema.cds └─ ... ```
For example, in our [CAP Samples for Java](https://github.com/SAP-samples/cloud-cap-samples-java/tree/main/db/data) application, we do so for some entities such as *Books*, *Authors*, and *Genres* as follows: ```zsh db/ ├─ data/ # place your .csv files here │ ├─ my.bookshop-Authors.csv │ ├─ my.bookshop-Books.csv │ ├─ my.bookshop-Books.texts.csv │ ├─ my.bookshop-Genres.csv │ └─ ... └─ index.cds ```
The **filenames** are expected to match fully qualified names of respective entity definitions in your CDS models, optionally using a dash `-` instead of a dot `.` for cosmetic reasons. ### Using `.csv` Files The **content** of these files is standard CSV content with the column titles corresponding to declared element names, like for `Books`: ::: code-group ```csvc [db/data/sap.capire.bookshop-Books.csv] ID,title,author_ID,stock 201,Wuthering Heights,101,12 207,Jane Eyre,107,11 251,The Raven,150,333 252,Eleonora,150,555 271,Catweazle,170,22 ``` ::: > Note: `author_ID` is the generated foreign key for the managed Association `author` → learn more about that in the [Generating SQL DDL](#generating-sql-ddl) section. If your content contains ... - commas or line breaks → enclose it in double quotes `"..."` - double quotes → escape them with doubled double quotes: `""...""` ```csvc ID,title,descr 252,Eleonora,"""Eleonora"" is a short story by Edgar Allan Poe, first published in 1842 in Philadelphia in the literary annual The Gift." ``` ::: danger On SAP HANA, only use CSV files for _configuration data_ that can't be changed by application users. → See [CSV data gets overridden in the SAP HANA guide for details](databases-hana#csv-data-gets-overridden). ::: ### Use `cds add data` Run this to generate an initial set of empty `.csv` files with header lines based on your CDS model: ```sh cds add data ``` ### Location of CSV Files CSV files can be found in the folders _db/data_ and _test/data_, as well as in any _data_ folder next to your CDS model files. When you use `cds watch` or `cds deploy`, CSV files are loaded by default from _test/data_. However, when preparing for production deployments using `cds build`, CSV files from _test/data_ are not loaded. 
::: details Adding initial data next to your data model The content of these 'co-located' `.cds` files actually doesn't matter, but they need to be included in your data model, through a `using` clause in another file for example. If you need to use certain CSV files exclusively for your production deployments, but not for tests, you can achieve this by including them in a separate data folder, for example, _db/hana/data_. Create an _index.cds_ file in the _hana_ folder as outlined earlier. Then, set up this model location in a dummy cds service, for example _hanaDataSrv_, using the `[production]` profile. ```json "cds": { "requires": { "[production]": { "hanaDataSrv": { "model": "hana" } } } } ``` As a consequence, when you run `cds build --production` the model folder _hana_ is added, but it's not added when you run `cds deploy` or `cds watch` because the development profile is used by default. You can verify this by checking the cds build logs for the hana build task. Of course, this mechanism can also be used for PostgreSQL database deployments. ::: ::: details On SAP HANA ... CSV and _hdbtabledata_ files found in the _src_ folder of your database module are treated as native SAP HANA artifacts and deployed as they are. This approach offers the advantage of customizing the _hdbtabledata_ files if needed, such as adding a custom `include_filter` setting to mix initial and customer data in one table. However, the downside is that you must redundantly maintain them to keep them in sync with your CSV files. ::: Quite frequently you need to distinguish between sample data and real initial data. CAP supports this by allowing you to provide initial data in two places:
| Location | Deployed... | Purpose | | ----------- | -------------------- | -------------------------------------------------------- | | `db/data` | always | initial data for configurations, code lists, and similar | | `test/data` | if not in production | sample data for tests and demos |
Use the properties [cds.dataSource.csv.*](../java/developing-applications/properties#cds-dataSource-csv) to configure the location of the CSV files. You can configure different sets of CSV files in different [Spring profiles](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#features.profiles). This configuration reads CSV data from `test/data` if the profile `test` is active: ::: code-group ```yaml [srv/src/main/resources/application.yaml] --- spring: config.activate.on-profile: test cds: dataSource.csv.paths: - test/data/** ``` :::
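The quoting rules for `.csv` content described earlier can be sketched as a small helper (hypothetical; not part of the `cds` CLI):

```js
// Quote a CSV field per the rules above: enclose it in double quotes if it
// contains commas, double quotes, or line breaks; double any embedded quotes.
function toCsvField (value) {
  const s = String(value)
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s
}
```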
## Querying at Runtime Most queries to databases are constructed and executed from [generic event handlers of CRUD requests](providing-services#serving-crud), so quite frequently there's nothing to do. The following is for the remaining cases where you have to provide custom logic, and as part of it execute database queries. ### DB-Agnostic Queries
At runtime, we usually construct and execute queries using cds.ql APIs in a database-agnostic way. For example, queries like this are supported for all databases: ```js SELECT.from (Authors, a => { a.ID, a.name, a.books (b => { b.ID, b.title }) }) .where ({name:{like:'A%'}}) .orderBy ('name') ```
At runtime, we usually construct queries using the [CQL Query Builder API](../java/working-with-cql/query-api) in a database-agnostic way. For example, queries like this are supported for all databases: ```java Select.from(AUTHOR) .columns(a -> a.id(), a -> a.name(), a -> a.books().expand(b -> b.id(), b -> b.title())) .where(a -> a.name().startsWith("A")) .orderBy(a -> a.name()); ```
### Standard Operators {.node} The database services guarantee identical behavior of these operators: * `==`, `=` — with `= null` being translated to `is null` * `!=`, `<>` — with `!=` translated to `IS NOT` in SQLite, or to `IS DISTINCT FROM` in standard SQL, or to an equivalent polyfill in SAP HANA * `<`, `>`, `<=`, `>=`, `IN`, `LIKE` — are supported as is in standard SQL In particular, the translation of `!=` to `IS NOT` in SQLite — or to `IS DISTINCT FROM` in standard SQL, or to an equivalent polyfill in SAP HANA — greatly improves the portability of your code. ::: warning Runtime Only The operator mappings are available for runtime queries only, but not in CDS files. ::: ### Function Mappings for Runtime Queries {.node} A specified set of standard functions is supported in a **database-agnostic**, hence portable way, and translated to database-specific variants or polyfills. Note that these functions are only supported within runtime queries, but not in CDS files. This set of functions is by and large the same as specified in OData: * `concat(x,y,...)` — concatenates the given strings or numbers * `trim(x)` — removes leading and trailing whitespaces * `contains(x,y)` — checks whether `y` is contained in `x`, may be fuzzy * `startswith(x,y)` — checks whether `x` starts with `y` * `endswith(x,y)` — checks whether `x` ends with `y` * `matchespattern(x,y)` — checks whether `x` matches regex `y` * `substring(x,i,n?)` 1 — Extracts a substring from `x` starting at index `i` (0-based) with optional length `n`. * **`i`**: Positive starts at `i`, negative starts `i` before the end. * **`n`**: Positive extracts `n` items; omitted extracts to the end; negative is invalid.
* `indexof(x,y)` 1 — returns the index of the first occurrence of `y` in `x` * `length(x)` — returns the length of string `x` * `tolower(x)` — returns all-lowercased `x` * `toupper(x)` — returns all-uppercased `x` * `ceiling(x)` — rounds the input numeric parameter up to the nearest numeric value * `floor(x)` — rounds the input numeric parameter down to the nearest numeric value * `round(x)` — rounds the input numeric parameter to the nearest numeric value. The mid-point between two integers is rounded away from zero, i.e. 0.5 is rounded to 1 and ‑0.5 is rounded to -1. * `year(x)`, `month(x)`, `day(x)`, `hour(x)`, `minute(x)`, `second(x)` — returns parts of a datetime for a given `cds.DateTime` / `cds.Date` / `cds.Time` * `time(x)`, `date(x)` — returns a string representing the `time` / `date` for a given `cds.DateTime` / `cds.Date` / `cds.Time` * `fractionalseconds(x)` — returns a `Decimal` representing the fractions of a second for a given `cds.Timestamp` * `maxdatetime()` — returns the latest possible point in time: `'9999-12-31T23:59:59.999Z'` * `mindatetime()` — returns the earliest possible point in time: `'0001-01-01T00:00:00.000Z'` * `totalseconds(x)` — returns the duration of the value in total seconds, including fractional seconds. The [OData spec](https://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_totalseconds) defines the input as EDM.Duration: `P12DT23H59M59.999999999999S` * `now()` — returns the current datetime * `min(x)`, `max(x)`, `sum(x)`, `average(x)`, `count(x)`, `countdistinct(x)` — aggregate functions * `search(xs,y)` — checks whether `y` is contained in any of `xs`, may be fuzzy → [see Searching Data](../guides/providing-services#searching-data) * `session_context(v)` — with standard variable names → [see Session Variables](#session-variables) > 1 These functions work zero-based.
E.g., `substring('abcdef', 1, 3)` returns 'bcd' > You have to write these functions exactly as given; all-uppercase usages aren't supported. In addition to the standard functions, which all `@cap-js` database services support, `@cap-js/sqlite` and `@cap-js/postgres` also support these common SAP HANA functions, to further increase the scope for portable testing: * `years_between` — Computes the number of years between two specified dates. * `months_between` — Computes the number of months between two specified dates. * `days_between` — Computes the number of days between two specified dates. * `seconds_between` — Computes the number of seconds between two specified dates. * `nano100_between` — Computes the time difference between two dates to the precision of 0.1 microseconds. The database service implementation translates these to the best-possible native SQL functions, thus enhancing the extent of **portable** queries. With open source and the new database service architecture, we also have methods in place to enhance this list by custom implementation. > For the SAP HANA functions, both usages are allowed: all-lowercase as given above, as well as all-uppercase. ::: warning Runtime Only The function mappings are available for runtime queries only, but not in CDS files. ::: ### Session Variables {.node} The API shown below, which includes the function `session_context()` and specific pseudo variable names, is supported by **all** new database services, that is, *SQLite*, *PostgreSQL* and *SAP HANA*. This allows you to write respective code once and run it on all these databases: ```sql SELECT session_context('$user.id') SELECT session_context('$user.locale') SELECT session_context('$valid.from') SELECT session_context('$valid.to') ``` Among other things, this allows us to get rid of static helper views for localized data like `localized_de_sap_capire_Books`. ### Native DB Queries If required you can also use native database features by executing native SQL queries:
```js cds.db.run (`SELECT * from sqlite_schema where name like ?`, name) ```
Use Spring's [JDBC Template](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jdbc/core/JdbcTemplate.html) to [leverage native database features](../java/cqn-services/persistence-services#jdbctemplate) as follows: ```java @Autowired JdbcTemplate db; ... db.queryForList("SELECT * from sqlite_schema where name like ?", name); ```
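As a footnote to the _Standard Operators_ section above: the null-safe semantics that `!=` is translated to (SQL's `IS DISTINCT FROM`) can be sketched as:

```js
// IS DISTINCT FROM semantics: two values differ unless both are null;
// unlike plain SQL <>, a comparison involving null yields true or false,
// never "unknown".
function isDistinctFrom (a, b) {
  if (a === null && b === null) return false
  if (a === null || b === null) return true
  return a !== b
}
```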
### Reading `LargeBinary` / BLOB {.node} Formerly, `LargeBinary` elements (or BLOBs) were always returned like any other data type. Now, they are skipped from `SELECT *` queries. Yet, you can still enforce reading BLOBs by explicitly selecting them. Then the BLOB properties are returned as readable streams. ```js SELECT.from(Books) //> [{ ID, title, ..., image1, image2 }] // [!code --] SELECT.from(Books) //> [{ ID, title, ... }] SELECT(['image1', 'image2']).from(Books) //> [{ image1, image2 }] // [!code --] SELECT(['image1', 'image2']).from(Books) //> [{ image1: Readable, image2: Readable }] ``` [Read more about custom streaming in Node.js.](../node.js/best-practices#custom-streaming-beta){.learn-more} ## Generating DDL Files {#generating-sql-ddl}
When you run your server with `cds watch` during development, an in-memory database is bootstrapped automatically, with SQL DDL statements generated based on your CDS models. You can also do this manually with the CLI command `cds compile --to sql`.
When you've created a CAP Java application with `cds init --java` or with CAP Java's [Maven archetype](../java/developing-applications/building#the-maven-archetype), the Maven build invokes the CDS compiler to generate a `schema.sql` file for your target database. In the `default` profile (development mode), an in-memory database is [initialized by Spring](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto.data-initialization) and the schema is bootstrapped from the `schema.sql` file. [Learn more about adding an initial database schema.](../java/cqn-services/persistence-services#initial-database-schema){.learn-more}
### Using `cds compile` For example, given these CDS models (derived from [*cap/samples/bookshop*](https://github.com/SAP-samples/cloud-cap-samples/tree/main/bookshop)): ::: code-group ```cds [db/schema.cds] using { Currency } from '@sap/cds/common'; namespace sap.capire.bookshop; entity Books { key ID : UUID; title : localized String; descr : localized String; author : Association to Authors; price : { amount : Decimal; currency : Currency; } } entity Authors { key ID : UUID; name : String; books : Association to many Books on books.author = $self; } ``` ::: ::: code-group ```cds [srv/cat-service.cds] using { sap.capire.bookshop as my } from '../db/schema'; service CatalogService { entity ListOfBooks as projection on my.Books { *, author.name as author } } ``` ::: Generate an SQL DDL script by running this in the root directory containing both *.cds* files:
```sh cds compile srv/cat-service --to sql --dialect sqlite > schema.sql ``` Output: ::: code-group ```sql [schema.sql] CREATE TABLE sap_capire_bookshop_Books ( ID NVARCHAR(36) NOT NULL, title NVARCHAR(5000), descr NVARCHAR(5000), author_ID NVARCHAR(36), price_amount DECIMAL, price_currency_code NVARCHAR(3), PRIMARY KEY(ID) ); CREATE TABLE sap_capire_bookshop_Authors ( ID NVARCHAR(36) NOT NULL, name NVARCHAR(5000), PRIMARY KEY(ID) ); CREATE TABLE sap_common_Currencies ( name NVARCHAR(255), descr NVARCHAR(1000), code NVARCHAR(3) NOT NULL, symbol NVARCHAR(5), minorUnit SMALLINT, PRIMARY KEY(code) ); CREATE TABLE sap_capire_bookshop_Books_texts ( locale NVARCHAR(14) NOT NULL, ID NVARCHAR(36) NOT NULL, title NVARCHAR(5000), descr NVARCHAR(5000), PRIMARY KEY(locale, ID) ); CREATE VIEW CatalogService_ListOfBooks AS SELECT Books.ID, Books.title, Books.descr, author.name AS author, Books.price_amount, Books.price_currency_code FROM sap_capire_bookshop_Books AS Books LEFT JOIN sap_capire_bookshop_Authors AS author ON Books.author_ID = author.ID; --- some more technical views skipped ... ``` :::
```sh cds compile srv/cat-service --to sql > schema.sql ``` Output: ::: code-group ```sql [schema.sql] CREATE TABLE sap_capire_bookshop_Books ( ID NVARCHAR(36) NOT NULL, title NVARCHAR(5000), descr NVARCHAR(5000), author_ID NVARCHAR(36), price_amount DECIMAL, price_currency_code NVARCHAR(3), PRIMARY KEY(ID) ); CREATE TABLE sap_capire_bookshop_Authors ( ID NVARCHAR(36) NOT NULL, name NVARCHAR(5000), PRIMARY KEY(ID) ); CREATE TABLE sap_common_Currencies ( name NVARCHAR(255), descr NVARCHAR(1000), code NVARCHAR(3) NOT NULL, symbol NVARCHAR(5), minorUnit SMALLINT, PRIMARY KEY(code) ); CREATE TABLE sap_capire_bookshop_Books_texts ( locale NVARCHAR(14) NOT NULL, ID NVARCHAR(36) NOT NULL, title NVARCHAR(5000), descr NVARCHAR(5000), PRIMARY KEY(locale, ID) ); CREATE VIEW CatalogService_ListOfBooks AS SELECT Books.ID, Books.title, Books.descr, author.name AS author, Books.price_amount, Books.price_currency_code FROM sap_capire_bookshop_Books AS Books LEFT JOIN sap_capire_bookshop_Authors AS author ON Books.author_ID = author.ID; --- some more technical views skipped ... ``` :::
::: tip Use the specific SQL dialect (`hana`, `sqlite`, `h2`, `postgres`) with `cds compile --to sql --dialect <dialect>` to get DDL that matches the target database. ::: ### Rules for Generated DDL A few observations on the generated SQL DDL output: 1. **Tables / Views** — Declared entities become tables, projected entities become views. 2. **Type Mapping** — [CDS types are mapped to database-specific SQL types](../cds/types). 3. **Slugified FQNs** — Dots in fully qualified CDS names become underscores in SQL names. 4. **Flattened Structs** — Structured elements like `Books:price` are flattened with underscores. 5. **Generated Foreign Keys** — For managed to-one Associations, foreign key columns are created. For example, this applies to `Books:author`. In addition, you can use the following annotations to fine-tune generated SQL. ### @cds.persistence.skip Add `@cds.persistence.skip` to an entity to indicate that this entity should be skipped from generated DDL scripts, and that no SQL views are generated on top of it: ```cds @cds.persistence.skip entity Foo {...} //> No SQL table will be generated entity Bar as select from Foo; //> No SQL view will be generated ``` ### @cds.persistence.exists Add `@cds.persistence.exists` to an entity to indicate that this entity should be skipped from generated DDL scripts. In contrast to `@cds.persistence.skip`, a database relation is expected to exist, so SQL views can be generated on top. ```cds @cds.persistence.exists entity Foo {...} //> No SQL table will be generated entity Bar as select from Foo; //> The SQL view will be generated ``` ::: details On SAP HANA ... If the respective entity is a user-defined function or a calculation view, one of the annotations `@cds.persistence.udf` or `@cds.persistence.calcview` also needs to be assigned. See [Calculated Views and User-Defined Functions](../advanced/hana#calculated-views-and-user-defined-functions) for more details.
:::

### @cds.persistence.table

Annotate an entity with `@cds.persistence.table` to create a table with the effective signature of the view definition instead of an SQL view:

```cds
@cds.persistence.table
entity Foo as projection on Bar {...}
```

> All parts of the view definition that aren't relevant for the signature (like `where`, `group by`, ...) are ignored.

A typical use case for this annotation is using projections on imported APIs as replica cache tables.

### @sql.prepend / @sql.append

Use `@sql.prepend` and `@sql.append` to add native SQL clauses before or after the generated SQL output of CDS entities or elements. Example:

````cds
@sql.append: ```sql
  GROUP TYPE foo
  GROUP SUBTYPE bar
  ```
entity E {
  ...,
  @sql.append: 'FUZZY SEARCH INDEX ON'
  text: String(100);
}

@sql.append: 'WITH DDL ONLY'
entity V as select from E { ... };
````

Output:

```sql
CREATE TABLE E (
  ...,
  text NVARCHAR(100) FUZZY SEARCH INDEX ON
) GROUP TYPE foo GROUP SUBTYPE bar;

CREATE VIEW V AS SELECT ... FROM E WITH DDL ONLY;
```

The following rules apply:

- The value of the annotation must be a [string literal](../cds/cdl#multiline-literals).
- The compiler doesn't check or process the provided SQL snippets in any way. You're responsible for ensuring that the resulting statement is valid and doesn't negatively impact your database or your application. We don't provide support for problems caused by using this feature.
- If you refer to a column name in the annotation, you need to take care of a potential name mapping yourself, for example, for structured elements.
- Annotation `@sql.prepend` is only supported for entities translating to tables. It can't be used with views or elements.
- For SAP HANA tables, there's an implicit `@sql.prepend: 'COLUMN'` that is overwritten by an explicitly provided `@sql.prepend`.
- Both `@sql.prepend` and `@sql.append` are disallowed in SaaS extension projects.
If you use native database clauses in combination with `@cds.persistence.journal`, see [Schema Evolution Support of Native Database Clauses](databases-hana#schema-evolution-native-db-clauses).

#### Creating a Row Table on SAP HANA

By using `@sql.prepend: 'ROW'`, you can create a row table:

```cds
@sql.prepend: 'ROW'
entity E {
  key id: Integer;
}
```

Running `cds compile -2 hdbtable` on the previous sample yields:

```sql [E.hdbtable]
ROW TABLE E (
  id INTEGER NOT NULL,
  PRIMARY KEY(id)
)
```

[Learn more about Columnar and Row-Based Data Storage](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-administration-guide/columnar-and-row-based-data-storage){.learn-more}

### Reserved Words

The CDS compiler and CAP runtimes provide smart quoting for reserved words in SQLite and SAP HANA so that they can still be used in most situations. But in general, reserved words can't be used as identifiers. The list of reserved words varies per database. Here is a collection of resources on selected databases and their reference documentation:

* [SAP HANA SQL Reference Guide for SAP HANA Platform (Cloud Version)](https://help.sap.com/docs/HANA_SERVICE_CF/7c78579ce9b14a669c1f3295b0d8ca16/28bcd6af3eb6437892719f7c27a8a285.html)
* [SAP HANA SQL Reference Guide for SAP HANA Cloud](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/28bcd6af3eb6437892719f7c27a8a285.html)
* [SQLite Keywords](https://www.sqlite.org/lang_keywords.html)
* [H2 Keywords/Reserved Words](https://www.h2database.com/html/advanced.html#keywords)
* [PostgreSQL SQL Key Words](https://www.postgresql.org/docs/current/sql-keywords-appendix.html)

[There are also reserved words related to SAP Fiori.](../advanced/fiori#reserved-words){.learn-more}

## Database Constraints

The information about foreign key relations contained in the associations of CDS models can be used to generate foreign key constraints on the database tables.
Within CAP, referential consistency is established only at commit. The ["deferred" concept for foreign key constraints](https://www.sqlite.org/foreignkeys.html) in SQL databases allows constraints to be checked and enforced at the time of the [COMMIT statement within a transaction](https://www.sqlite.org/lang_transaction.html), rather than immediately when the data is modified, providing more flexibility in maintaining data integrity.

Enable the generation of foreign key constraints on the database with `cds.features.assert_integrity = db`.

::: warning Database constraints are not supported for H2
Referential constraints on H2 can't be defined as "deferred", which is needed for database constraints within CAP.
:::

With that switched on, foreign key constraints are generated for managed to-one associations. For example, given this model:

```cds
entity Books {
  key ID : Integer;
  ...
  author : Association to Authors;
}
entity Authors {
  key ID : Integer;
  ...
}
```

The following `Books_author` constraint would be added to table `Books`:

```sql
CREATE TABLE Authors (
  ID INTEGER NOT NULL,  -- primary key referenced by the constraint
  ...,
  PRIMARY KEY(ID)
);
CREATE TABLE Books (
  ID INTEGER NOT NULL,
  author_ID INTEGER,    -- generated foreign key field
  ...,
  PRIMARY KEY(ID),
  CONSTRAINT Books_author -- constraint is explicitly named // [!code focus]
    FOREIGN KEY(author_ID)  -- link generated foreign key field author_ID ...
    REFERENCES Authors(ID)  -- ... with primary key field ID of table Authors
    ON UPDATE RESTRICT
    ON DELETE RESTRICT
    VALIDATED             -- validate existing entries when constraint is created
    ENFORCED              -- validate changes by insert/update/delete
    INITIALLY DEFERRED    -- validate only at commit
)
```

No constraints are generated for...
* Unmanaged associations or compositions
* To-many associations or compositions
* Associations annotated with `@assert.integrity: false`
* Associations where the source or target entity is annotated with `@cds.persistence.exists` or `@cds.persistence.skip`

If the association is the backlink of a **composition**, the constraint's delete rule changes to `CASCADE`. That applies, for example, to the `parent` association here:

```cds
entity Genres {
  key ID : Integer;
  parent : Association to Genres;
  children : Composition of many Genres on children.parent = $self;
}
```

As a special case, a referential constraint with `delete cascade` is also generated for the text table of a [localized entity](../guides/localized-data#localized-data), although no managed association is present in the `texts` entity. Add a localized element to entity `Books` from the previous example:

```cds
entity Books {
  key ID : Integer;
  ...
  title : localized String;
}
```

The generated text table then is:

```sql
CREATE TABLE Books_texts (
  locale NVARCHAR(14) NOT NULL,
  ID INTEGER NOT NULL,
  title NVARCHAR(5000),
  PRIMARY KEY(locale, ID),
  CONSTRAINT Books_texts_texts // [!code focus]
    FOREIGN KEY(ID)
    REFERENCES Books(ID)
    ON UPDATE RESTRICT
    ON DELETE CASCADE
    VALIDATED
    ENFORCED
    INITIALLY DEFERRED
)
```

::: warning Database constraints aren't intended for checking user input
Instead, they protect the integrity of your data in the database layer against programming errors. If a constraint violation occurs, the error messages coming from the database aren't standardized by the runtimes but presented as-is.

→ Use [`@assert.target`](providing-services#assert-target) for corresponding input validations.
:::

## Using Native Features { #native-db-functions}

In general, the CDS compiler doesn't 'understand' SQL functions but translates them to SQL generically, as long as they follow the standard call syntax of `function(param1, param2)`.
This allows you to use native database functions inside your CDS models. Example: ```cds entity BookPreview as select from Books { IFNULL (descr, title) as shorttext //> using HANA function IFNULL }; ``` The `OVER` clause for SQL Window Functions is supported, too: ```cds entity RankedBooks as select from Books { name, author, rank() over (partition by author order by price) as rank }; ``` #### Using Native Functions with Different DBs { #sqlite-and-hana-functions} In case of conflicts, follow these steps to provide different models for different databases: 1. Add database-specific schema extensions in specific subfolders of `./db`: ::: code-group ```cds [db/sqlite/index.cds] using { AdminService } from '..'; extend projection AdminService.Authors with { strftime('%Y',dateOfDeath)-strftime('%Y',dateOfBirth) as age : Integer } ``` ```cds [db/hana/index.cds] using { AdminService } from '..'; extend projection AdminService.Authors with { YEARS_BETWEEN(dateOfBirth, dateOfDeath) as age : Integer } ``` ::: 2. Add configuration in specific profiles to your *package.json*, to use these database-specific extensions: ```json { "cds": { "requires": { "db": { "kind": "sql", "[development]": { "model": "db/sqlite" }, "[production]": { "model": "db/hana" } } }}} ```
:::info
The following steps are only needed when you use two different local databases.
:::

3. For CAP Java setups you might need to reflect the different profiles in your CDS Maven plugin configuration. This might not be needed for all setups, like using a standard local database (SQLite, H2, or PostgreSQL) and a production SAP HANA setup. In that case, the local build defaults to the `development` profile. But for other setups, like using a local PostgreSQL and a local SQLite, you'll need two (profiled) `cds deploy` commands:

   ```xml
   <execution>
     <id>cds.build</id>
     <goals>
       <goal>cds</goal>
     </goals>
     <configuration>
       <commands>
         <command>build --for java</command>
         <command>deploy --profile development --dry --out "${project.basedir}/src/main/resources/schema-h2.sql"</command>
         <command>deploy --profile production --dry --out "${project.basedir}/src/main/resources/schema-postgresql.sql"</command>
       </commands>
     </configuration>
   </execution>
   ```

4. For the Spring Boot side it's similar. If you have a local development database and a hybrid profile with a remote SAP HANA database, you only need to run in the default (or any other) profile. For the SAP HANA part, the build and deploy steps are done separately, and the application just needs to be started using `cds bind`. Once you have two non-HANA local databases, you need two distinct database configurations in your Spring Boot configuration (in most cases `application.yaml`):

   ```yaml
   spring:
     config:
       activate:
         on-profile: default,h2
     sql:
       init:
         schema-locations: classpath:schema-h2.sql
   ---
   spring:
     config:
       activate:
         on-profile: postgresql
     sql:
       init:
         schema-locations: classpath:schema-postgresql.sql
     datasource:
       url: "jdbc:postgresql://localhost:5432/my_schema"
       driver-class-name: org.postgresql.Driver
       hikari:
         maximum-pool-size: 1
         max-lifetime: 0
   ```

   In case you use two different databases, you also need to make sure that the JDBC drivers are configured (on the classpath).
CAP samples demonstrate this in [cap/samples/fiori](https://github.com/SAP-samples/cloud-cap-samples/commit/65c8c82f745e0097fab6ca8164a2ede8400da803).
There's also a [code tour](https://github.com/SAP-samples/cloud-cap-samples#code-tours) available for that. # Using SQLite for Development {#sqlite} CAP provides extensive support for [SQLite](https://www.sqlite.org/index.html), which allows projects to speed up development by magnitudes at minimized costs. We strongly recommend using this option as much as possible during development and testing.
::: tip New SQLite Service This guide focuses on the new SQLite Service provided through *[@cap-js/sqlite](https://www.npmjs.com/package/@cap-js/sqlite)*, which has many advantages over the former one, as documented in the [*Features*](#features) section. To migrate from the old service, find instructions in the [*Migration*](#migration) section. :::
[Learn more about the features and limitations of using CAP with SQLite.](../java/cqn-services/persistence-services#sqlite){.learn-more}
## Setup & Configuration
Run this to use SQLite for development: ```sh npm add @cap-js/sqlite -D ``` ### Auto-Wired Configuration {.node} The `@cap-js/sqlite` package uses the `cds-plugin` technique to auto-configure your application for using an in-memory SQLite database for development. You can inspect the effective configuration using `cds env`: ```sh cds env requires.db ``` Output: ```js { impl: '@cap-js/sqlite', credentials: { url: ':memory:' }, kind: 'sqlite' } ``` [See also the general information on installing database packages.](databases#setup-configuration){.learn-more}
### Using the Maven Archetype {.java}

When a new CAP Java project is created with the [Maven Archetype](../java/developing-applications/building#the-maven-archetype), you can specify the in-memory database to be used. Use the option `-DinMemoryDatabase=sqlite` to create a project that uses SQLite as in-memory database.

### Manual Configuration {.java}

To use SQLite, add a Maven dependency to the SQLite JDBC driver:

```xml
<dependency>
  <groupId>org.xerial</groupId>
  <artifactId>sqlite-jdbc</artifactId>
  <scope>runtime</scope>
</dependency>
```

Further configuration depends on whether you run SQLite as an [in-memory database](#in-memory-databases) or as a [file-based](#persistent-databases) database.

## Deployment
### Initial Database Schema

Configure the build to create an initial _schema.sql_ file for SQLite using `cds deploy --to sqlite --dry --out srv/src/main/resources/schema.sql`:

::: code-group

```xml [srv/pom.xml]
<execution>
  <id>schema.sql</id>
  <goals>
    <goal>cds</goal>
  </goals>
  <configuration>
    <commands>
      <command>deploy --to sqlite --dry --out srv/src/main/resources/schema.sql</command>
    </commands>
  </configuration>
</execution>
```

:::

[Learn more about creating an initial database schema](/java/cqn-services/persistence-services#initial-database-schema-1){.learn-more}
### In-Memory Databases
As stated previously, `@cap-js/sqlite` uses an in-memory SQLite database by default. For example, when starting your application with `cds watch`, you can see this in the log output:

```log
...
[cds] - connect to db > sqlite { url: ':memory:' } // [!code focus]
  > init from db/init.js
  > init from db/data/sap.capire.bookshop-Authors.csv
  > init from db/data/sap.capire.bookshop-Books.csv
  > init from db/data/sap.capire.bookshop-Books.texts.csv
  > init from db/data/sap.capire.bookshop-Genres.csv
/> successfully deployed to in-memory database. // [!code focus]
...
```

::: tip
In-memory databases are the recommended option for test drives and test pipelines.
:::
The database content is stored in-memory. Configure the DB connection in the non-productive `default` profile: ::: code-group ```yaml [srv/src/main/resources/application.yaml] --- spring: config.activate.on-profile: default sql: init: mode: always datasource: url: "jdbc:sqlite:file::memory:?cache=shared" driver-class-name: org.sqlite.JDBC hikari: maximum-pool-size: 1 max-lifetime: 0 ``` ::: [Learn how to configure an in-memory SQLite database.](../java/cqn-services/persistence-services#in-memory-storage){.learn-more}
### Persistent Databases
You can also use persistent SQLite databases. In this case, the schema is initialized by `cds deploy` and not by Spring. Follow these steps:
1. Specify a database filename in your `db` configuration as follows: ::: code-group ```json [package.json] { "cds": { "requires": { "db": { "kind": "sqlite", "credentials": { "url": "db.sqlite" } // [!code focus] } }}} ``` ::: 2. Run `cds deploy`: ```sh cds deploy ``` This will: 1. Create a database file with the given name. 2. Create the tables and views according to your CDS model. 3. Fill in initial data from the provided _.csv_ files.
With that in place, the server will use this prepared database instead of bootstrapping an in-memory one upon startup: ```log ... [cds] - connect to db > sqlite { url: 'db.sqlite' } ... ```
Finally, configure the DB connection, ideally in a dedicated `sqlite` profile:

::: code-group

```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: sqlite
  datasource:
    url: "jdbc:sqlite:sqlite.db"
    driver-class-name: org.sqlite.JDBC
    hikari:
      maximum-pool-size: 1
```

:::

[Learn how to configure a file-based SQLite database](../java/cqn-services/persistence-services#file-based-storage){.learn-more}
::: tip Redeploy on changes Remember to always redeploy your database whenever you change your models or your data. Just run `cds deploy` again to do so. ::: ### Drop-Create Schema When you redeploy your database, it will always drop-create all tables and views. This is **most suitable for development environments**, where schema changes are very frequent and broad. ### Schema Evolution While drop-create is most appropriate for development, it isn't suitable for database upgrades in production, as all customer data would be lost. To avoid this, `cds deploy` also supports automatic schema evolution, which you can use as follows: 1. Enable automatic schema evolution in your `db` configuration: ::: code-group ```json [package.json] { "cds": { "requires": { "db": { "kind": "sqlite", "credentials": { "url": "db.sqlite" }, "schema_evolution": "auto" // [!code focus] } }}} ``` ::: 2. Run `cds deploy`: ```sh cds deploy ``` [Learn more about automatic schema evolution in the PostgreSQL guide.
The information in there is also applicable to SQLite with persistent databases.](databases-postgres#schema-evolution) {.learn-more} ## Features
CAP supports most of the major features on SQLite: * [Path Expressions](../java/working-with-cql/query-api#path-expressions) & Filters * [Expands](../java/working-with-cql/query-api#projections) * [Localized Queries](../guides/localized-data#read-operations) * [Comparison Operators](../java/working-with-cql/query-api#comparison-operators) * [Predicate Functions](../java/working-with-cql/query-api#predicate-functions) [Learn about features and limitations of SQLite.](../java/cqn-services/persistence-services#sqlite){.learn-more}
The following is an overview of advanced features supported by the new database services. > These apply to all new database services, including SQLiteService, HANAService, and PostgresService. ### Path Expressions & Filters {.node} The new database service provides **full support** for all kinds of [path expressions](../cds/cql#path-expressions), including [infix filters](../cds/cql#with-infix-filters) and [exists predicates](../cds/cql#exists-predicate). For example, you can try this out with *[cap/samples](https://github.com/sap-samples/cloud-cap-samples)* as follows: ```js // $ cds repl --profile better-sqlite var { server } = await cds.test('bookshop'), { Books, Authors } = cds.entities await INSERT.into (Books) .entries ({ title: 'Unwritten Book' }) await INSERT.into (Authors) .entries ({ name: 'Upcoming Author' }) await SELECT `from ${Books} { title as book, author.name as author, genre.name as genre }` await SELECT `from ${Authors} { books.title as book, name as author, books.genre.name as genre }` await SELECT `from ${Books} { title as book, author[ID<170].name as author, genre.name as genre }` await SELECT `from ${Books} { title as book, author.name as author, genre.name as genre }` .where ({'author.name':{like:'Ed%'},or:{'author.ID':170}}) await SELECT `from ${Books} { title as book, author.name as author, genre.name as genre } where author.name like 'Ed%' or author.ID=170` await SELECT `from ${Books}:author[name like 'Ed%' or ID=170] { books.title as book, name as author, books.genre.name as genre }` await SELECT `from ${Books}:author[150] { books.title as book, name as author, books.genre.name as genre }` await SELECT `from ${Authors} { ID, name, books { ID, title }}` await SELECT `from ${Authors} { ID, name, books { ID, title, genre { ID, name }}}` await SELECT `from ${Authors} { ID, name, books.genre { ID, name }}` await SELECT `from ${Authors} { ID, name, books as some_books { ID, title, genre.name as genre }}` await SELECT `from ${Authors} { ID, 
name, books[genre.ID=11] as dramatic_books { ID, title, genre.name as genre }}`
await SELECT `from ${Authors} { ID, name, books.genre[name!='Drama'] as no_drama_books_count { count(*) as sum }}`
await SELECT `from ${Authors} { books.genre.ID }`
await SELECT `from ${Authors} { books.genre }`
await SELECT `from ${Authors} { books.genre.name }`
```

### Optimized Expands {.node}

The old database service implementations used to translate deep reads, that is, SELECTs with expands, into several database queries and collect the individual results into deep result structures. The new service uses `json_object` and other similar functions to instead do that in one single query, with subselects, which greatly improves performance. For example:

```js
SELECT.from(Authors, a => {
  a.ID, a.name, a.books (b => {
    b.title, b.genre (g => { g.name })
  })
})
```

While this used to require three queries with three roundtrips to the database, now only one query is required.

### Localized Queries {.node}

With the old implementation, running queries like `SELECT.from(Books)` would always return localized data, without an easy way to read the non-localized data. The new service does only what you asked for, offering new `SELECT.localized` options:

```js
let books = await SELECT.from(Books)       //> non-localized data
let lbooks = await SELECT.localized(Books) //> localized data
```

Usage variants include:

```js
SELECT.localized(Books)
SELECT.from.localized(Books)
SELECT.one.localized(Books)
```

### Using Lean Draft {.node}

The old implementation was overly polluted with draft handling. But as draft is actually a Fiori UI concept, none of that should show up in database layers. Hence, we eliminated all draft handling from the new database service implementations, and implemented draft in a modular, non-intrusive way — called *'Lean Draft'*. The most important change is that we don't do expensive UNIONs anymore but work with single (cheap) selects.
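The single-query idea behind *Optimized Expands* can be sketched schematically. The helper below is illustrative only — it is not the actual SQL that CAP's database services generate — but it shows the technique: a correlated subselect per expand, aggregated into JSON with functions like SQLite's `json_object` and `json_group_array`, instead of one follow-up query per parent row:

```javascript
// Schematic sketch — NOT the exact SQL generated by @cap-js services.
// Builds one statement where the expanded child rows are aggregated into a
// JSON array via a correlated subselect, instead of issuing 1 + N queries.
function expandQuery(parent, key, child, fk, columns) {
  const cols = columns.map(c => `'${c}', c.${c}`).join(', ')
  return `SELECT p.ID, p.name, (
    SELECT json_group_array(json_object(${cols}))
    FROM ${child} AS c WHERE c.${fk} = p.${key}
  ) AS books FROM ${parent} AS p`
}

const sql = expandQuery('Authors', 'ID', 'Books', 'author_ID', ['ID', 'title'])
console.log(sql) // one statement — no follow-up query per author
```

The point of the design: the database does the nesting, so the roundtrip count stays constant regardless of result size.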
### Consistent Timestamps {.node} Values for elements of type `DateTime` and `Timestamp` are handled in a consistent way across all new database services along these lines: :::tip *Timestamps* = `Timestamp` as well as `DateTime` When we say *Timestamps*, we mean elements of type `Timestamp` as well as `DateTime`. Although they have different precision levels, they are essentially the same type. `DateTime` elements have seconds precision, while `Timestamp` elements have milliseconds precision in SQLite, and microsecond precision in SAP HANA and PostgreSQL. ::: #### Writing Timestamps When writing data using INSERT, UPSERT or UPDATE, you can provide values for `DateTime` and `Timestamp` elements as JavaScript `Date` objects or ISO 8601 Strings. All input is normalized to ensure `DateTime` and `Timestamp` values can be safely compared. In case of SAP HANA and PostgreSQL, they're converted to native types. In case of SQLite, they're stored as ISO 8601 Strings in Zulu timezone as returned by JavaScript's `Date.toISOString()`. For example: ```js await INSERT.into(Books).entries([ { createdAt: new Date }, //> stored .toISOString() { createdAt: '2022-11-11T11:11:11Z' }, //> padded with .000Z { createdAt: '2022-11-11T11:11:11.123Z' }, //> stored as is { createdAt: '2022-11-11T11:11:11.1234563Z' }, //> truncated to .123Z { createdAt: '2022-11-11T11:11:11+02:00' }, //> converted to zulu time ]) ``` #### Reading Timestamps Timestamps are returned as they're stored in a normalized way, with milliseconds precision, as supported by the JavaScript `Date` object. 
For example, the entries inserted previously would return the following:

```js
await SELECT('createdAt').from(Books).where({title:null})
```

```js
[
  { createdAt: '2023-08-10T14:24:30.798Z' },
  { createdAt: '2022-11-11T11:11:11.000Z' },
  { createdAt: '2022-11-11T11:11:11.123Z' },
  { createdAt: '2022-11-11T11:11:11.123Z' },
  { createdAt: '2022-11-11T09:11:11.000Z' }
]
```

`DateTime` elements are returned with seconds precision, with all fractional second digits truncated. That is, if `createdAt` in our examples was a `DateTime`, the previous query would return this:

```js
[
  { createdAt: '2023-08-10T14:24:30Z' },
  { createdAt: '2022-11-11T11:11:11Z' },
  { createdAt: '2022-11-11T11:11:11Z' },
  { createdAt: '2022-11-11T11:11:11Z' },
  { createdAt: '2022-11-11T09:11:11Z' }
]
```

#### Comparing DateTimes & Timestamps

You can safely compare DateTimes & Timestamps with each other and with input values. The input values have to be `Date` objects or ISO 8601 Strings in Zulu timezone with three fractional digits. For example, all of these would work:

```js
SELECT.from(Foo).where `someTimestamp = anotherTimestamp`
SELECT.from(Foo).where `someTimestamp = someDateTime`
SELECT.from(Foo).where `someTimestamp = ${new Date}`
SELECT.from(Foo).where `someTimestamp = ${req.timestamp}`
SELECT.from(Foo).where `someTimestamp = ${'2022-11-11T11:11:11.123Z'}`
```

While these would fail, because the input values don't comply with the rules:

```js
SELECT.from(Foo).where `createdAt = ${'2022-11-11T11:11:11+02:00'}` // non-Zulu time zone
SELECT.from(Foo).where `createdAt = ${'2022-11-11T11:11:11Z'}`      // missing 3-digit fractions
```

> This is because we can never reliably infer the types of input to `where` clause expressions. Therefore, that input doesn't receive any normalization but is passed down as-is, as a plain string.
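These rules can be mimicked in plain JavaScript. The helpers below are illustrative only, not a CAP API: `normalize` shows the write-side normalization to Zulu time with millisecond precision (as by `Date.prototype.toISOString()`), and `isSafeTimestampInput` checks the strict format required for comparison inputs:

```javascript
// Illustrative only — NOT a CAP API. Mimics the timestamp rules above.
// Comparison inputs must be Date objects or Zulu ISO strings with exactly
// three fractional digits (YYYY-MM-DDThh:mm:ss.fffZ).
const ZULU_MS = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/

function isSafeTimestampInput(value) {
  return value instanceof Date || ZULU_MS.test(value)
}

// Write-side normalization: any parseable input is stored as Zulu time with
// millisecond precision, as returned by Date.prototype.toISOString().
function normalize(value) {
  return new Date(value).toISOString()
}

console.log(normalize('2022-11-11T11:11:11Z'))       // → 2022-11-11T11:11:11.000Z (padded)
console.log(normalize('2022-11-11T11:11:11+02:00'))  // → 2022-11-11T09:11:11.000Z (converted to Zulu)
console.log(isSafeTimestampInput('2022-11-11T11:11:11Z'))     // → false (missing .fff)
console.log(isSafeTimestampInput('2022-11-11T11:11:11.123Z')) // → true
```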
:::tip Always ensure proper input in `where` clauses Either use strings strictly in `YYYY-MM-DDThh:mm:ss.fffZ` format, or `Date` objects, as follows: ```js SELECT.from(Foo).where ({ createdAt: '2022-11-11T11:11:11.000Z' }) SELECT.from(Foo).where ({ createdAt: new Date('2022-11-11T11:11:11Z') }) ``` ::: The rules regarding Timestamps apply to all comparison operators: `=`, `<`, `>`, `<=`, `>=`. ### Improved Performance {.node} The combination of the above-mentioned improvements commonly leads to significant performance improvements. For example, displaying the list page of Travels in [cap/sflight](https://github.com/SAP-samples/cap-sflight) took **>250ms** in the past, and **~15ms** now. ## Migration {.node} While we were able to keep all public APIs stable, we had to apply changes and fixes to some **undocumented behaviours and internal APIs** in the new implementation. While not formally breaking changes, you may have used or relied on these undocumented APIs and behaviours. In that case, you can find instructions about how to resolve this in the following sections. > These apply to all new database services: SQLiteService, HANAService, and PostgresService. ### Use Old and New in Parallel {.node} During migration, you may want to occasionally run and test your app with both the new SQLite service and the old one. You can accomplish this as follows: 1. Add the new service with `--no-save`: ```sh npm add @cap-js/sqlite --no-save ``` > This bypasses the *cds-plugin* mechanism, which works through package dependencies. 2. Run or test your app with the `better-sqlite` profile using one of these options: ```sh cds watch bookshop --profile better-sqlite ``` ```sh CDS_ENV=better-sqlite cds watch bookshop ``` ```sh CDS_ENV=better-sqlite jest --silent ``` 3. 
Run or test your app with the old SQLite service as before: ```sh cds watch bookshop ``` ```sh jest --silent ``` ### Avoid UNIONs and JOINs {.node} Many advanced features supported by the new database services, like path expressions or deep expands, rely on the ability to infer queries from CDS models. This task gets extremely complex when adding UNIONs and JOINs to the equation — at least the effort and overhead is hardly matched by generated value. Therefore, we dropped support of UNIONs and JOINs in CQN queries. For example, this means queries like these are deprecated / not supported any longer: ```js SELECT.from(Books).join(Authors,...) ``` Mitigations: 1. Use [path expressions](#path-expressions-filters) instead of joins. (The former lack of support for path expressions was the most common reason for having to use joins at all.) 2. Use plain SQL queries like so: ```js await db.run(`SELECT from ${Books} join ${Authors} ...`) ``` 3. Use helper views modeled in CDS, which still supports all complex UNIONs and JOINs, then use this view via `cds.ql`. ### Fixed Localized Data {.node} Formerly, when reading data using `cds.ql`, this *always* returned localized data. For example: ```js SELECT.from(Books) // always read from localized.Books instead ``` This wasn't only wrong, but also expensive. Localized data is an application layer concept. Database services should return what was asked for, and nothing else. → Use [*Localized Queries*](#localized-queries) if you really want to read localized data from the database: ```js SELECT.localized(Books) // reads localized data SELECT.from(Books) // reads plain data ``` ::: details No changes to app services behaviour Generic application service handlers use *SELECT.localized* to request localized data from the database. Hence, CAP services automatically serve localized data as before. 
::: ### Skipped Virtuals {.node} In contrast to their former behaviour, new database services ignore all virtual elements and hence don't add them to result set entries. Selecting only virtual elements in a query leads to an error. ::: details Reasoning Virtual elements are meant to be calculated and filled in by custom handlers of your application services. Nevertheless, the old database services always returned `null`, or specified `default` values for virtual elements. This behavior was removed, as it provides very little value, if at all. ::: For example, given this definition: ```cds entity Foo { foo : Integer; virtual bar : Integer; } ``` The behaviour has changed to: ```js [dev] cds repl > SELECT.from('Foo') //> [{ foo:1, bar:null }, ...] // [!code --] > SELECT.from('Foo') //> [{ foo:1 }, ...] > SELECT('bar').from('Foo') //> ERROR: no columns to read ``` ### <> Operator {.node} Before, both `<>` and `!=` were translated to `name <> 'John' OR name is null`. * The operator `<>` now works as specified in the SQL standard. * `name != 'John'` is translated as before to `name <> 'John' OR name is null`. ::: warning This is a breaking change in regard to the previous implementation. ::: ### Miscellaneous {.node} - Only `$now` and `$user` are supported as values for `@cds.on.insert/update`. - CQNs with subqueries require table aliases to refer to elements of outer queries. - Table aliases must not contain dots. - CQNs with an empty columns array now throw an error. - `*` isn't a column reference. Use `columns: ['*']` instead of `columns: [{ref:'*'}]`. 
- Column names in CSVs must map to physical column names:

  ```csv
  ID;title;author_ID;currency_code // [!code ++]
  ID;title;author.ID;currency.code // [!code --]
  ```

### Adopt Lean Draft {.node}

As mentioned in [Using Lean Draft](#using-lean-draft), we eliminated all draft handling from new database service implementations, and instead implemented draft in a modular, non-intrusive, and optimized way — called *'Lean Draft'*.

When using the new service, the new `cds.fiori.lean_draft` mode is automatically switched on. You may additionally switch on `cds.fiori.draft_compat: true` in case you run into problems.

More detailed documentation for that is coming.

### Finalizing Migration {.node}

When you have finished migration, remove the old [*sqlite3* driver](https://www.npmjs.com/package/sqlite3):

```sh
npm rm sqlite3
```

And activate the new one as cds-plugin:

```sh
npm add @cap-js/sqlite --save
```
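The CSV header rule above amounts to flattening association paths with underscores. A hypothetical one-off helper (not part of CAP) to convert old-style headers could look like this:

```javascript
// Hypothetical helper — NOT part of CAP. Maps CSV headers that use
// association paths (author.ID) to physical, flattened column names (author_ID).
function toPhysicalHeader(header) {
  return header.split(';').map(col => col.replace(/\./g, '_')).join(';')
}

console.log(toPhysicalHeader('ID;title;author.ID;currency.code'))
// → ID;title;author_ID;currency_code
```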
## SQLite in Production?

As stated in the beginning, SQLite is mostly intended to speed up development, not for production use. This isn't due to limited warranties or lack of support, but a question of technical suitability. A major criterion is this: cloud applications are usually served by server clusters, in which each server is connected to a shared database. SQLite could only be used in such setups with the persistent database file accessed through a network file system; this is rarely available and results in slow performance. Hence, an enterprise client-server database is the better fit for these scenarios.

Having said this, there can indeed be scenarios where SQLite might also be used in production, such as using SQLite as in-memory caches. → [Find a detailed list of criteria on the sqlite.org website](https://www.sqlite.org/whentouse.html).

::: warning
SQLite has only limited support for concurrent database access due to its very coarse lock granularity. This makes it poorly suited for applications with high concurrency.
:::

# Using H2 for Development in CAP Java

For local development and testing, CAP Java supports the [H2](https://www.h2database.com/) database, which can be configured to run in-memory.

[Learn more about features and limitations of using CAP with H2](../java/cqn-services/persistence-services#h2){.learn-more}
::: warning Not supported for CAP Node.js. :::
## Setup & Configuration {.java}

### Using the Maven Archetype {.java}

When a new CAP Java project is created with the [Maven Archetype](../java/developing-applications/building#the-maven-archetype) or with `cds init`, H2 is automatically configured as the in-memory database used for development and testing in the `default` profile.

### Manual Configuration {.java}

To use H2, just add a Maven dependency to the H2 JDBC driver:

```xml
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
</dependency>
```

Next, configure the build to [create an initial _schema.sql_ file](../java/cqn-services/persistence-services#initial-database-schema-1) for H2 using `cds deploy --to h2 --dry`.

In Spring, H2 is automatically initialized as an in-memory database when the driver is present on the classpath.

[Learn more about the configuration of H2](../java/cqn-services/persistence-services#h2){.learn-more}

## Features {.java}

CAP supports most of the major features on H2:

* [Path Expressions](../java/working-with-cql/query-api#path-expressions) & Filters
* [Expands](../java/working-with-cql/query-api#projections)
* [Localized Queries](../guides/localized-data#read-operations)
* [Comparison Operators](../java/working-with-cql/query-api#comparison-operators)
* [Predicate Functions](../java/working-with-cql/query-api#predicate-functions)

[Learn about features and limitations of H2](../java/cqn-services/persistence-services#h2){.learn-more}

# Using PostgreSQL
This guide focuses on the new PostgreSQL Service provided through *[@cap-js/postgres](https://www.npmjs.com/package/@cap-js/postgres)*, which is based on the same new database services architecture as the new [SQLite Service](databases-sqlite). This architecture brings significantly enhanced feature sets and feature parity, as documented in the [*Features* section of the SQLite guide](databases-sqlite#features).

*Learn about migrating from the former `cds-pg` in the [Migration](#migration) chapter.*{.learn-more}

CAP Java 3 is tested on [PostgreSQL](https://www.postgresql.org/) 16, and most CAP features are supported on PostgreSQL.

[Learn more about features and limitations of using CAP with PostgreSQL](../java/cqn-services/persistence-services#postgresql){.learn-more}
## Setup & Configuration
To use [PostgreSQL](https://www.postgresql.org/) in production with CAP Node.js, all you need is the *@cap-js/postgres* database service package, installed as shown below.

To run CAP Java on PostgreSQL, add a Maven dependency to the PostgreSQL feature in `srv/pom.xml`:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
```

In order to use the CDS tooling with PostgreSQL, you also need to install the module `@cap-js/postgres`:
```sh
npm add @cap-js/postgres
```
After that, you can use the `cds deploy` command to [deploy](#using-cds-deploy) to a PostgreSQL database or to [create a DDL script](#deployment-using-liquibase) for PostgreSQL.
### Auto-Wired Configuration {.node}

The `@cap-js/postgres` package uses the `cds-plugin` technique to auto-configure your application to use a PostgreSQL database for production. You can inspect the effective configuration using `cds env`:

```sh
cds env requires.db --for production
```

Output:

```js
{ impl: '@cap-js/postgres', dialect: 'postgres', kind: 'postgres' }
```

[See also the general information on installing database packages](databases#setup-configuration){.learn-more}

## Provisioning a DB Instance

To connect to a PostgreSQL offering from a cloud provider in production, leverage the [PostgreSQL on SAP BTP, hyperscaler option](https://discovery-center.cloud.sap/serviceCatalog/postgresql-hyperscaler-option). For local development and testing convenience, you can run PostgreSQL in a [docker container](#using-docker).
To consume a PostgreSQL instance from a CAP Java application running on SAP BTP, consider the following:

- Only the Java buildpack `java_buildpack` provided by the Cloud Foundry community allows you to consume a PostgreSQL service from a CAP Java application.
- By default, the `java_buildpack` initializes a PostgreSQL datasource with the Java CFEnv library. However, to work properly with CAP, the PostgreSQL datasource must be created by the CAP Java runtime and not by the buildpack. You need to disable the [datasource initialization by the buildpack](https://docs.cloudfoundry.org/buildpacks/java/configuring-service-connections.html) using `CFENV_SERVICE_<SERVICE-INSTANCE-NAME>_ENABLED: false` at your CAP Java service module.

The following example shows these configuration settings applied to a CAP Java service:

::: code-group
```yaml [mta.yaml]
modules:
  - name: bookshop-pg-srv
    type: java
    path: srv
    parameters:
      buildpack: java_buildpack
    properties:
      SPRING_PROFILES_ACTIVE: cloud
      JBP_CONFIG_COMPONENTS: '{jres: ["JavaBuildpack::Jre::SapMachineJRE"]}'
      JBP_CONFIG_SAP_MACHINE_JRE: '{ jre: { version: "17.+" } }'
      CFENV_SERVICE_BOOKSHOP-PG-DB_ENABLED: false
```
:::

> `BOOKSHOP-PG-DB` is the real PostgreSQL service instance name in this example.
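The toggle variable follows a simple pattern derived from the service instance name. As a quick illustration, here is a hypothetical helper that produces it, assuming (as the example above suggests) that the variable simply embeds the uppercased instance name:

```javascript
// Hypothetical helper: derive the CFEnv toggle variable for a given
// Cloud Foundry service instance name (assumption: uppercased name).
const cfenvToggle = name => `CFENV_SERVICE_${name.toUpperCase()}_ENABLED`;

const example = cfenvToggle('bookshop-pg-db');
// example === 'CFENV_SERVICE_BOOKSHOP-PG-DB_ENABLED'
```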
### Using Docker

You can use Docker to run a PostgreSQL database locally as follows:

1. Install and run [Docker Desktop](https://www.docker.com)

2. Create the following file in your project root directory:

   ::: code-group
   ```yaml [pg.yml]
   services:
     db:
       image: postgres:alpine
       environment: { POSTGRES_PASSWORD: postgres }
       ports: [ '5432:5432' ]
       restart: always
   ```
   :::

3. Create and run the docker container:

   ```sh
   docker-compose -f pg.yml up -d
   ```

::: tip
With the introduction of [Testcontainers support](https://spring.io/blog/2023/06/23/improved-testcontainers-support-in-spring-boot-3-1) in Spring Boot 3.1, you can create PostgreSQL containers on the fly for local development or testing purposes.
:::
## Service Bindings

You need a service binding to connect to the PostgreSQL database. In the cloud, use your platform's standard techniques to bind a cloud-based instance of PostgreSQL to your application.

For local development, provide the credentials using a suitable [`cds env`](../node.js/cds-env) technique, like one of the following.
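To see how such credentials take effect, it helps to picture profile blocks like `[pg]` as overlays that are merged into the base configuration when the profile is active. The following is a simplified, hypothetical sketch of that merge, not the actual `cds.env` implementation:

```javascript
// Simplified sketch of profile merging: "[name]" blocks are folded into the
// surrounding object only when their profile is active.
function resolve(config, activeProfiles) {
  const out = {};
  for (const [key, value] of Object.entries(config)) {
    const profile = /^\[(.+)\]$/.exec(key);
    if (profile) {
      // merge the profile block (overriding earlier keys) if its profile is active
      if (activeProfiles.includes(profile[1])) Object.assign(out, resolve(value, activeProfiles));
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = resolve(value, activeProfiles);
    } else {
      out[key] = value;
    }
  }
  return out;
}

const db = resolve({
  kind: 'sqlite',
  '[pg]': { kind: 'postgres', credentials: { host: 'localhost', port: 5432 } }
}, ['pg']);
// with the pg profile active, db.kind is 'postgres'
```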
### Configure Connection Data {.java}

If a PostgreSQL service binding exists, the corresponding `DataSource` is auto-configured. You can also explicitly [configure the connection data](../java/cqn-services/persistence-services#postgres-connection) of your PostgreSQL database in the _application.yaml_ file. If you run the PostgreSQL database in a [docker container](#using-docker), your connection data might look like this:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
spring:
  config.activate.on-profile: postgres-docker
  datasource:
    url: jdbc:postgresql://localhost:5432/postgres
    username: postgres
    password: postgres
    driver-class-name: org.postgresql.Driver
```
:::

To start the application with the new profile `postgres-docker`, the `spring-boot-maven-plugin` can be used: `mvn spring-boot:run -Dspring-boot.run.profiles=postgres-docker`.

[Learn more about the configuration of a PostgreSQL database](../java/cqn-services/persistence-services#postgresql-1){.learn-more}

### Service Bindings for CDS Tooling {.java}

#### Using Defaults with `[pg]` Profile {.java}

`@cap-js/postgres` comes with a set of default credentials under the profile `[pg]` that matches the defaults used in the [docker setup](#using-docker). So, if you stick to these defaults, you can skip to deploying your database with:

```sh
cds deploy --profile pg
```

#### In Your Private `.cdsrc-private.json` {.java}

If you don't use the default credentials and want to use just `cds deploy`, you need to configure the service bindings (connection data) for the CDS tooling.
Add the connection data to your private `.cdsrc-private.json`:

```json
{
  "requires": {
    "db": {
      "kind": "postgres",
      "credentials": {
        "host": "localhost", "port": 5432,
        "user": "postgres", "password": "postgres",
        "database": "postgres"
      }
    }
  }
}
```

### Configure Service Bindings {.node}

#### Using Defaults with `[pg]` Profile

`@cap-js/postgres` comes with default credentials under profile `[pg]` that match the defaults used in the [docker setup](#using-docker). So, if you stick to these defaults, you can skip the next sections and just go ahead and deploy your database:

```sh
cds deploy --profile pg
```

Run your application:

```sh
cds watch --profile pg
```

Learn more about that in the [Deployment](#deployment) chapter below. {.learn-more}

#### In Your Private `~/.cdsrc.json`

Add it to your private `~/.cdsrc.json` if you want to use these credentials on your local machine only:

::: code-group
```json [~/.cdsrc.json]
{
  "requires": {
    "db": {
      "[pg]": {
        "kind": "postgres",
        "credentials": {
          "host": "localhost", "port": 5432,
          "user": "postgres", "password": "postgres",
          "database": "postgres"
        }
      }
    }
  }
}
```
:::

#### In Project `.env` Files

Alternatively, use a `.env` file in your project's root folder if you want to share the same credentials with your team:

::: code-group
```properties [.env]
cds.requires.db.[pg].kind = postgres
cds.requires.db.[pg].credentials.host = localhost
cds.requires.db.[pg].credentials.port = 5432
cds.requires.db.[pg].credentials.user = postgres
cds.requires.db.[pg].credentials.password = postgres
cds.requires.db.[pg].credentials.database = postgres
```
:::

::: tip Using Profiles
The previous configuration examples use the [`cds.env` profile](../node.js/cds-env#profiles) `[pg]` to allow selectively testing with PostgreSQL databases from the command line as follows:

```sh
cds watch --profile pg
```

The profile name can be freely chosen, of course.
:::

## Deployment

### Using `cds deploy`

Deploy your database as usual with:

```sh
cds deploy
```

Or with the following, if you used profile `[pg]` as introduced in the setup chapter above:

```sh
cds deploy --profile pg
```

### With a Deployer App

When deploying to Cloud Foundry, this can be accomplished by providing a simple deployer app. Similar to SAP HANA deployer apps, it is auto-generated for PostgreSQL-enabled projects by running:

```sh
cds build --production
```

::: details What `cds build` does…
1. Compiles the model into _gen/pg/db/csn.json_.
2. Copies required `.csv` files into _gen/pg/db/data_.
3. Adds a _gen/pg/package.json_ with this content:
   ```json
   {
     "dependencies": {
       "@sap/cds": "^8",
       "@cap-js/postgres": "^1"
     },
     "scripts": {
       "start": "cds-deploy"
     }
   }
   ```
   > **Note the dash in `cds-deploy`**, which is required as we don't use `@sap/cds-dk` for deployment and runtime, so the `cds` CLI executable isn't available.
:::

### Add PostgreSQL Deployment Configuration

```sh
cds add postgres
```

::: details See what this does…
1. Adds the `@cap-js/postgres` dependency to your _package.json_ `dependencies`.
2. Sets up deployment descriptors such as _mta.yaml_ to use a PostgreSQL instance deployer application.
3. Wires up the PostgreSQL service to your deployer app and CAP backend.
:::

### Deploy

You can package and deploy that application, for example using [MTA-based deployment](deployment/to-cf#build-mta).

## Automatic Schema Evolution { #schema-evolution }

When redeploying after you changed your CDS models, like adding fields, automatic schema evolution is applied. Whenever you run `cds deploy` (or `cds-deploy`), it executes these steps:

1. Read a CSN of a former deployment from table `cds_model`.
2. Calculate the **delta** to the current model.
3. Generate and run DDL statements with:
   - `CREATE TABLE` statements for new entities
   - `CREATE VIEW` statements for new views
   - `ALTER TABLE` statements for entities with new or changed elements
   - `DROP & CREATE VIEW` statements for views affected by changed entities
4. Fill in initial data from provided _.csv_ files using `UPSERT` commands.
5. Store a CSN representation of the current model in `cds_model`.

> You can disable automatic schema evolution, if necessary, by setting `cds.requires.db.schema_evolution = false`.

::: danger No manual altering
Manually altering the database will most likely break automatic schema evolution!
:::

### Limitations

Automatic schema evolution only allows changes without potential data loss.

#### Allowed {.good}
- Adding entities and elements
- Increasing the length of Strings
- Increasing the size of Integers

#### Disallowed {.bad}
- Removing entities or elements
- Changes to primary keys
- All other type changes

For example, the following type changes are allowed:

```cds
entity Foo {
  anInteger : Int64;     // from former: Int32
  aString : String(22);  // from former: String(11)
}
```

::: tip
If you need to apply such disallowed changes during development, just drop and re-create your database, for example by killing it in docker and re-creating it using the `docker-compose` command, [see Using Docker](#using-docker).
:::

### Dry-Run Offline

You can use `cds deploy` with option `--dry` to simulate and inspect how things work.

1. Capture your current model in a CSN file:
   ```sh
   cds deploy --dry --model-only --out cds-model.csn
   ```
2. Change your models, for example in *[cap/samples/bookshop/db/schema.cds](https://github.com/SAP-samples/cloud-cap-samples/blob/main/bookshop/db/schema.cds)*:
   ```cds
   entity Books { ...
     title : localized String(222); //> increase length from 111 to 222
     foo : Association to Foo;      //> add a new relationship
     bar : String;                  //> add a new element
   }
   entity Foo { key ID: UUID }      //> add a new entity
   ```
3.
Generate delta DDL statements:
   ```sh
   cds deploy --dry --delta-from cds-model.csn --out delta.sql
   ```
4. Inspect the generated SQL statements, which should look like this:

   ::: code-group
   ```sql [delta.sql]
   -- Drop Affected Views
   DROP VIEW localized_CatalogService_ListOfBooks;
   DROP VIEW localized_CatalogService_Books;
   DROP VIEW localized_AdminService_Books;
   DROP VIEW CatalogService_ListOfBooks;
   DROP VIEW localized_sap_capire_bookshop_Books;
   DROP VIEW CatalogService_Books_texts;
   DROP VIEW AdminService_Books_texts;
   DROP VIEW CatalogService_Books;
   DROP VIEW AdminService_Books;

   -- Alter Tables for New or Altered Columns
   ALTER TABLE sap_capire_bookshop_Books ALTER title TYPE VARCHAR(222);
   ALTER TABLE sap_capire_bookshop_Books_texts ALTER title TYPE VARCHAR(222);
   ALTER TABLE sap_capire_bookshop_Books ADD foo_ID VARCHAR(36);
   ALTER TABLE sap_capire_bookshop_Books ADD bar VARCHAR(255);

   -- Create New Tables
   CREATE TABLE sap_capire_bookshop_Foo (
     ID VARCHAR(36) NOT NULL,
     PRIMARY KEY(ID)
   );

   -- Re-Create Affected Views
   CREATE VIEW AdminService_Books AS SELECT ... FROM sap_capire_bookshop_Books AS Books_0;
   CREATE VIEW CatalogService_Books AS SELECT ... FROM sap_capire_bookshop_Books AS Books_0 LEFT JOIN sap_capire_bookshop_Authors AS author_1 ON ... ;
   CREATE VIEW AdminService_Books_texts AS SELECT ... FROM sap_capire_bookshop_Books_texts AS texts_0;
   CREATE VIEW CatalogService_Books_texts AS SELECT ... FROM sap_capire_bookshop_Books_texts AS texts_0;
   CREATE VIEW localized_sap_capire_bookshop_Books AS SELECT ... FROM sap_capire_bookshop_Books AS L_0 LEFT JOIN sap_capire_bookshop_Books_texts AS localized_1 ON localized_1.ID = L_0.ID AND localized_1.locale = session_context( '$user.locale' );
   CREATE VIEW CatalogService_ListOfBooks AS SELECT ... FROM CatalogService_Books AS Books_0;
   CREATE VIEW localized_AdminService_Books AS SELECT ... FROM localized_sap_capire_bookshop_Books AS Books_0;
   CREATE VIEW localized_CatalogService_Books AS SELECT ... FROM localized_sap_capire_bookshop_Books AS Books_0 LEFT JOIN localized_sap_capire_bookshop_Authors AS author_1 ON ... ;
   CREATE VIEW localized_CatalogService_ListOfBooks AS SELECT ... FROM localized_CatalogService_Books AS Books_0;
   ```
   :::

> **Note:** If you use SQLite, `ALTER TYPE` commands are not necessary, and so are not supported, as SQLite is essentially typeless.

### Generate Scripts

You can use `cds deploy` with option `--script` to generate a script as a starting point for a manual migration. The effect of `--script` is essentially the same as for `--dry`, but it also allows changes that could lead to data loss and therefore aren't supported in the automatic schema migration (see [Limitations](#limitations)). For generating such a script, perform the same steps as in section [Dry-Run Offline](#dry-run-offline) above, but replace the command in step 3 with:

```sh
cds deploy --script --delta-from cds-model.csn --out delta_script.sql
```

If your model change includes changes that could lead to data loss, a warning is printed and a respective comment is added to the dangerous statements in the resulting script. For example, deleting an element or reducing the length of an element would look like this:

::: code-group
```sql [delta_script.sql]
...
-- [WARNING] this statement is lossy
ALTER TABLE sap_capire_bookshop_Books DROP price;
-- [WARNING] this statement could be lossy: length reduction of element "title"
ALTER TABLE sap_capire_bookshop_Books ALTER title TYPE VARCHAR(11);
...
```
:::

::: warning
Always check and, if necessary, adapt the generated script before you apply it to your database!
:::

## Deployment Using Liquibase { .java }

You can also use [Liquibase](https://www.liquibase.org/) to control when, where, and how database changes are deployed. Liquibase lets you define database changes [in an SQL file](https://docs.liquibase.com/change-types/sql-file.html); use `cds deploy` to quickly generate DDL scripts that can be used by Liquibase.
Add a Maven dependency to Liquibase in `srv/pom.xml`:

```xml
<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-core</artifactId>
  <scope>runtime</scope>
</dependency>
```

Once `liquibase-core` is on the classpath, [Spring runs database migrations](https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto.data-initialization.migration-tool.liquibase) automatically on application startup and before your tests run.

### ① Initial Schema Version

Once you're ready to release an initial version of your database schema, you can create a DDL file that defines the initial database schema. Create a `db/changelog` subfolder under `srv/src/main/resources` and place the Liquibase _change log_ file as well as the DDL scripts for the schema versions there. The change log is defined by the [db/changelog/db.changelog-master.yml](https://docs.liquibase.com/concepts/changelogs/home.html) file:

```yml
databaseChangeLog:
  - changeSet:
      id: 1
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v1/model.sql
```

Use `cds deploy` to create the _v1/model.sql_ file:

```sh
cds deploy --profile pg --dry --out srv/src/main/resources/db/changelog/v1/model.sql
```

Finally, store the CSN file that corresponds to this schema version:

```sh
cds deploy --model-only --dry --out srv/src/main/resources/db/changelog/v1/model.csn
```

The CSN file is needed as an input to compute the delta DDL script for the next change set. If you start your application with `mvn spring-boot:run`, Liquibase initializes the database schema to version `v1`, unless it has already been initialized.

::: warning
Don't change the _model.sql_ file after it has been deployed by Liquibase, as the [checksum](https://docs.liquibase.com/concepts/changelogs/changeset-checksums.html) of the file is validated. These files should be checked into your version control system. Follow step ② to make changes.
:::

### ② Schema Evolution { #schema-evolution-with-liquibase }

If changes of the CDS model require changes on the database, you can create a new change set that captures the necessary changes. Use `cds deploy` to compute the delta DDL script based on the previous model version (_v1/model.csn_) and the current model. Write the diff into a _v2/model.sql_ file:

```sh
cds deploy --profile pg --dry --delta-from srv/src/main/resources/db/changelog/v1/model.csn --out \
  srv/src/main/resources/db/changelog/v2/model.sql
```

Next, add a corresponding change set in the _changelog/db.changelog-master.yml_ file:

```yml
databaseChangeLog:
  - changeSet:
      id: 1
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v1/model.sql
  - changeSet:
      id: 2
      author: me
      changes:
        - sqlFile:
            dbms: postgresql
            path: db/changelog/v2/model.sql
```

Finally, store the CSN file that corresponds to this schema version:

```sh
cds deploy --model-only --dry --out srv/src/main/resources/db/changelog/v2/model.csn
```

If you now start the application, Liquibase executes all change sets that haven't yet been deployed to the database. For further schema versions, repeat step ②.

::: info Only compatible changes
A delta DDL script is only produced for changes without potential data loss. If the changes in the model could lead to data loss, an error is raised.
:::

## Migration { .node }

Thanks to CAP's database-agnostic `cds.ql` API, we're confident that the new PostgreSQL service comes without breaking changes. Nevertheless, please check the instructions in the [SQLite Migration guide](databases-sqlite#migration), which by and large also apply to the new PostgreSQL service.

### `cds deploy --model-only`

While not a breaking change, migrating a former `cds-pg` database definitely requires preparing it for schema evolution. To do so, run `cds deploy` once with the `--model-only` flag:

```sh
cds deploy --model-only
```

This will:

- Create the `cds_model` table in your database.
- Fill it with the current model obtained through `cds compile '*'`.

::: warning
IMPORTANT: Your `.cds` models are expected to reflect the deployed state of your database.
:::

### With Deployer App

When you have a SaaS application, upgrade all your tenants using the [deployer app](#with-a-deployer-app) with the CLI option `--model-only` added to the start script command of your *package.json*. After having done that, don't forget to remove the `--model-only` option from the start script, to activate actual schema evolution.

## MTX Support

::: warning
[Multitenancy](../guides/multitenancy/) and [extensibility](../guides/extensibility/) aren't yet supported on PostgreSQL.
:::

# Using SAP HANA Cloud for Production

[SAP HANA Cloud](https://www.sap.com/products/technology-platform/hana.html) is supported as the CAP standard database and recommended for productive use, with full support for schema evolution and multitenancy.

::: warning
CAP isn't validated with other variants of SAP HANA, like "SAP HANA Database as a Service" or "SAP HANA (on premise)".
:::

## Setup & Configuration
Run this to use SAP HANA Cloud for production:

```sh
npm add @cap-js/hana
```

::: details Using other SAP HANA drivers...
Package `@cap-js/hana` uses the [`hdb`](https://www.npmjs.com/package/hdb) driver by default. You can override that by running [`npm add @sap/hana-client`](https://www.npmjs.com/package/@sap/hana-client), thereby adding it to your package dependencies, which then takes precedence over the default driver.
:::

::: tip Prefer `cds add`
... as documented in the [deployment guide](deployment/to-cf#_1-using-sap-hana-database), which also does the equivalent of `npm add @cap-js/hana` but in addition cares for updating `mta.yaml` and other deployment resources.
:::
To use SAP HANA Cloud with CAP Java, [configure a module](../java/developing-applications/building#standard-modules) which includes the feature `cds-feature-hana`. For example, add a Maven runtime dependency to the `cds-feature-hana` feature:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-hana</artifactId>
  <scope>runtime</scope>
</dependency>
```

::: tip
The [modules](../java/developing-applications/building#standard-modules) `cds-starter-cloudfoundry` and `cds-starter-k8s` include `cds-feature-hana`.
:::

The datasource for HANA is then auto-configured based on available service bindings of type *service-manager* and *hana*.

[Learn more about the configuration of an SAP HANA Cloud Database](../java/cqn-services/persistence-services#sap-hana){ .learn-more}
## Running `cds build`

Deployment to SAP HANA is done via the [SAP HANA Deployment Infrastructure (HDI)](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/sap-hdi-deployer?), which in turn requires running `cds build` to generate all the deployable HDI artifacts. For example, run this in [cap/samples/bookshop](https://github.com/SAP-samples/cloud-cap-samples/tree/main/bookshop):

```sh
cds build --for hana
```

Which should display this log output:

```log
[cds] - done > wrote output to:
   gen/db/init.js
   gen/db/package.json
   gen/db/src/gen/.hdiconfig
   gen/db/src/gen/.hdinamespace
   gen/db/src/gen/AdminService.Authors.hdbview
   gen/db/src/gen/AdminService.Books.hdbview
   gen/db/src/gen/AdminService.Books_texts.hdbview
   gen/db/src/gen/AdminService.Currencies.hdbview
   gen/db/src/gen/AdminService.Currencies_texts.hdbview
   gen/db/src/gen/AdminService.Genres.hdbview
   gen/db/src/gen/AdminService.Genres_texts.hdbview
   gen/db/src/gen/CatalogService.Books.hdbview
   gen/db/src/gen/CatalogService.Books_texts.hdbview
   gen/db/src/gen/CatalogService.Currencies.hdbview
   gen/db/src/gen/CatalogService.Currencies_texts.hdbview
   gen/db/src/gen/CatalogService.Genres.hdbview
   gen/db/src/gen/CatalogService.Genres_texts.hdbview
   gen/db/src/gen/CatalogService.ListOfBooks.hdbview
   gen/db/src/gen/data/sap.capire.bookshop-Authors.csv
   gen/db/src/gen/data/sap.capire.bookshop-Authors.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Books.csv
   gen/db/src/gen/data/sap.capire.bookshop-Books.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Books.texts.csv
   gen/db/src/gen/data/sap.capire.bookshop-Books.texts.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Genres.csv
   gen/db/src/gen/data/sap.capire.bookshop-Genres.hdbtabledata
   gen/db/src/gen/localized.AdminService.Authors.hdbview
   gen/db/src/gen/localized.AdminService.Books.hdbview
   gen/db/src/gen/localized.AdminService.Currencies.hdbview
   gen/db/src/gen/localized.AdminService.Genres.hdbview
   gen/db/src/gen/localized.CatalogService.Books.hdbview
   gen/db/src/gen/localized.CatalogService.Currencies.hdbview
   gen/db/src/gen/localized.CatalogService.Genres.hdbview
   gen/db/src/gen/localized.CatalogService.ListOfBooks.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Authors.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Books.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Genres.hdbview
   gen/db/src/gen/localized.sap.common.Currencies.hdbview
   gen/db/src/gen/sap.capire.bookshop.Authors.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Books.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Books_author.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_currency.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_foo.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_genre.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_texts.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Genres.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Genres_parent.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Genres_texts.hdbtable
   gen/db/src/gen/sap.common.Currencies.hdbtable
   gen/db/src/gen/sap.common.Currencies_texts.hdbtable
```

### Generated HDI Artifacts

As we can see from the log output, `cds build` generates these [deployment artifacts as expected by HDI](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-deployment-infrastructure-hdi-reference/sap-hdi-artifact-types-and-build-plug-ins-reference?), based on the CDS models and .csv files provided in your projects:

- `.hdbtable` files for entities
- `.hdbview` files for views / projections
- `.hdbconstraint` files for database constraints
- `.hdbtabledata` files for CSV content
- a few technical files required by HDI, such as [`.hdinamespace`](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/sap-hdi-name-space-configuration-syntax?version=2024_1_QRC&q=hdinamespace) and [`.hdiconfig`](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/sap-hdi-container-configuration-file?)

### Custom HDI Artifacts

In addition to the generated HDI artifacts, you can add custom ones by adding the corresponding files to the folder `db/src`. For example, let's add an index for Books titles...

1. Add a file `db/src/sap.capire.bookshop.Books.hdbindex` and fill it with this content:

   ::: code-group
   ```sql [db/src/sap.capire.bookshop.Books.hdbindex]
   INDEX sap_capire_bookshop_Books_title_index
   ON sap_capire_bookshop_Books (title)
   ```
   :::

2. Run `cds build` again → this time you should see this additional line in the log output:

   ```log
   [cds] - done > wrote output to:
     [...]
     gen/db/src/sap.capire.bookshop.Books.hdbindex // [!code focus]
   ```

[Learn more about HDI Design-Time Resources and Build Plug-ins](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/hdi-design-time-resources-and-build-plug-ins?){.learn-more}

## Deploying to SAP HANA

There are two ways to include SAP HANA in your setup: use SAP HANA in a [hybrid mode](#cds-deploy-hana), meaning running your services locally and connecting to your database in the cloud, or run your [whole application](deployment/) on SAP Business Technology Platform. This is possible either in trial accounts or in productive accounts.

To make the following configuration steps work, we assume that you've provisioned, set up, and started your SAP HANA Cloud instance, for example, in the [trial environment](https://cockpit.hanatrial.ondemand.com).
If you need to prepare your SAP HANA first, see [How to Get an SAP HANA Cloud Instance for SAP Business Technology Platform, Cloud Foundry environment](../get-started/troubleshooting#get-hana) to learn about your options.

### Prepare for Production { #configure-hana .node }

To prepare the project, execute:

```sh
cds add hana --for hybrid
```

This configures deployment for SAP HANA to use the _hdbtable_ and _hdbview_ formats. The configuration is added to a `[hybrid]` profile in your _package.json_.

::: tip The profile `hybrid` relates to the [hybrid testing](../advanced/hybrid-testing) scenario.
If you want to prepare your project for production and use the profile `production`, read the [Deploy to Cloud Foundry](deployment/) guide.
:::

No further configuration is necessary for Node.js. For Java, see the [Use SAP HANA as the Database for a CAP Java Application](https://developers.sap.com/tutorials/cp-cap-java-hana-db.html#880cf07a-1788-4fda-b6dd-b5a6e5259625) tutorial for the rest of the configuration.

### Using `cds deploy` for Ad-Hoc Deployments { #cds-deploy-hana .node }

`cds deploy` lets you deploy _just the database parts_ of the project to an SAP HANA instance. The server application (the Node.js or Java part) still runs locally and connects to the remote database instance, allowing for fast development roundtrips.

Make sure that you're [logged in to Cloud Foundry](deployment/to-cf#deploy) with the correct target, that is, org and space. Then in the project root folder, just execute:

```sh
cds deploy --to hana
```

> To connect to your SAP HANA Cloud instance use `cds watch --profile hybrid`.
Behind the scenes, `cds deploy` does the following:

* Compiles the CDS model to SAP HANA files (usually in _gen/db_, or _db/src/gen_)
* Generates _[.hdbtabledata](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-deployment-infrastructure-hdi-reference/table-data-hdbtabledata?)_ files for the [CSV files](databases#providing-initial-data) in the project. If a _.hdbtabledata_ file is already present next to the CSV files, no new file is generated.
* Creates a Cloud Foundry service of type `hdi-shared`, which creates an HDI container. You can also explicitly specify the name like so: `cds deploy --to hana:<service-name>`.
* Starts `@sap/hdi-deploy` locally. If you need a tunnel to access the database, you can specify its address with `--tunnel-address <host:port>`.
* Stores the binding information with profile `hybrid` in the _.cdsrc-private.json_ file of your project. You can use a different profile with the parameter `--for`. With this information, `cds watch`/`run` can fetch the SAP HANA credentials at runtime, so that the server can connect to it.

Specify `--profile` when running `cds deploy` as follows:

```sh
cds deploy --to hana --profile hybrid
```

Based on these profile settings, `cds deploy` executes `cds build` and also resolves additional binding information. If a corresponding binding exists, its service name and service key are used. The development profile is used by default.
[Learn more about the deployment using HDI.](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/sap-hdi-deployer?){.learn-more}

[Learn more about hybrid testing using service bindings to Cloud services.](../advanced/hybrid-testing#run-with-service-bindings){.learn-more}

If you run into issues, see the [Troubleshooting](../get-started/troubleshooting#hana) guide.

#### Deploy Parameters

When using the option `--to hana`, you can specify the service name and logon information in several ways.
##### `cds deploy --to hana`

In this case, the service name and service key either come from the environment variable `VCAP_SERVICES` or are defaulted from the project name, for example, `myproject-db` with `myproject-db-key`. Service instances and keys either exist and will be used, or otherwise they're created.

##### `cds deploy --to hana:myservice`

This overwrites any information coming from environment variables. The service name `myservice` is used and the current Cloud Foundry client logon information is taken to connect to the system.

##### `cds deploy --vcap-file someEnvFile.json`

This takes the logon information and the service name from the `someEnvFile.json` file and overwrites any environment variable that is already set.

##### `cds deploy --to hana:myservice --vcap-file someEnvFile.json`

This is equivalent to `cds deploy --to hana:myservice`; information coming from `--vcap-file` is ignored and a warning is printed after deploying.

### Using `cf deploy` or `cf push` { .node }

See the [Deploying to Cloud Foundry](deployment/) guide for information about how to deploy the complete application to SAP Business Technology Platform, including a dedicated deployer application for the SAP HANA database.

## Native SAP HANA Features

The HANA Service provides dedicated support for native SAP HANA features as follows.

### Vector Embeddings { #vector-embeddings }

Vector embeddings are numerical representations that capture important features and semantics of unstructured data, such as text, images, or audio. This representation means that vector embeddings of similar data have high similarity and low distance to each other. These properties facilitate tasks like similarity search, anomaly detection, recommendations, and Retrieval-Augmented Generation (RAG).
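Similarity between embeddings is typically measured via cosine similarity, the cosine of the angle between two vectors. A plain-JavaScript sketch of the underlying formula — illustration only, not the SAP HANA implementation:

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Vectors pointing in the same direction yield 1, orthogonal vectors yield 0.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [2, 0])); // → 1 (same direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // → 0 (orthogonal)
```

A similarity threshold such as the `> 0.9` used in the queries later in this section simply keeps only vectors whose directions are close to that of the query embedding.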
Vector embeddings from a vector datastore such as the [SAP HANA Cloud Vector Engine](https://community.sap.com/t5/technology-blogs-by-sap/sap-hana-cloud-s-vector-engine-announcement/ba-p/13577010) can help get better generative AI (GenAI) results. This is achieved when the embeddings are used as context for large language model (LLM) prompts. Typically, vector embeddings are computed using an **embedding model**. The embedding model is specifically designed to capture important features and semantics of a specific type of data; it also determines the dimensionality of the vector embedding space. Unified consumption of embedding models and LLMs across different vendors and open-source models is provided via the [SAP Generative AI Hub](https://community.sap.com/t5/technology-blogs-by-sap/how-sap-s-generative-ai-hub-facilitates-embedded-trustworthy-and-reliable/ba-p/13596153).

In CAP, vector embeddings are stored in elements of type [cds.Vector](../cds/types):

```cds
entity Books : cuid { // [!code focus]
  title       : String(111);
  description : LargeString; // [!code focus]
  embedding   : Vector(1536); // vector space w/ 1536 dimensions // [!code focus]
} // [!code focus]
```

At runtime, you can compute the similarity and distance of vectors in the SAP HANA vector store using the `cosineSimilarity` and `l2Distance` (Euclidean distance) functions in queries:

::: code-group

```js [Node.js]
let embedding; // vector embedding as string '[0.3,0.7,0.1,...]';
let similarBooks = await SELECT.from('Books')
  .where`cosine_similarity(embedding, to_real_vector(${embedding})) > 0.9`
```

```java [Java]
// Vector embedding of text, for example, from SAP GenAI Hub or via LangChain4j
float[] embedding = embeddingModel.embed(bookDescription).content().vector();

Result similarBooks = service.run(Select.from(BOOKS)
    .where(b -> CQL.cosineSimilarity(b.embedding(), CQL.vector(embedding)).gt(0.9)));
```

:::

[Learn more about Vector Embeddings in CAP Java](../java/cds-data#vector-embeddings)
{.learn-more}

### Geospatial Functions

CDS supports special syntax for SAP HANA geospatial functions:

```cds
entity Geo as select from Foo {
  geoColumn.ST_Area() as area : Decimal,
  new ST_Point(2.25, 3.41).ST_X() as x : Decimal
};
```

*Learn more in the [SAP HANA Spatial Reference](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-spatial-reference/accessing-and-manipulating-spatial-data?).*{.learn-more}

### Spatial Grid Generators

SAP HANA Spatial has some built-in [grid generator table functions](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-spatial-reference/grid-generators?). To use them in a CDS model, first define corresponding facade entities in CDS. Example for function `ST_SquareGrid`:

```cds
@cds.persistence.exists
entity ST_SquareGrid(size: Double, geometry: hana.ST_GEOMETRY) {
  geom: hana.ST_GEOMETRY;
  i: Integer;
  j: Integer;
}
```

Then the function can be called; parameters have to be passed by name:

```cds
entity V as select from ST_SquareGrid(size: 1.0, geometry: ST_GeomFromWkt('Point(1.5 -2.5)'))
{ geom, i, j };
```

### Functions Without Arguments

SAP HANA allows omitting the parentheses for functions that don't expect arguments. For example:

```cds
entity Foo { key ID : UUID; }
entity Bar as select from Foo { ID, current_timestamp };
```

Some of these are well-known standard functions like `current_timestamp` in the previous example, which can be written without parentheses in CDS models.
However, there are many others that aren't known to the compiler, for example:

- `current_connection`
- `current_schema`
- `current_transaction_isolation_level`
- `current_utcdate`
- `current_utctime`
- `current_utctimestamp`
- `sysuuid`

To use these in CDS models, you have to add the parentheses so that CDS generic support for using native features can kick in:

```cds
entity Foo { key ID : UUID; }
entity Bar as select from Foo {
  ID,
  current_timestamp,
  sysuuid() as sysid // [!code focus]
};
```

### Regex Functions

CDS supports SAP HANA Regex functions (`locate_regexpr`, `occurrences_regexpr`, `replace_regexpr`, and `substring_regexpr`), and SAP HANA aggregate functions with an additional `order by` clause in the argument list. Example:

```sql
locate_regexpr(pattern in name from 5)
first_value(name order by price desc)
```

Restriction: `COLLATE` isn't supported.

For other functions, where the syntax isn't supported by the compiler (for example, `xmltable(...)`), a native _.hdbview_ can be used. See [Using Native SAP HANA Artifacts](../advanced/hana) for more details.

## HDI Schema Evolution

CAP supports database schema updates by detecting changes to the CDS model when executing the CDS build. If the underlying database offers built-in schema migration techniques, compatible changes can be applied to the database without any data loss or the need for additional migration logic. Incompatible changes like deletions are also detected, but require manual resolution, as they would lead to data loss.
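Conceptually, this change detection amounts to diffing two versions of the model's element lists. A simplified illustration — our own sketch, not the actual `cds build` logic:

```javascript
// Diff the elements of an old and a new entity definition.
// Note: a rename shows up as one removed plus one added element,
// which is why renames can't be detected as such.
function diffElements(oldEntity, newEntity) {
  const added   = Object.keys(newEntity).filter(e => !(e in oldEntity));
  const removed = Object.keys(oldEntity).filter(e => !(e in newEntity));
  const changed = Object.keys(newEntity).filter(
    e => e in oldEntity && oldEntity[e] !== newEntity[e]
  );
  return { added, removed, changed };
}

const v1 = { ID: 'UUID', title: 'String(100)' };
const v2 = { ID: 'UUID', title: 'String(50)', descr: 'String' };
console.log(diffElements(v1, v2));
// → { added: ['descr'], removed: [], changed: ['title'] }
```

In this example, adding `descr` is a compatible change, while shortening `title` is one of the incompatible changes that require manual resolution.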
| Change                             | Detected Automatically | Applied Automatically |
| ---------------------------------- | :--------------------: | :-------------------: |
| Adding fields                      | **Yes**                | **Yes**               |
| Deleting fields                    | **Yes**                | No                    |
| Renaming fields                    | n/a <sup>1</sup>       | No                    |
| Changing datatype of fields        | **Yes**                | No                    |
| Changing type parameters           | **Yes**                | **Yes**               |
| Changing associations/compositions | **Yes**                | No <sup>2</sup>       |
| Renaming associations/compositions | n/a <sup>1</sup>       | No                    |
| Renaming entities                  | n/a                    | No                    |

> <sup>1</sup> Rename operations on fields or associations aren't detected as such. Instead, corresponding ADD and DROP statements are rendered, requiring manual resolution activities.
>
> <sup>2</sup> Changing targets may lead to renamed foreign keys. Data integrity issues due to non-matching foreign key values may be hard to detect if the target key names remain the same (for example, "ID").

::: warning No support for incompatible schema changes
Currently there's no framework support for incompatible schema changes that require scripted data migration steps (like changing field constraints NULL → NOT NULL). However, the CDS build does detect those changes and renders them as non-executable statements, requesting the user to take manual resolution steps. We recommend avoiding those changes in productive environments.
:::

### Schema Evolution and Multitenancy/Extensibility

There's full support for schema evolution when the _cds-mtxs_ library is used for multitenancy handling. It ensures that all schema changes during base-model upgrades are rolled out to the tenant databases.

::: warning Tenant-specific extensibility using the _cds-mtxs_ library isn't supported yet
Right now, you can't activate extensions on entities annotated with `@cds.persistence.journal`.
:::

### Schema Updates with SAP HANA {#schema-updates-with-sap-hana}

All schema updates in SAP HANA are applied using SAP HANA Deployment Infrastructure (HDI) design-time artifacts, which are auto-generated during CDS build execution.
Schema updates using _.hdbtable_ deployments are a challenge for tables with large data volume. Schema changes with _.hdbtable_ are applied using temporary table generation to preserve the data. As this could lead to long deployment times, support for _.hdbmigrationtable_ artifact generation has been added. The [Migration Table artifact type](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-deployment-infrastructure-hdi-reference/migration-tables-hdbmigrationtable?version=2024_1_QRC) uses explicit versioning and migration tasks. Modifications of the database table are explicitly specified in the design-time file and carried out on the database table exactly as specified. This saves the cost of an internal table-copy operation. When a new version of an already existing table is deployed, HDI performs the migration steps that haven't been applied yet.

#### Deploy Artifact Transitions as Supported by HDI {#deploy-artifact-transitions}

| Current format    | hdbcds | hdbtable | hdbmigrationtable |
|-------------------|:------:|:--------:|:-----------------:|
| hdbcds            |        |   yes    |        n/a        |
| hdbtable          |  n/a   |          |        yes        |
| hdbmigrationtable |  n/a   |   yes    |                   |

::: warning
Direct migration from _.hdbcds_ to _.hdbmigrationtable_ isn't supported by HDI. A deployment using _.hdbtable_ is required up front. [Learn more in the **Enhance Project Configuration for SAP HANA Cloud** section.](#configure-hana){.learn-more} During the transition from _.hdbtable_ to _.hdbmigrationtable_ you have to deploy version=1 of the _.hdbmigrationtable_ artifact, which must not include any migration steps.
:::

HDI supports the _hdbcds → hdbtable → hdbmigrationtable_ migration flow without data loss. Even going back from _.hdbmigrationtable_ to _.hdbtable_ is possible. Keep in mind that you lose the migration history in this case.
For all transitions you want to execute in HDI, you need to specify an undeploy allowlist as described in [HDI Delta Deployment and Undeploy Allow List](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/hdi-delta-deployment-and-undeploy-allow-list?) in the SAP HANA documentation.

::: tip Moving From _.hdbcds_ To _.hdbtable_
There's a migration guide providing step-by-step instructions for making the switch.

[Learn more about Moving From _.hdbcds_ To _.hdbtable_](../cds/compiler/hdbcds-to-hdbtable){.learn-more}
:::

#### Enabling hdbmigrationtable Generation for Selected Entities During CDS Build {#enabling-hdbmigrationtable-generation}

If you're migrating your already deployed scenario to _.hdbmigrationtable_ deployment, you have to consider the remarks in [Deploy Artifact Transitions as Supported by HDI](#deploy-artifact-transitions).

By default, all entities are still compiled to _.hdbtable_ and you only selectively choose the entities for which you want to build _.hdbmigrationtable_ by annotating them with `@cds.persistence.journal`. Example:

```cds
namespace data.model;

@cds.persistence.journal
entity LargeBook {
  key id : Integer;
  title : String(100);
  content : LargeString;
}
```

CDS build generates _.hdbmigrationtable_ source files for annotated entities as well as a _last-dev/csn.json_ source file representing the CDS model state of the last build.

> These source files have to be checked into the version control system.

Subsequent model changes are applied automatically as respective migration versions including the required schema update statements to accomplish the new target state. There are cases where you have to resolve or refactor the generated statements, like for reducing field lengths.
As they can't be executed without data loss (for example, `String(100)` → `String(50)`), the required migration steps are only added as comments for you to process explicitly. Example:

```txt
>>>>> Manual resolution required - DROP statements causing data loss are disabled
>>>>> by default.
>>>>> You may either:
>>>>>   uncomment statements to allow incompatible changes, or
>>>>>   refactor statements, e.g. replace DROP/ADD by single RENAME statement
>>>>> After manual resolution delete all lines starting with >>>>>
-- ALTER TABLE my_bookshop_Books DROP (title);
-- ALTER TABLE my_bookshop_Books ADD (title NVARCHAR(50));
```

Changing the type of a field causes the CDS build to create a corresponding ALTER TABLE statement. [Data type conversion rules](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/data-type-conversion?) are applied by the SAP HANA database as part of the deployment step. This may cause the deployment to fail if the column contents can't be converted to the new format.

Examples:

1. Changing the type of a field from String to Integer may cause tenant updates to fail if existing content can't be converted.
2. Changing the type of a field from Decimal to Integer can succeed, but decimal places are truncated. Conversion fails if the content exceeds the maximum Integer length.

We recommend keeping _.hdbtable_ deployment for entities where you expect low data volume. Every _.hdbmigrationtable_ artifact becomes part of your versioned source code, creating a new migration version on every model change/build cycle. In turn, each such migration can require manual resolution. You can switch large-volume tables to _.hdbmigrationtable_ at any time, keeping in mind that the existing _.hdbtable_ design-time artifact needs to be undeployed.

::: tip
Sticking to _.hdbtable_ for the actual application development phase avoids lots of initial migration versions that would need to be applied to the database schema.
:::

The CDS build performs rudimentary checks on generated _.hdbmigrationtable_ files:

- The CDS build fails if inconsistencies are encountered between the generated _.hdbmigrationtable_ files and the _last-dev/csn.json_ model state. For example, the last migration version not matching the table version is such an inconsistency.
- The CDS build fails if manual resolution comments starting with `>>>>>` exist in one of the generated _.hdbmigrationtable_ files. This ensures that manual resolution is performed before deployment.

### Native Database Clauses {#schema-evolution-native-db-clauses}

Not all clauses supported by SQL can directly be written in CDL syntax. To use native database clauses also in a CAP CDS model, you can provide arbitrary SQL snippets with the annotations [`@sql.prepend` and `@sql.append`](databases#sql-prepend-append). In this section, we're focusing on schema evolution specific details.

Schema evolution requires that any changes are applied by corresponding ALTER statements. See [ALTER TABLE statement reference](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/alter-table-statement-data-definition?version=2024_1_QRC) for more information. A new migration version is generated whenever an `@sql.append` or `@sql.prepend` annotation is added, changed, or removed. ALTER statements define the individual changes that create the final database schema. This schema has to match the schema defined by the TABLE statement in the _.hdbmigrationtable_ artifact. Note that the compiler doesn't evaluate or process these SQL snippets. Any snippet is taken as is and inserted into the TABLE statement and the corresponding ALTER statement. The deployment fails in case of syntax errors.
CDS Model:

```cds
@cds.persistence.journal
@sql.append: 'PERSISTENT MEMORY ON'
entity E {
  ...,
  @sql.append: 'FUZZY SEARCH INDEX ON'
  text: String(100);
}
```

Result in the generated _.hdbmigrationtable_ file:

```sql
== version=2
COLUMN TABLE E (
  ...,
  text NVARCHAR(100) FUZZY SEARCH INDEX ON
) PERSISTENT MEMORY ON

== migration=2
ALTER TABLE E PERSISTENT MEMORY ON;
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON);
```

It's important to understand that during deployment, new migration versions are applied to the existing database schema. If the resulting schema doesn't match the schema as defined by the TABLE statement, deployment fails and any changes are rolled back. Consequently, when removing or replacing an existing `@sql.append` annotation, the original ALTER statements need to be undone. As the required statements can't be determined automatically, manual resolution is required. The CDS build generates comments starting with `>>>>>` in order to provide some guidance and enforce manual resolution.

Generated file with comments:

```txt
== migration=3
>>>>> Manual resolution required - insert ALTER statement(s) as described below.
>>>>> After manual resolution delete all lines starting with >>>>>
>>>>> Insert ALTER statement for: annotation @sql.append of artifact E has been removed (previous value: "PERSISTENT MEMORY ON")
>>>>> Insert ALTER statement for: annotation @sql.append of element E:text has been removed (previous value: "FUZZY SEARCH INDEX ON")
```

Manually resolved file:

```sql
== migration=3
ALTER TABLE E PERSISTENT MEMORY DEFAULT;
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX OFF);
```

Appending text to an existing annotation is possible without manual resolution. A valid ALTER statement is generated in this case.
For example, appending the `NOT NULL` column constraint to an existing `FUZZY SEARCH INDEX ON` annotation generates the following statement:

```sql
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON NOT NULL);
```

::: warning
You can use `@sql.append` to partition your table initially, but you can't subsequently change the partitions using schema evolution techniques, as altering partitions isn't supported yet.
:::

### Advanced Options

The following CDS configuration options are supported to manage _.hdbmigrationtable_ generation.

::: warning
This hasn't been finalized yet.
:::

```js
{
  "hana" : {
    "journal": {
      "enable-drop": false,
      "change-mode": "alter" // or "drop"
    },
    // ...
  }
}
```

The `enable-drop` option determines whether incompatible model changes are rendered as is (`true`) or require manual resolution (`false`). The default value is `false`.

The `change-mode` option determines whether `ALTER TABLE ... ALTER` (`"alter"`) or `ALTER TABLE ... DROP` (`"drop"`) statements are rendered for data type related changes. To ensure that any kind of model change can be successfully deployed to the database, you can switch the `change-mode` to `"drop"`, keeping in mind that any existing data will be deleted for the corresponding column. See [hdbmigrationtable Generation](#enabling-hdbmigrationtable-generation) for more details. The default value is `"alter"`.

## Caveats

### CSV Data Gets Overridden

HDI deploys CSV data as _.hdbtabledata_ and assumes exclusive ownership of the data. It's overridden with the next application deployment; hence:

::: tip
Only use CSV files for _configuration data_ that can't be changed by application users.
:::

Yet, if you need to support initial data with user changes, you can use the `include_filter` option that _[.hdbtabledata](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-deployment-infrastructure-hdi-reference/table-data-hdbtabledata?version=2024_1_QRC)_ offers.
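For illustration, such a _.hdbtabledata_ file might look roughly like the following. This is a hedged sketch only — the table name, CSV file name, and filter values are made up, and you should consult the _.hdbtabledata_ reference linked above for the exact schema:

```json
{
  "format_version": 1,
  "imports": [{
    "target_table": "MY_BOOKSHOP_BOOKS",
    "source_data": {
      "data_type": "CSV",
      "file_name": "my.bookshop-Books.csv",
      "has_header": true
    },
    "import_settings": {
      "include_filter": [
        { "ID": "201" },
        { "ID": "202" }
      ]
    }
  }]
}
```

The idea is that only the rows matching the filter are owned and overwritten by HDI on redeployment, so rows created or changed by application users outside the filter can survive.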
### Undeploying Artifacts

As documented in the [HDI Deployer docs](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/hdi-delta-deployment-and-undeploy-allow-list?), an HDI deployment by default never deletes artifacts. So, if you remove an entity or CSV files, the respective tables and content remain in the database.

By default, `cds add hana` creates an `undeploy.json` like this:

::: code-group
```json [db/undeploy.json]
[
  "src/gen/**/*.hdbview",
  "src/gen/**/*.hdbindex",
  "src/gen/**/*.hdbconstraint",
  "src/gen/**/*_drafts.hdbtable",
  "src/gen/**/*.hdbcalculationview"
]
```
:::

If you need to remove deployed CSV files, also add this entry:

::: code-group
```json [db/undeploy.json]
[
  [...]
  "src/gen/**/*.hdbtabledata"
]
```
:::

*See this [troubleshooting](../get-started/troubleshooting#hana-csv) entry for more information.*{.learn-more}

### SAP HANA Cloud System Limits

All limitations for the SAP HANA Cloud database can be found in the [SAP Help Portal](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/system-limitations?version=2024_2_QRC).

### Native Associations

For SAP HANA, CDS associations are by default reflected in the respective database tables and views as _Native HANA Associations_ (HANA SQL clause `WITH ASSOCIATIONS`). CAP no longer needs these native associations (provided you use the new database service _@cap-js/hana_ for the CAP Node.js stack).

Unless you explicitly use them in other native HANA objects, we recommend switching off the generation of native HANA associations, as they increase deploy times: They need to be validated in the HDI deployment, and they can introduce indirect dependencies between other objects, which can trigger unnecessary revalidations or even unnecessary drop/create of indexes. By switching them off, all this effort is saved.
::: code-group
```json [package.json]
{
  "cds": {
    "sql": {
      "native_hana_associations": false
    }
  }
}
```
```json [.cdsrc.json]
{
  "sql": {
    "native_hana_associations": false
  }
}
```
:::

For new projects, `cds add hana` automatically adds this configuration.

::: warning Initial full table migration
Be aware that the first deployment after this **configuration change may take longer**. For each entity with associations, the respective database object will be touched (DROP/CREATE for views, full table migration via shadow table and data copy for tables). This is also the reason why we haven't changed the default so far. Subsequent deployments will benefit, however.
:::

# Localization, i18n

Guides you through the steps to internationalize your application to provide localized versions with respect to both Localized Models as well as Localized Data.

_'Localization'_ is a means of adapting your app to the languages of specific target markets. This guide focuses on static texts such as labels. See [CDS](../cds/) and [Localized Data](localized-data) for information about how to manage and serve actual payload data in different translations.

## Externalizing Text Bundles

All you have to do to internationalize your models is to externalize all of your literal texts to text bundles and refer to the respective keys from your models as annotation values. Here is a sample of a model and the corresponding bundle.

::: code-group
```cds [srv/my-service.1.cds]
service Bookshop {
  entity Books @(
    UI.HeaderInfo: {
      Title.Label: '{i18n>Book}',
      TypeName: '{i18n>Book}',
      TypeNamePlural: '{i18n>Books}',
    },
  ){/*...*/}
}
```
:::

::: code-group
```properties [_i18n/i18n.properties]
Book = Book
Books = Books
foo = Foo
```
:::

> You can define the keys of your properties entries.
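Conceptually, a _.properties_ bundle is just a list of `key = value` pairs, and an `{i18n>...}` annotation value is looked up by its key. A plain-JavaScript sketch of that resolution — our own illustration, not the actual CAP implementation:

```javascript
// Parse simple `key = value` lines of a .properties bundle into a map.
function parseProperties(text) {
  const bundle = {};
  for (const line of text.split('\n')) {
    const m = line.match(/^\s*([^#=\s][^=]*?)\s*=\s*(.*)$/);
    if (m) bundle[m[1]] = m[2];
  }
  return bundle;
}

// Resolve an `{i18n>key}` annotation value against a bundle;
// non-i18n values and unknown keys pass through unchanged.
function resolve(value, bundle) {
  const m = value.match(/^\{i18n>(.+)\}$/);
  return m ? bundle[m[1]] ?? value : value;
}

const bundle = parseProperties('Book = Book\nBooks = Books\nfoo = Foo');
console.log(resolve('{i18n>Book}', bundle)); // → 'Book'
```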
[Learn more about annotations in CSN.](../cds/csn#annotations){ .learn-more}

Then you can translate the texts in localized bundles, each with a language/locale code appended to its name, for example:

```sh
_i18n/
  i18n.properties        # dev main --› 'default fallback'
  i18n_en.properties     # English  --› 'default language'
  i18n_de.properties     # German
  i18n_zh_TW.properties  # Traditional Chinese
  ...
```

## Where to Place Text Bundles?

We recommend putting your properties files in a folder named `_i18n` in the root of your project, as in this example:

```zsh
bookshop/
├─ _i18n/
│  ├─ i18n_en.properties
│  ├─ i18n_de.properties
│  ├─ i18n_fr.properties
│  └─ i18n.properties
│  ...
```

By default, text bundles are fetched from folders named *_i18n* or *i18n* in the neighborhood of models, that is, all folders that contain `.cds` sources or parent folders thereof. For example, given the following project layout and sources:

```zsh
bookshop/
├─ app/
│  ├─ browse/
│  │  └─ fiori.cds
│  ├─ common.cds
│  └─ index.cds
├─ srv/
│  ├─ admin-service.cds
│  └─ cat-service.cds
├─ db/
│  └─ schema.cds
└─ readme.md
```

We will be loading i18n bundles from all of these locations, if they exist:

```
bookshop/app/browse/_i18n
bookshop/app/_i18n
bookshop/srv/_i18n
bookshop/db/_i18n
bookshop/_i18n
```

[Learn more about the underlying machinery in the reference docs for `cds.i18n`](../node.js/cds-i18n){.learn-more}

## CSV-Based Text Bundles

For smaller projects, you can use CSV files instead of _.properties_ files, which you can easily edit in _Excel_, _Numbers_, etc. The format is as follows:

| key   | en    | de     | zh_CN | ... |
| ----- | ----- | ------ | ----- | --- |
| Book  | Book  | Buch   | ...   |     |
| Books | Books | Bücher | ...   |     |
| ...   |       |        |       |     |

With this CSV source:

```csv
key;en;de;zh_CN;...
Book;Book;Buch;...
Books;Books;Bücher;...
...
```

## Merging Algorithm

Each localized model is constructed by applying:

1. The _default fallback_ bundle (that is, *i18n.properties*), then ...
2.
The _default language_ bundle (usually *i18n_en.properties*), then ...
3. The requested bundle (for example, *i18n_de.properties*)

... in that order. So, the complete stack of overlaid models for the given example would look like this (higher ones override lower ones):

| Source | Content |
|:--- |:--- |
| *_i18n/i18n_de.properties* | specific language bundle |
| *_i18n/i18n_en.properties* | default language bundle |
| *_i18n/i18n.properties* | default fallback bundle |
| *srv/my-service.cds* | service definition |
| *db/schema.cds* | underlying data model |

::: tip Set default language
The _default language_ is usually `en` but can be overridden by configuring `cds.i18n.default_language` in your project's _package.json_.
:::

## Merging Reuse Bundles

If your application is [importing models from a reuse package](extensibility/composition), that package comes with its own language bundles for localization. These are applied upon import, so they can be overridden in your models as well as in your language bundles and their translations.

For example, assuming that your data model imports from a _foundation_ package, the overall stack of overlays would look like this:

| Source |
|:--- |
| *./_i18n/i18n_de.properties* |
| *./_i18n/i18n_en.properties* |
| *./_i18n/i18n.properties* |
| *./srv/my-service.cds* |
| *./db/schema.cds* |
| *foundation/_i18n/i18n_de.properties* |
| *foundation/_i18n/i18n_en.properties* |
| *foundation/_i18n/i18n.properties* |
| *foundation/index.cds* |
| *foundation/\.cds* |
| *foundation/\.cds* |
| ... |

## Determining User Locales { #user-locale}

Upon incoming requests at runtime, the user's preferred language is determined as follows:

1. Read the preferred language from the first of:
   1. The value of the `sap-locale` URL parameter, if present.
   2. The value of the `sap-language` URL parameter, but only if it's `1Q`, `2Q` or `3Q` as described below.
   3. The first entry from the request's `Accept-Language` header.
   4.
The `default_language` configured on the app level.
2. Narrow to normalized locales as described below.

::: tip
CAP Node.js accepts formats following the available standards of POSIX and RFC 1766, and transforms them into normalized locales. CAP Java only accepts language codes following the standard of RFC 1766 (or [IETF's BCP 47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt)).
:::

## Normalized Locales { #normalized-locales}

To reduce the number of required translations, most determined locales are normalized by narrowing them to their main language codes only, for example, `en_US`, `en_CA`, `en_AU` → `en`, except for these preserved language codes:

| Locale | Language |
| --- | --- |
| zh_CN | Chinese - China |
| zh_HK | Chinese - Hong Kong, China |
| zh_TW | Chinese traditional - Taiwan, China |
| en_GB | English - Great Britain |
| fr_CA | French - Canada |
| pt_PT | Portuguese - Portugal |
| es_CO | Spanish - Colombia |
| es_MX | Spanish - Mexico |
| en_US_x_saptrc | SAP tracing translations w/ `sap-language=1Q` |
| en_US_x_sappsd | SAP pseudo translations w/ `sap-language=2Q` |
| en_US_x_saprigi | Rigi language w/ `sap-language=3Q` |

#### Configuring Normalized Locales

For CAP Node.js, the list of preserved locales is configurable, for example in the _package.json_ file, using the configuration option `cds.i18n.preserved_locales` as follows:

```jsonc
{"cds":{
  "i18n": {
    "preserved_locales": [
      "en_GB",
      "fr_CA",
      "pt_PT",
      "pt_BR",
      "zh_CN",
      "zh_HK",
      "zh_TW"
    ]
  }
}}
```

In this example, we removed `es_CO` and `es_MX` from the list, and added `pt_BR`.

In CAP Java, the preserved locales can be configured via the `cds.locales.normalization.includeList` [property](../java/developing-applications/properties#cds-locales-normalization).

::: warning
However this list is configured, ensure you have translations for the listed locales, as the fallback language will otherwise be `en`.
:::

#### Use Hyphens in File Names

Due to the ambiguity regarding standards, for example, the usage of hyphens (`-`) in contrast to underscores (`_`), CAP follows the approach of the [SAP Translation Hub](https://discovery-center.cloud.sap/serviceCatalog/sap-translation-hub) and normalizes locales to **underscores** as its de facto standard. In effect, this means:

- We support incoming locales as [language tags](https://www.ietf.org/rfc/bcp/bcp47.txt) using hyphens to separate subtags <sup>1</sup>, for example `en-GB`.
- We always normalize these to underscores, that is, `en_GB`.
- Always use underscores in file names, for example, `i18n_en_GB.properties`.

> <sup>1</sup> CAP Node.js also supports underscore-separated tags, for example `en_GB`.
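Taken together, the normalization rules above — replace hyphens with underscores, keep preserved locales as-is, otherwise narrow to the main language code — can be sketched as follows. The function name is ours and the preserved list is abridged:

```javascript
// Abridged preserved list — see the table in "Normalized Locales" above.
const PRESERVED = ['zh_CN', 'zh_HK', 'zh_TW', 'en_GB', 'fr_CA', 'pt_PT', 'es_CO', 'es_MX'];

function normalizeLocale(locale, preserved = PRESERVED) {
  const tag = locale.replace(/-/g, '_'); // hyphens normalize to underscores
  if (preserved.includes(tag)) return tag; // preserved locales stay intact
  return tag.split('_')[0]; // otherwise narrow to the main language code
}

console.log(normalizeLocale('en-US')); // → 'en'
console.log(normalizeLocale('en-GB')); // → 'en_GB'
console.log(normalizeLocale('fr_CA')); // → 'fr_CA'
```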
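The three-step merging algorithm from earlier in this guide — default fallback bundle, then default language bundle, then the requested bundle — amounts to overlaying the bundles in order, with later ones overriding earlier ones. An illustrative sketch:

```javascript
// Overlay bundles in merge order: later arguments win.
function localizedBundle(fallback, defaultLang, requested) {
  return Object.assign({}, fallback, defaultLang, requested);
}

const fallback = { Book: 'Book', Books: 'Books', foo: 'Foo' }; // i18n.properties
const en = { Book: 'Book', Books: 'Books' };                   // i18n_en.properties
const de = { Book: 'Buch', Books: 'Bücher' };                  // i18n_de.properties

console.log(localizedBundle(fallback, en, de));
// → { Book: 'Buch', Books: 'Bücher', foo: 'Foo' }
```

Note how `foo` survives from the default fallback bundle because neither language bundle overrides it.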
# Localized Data

This guide extends the localization/i18n of static content, such as labels or messages, to serve localized versions of actual application data.

Localized data refers to the maintenance of different translations of textual data and automatically fetching the translations matching the users' preferred language, with per-row fallback to default languages, if the required translations aren't available. Language codes are in ISO 639-1 format.

> Find a **working sample** at .

## Declaring Localized Data

Use the `localized` modifier to mark entity elements that require translated texts.

```cds
entity Books {
  key ID : UUID;
  title : localized String;
  descr : localized String;
  price : Decimal;
  currency : Currency;
}
```

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples/blob/ea6e27481071a765dfd701ddb239ed89b92bf426/bookshop/db/schema.cds#L4-L7){ .learn-more}

::: warning _Restriction_
If you want to use the `localized` modifier, the entity's keys must not be associations.
:::

> `localized` in entity sub elements isn't currently supported and is ignored. This includes `localized` in structured elements and structured types.
## Behind the Scenes

The `cds` compiler automatically unfolds the previous definition as follows, applying the basic mechanisms of [Managed Compositions](../cds/cdl#managed-compositions) and [Scoped Names](../cds/cdl#scoped-names):

First, a separate _Books.texts_ entity is added to hold translated texts:

```cds
entity Books.texts {
  key locale : sap.common.Locale;
  key ID : UUID; //= source's primary key
  title : String;
  descr : String;
}
```

[See the definition of `sap.common.Locale`.](../cds/common#locale-type){ .learn-more}

Second, the source entity is extended with associations to _Books.texts_:

```cds
extend entity Books with {
  texts : Composition of many Books.texts on texts.ID=ID;
  localized : Association to Books.texts on localized.ID=ID
    and localized.locale = $user.locale;
}
```

The composition `texts` points to all translated texts for the given entity, whereas the `localized` association points to the translated texts and is narrowed to the request's locale.

Third, views are generated in SQL DDL to easily read localized texts with equivalent fallback:

```cds
entity localized.Books as select from Books {*,
  coalesce (localized.title, title) as title,
  coalesce (localized.descr, descr) as descr
};
```

::: warning Note:
In contrast to former versions, with CDS compiler v2 we don't add such entities to CSN anymore, but only to generated SQL DDL output.
:::

### Resolving Localized Texts via Views

As mentioned, the CDS compiler creates views that resolve the translated texts internally. Once a CDS runtime detects a request with a user locale, it uses those views instead of the table of the involved entity.

Note that SQLite doesn't support locales like _SAP HANA_ does. For _SQLite_, additional views are generated for different languages. Currently those views are generated for the locales 'de' and 'fr', and the default locale is handled as 'en'.

```json
"i18n": {
  "for_sqlite": ["en", ...]
}
```

> In _package.json_, put this snippet inside the `cds` block; in _.cdsrc.json_, add it without the `cds` wrapper.
> For testing with SQLite: Make sure that the _Books_ table contains the English texts and that the other languages go into the _Books.texts_ table.

For _H2_, you need to use the property as follows:

```json
"i18n": {
  "for_sql": ["en", ...]
}
```

### Resolving Search Over Localized Texts at Runtime { #resolving-localized-texts-at-runtime}

Although the approach with the generated localized views is very convenient, it's limited on SQLite and shows suboptimal performance with large data sets on _SAP HANA_. The performance penalty is especially critical for search operations. Therefore, both CAP runtimes have implemented a solution targeted at search operations. If the `localized` association of your entity is present and accessible by the given CQL statement, the runtimes generate SQL statements that resolve the localized texts, optimized for the underlying database.

When your CQL queries select entities directly, there's no issue, as the `localized` association is automatically accessible in an entity with localized elements. If your CQL query selects from a view, it's important that your view's projection preserves the `localized` association.

The following view definitions preserve the `localized` association in the view, allowing the runtimes to optimize query execution, and enabling broader language support on SQLite, H2, and PostgreSQL.

**Preferred -** Exclude elements that mustn't be exposed:

```cds
entity OpenBookView as select from Books {*}
  excluding { price, currency };
```

Include the `localized` association:

```cds
entity ClosedBookView as select from Books {
  ID, title, descr, localized
};
```

### Base Entities Stay Intact

In contrast to similar strategies, the texts aren't externalized completely: the original texts are kept in the source entity. This saves one join when reading localized texts with fallback to the original ones.
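The per-row fallback implemented by the `coalesce` expressions in the generated `localized.Books` view can be mimicked in plain JavaScript. The data below is made up for illustration and doesn't reflect CAP runtime structures:

```javascript
// Mimics coalesce(localized.title, title) from the generated localized view.
// Sample data only — not how the CAP runtime stores entities.
const books = [ // base entity keeps the original (default-language) texts
  { ID: 201, title: 'Wuthering Heights', descr: "Emily Brontë's only novel" },
  { ID: 207, title: 'Jane Eyre', descr: 'A novel by Charlotte Brontë' },
];
const texts = [ // Books.texts: one row per (locale, ID)
  { locale: 'de', ID: 201, title: 'Sturmhöhe', descr: 'Emily Brontës einziger Roman' },
];

function localizedBooks(locale) {
  return books.map(book => {
    const t = texts.find(x => x.ID === book.ID && x.locale === locale);
    return {
      ID: book.ID,
      title: t?.title ?? book.title, // coalesce(localized.title, title)
      descr: t?.descr ?? book.descr, // coalesce(localized.descr, descr)
    };
  });
}

console.log(localizedBooks('de')[0].title); // Sturmhöhe
console.log(localizedBooks('de')[1].title); // Jane Eyre  (per-row fallback)
```

Because the originals stay in the base entity, a missing translation falls back row by row, exactly as the generated view does with a single join.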
### Extending *.texts* Entities { #extending-texts-entities}

It's possible to collectively extend all generated *.texts* entities by extending the aspect `sap.common.TextsAspect`, which is defined in [*common.cds*](../cds/common#texts-aspects). For example, the aspect can be used to add an association to the `Languages` code list entity, or to add flags that help you to control the translation process.

Example:

```cds
extend sap.common.TextsAspect with {
  language : Association to sap.common.Languages on language.code = locale;
}
```

The earlier description is simplified: *.texts* entities are generated with an include on `sap.common.TextsAspect`, if the aspect exists. For the *Books* entity, the generated *.texts* entity looks like:

```cds
entity Books.texts : sap.common.TextsAspect {
  key ID : UUID;
  title : String;
  descr : String;
}
```

When the include is expanded, the key element `locale` is inserted into *.texts* entities, alongside all the other elements that have been added to `sap.common.TextsAspect` via extensions.

```cds
entity Books.texts {
  // from sap.common.TextsAspect
  key locale : sap.common.Locale;
  language : Association to sap.common.Languages on language.code = locale;
  // from Books
  key ID : UUID;
  title : String;
  descr : String;
}
```

It isn't allowed to extend `sap.common.TextsAspect` with

* [Managed Compositions of Aspects](../cds/cdl#managed-compositions)
* localized elements
* key elements

For entities that have an annotation `@fiori.draft.enabled`, the corresponding *.texts* entities also include the aspect, but the element `locale` isn't marked as a key and an element `key ID_texts : UUID` is added.

## Pseudo var `$user.locale` { #user-locale}

[`$user.locale`]: #user-locale

As shown in the second step, the pseudo variable `$user.locale` is used to refer to the user's preferred locale and join matching translations from `.texts` tables.
This pseudo variable allows expressing such queries in a database-independent way, which is realized in the service runtimes as follows:

### Determining `$user.locale` from Inbound Requests

The user's preferred locale is determined from request parameters, user settings, or the _accept-language_ header of inbound requests [as explained in the Localization guide](i18n#user-locale).

### Programmatic Access to `$user.locale`

The resulting [normalized locale](i18n#normalized-locales) is available programmatically in your event handlers:

* Node.js: `req.locale`
* Java: `eventContext.getParameterInfo().getLocale()`

### Propagating `$user.locale` to Databases {#propagating-of-user-locale}

[propagation]: #propagating-of-user-locale

Finally, the [normalized locale](i18n#normalized-locales) is **propagated** to underlying databases using session variables, that is, `$user.locale` translates to `session_context('locale')` in native SQL of SAP HANA and most other databases. Not all databases support session variables. For example, for _SQLite_ we currently just create stand-in views for selected languages. With that, the APIs are kept stable but have restricted feature support.

## Reading Localized Data

Given the asserted unfolding and user locales propagated to the database, you can read localized data as follows:

### In Agnostic Code

Read _original_ texts, that is, the ones in the originally created data entry:

```sql
SELECT ID, title, descr from Books
```

### For End Users

Reading texts for end users uses the `localized` association, which requires prior [propagation] of [`$user.locale`] to the underlying database. Read _localized_ texts in the user's preferred language:

```sql
SELECT ID, localized.title, localized.descr from Books
```

### For Translation UIs

Translation UIs read and write texts in all languages, independent of the current user's preferred one. They use the to-many `texts` association, which is independent of [`$user.locale`].
Read texts in **different** translations:

```sql
SELECT ID, texts[locale='fr'].title, texts[locale='fr'].descr from Books
```

Read texts in **all** translations:

```sql
SELECT ID, texts.locale, texts.title, texts.descr from Books
```

## Serving Localized Data

The generic handlers of the service runtimes automatically serve read requests from `localized` views. Users see all texts in their preferred language or the fallback language.

[See also **Enabling Draft for Localized Data**.](../advanced/fiori#draft-for-localized-data){ .learn-more}

For example, given this service definition:

```cds
using { Books } from './books';
service CatalogService {
  entity BooksList as projection on Books { ID, title, price };
  entity BooksDetails as projection on Books;
  entity BooksShort as projection on Books {
    ID, price, substr(title, 0, 10) as title : localized String(10),
  };
}
```

### `localized.` Helper Views

For each exposed entity in a service definition, and all intermediate views, a corresponding `localized.` entity is created. It has the same query clauses and all annotations, except for the `from` clause being redirected to the underlying entity's `localized.` counterpart.

A helper view is only created if the corresponding entity contains at least one element with a `localized` property, or if it exposes an association to an entity that is localized. You may need to cast an element if that property is not propagated, for example for expressions such as in `CatalogService.BooksShort`.

```cds
using { localized.Books } from './books_localized';
entity localized.CatalogService.BooksList as
  SELECT from localized.Books { ID, title, price };
entity localized.CatalogService.BooksDetails as
  SELECT from localized.Books;
entity localized.CatalogService.BooksShort as
  SELECT from localized.Books {
    ID, price, substr(title, 0, 10) as title : localized String(10),
  };
```

::: warning Note:
These `localized.` entities are not part of CSN and aren't exposed through OData.
They are only generated for SQL.
:::

### Read Operations

The generic handlers in the service framework automatically redirect all incoming read requests to the `localized.` helper views in the SQL database, unless in SAP Fiori draft mode.

The `@cds.localized: false` annotation can be used to explicitly switch off the automatic redirection to the localized views. All incoming requests to an entity annotated with `@cds.localized: false` will directly access the base entity.

```cds
using { Books } from './books';
service CatalogService {
  @cds.localized: false //> direct access to base entity; all fields are non-localized defaults
  entity BooksDetails as projection on Books;
}
```

In Node.js applications, for requests with an `$expand` query option on entities annotated with `@cds.localized: false`, the expanded properties are not translated.

```http
GET /BooksDetails?$expand=authors
//> all fields from authors are non-localized defaults, if BooksDetails is annotated with `@cds.localized: false`
```

### Write Operations

Since the corresponding text table is linked through composition, you can use deep inserts or upserts to fill in language-specific texts.

```http
POST /Entity HTTP/1.1
Content-Type: application/json

{
  "name": "Some name",
  "description": "Some description",
  "texts": [ {"name": "Ein Name", "description": "Eine Beschreibung", "locale": "de"} ]
}
```

If you want to add a language-specific text to an existing entity, perform a `POST` request to the text table of the entity through navigation.

```http
POST /Entity()/texts HTTP/1.1
Content-Type: application/json

{
  "name": "Ein Name", "description": "Eine Beschreibung", "locale": "de"
}
```

### Update Operations

To update the language-specific texts of an entity along with the default fallback text, you can perform a deep update as a `PUT` or `PATCH` request to the entity through navigation.
```http
PUT/PATCH /Entity() HTTP/1.1
Content-Type: application/json

{
  "name": "Some new name",
  "description": "Some new description",
  "texts": [ {"name": "Ein neuer Name", "description": "Eine neue Beschreibung", "locale": "de"} ]
}
```

To update a single language-specific text field, perform a `PUT` or a `PATCH` request to the entity's text field via navigation.

```http
PUT/PATCH /Entity()/texts(ID=,locale='')/ HTTP/1.1
Content-Type: application/json

{
  "name": "Ein neuer Name"
}
```

::: warning *Note:*
Accepted language codes in the `locale` property need to follow the [BCP 47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) standard but use __underscore__ (`_`) instead of __hyphen__ (`-`), for example `en_GB`.
:::

### Delete Operations

To delete a locale's language-specific texts of an entity, perform a `DELETE` request to the entity's texts table through navigation. Specify the entity's key and the locale that you want to delete.

```http
DELETE /Entity()/texts(ID=,locale='') HTTP/1.1
```

## Nested Localized Data

The definition of books has a `currency` element that is effectively an association to the `sap.common.Currencies` code list entity, which in turn has localized texts. Find the respective definitions in the reference docs for `@sap/cds/common`, in the section on [Common Code Lists](../cds/common#code-lists).
Upon unfolding, all associations to other entities with localized texts are automatically redirected as follows:

```cds
entity localized.Currencies as select from Currencies AS c {* /*...*/};
entity localized.Books as select from Books AS p mixin {
  // association is redirected to localized.Currencies
  currency : Association to localized.Currencies on currency = p.currency;
} into {* /*...*/};
```

Given that, nested localized data can be easily read with independent fallback logic:

```sql
SELECT from localized.Books {
  ID, title, descr, currency.name as currency
} where title like '%pen%' or currency.name like '%land%'
```

In the result sets for this query, values for `title`, `descr`, as well as the `currency` name are localized.

## Adding Initial Data

To add initial data, two _.csv_ files are required. The first _.csv_ file, for example _Books.csv_, should contain all the data in the default language. The second file, for example _Books_texts.csv_ (please note **_texts** in the file name), should contain the translated data in all other languages your application is using.

For example, _Books.csv_ can look as follows:

::: code-group
```csv [Books.csv]
ID;title;descr;author_ID;stock;price;currency_code;genre_ID
201;Wuthering Heights;Wuthering Heights, Emily Brontë's only novel ...;101;12;11.11;GBP;11
207;Jane Eyre;Jane Eyre is a novel by English writer ...;107;11;12.34;GBP;11
251;The Raven;The Raven is a narrative poem by ...;150;333;13.13;USD;16
252;Eleonora;Eleonora is a short story by ...;150;555;14;USD;16
271;Catweazle;Catweazle is a British fantasy ...;170;22;150;JPY;13
...
```
:::

This is the corresponding _Books_texts.csv_:

::: code-group
```csv [Books_texts.csv]
ID;locale;title;descr
201;de;Sturmhöhe;Sturmhöhe (Originaltitel: Wuthering Heights) ist der einzige Roman...
201;fr;Les Hauts de Hurlevent;Les Hauts de Hurlevent (titre original : Wuthering Heights)...
207;de;Jane Eyre;Jane Eyre. Eine Autobiographie (Originaltitel: Jane Eyre. An Autobiography)...
252;de;Eleonora;Eleonora ist eine Erzählung von Edgar Allan Poe. Sie wurde 1841...
...
```
:::

# Temporal Data

CAP provides out-of-the-box support for declaring and serving date-effective entities with application-controlled validity, in particular to serve as-of-now and time-travel queries. Temporal data allows you to maintain information relating to past, present, and future application time. Built-in support for temporal data follows the general principle of CDS to capture intent with models while staying conceptual, concise, and comprehensive, and minimizing pollution by technical artifacts.

> For an introduction to this topic, see [Temporal database](https://en.wikipedia.org/w/index.php?title=Temporal_database&oldid=911558203) (Wikipedia) and [Temporal features in SQL:2011](https://files.ifi.uzh.ch/dbtg/ndbs/HS17/SQL2011.pdf).

## Starting with 'Timeless' Models {#timeless-model}

For the following explanation, let's start with a base model to manage employees and their work assignments, which is free of any traces of temporal data management.

### Timeless Model

::: code-group
```cds [timeless-model.cds]
namespace com.acme.hr;
using { com.acme.common.Persons } from './common';

entity Employees : Persons {
  jobs : Composition of many WorkAssignments on jobs.empl=$self;
  job1 : Association to one /*of*/ WorkAssignments;
}

entity WorkAssignments {
  key ID : UUID;
  role : String(111);
  empl : Association to Employees;
  dept : Association to Departments;
}

entity Departments {
  key ID : UUID;
  name : String(111);
  head : Association to Employees;
  members : Association to many Employees on members.jobs.dept = $self;
}
```
:::

> An employee can have several work assignments at the same time.
> Each work assignment links to one department.

### Timeless Data

A set of sample data entries for this model, which only captures the latest state, can look like this:

![Alice has the job as a developer and consultant.
Alice works in her roles for the departments core development and app development. Bob's work assignment is linked to the construction department.](assets/temporal-data/timeless-data.drawio.svg)

> Italic titles indicate to-one associations; actual names of the respective foreign key columns in SQL are `job1_ID`, `empl_ID`, and `dept_ID`.

## Declaring Temporal Entities

_Temporal Entities_ represent _logical_ records of information for which we track changes over time by recording each change as individual _time slices_ in the database with valid from/to boundaries. For example, we could track the changes of Alice's primary work assignment _WA1_ over time:

![Alice progressed from developer to senior developer to architect.](assets/temporal-data/time-slices.drawio.svg)

::: tip
Validity periods are expected to be **non-overlapping** and **closed-open** intervals; same as in SQL:2011.
:::

### Using Annotations `@cds.valid.from/to`

To track temporal data, just add a pair of date/time elements to the respective entities annotated with `@cds.valid.from/to`, as follows:

```cds
entity WorkAssignments { //...
  start : Date @cds.valid.from;
  end : Date @cds.valid.to;
}
```

::: tip
The annotation pair `@cds.valid.from/to` actually triggers the built-in mechanisms for [serving temporal data](#serving-temporal-data). It specifies which elements form the **application-time** period, similar to SQL:2011.
:::

### Using Common Aspect `temporal`

Alternatively, use the predefined aspect [`temporal`](../cds/common#aspect-temporal) to declare temporal entities:

```cds
using { temporal } from '@sap/cds/common';
entity WorkAssignments : temporal {/*...*/}
```

Aspect [`temporal`](../cds/common#aspect-temporal) is defined in _[@sap/cds/common](../cds/common)_ as follows:

```cds
aspect temporal {
  validFrom : Timestamp @cds.valid.from;
  validTo : Timestamp @cds.valid.to;
}
```

### Separate Temporal Details

The previous samples would turn the whole _WorkAssignment_ entity into a temporal one.
Frequently though, only some parts of an entity are temporal, while others stay timeless. You can reflect this by separating temporal elements from non-temporal ones:

```cds
entity WorkAssignments { // non-temporal head entity
  key ID : UUID;
  empl : Association to Employees;
  details : Composition of WorkDetails on details.ID = $self.ID;
}
entity WorkDetails : temporal { // temporal details entity
  key ID : UUID; // logical record ID
  role : String(111);
  dept : Association to Departments;
}
```

The data situation would change as follows:

![Alice has two work assignments. Her first work assignment is stable but the roles in this assignment change over time. She progressed from developer to senior developer to architect. Each role has specific validity defined.](assets/temporal-data/temporal-details.drawio.svg)

## Serving Temporal Data

We expose the entities from the temporal model in a service as follows:

::: code-group
```cds [service.cds]
using { com.acme.hr } from './temporal-model';
service HRService {
  entity Employees as projection on hr.Employees;
  entity WorkAssignments as projection on hr.WorkAssignments;
  entity Departments as projection on hr.Departments;
}
```
:::

> You can omit composed entities like _WorkAssignments_ from the service, as they would get [auto-exposed](providing-services#auto-exposed-entities) automatically.
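Reading across the time slices declared above boils down to a closed-open interval check, `validFrom <= t < validTo`. The following sketch is illustrative only — made-up sample data, not CAP's actual implementation:

```javascript
// Closed-open interval check over time slices, as used conceptually for
// as-of-now and time-travel queries. Sample data is made up.
const workAssignments = [
  { ID: 'WA1', role: 'Developer',        validFrom: '2014-01-01', validTo: '2017-04-01' },
  { ID: 'WA1', role: 'Senior Developer', validFrom: '2017-04-01', validTo: '2018-09-15' },
  { ID: 'WA1', role: 'Architect',        validFrom: '2018-09-15', validTo: '9999-12-31' },
];

// validFrom <= t < validTo — closed-open, as in SQL:2011;
// lexicographic comparison works for ISO date strings
const asOf = (slices, t) =>
  slices.filter(s => s.validFrom <= t && t < s.validTo);

console.log(asOf(workAssignments, '2017-01-01')[0].role); // Developer
console.log(asOf(workAssignments, '2019-03-08')[0].role); // Architect
```

Because intervals are closed-open and non-overlapping, a boundary instant like `2017-04-01` matches exactly one slice — the one starting at that instant.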
## Reading Temporal Data

### As-of-now Queries

READ requests without specifying any temporal query parameter will automatically return data valid _as of now_. For example, assume the following OData query to read all employees with their current work assignments is processed in March 2019:

```cds
GET Employees?
$expand=jobs($select=role&$expand=dept($select=name))
```

The values of `$at`, and so also the respective session variables, would be set to, for example:

| | | |
|--------------|----------------------------------|----------------------------|
| `$at.from` = | _session_context('valid-from')_ = | _2019-03-08T22:11:00Z_ |
| `$at.to` = | _session_context('valid-to')_ = | _2019-03-08T22:11:00.001Z_ |

The result set would be:

```json
[
  { "ID": "E1", "name": "Alice", "jobs": [
    { "role": "Architect", "dept": {"name": "Core Development"}},
    { "role": "Consultant", "dept": {"name": "App Development"}}
  ]},
  { "ID": "E2", "name": "Bob", "jobs": [
    { "role": "Builder", "dept": {"name": "Construction"}}
  ]}
]
```

### Time-Travel Queries

We can run the same OData query as in the previous sample to read a snapshot of the data as valid on January 1, 2017, using the `sap-valid-at` query parameter:

```cds
GET Employees?sap-valid-at=date'2017-01-01'
$expand=jobs($select=role&$expand=dept($select=name))
```

The values of `$at` and hence the respective session variables would be set to, for example:

| | | |
|--------------|----------------------------------|----------------------------|
| `$at.from` = | _session_context('valid-from')_ = | _2017-01-01T00:00:00Z_ |
| `$at.to` = | _session_context('valid-to')_ = | _2017-01-01T00:00:00.001Z_ |

The result set would be:

```json
[
  { "ID": "E1", "name": "Alice", "jobs": [
    { "role": "Developer", "dept": {"name": "Core Development"}},
    { "role": "Consultant", "dept": {"name": "App Development"}}
  ]},
  ...
]
```

::: warning
Time-travel queries aren't supported on SQLite due to the lack of *session_context* variables.
:::

### Time-Period Queries

We can run the same OData query as in the previous sample to read all history of data as valid since 2016, using the `sap-valid-from` query parameter:

```cds
GET Employees?sap-valid-from=date'2016-01-01'
$expand=jobs($select=role&$expand=dept($select=name))
```

The result set would be:

```json
[
  { "ID": "E1", "name": "Alice", "jobs": [
    { "role": "Developer", "dept": {"name": "App Development"}},
    { "role": "Developer", "dept": {"name": "Core Development"}},
    { "role": "Senior Developer", "dept": {"name": "Core Development"}},
    { "role": "Consultant", "dept": {"name": "App Development"}}
  ]},
  ...
]
```

> You would typically add `validFrom` to the selected elements in such time-period queries, for example:

```cds
GET Employees?sap-valid-from=date'2016-01-01'
$expand=jobs($select=validFrom,role,dept/name)
```

::: warning
Time-period queries aren't supported on SQLite due to the lack of *session_context* variables.
:::

::: tip
Writing temporal data must be done in custom handlers.
:::

### Transitive Temporal Data

The basic techniques and built-in support for reading temporal data serve all possible use cases with respect to as-of-now and time-travel queries. Special care has to be taken though, if time-period queries transitively expand across two or more temporal data entities.
As an example, assume that both _WorkAssignments_ and _Departments_ are temporal:

```cds
using { temporal } from '@sap/cds/common';
entity WorkAssignments : temporal {/*...*/
  dept : Association to Departments;
}
entity Departments : temporal {/*...*/}
```

When reading employees with all history since 2016, for example:

```cds
GET Employees?sap-valid-from=date'2016-01-01'
$expand=jobs(
  $select=validFrom,role&$expand=dept(
    $select=validFrom,name
  )
)
```

The results for `Alice` would be:

```json
[
  { "ID": "E1", "name": "Alice", "jobs": [
    { "validFrom":"2014-01-01", "role": "Developer", "dept": [
      {"validFrom":"2013-04-01", "name": "App Development"}
    ]},
    { "validFrom":"2017-01-01", "role": "Consultant", "dept": [
      {"validFrom":"2013-04-01", "name": "App Development"}
    ]},
    { "validFrom":"2017-01-01", "role": "Developer", "dept": [
      {"validFrom":"2014-01-01", "name": "Tech Platform Dev"},
      {"validFrom":"2017-07-01", "name": "Core Development"}
    ]},
    { "validFrom":"2017-04-01", "role": "Senior Developer", "dept": [
      {"validFrom":"2014-01-01", "name": "Tech Platform Dev"},
      {"validFrom":"2017-07-01", "name": "Core Development"}
    ]},
    { "validFrom":"2018-09-15", "role": "Architect", "dept": [
      {"validFrom":"2014-01-01", "name": "Tech Platform Dev"},
      {"validFrom":"2017-07-01", "name": "Core Development"}
    ]}
  ]},
  ...
]
```

That is, all time slices for changes to departments since 2016 are repeated for each time slice of work assignments in that time frame, which is a confusing and redundant piece of information.
You can fix this by adding an alternative association to departments as follows:

```cds
using { temporal } from '@sap/cds/common';
entity WorkAssignments : temporal {/*...*/
  dept : Association to Departments;
  dept1 : Association to Departments on dept1.ID = dept.ID
    and dept1.validFrom <= validFrom and validFrom < dept1.validTo;
}
entity Departments : temporal {/*...*/}
```

## Primary Keys of Time Slices

While timeless entities are uniquely identified by the declared primary `key` — we call that the _conceptual_ key in CDS — time slices are uniquely identified by _the conceptual `key` **+** `validFrom`_. In effect, the SQL DDL statement for the _WorkAssignments_ would look like this:

```sql
CREATE TABLE com_acme_hr_WorkAssignments (
  ID nvarchar(36),
  validFrom timestamp,
  validTo timestamp,
  -- ...
  PRIMARY KEY ( ID, validFrom )
)
```

In contrast to that, the exposed API preserves the timeless view, to easily serve as-of-now and time-travel queries out of the box [as described above](#serving-temporal-data):

```xml
...
```

Reading an explicit time slice can look like this:

```sql
SELECT from WorkAssignments WHERE ID='WA1' and validFrom='2017-01-01'
```

Similarly, referring to individual time slices by an association:

```cds
entity SomeSnapshotEntity { //...
  workAssignment : Association to WorkAssignments { ID, validFrom }
}
```
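The composite identity of time slices — the conceptual key plus `validFrom`, mirroring `PRIMARY KEY ( ID, validFrom )` above — can be sketched with a plain map; data and key encoding are illustrative only:

```javascript
// Time slices are identified by conceptual key + validFrom. This sketch
// encodes the composite key as a string; data is made up for illustration.
const sliceKey = (ID, validFrom) => `${ID}|${validFrom}`;

const slices = new Map([
  [sliceKey('WA1', '2017-01-01'), { role: 'Developer' }],
  [sliceKey('WA1', '2017-04-01'), { role: 'Senior Developer' }],
]);

// Analogous to: SELECT from WorkAssignments WHERE ID='WA1' and validFrom='2017-01-01'
console.log(slices.get(sliceKey('WA1', '2017-01-01')).role); // Developer
```

Both slices share the same conceptual key `WA1`; only `validFrom` distinguishes them, which is exactly why `validFrom` is part of the database primary key while the exposed API keeps the timeless view.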
# CAP Security Guide

This guide addresses

- developers and operators,
- user administrators, and
- security experts

of CAP applications who need to understand how to develop, deploy, and operate CAP applications in a secure way.
# CDS-based Authorization

Authorization means restricting access to data by adding respective declarations to CDS models, which are then enforced in service implementations. By adding such declarations, we essentially revoke all default access and then grant individual privileges.

## Authentication as Prerequisite { #prerequisite-authentication}

In essence, authentication verifies the user's identity and the presented claims, such as granted roles and tenant membership. Briefly, **authentication** reveals _who_ uses the service. In contrast, **authorization** controls _how_ the user can interact with the application's resources according to granted privileges. As the access control needs to rely on verified claims, authentication is a prerequisite to authorization.

From the perspective of CAP, the authentication method is freely customizable. For convenience, a set of authentication methods is supported out of the box to cover the most common scenarios:

- [XS User and Authentication and Authorization service](https://help.sap.com/docs/CP_AUTHORIZ_TRUST_MNG) (XSUAA) is a full-fledged [OAuth 2.0](https://oauth.net/2/) authorization server that allows you to protect your endpoints in productive environments. JWT tokens issued by the server not only contain information about the user for authentication, but also assigned scopes and attributes for authorization.
- [Identity Authentication Service](https://help.sap.com/docs/IDENTITY_AUTHENTICATION) (IAS) is an [OpenID Connect](https://openid.net/connect/) compliant service for next-generation identity and access management. As of today, CAP provides IAS authentication for incoming requests only. Authorization has to be explicitly managed by the application.
- For _local development_ and _test_ scenarios, mock user authentication is provided as a built-in feature.
Find detailed instructions for setting up authentication in these runtime-specific guides:

- [Set up authentication in Node.js.](/node.js/authentication)
- [Set up authentication in Java.](/java/security#authentication)

In _productive_ environments with security middleware activated, **all protocol adapter endpoints are authenticated by default**1, even if no [restrictions](#restrictions) are configured. Multi-tenant SaaS applications require authentication to provide tenant isolation out of the box. If there is a business need to expose open endpoints for anonymous users, extra measures are required, depending on the runtime and the capabilities of the security middleware.

> 1 Starting with CAP Node.js 6.0.0 resp. CAP Java 1.25.0. _In previous versions, endpoints without restrictions are public in single-tenant applications_.

### Defining Internal Services

CDS services that are only meant for *internal* usage shouldn't be exposed via protocol adapters. To prevent access from external clients, annotate those services with `@protocol: 'none'`:

```cds
@protocol: 'none'
service InternalService {
  ...
}
```

The `InternalService` service can only receive events sent by in-process handlers.

## User Claims { #user-claims}

CDS authorization is _model-driven_. This basically means that it binds access rules for CDS model elements to user claims. For instance, access to a service or entity is dependent on the role a user has been assigned to. Or you can even restrict access on an instance level, for example, to the user who created the instance.
The generic CDS authorization is built on a _CAP user concept_, which is an _abstraction_ of a concrete user type determined by the platform's identity service. This design decision makes different authentication strategies pluggable to generic CDS authorization.
After successful authentication, a (CAP) user is represented by the following properties:

- Unique (logon) _name_ identifying the user. Unnamed users have a fixed name such as `system` or `anonymous`.
- _Tenant_ for multitenant applications.
- _Roles_ that the user has been granted by an administrator (see [User Roles](#roles)) or that are derived from the authentication level (see [Pseudo Roles](#pseudo-roles)).
- _Attributes_ that the user has been assigned by an administrator.

In the CDS model, some of the user properties can be referenced with the `$user` prefix:

| User Property | Reference |
|-------------------------------|---------------------|
| Name | `$user` |
| Tenant | `$user.tenant` |
| Attribute (name `<x>`) | `$user.<x>` |

> A single user attribute can have several different values. For instance, the `$user.language` attribute can contain `['DE','FR']`.

### User Roles { #roles}

As a basis for access control, you can design conceptual roles that are application specific. Such a role should reflect how a user can interact with the application. For instance, the role `Vendor` could describe users who are allowed to read sales articles and update sales figures. In contrast, a `ProcurementManager` can have full access to sales articles. Users can have several roles, which are assigned by an administrative user in the platform's authorization management solution.

::: tip
CDS-based authorization deliberately refrains from using technical concepts, such as _scopes_ as in _OAuth_, in favor of user roles, which are closer to the conceptual domain of business applications. This also results in much **smaller JWT tokens**.
:::

### Pseudo Roles { #pseudo-roles}

It's frequently required to define access rules that aren't based on an application-specific user role, but rather on the _authentication level_ of the request. For instance, a service could be accessible not only for identified, but also for anonymous (that is, unauthenticated) users.
Such roles are called pseudo roles as they aren't assigned by user administrators, but are added automatically at runtime. The following predefined pseudo roles are currently supported by CAP:

* `authenticated-user` refers to named or unnamed users who have presented a valid authentication claim such as a logon token.
* [`system-user` denotes an unnamed user used for technical communication.](#system-user)
* [`internal-user` is dedicated to distinguish application internal communication.](#internal-user)
* `any` refers to all users including anonymous ones (that means, public access without authentication).

#### system-user

The pseudo role `system-user` allows you to separate access by _technical_ users from access by _business_ users. Note that the technical user can come from a SaaS or the PaaS tenant. Such technical user requests typically run in a _privileged_ mode without any restrictions on an instance level. For example, an action that implements data replication into another system needs to access all entities of subscribed SaaS tenants and can't be exposed to any business user. Note that `system-user` also implies `authenticated-user`.

::: tip
For XSUAA or IAS authentication, the request user is attached with the pseudo role `system-user` if the presented JWT token has been issued with grant type `client_credentials` or `client_x509` for a trusted client application.
:::

#### internal-user

Pseudo role `internal-user` allows you to define application endpoints that can be accessed exclusively by the application's own PaaS tenant (technical communication). The advantage is that, similar to `system-user`, no technical CAP roles need to be defined to protect such internal endpoints. However, in contrast to `system-user`, endpoints protected by this pseudo role don't allow requests from any external technical clients. Hence, it's suitable for **technical intra-application communication**, see [Security > Application Zone](/guides/security/overview#application-zone).
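For illustration, such endpoints might be protected as follows. This is a hedged sketch; the service and action names are hypothetical:

```cds
// Hypothetical services for illustration only
service DataReplicationService @(requires: 'system-user') {
  // accessible to technical clients of SaaS or PaaS tenants
  action replicate();
}

service JobCallbackService @(requires: 'internal-user') {
  // accessible only via the application's own XSUAA/IAS service instance
  action onJobCompleted();
}
```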
::: tip
For XSUAA or IAS authentication, the request user is attached with the pseudo role `internal-user` if the presented JWT token has been issued with grant type `client_credentials` or `client_x509` on the basis of the **identical** XSUAA or IAS service instance.
:::

::: warning
All technical clients that have access to the application's XSUAA or IAS service instance can call your service endpoints as `internal-user`. **Refrain from sharing this service instance with untrusted clients**, for instance by passing service keys or [SAP BTP Destination Service](https://help.sap.com/docs/connectivity/sap-btp-connectivity-cf/create-destinations-from-scratch) instances.
:::

### Mapping User Claims

Depending on the configured [authentication](#prerequisite-authentication) strategy, CAP derives a *default set* of user claims containing the user's name, tenant, and attributes:

| CAP User Property   | XSUAA JWT Property               | IAS JWT Property        |
|---------------------|----------------------------------|-------------------------|
| `$user`             | `user_name`                      | `sub`                   |
| `$user.tenant`      | `zid`                            | `app_tid`               |
| `$user.<attribute>` | `xs.user.attributes.<attribute>` | All non-meta attributes |

::: tip
CAP doesn't make any assumptions on the claims presented in the token. String values are copied as they are.
:::

In most cases, CAP's default mapping will match your requirements, but CAP also allows you to customize the mapping according to specific needs. For instance, `user_name` in XSUAA tokens is generally not unique if several customer IdPs are connected to the underlying identity service. Here, a combination of `user_name` and `origin` mapped to `$user` might be a feasible solution that you implement in a custom adaptation. Similarly, attribute values can be normalized and prepared for [instance-based authorization](#instance-based-auth).
Find details and examples of how to programmatically redefine the user mapping here:

- [Set up Authentication in Node.js.](/node.js/authentication)
- [Custom Authentication in Java.](/java/security#custom-authentication)

::: warning Be very careful when redefining `$user`
The user name is frequently stored with business data (for example, `managed` aspect) and might introduce migration efforts. Also consider data protection and privacy regulations when storing user data.
:::

## Restrictions { #restrictions}

According to [authentication](#prerequisite-authentication), CAP endpoints are closed to anonymous users. But **by default, CDS services have no access control**, which means that authenticated users are not restricted. To protect resources according to your business needs, you can define [restrictions](#restrict-annotation) that make the runtime enforce proper access control. Alternatively, you can add custom authorization logic by means of an [authorization enforcement API](#enforcement).

Restrictions can be defined on *different CDS resources*:

- Services
- Entities
- (Un)bound actions and functions

You can influence the scope of a restriction by choosing an adequate hierarchy level in the CDS model. For instance, a restriction on the service level applies to all entities in the service. Additional restrictions on entities or actions can further limit authorized requests. See [combined restrictions](#combined-restrictions) for more details.

Besides the scope, restrictions can limit access to resources with regard to *different dimensions*:

- The [event](#restricting-events) of the request, that is, the type of the operation (what?)
- The [roles](#roles) of the user (who?)
- [Filter conditions](#instance-based-auth) on instances to operate on (which?)
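For instance, a single restriction can address all three dimensions at once, as in this sketch with a hypothetical `Incidents` entity and `Supporter` role:

```cds
// Hypothetical example: what (event), who (role), which (instance filter)
annotate IncidentsService.Incidents with @(restrict: [
  { grant: 'UPDATE',               // event of the request (what?)
    to: 'Supporter',               // required user role (who?)
    where: 'assignedTo = $user' }  // instance filter (which?)
]);
```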
### @readonly and @insertonly { #restricting-events}

Annotate entities with `@readonly` or `@insertonly` to statically restrict allowed operations for **all** users as demonstrated in the example:

```cds
service BookshopService {
  @readonly entity Books {...}
  @insertonly entity Orders {...}
}
```

Note that both annotations introduce access control on an entity level. In contrast, for the sake of [input validation](/guides/providing-services#input-validation), you can also use `@readonly` on a property level.

In addition, the annotation `@Capabilities` from the standard OData vocabulary is enforced by the runtimes analogously:

```cds
service SomeService {
  @Capabilities: {
    InsertRestrictions.Insertable: true,
    UpdateRestrictions.Updatable: true,
    DeleteRestrictions.Deletable: false
  }
  entity Foo { key ID : UUID }
}
```

#### Events to Auto-Exposed Entities { #events-and-auto-expose}

In general, entities can be exposed in services in different ways: an entity can be **explicitly exposed** by the modeler (for example, by a projection), or it can be **auto-exposed** by the CDS compiler for some reason. Access to auto-exposed entities needs to be controlled in a specific way. Consider the following example:

```cds
context db {
  @cds.autoexpose
  entity Categories : cuid { // explicitly auto-exposed (by @cds.autoexpose)
    ...
  }
  entity Issues : cuid { // implicitly auto-exposed (by composition)
    category: Association to Categories;
    ...
  }
  entity Components : cuid { // explicitly exposed (by projection)
    issues: Composition of many Issues;
    ...
  }
}

service IssuesService {
  entity Components as projection on db.Components;
}
```

As a result, the `IssuesService` service actually exposes *all* three entities from the `db` context:

* `db.Components` is explicitly exposed due to the projection in the service.
* `db.Issues` is implicitly auto-exposed by the compiler as it is a composition entity of `Components`.
* `db.Categories` is explicitly auto-exposed due to the `@cds.autoexpose` annotation.
In general, **implicitly auto-exposed entities can't be accessed directly**, that means, only access via a navigation path (starting from an explicitly exposed entity) is allowed. In contrast, **explicitly auto-exposed entities can be accessed directly, but only as `@readonly`**. The rationale behind that is that entities representing value lists need to be readable at the service level, for instance to support value help lists. See details about `@cds.autoexpose` in [Auto-Exposed Entities](/guides/providing-services#auto-exposed-entities).

This results in the following access matrix:

| Request                                                 | `READ` | `WRITE` |
|---------------------------------------------------------|:------:|:-------:|
| `IssuesService.Components`                              |   ✓    |    ✓    |
| `IssuesService.Issues`                                  |   ✗    |    ✗    |
| `IssuesService.Categories`                              |   ✓    |    ✗    |
| `IssuesService.Components[<key>].issues`                |   ✓    |    ✓    |
| `IssuesService.Components[<key>].issues[<key>].category`|   ✓    |    ✗    |

::: tip
CodeLists such as `Languages`, `Currencies`, and `Countries` from `sap.common` are annotated with `@cds.autoexpose` and so are explicitly auto-exposed.
:::

### @requires { #requires}

You can use the `@requires` annotation to control which (pseudo-)role a user requires to access a resource:

```cds
annotate BrowseBooksService with @(requires: 'authenticated-user');
annotate ShopService.Books with @(requires: ['Vendor', 'ProcurementManager']);
annotate ShopService.ReplicationAction with @(requires: 'system-user');
```

In this example, the `BrowseBooksService` service is open for authenticated but not for anonymous users. A user who has the `Vendor` _or_ `ProcurementManager` role is allowed to access the `ShopService.Books` entity. The unbound action `ShopService.ReplicationAction` can only be triggered by a technical user.

::: tip
When restricting service access through `@requires`, the service's metadata endpoints (that means, `/$metadata` as well as the service root `/`) are restricted by default as well.
If you require public metadata, you can disable the check with [a custom express middleware](../../node.js/cds-serve#add-mw-pos) using the [privileged user](../../node.js/authentication#privileged-user) (Node.js) or through the configuration `cds.security.authentication.authenticateMetadataEndpoints = false` (Java), respectively. Be aware that the `/$metadata` endpoint is *not* checking for authorizations implied by the `@restrict` annotation.
:::

### @restrict { #restrict-annotation}

You can use the `@restrict` annotation to define authorizations on a fine-grained level. In essence, all kinds of restrictions that are based on static user roles, the request operation, and instance filters can be expressed by this annotation.
The building block of such a restriction is a single **privilege**, which has the general form:

```cds
{ grant: <events>, to: <roles>, where: <filter-condition> }
```

whereas the properties are:

* `grant`: one or more events that the privilege applies to
* `to`: one or more [user roles](#roles) that the privilege applies to (optional)
* `where`: a filter condition that further restricts access on an instance level (optional)

The following values are supported:

- `grant` accepts all standard [CDS events](../../about/best-practices#events) (such as `READ`, `CREATE`, `UPDATE`, and `DELETE`) as well as action and function names. `WRITE` is a virtual event for all standard CDS events with write semantics (`CREATE`, `DELETE`, `UPDATE`, `UPSERT`) and `*` is a wildcard for all events.
- The `to` property lists all [user roles](#roles) or [pseudo roles](#pseudo-roles) that the privilege applies to. Note that the `any` pseudo role applies for all users and is the default if no value is provided.
- The `where` clause can contain a Boolean expression in [CQL](/cds/cql) syntax that filters the instances that the event applies to. As it allows user values (name, attributes, etc.) and entity data as input, it's suitable for *dynamic authorizations based on the business domain*. Supported expressions and typical use cases are presented in [instance-based authorization](#instance-based-auth).

A privilege is met if and only if **all properties are fulfilled** for the current request. In the following example, orders can only be read by an `Auditor` who matches the `AuditBy` element of the instance:

```cds
entity Orders @(restrict: [
  { grant: 'READ', to: 'Auditor', where: 'AuditBy = $user' }
]) {/*...*/}
```

If a privilege contains several events, only one of them needs to match the request event to comply with the privilege.
The same holds if there are multiple roles defined in the `to` property:

```cds
entity Reviews @(restrict: [
  { grant: ['READ', 'WRITE'], to: ['Reviewer', 'Customer'] }
]) {/*...*/}
```

In this example, all users that have the `Reviewer` *or* `Customer` role can read *or* write to `Reviews`.

You can build restrictions based on *multiple privileges*:

```cds
entity Orders @(restrict: [
  { grant: ['READ','WRITE'], to: 'Admin' },
  { grant: 'READ', where: 'buyer = $user' }
]) {/*...*/}
```

A request passes such a restriction **if at least one of the privileges is met**. In this example, `Admin` users can read and write the `Orders` entity. But a user can also read all orders that have a `buyer` property that matches the request user.

Similarly, the filter conditions of matched privileges are combined with logical OR:

```cds
entity Orders @(restrict: [
  { grant: 'READ', to: 'Auditor', where: 'country = $user.country' },
  { grant: ['READ','WRITE'], where: 'CreatedBy = $user' },
]) {/*...*/}
```

Here an `Auditor` user can read all orders with matching `country` or that they have created.

> Annotations such as `@requires` or `@readonly` are just convenience shortcuts for `@restrict`, for example:
> - `@requires: 'Viewer'` is equivalent to `@restrict: [{ grant: '*', to: 'Viewer' }]`
> - `@readonly` is the same as `@restrict: [{ grant: 'READ' }]`

Currently, the security annotations **are only evaluated on the target entity of the request**. Restrictions on associated entities touched by the operation aren't considered. This has the following implications:

- Restrictions of (recursively) expanded or inlined entities of a `READ` request aren't checked.
- Deep inserts and updates are checked on the root entity only.
See [solution sketches](#limitation-deep-authorization) for information about how to deal with that.{.learn-more}

#### Supported Combinations with CDS Resources

Restrictions can be defined on different types of CDS resources, but there are some limitations with regard to supported privileges:

| CDS Resource    | `grant` | `to` | `where` | Remark        |
|-----------------|:-------:|:----:|:-------:|---------------|
| service         |    ✗    |  ✓   |    ✗    | = `@requires` |
| entity          |    ✓    |  ✓   |  ✓ ¹    |               |
| action/function |    ✗    |  ✓   |  ✓ ²    | = `@requires` |

> ¹ For bound actions and functions that aren't bound against a collection, Node.js supports instance-based authorization at the entity level. For example, you can use `where` clauses that *contain references to the model*, such as `where: CreatedBy = $user`. For all bound actions and functions, Node.js supports simple static expressions at the entity level that *don't have any reference to the model*, such as `where: $user.level = 2`.

> ² For unbound actions and functions, Node.js supports simple static expressions that *don't have any reference to the model*, such as `where: $user.level = 2`.

Unsupported privilege properties are ignored by the runtime. In particular, for bound or unbound actions, the `grant` property is implicitly removed (assuming `grant: '*'` instead). The same also holds for functions:

```cds
service CatalogService {
  entity Products as projection on db.Products { ... }
    actions {
      @(requires: 'Admin')
      action addRating (stars: Integer);
    }
  function getViewsCount @(restrict: [{ to: 'Admin' }]) () returns Integer;
}
```

### Combined Restrictions { #combined-restrictions}

Restrictions can be defined on different levels in the CDS model hierarchy. Bound actions and functions refer to an entity, which in turn refers to a service. Unbound actions and functions refer directly to a service. As a general rule, **all authorization checks of the hierarchy need to be passed** (logical AND).
This is illustrated in the following example:

```cds
service CustomerService @(requires: 'authenticated-user') {
  entity Products @(restrict: [
    { grant: 'READ' },
    { grant: 'WRITE', to: 'Vendor' },
    { grant: 'addRating', to: 'Customer' }
  ]) {/*...*/} actions {
    action addRating (stars: Integer);
  }
  entity Orders @(restrict: [
    { grant: '*', to: 'Customer', where: 'CreatedBy = $user' }
  ]) {/*...*/}
  action monthlyBalance @(requires: 'Vendor') ();
}
```

> The privilege for the `addRating` action is defined on an entity level.

The resulting authorizations are illustrated in the following access matrix:

| Operation                            | `Vendor` | `Customer` | `authenticated-user` | not authenticated |
|--------------------------------------|:--------:|:----------:|:--------------------:|:-----------------:|
| `CustomerService.Products` (`READ`)  |    ✓     |     ✓      |          ✓           |         ✗         |
| `CustomerService.Products` (`WRITE`) |    ✓     |     ✗      |          ✗           |         ✗         |
| `CustomerService.Products.addRating` |    ✗     |     ✓      |          ✗           |         ✗         |
| `CustomerService.Orders` (`*`)       |    ✗     |    ✓ ¹     |          ✗           |         ✗         |
| `CustomerService.monthlyBalance`     |    ✓     |     ✗      |          ✗           |         ✗         |

> ¹ A `Customer` user can only access the instances that they created.
The example models access rules for different roles in the same service. In general, this is _not recommended_ due to the high complexity. See [best practices](#dedicated-services) for information about how to avoid this.

### Draft Mode {#restrictions-and-draft-mode}

Basically, the access control for entities in draft mode differs from the [general restriction rules](#restrict-annotation) that apply to (active) entities. A user, who has created a draft, should also be able to edit (`UPDATE`) or cancel the draft (`DELETE`). The following rules apply:

- If a user has the privilege to create an entity (`CREATE`), he or she also has the privilege to create a **new** draft entity and update, delete, and activate it.
- If a user has the privilege to update an entity (`UPDATE`), he or she also has the privilege to **put it into draft mode** and update, delete, and activate it.
- Draft entities can only be edited by the creator user.
  + In the Node.js runtime, this includes calling bound actions/functions on the draft entity.

::: tip
As a result of the derived authorization rules for draft entities, you don't need to take care of draft events when designing the CDS authorization model.
:::

### Auto-Exposed and Generated Entities { #autoexposed-restrictions}

In general, **a service actually exposes more than the explicitly modeled entities from the CDS service model**. This stems from the fact that the compiler auto-exposes entities for the sake of completeness, for example, by adding composition entities. Another reason is generated entities for localization or draft support that need to appear in the service. Typically, such entities don't have restrictions. The emerging question is, how can requests to these entities be authorized?
For illustration, let's extend the service `IssuesService` from [Events to Auto-Exposed Entities](#events-and-auto-expose) by adding a restriction to `Components`:

```cds
annotate IssuesService.Components with @(restrict: [
  { grant: '*', to: 'Supporter' },
  { grant: 'READ', to: 'authenticated-user' }
]);
```

Basically, users with the `Supporter` role aren't restricted, whereas authenticated users can only read the `Components`. But what about the auto-exposed entities such as `IssuesService.Issues` and `IssuesService.Categories`? They could be a target of an (indirect) request as outlined in [Events to Auto-Exposed Entities](#events-and-auto-expose), but none of them are annotated with a concrete restriction. In general, the same also holds for service entities, which are generated by the compiler, for example, for localization or draft support.

To close the gap with auto-exposed and generated entities, the authorization of such entities is delegated to a so-called **authorization entity**, which is the last entity in the request path that bears authorization information, that means, which fulfills at least one of the following properties:

- Explicitly exposed in the service
- Annotated with a concrete restriction
- Annotated with `@cds.autoexpose`

So, the authorization for the requests in the example is delegated as follows:

| Request Target                                           | Authorization Entity         |
|----------------------------------------------------------|:----------------------------:|
| `IssuesService.Components`                               | `IssuesService.Components` ³ |
| `IssuesService.Issues`                                   | ¹                            |
| `IssuesService.Categories`                               | `IssuesService.Categories` ² |
| `IssuesService.Components[<key>].issues`                 | `IssuesService.Components` ³ |
| `IssuesService.Components[<key>].issues[<key>].category` | `IssuesService.Categories` ² |

> ¹ Request is rejected.
> ² `@readonly` due to `@cds.autoexpose`.
> ³ According to the restriction; `<key>` is relevant for instance-based filters.

### Inheritance of Restrictions

Service entities inherit the restriction from the database entity on which they define a projection. An explicit restriction defined on a service entity *replaces* inherited restrictions from the underlying entity.

Entity `Books` on a database level:

```cds
namespace db;
entity Books @(restrict: [
  { grant: 'READ', to: 'Buyer' },
]) {/*...*/}
```

Services `BuyerService` and `AdminService` on a service level:

```cds
service BuyerService @(requires: 'authenticated-user'){
  entity Books as projection on db.Books; /* inherits */
}

service AdminService @(requires: 'authenticated-user'){
  entity Books @(restrict: [
    { grant: '*', to: 'Admin'} /* overrides */
  ]) as projection on db.Books;
}
```

| Events                        | `Buyer` | `Admin` | `authenticated-user` |
|-------------------------------|:-------:|:-------:|:--------------------:|
| `BuyerService.Books` (`READ`) |    ✓    |    ✗    |          ✗           |
| `AdminService.Books` (`*`)    |    ✗    |    ✓    |          ✗           |

::: tip
We recommend defining restrictions on a database entity level only in exceptional cases. Inheritance and override mechanisms can lead to an unclear situation.
:::

::: warning _Warning_
A service level entity can't inherit a restriction with a `where` condition that doesn't match the projected entity. The restriction has to be overridden in this case.
:::

## Instance-Based Authorization { #instance-based-auth }

The [restrict annotation](#restrict-annotation) for an entity allows you to enforce authorization checks that statically depend on the event type and user roles. In addition, you can define a `where` condition that further limits the set of accessible instances. This condition, which acts like a filter, establishes an *instance-based authorization*.

The condition defined in the `where` clause typically associates domain data with static [user claims](#user-claims).
Basically, it *either filters the result set in queries or accepts only write operations on instances that meet the condition*. This means that the condition applies to the following standard CDS events only¹:

- `READ` (as result filter)
- `UPDATE` (as reject condition²)
- `DELETE` (as reject condition²)

> ¹ Node.js supports _static expressions_ that *don't have any reference to the model*, such as `where: $user.level = 2`, for all events.

> ² CAP Java uses a filter condition by default.

For instance, a user is allowed to read or edit `Orders` (defined with the `managed` aspect) that they have created:

```cds
annotate Orders with @(restrict: [
  { grant: ['READ', 'UPDATE', 'DELETE'], where: 'CreatedBy = $user' }
]);
```

Or a `Vendor` can only edit articles on stock (that means, `Articles.stock` positive):

```cds
annotate Articles with @(restrict: [
  { grant: ['UPDATE'], to: 'Vendor', where: 'stock > 0' }
]);
```

You can define `where` conditions in restrictions based on [CQL](/cds/cql) where clauses.
Supported features are:

* Predicates with arithmetic operators.
* Combining predicates to expressions with `and` and `or` logical operators.
* Value references to constants, [user attributes](#user-attrs), and entity data (elements including [paths](#association-paths)).
* [Exists predicate](#exists-predicate) based on subselects.
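A single `where` condition may combine several of these features, as in this sketch (the `Tickets` entity, its elements, and the `Supporter` role are hypothetical):

```cds
// Hypothetical example: arithmetic predicate, logical operators,
// a user attribute, and entity data combined in one filter condition
annotate SupportService.Tickets with @(restrict: [
  { grant: ['READ', 'UPDATE'], to: 'Supporter',
    where: 'severity <= 3 and ($user.region = region or CreatedBy = $user)' }
]);
```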
CAP Java offers the option to enable rejection conditions for `UPDATE`, `DELETE`, and custom events. Enable it using the configuration option `cds.security.authorization.instance-based.reject-selected-unauthorized-entity.enabled: true`.
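In a Spring Boot-based CAP Java application, this option would typically go into the application configuration, for example (a sketch of an `application.yaml`, assuming the property name given above):

```yaml
# Sketch: enable rejection conditions for instance-based authorization (CAP Java)
cds:
  security:
    authorization:
      instance-based:
        reject-selected-unauthorized-entity:
          enabled: true
```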
::: info Avoid enumerable keys
If the filter condition is not met in an `UPDATE` or `DELETE` request, the runtime rejects the request (response code 403) even if the user is not even allowed to read the entity. To avoid disclosing the existence of such entities to unauthorized users, make sure that the keys are not efficiently enumerable.
:::

### User Attribute Values { #user-attrs}

To refer to attribute values from the user claim, prefix the attribute name with `$user.` as outlined in [static user claims](#user-claims). For instance, `$user.country` refers to the attribute with the name `country`.

In general, `$user.<attribute>` contains a **list of attribute values** that are assigned to the user. The following rules apply:

* A predicate in the `where` clause evaluates to `true` if one of the attribute values from the list matches the condition.
* An empty (or not defined) list means that the user is fully restricted with regard to this attribute (that means, the predicate evaluates to `false`).

For example, the condition `where: $user.country = countryCode` grants a user with attribute values `country = ['DE', 'FR']` access to entity instances that have `countryCode = DE` _or_ `countryCode = FR`. In contrast, the user has no access to any entity instances if the value list of `country` is empty or the attribute is not available at all.

#### Unrestricted XSUAA Attributes

By default, all attributes defined in [XSUAA instances](#xsuaa-configuration) require a value (`valueRequired:true`), which is well-aligned with the CAP runtime that enforces restrictions on empty attributes. If you explicitly want to offer unrestricted attributes to customers, you need to do the following:

1. Switch your XSUAA configuration to `valueRequired:false`.
2. Adjust the filter condition accordingly, for example: `where: $user.country = countryCode or $user.country is null`.
> If `$user.country` is undefined or empty, the overall expression evaluates to `true`, reflecting the unrestricted attribute.

::: warning
Refrain from unrestricted XSUAA attributes, as they need to be designed very carefully as shown in the following example.
:::

Consider this bad example with *unrestricted* attribute `country` (assuming `valueRequired:false` in the XSUAA configuration):

```cds
service SalesService @(requires: ['SalesAdmin', 'SalesManager']) {
  entity SalesOrgs @(restrict: [
    { grant: '*',
      to: ['SalesAdmin', 'SalesManager'],
      where: '$user.country = countryCode or $user.country is null' }
  ]) {
    countryCode: String; /*...*/
  }
}
```

Let's assume a customer creates the XSUAA role `SalesManagerEMEA` with dedicated values (`['DE', 'FR', ...]`) and `SalesAdmin` with *unrestricted* values. As expected, a user assigned only to `SalesAdmin` has access to all `SalesOrgs`. But when the role `SalesManagerEMEA` is added, suddenly *only* EMEA orgs are accessible!

The preferred way is to model with the restricted attribute `country` (`valueRequired:true`) and an additional grant:

```cds
service SalesService @(requires: ['SalesAdmin', 'SalesManager']) {
  entity SalesOrgs @(restrict: [
    { grant: '*', to: 'SalesManager', where: '$user.country = countryCode' },
    { grant: '*', to: 'SalesAdmin' }
  ]) {
    countryCode: String; /*...*/
  }
}
```

### Exists Predicate { #exists-predicate }

In many cases, the authorization of an entity needs to be derived from entities reachable via an association path. See [domain-driven authorization](#domain-driven-authorization) for more details.
You can leverage the `exists` predicate in `where` conditions to define filters that directly apply to associated entities defined by an association path:

```cds
service ProjectService @(requires: 'authenticated-user') {
  entity Projects @(restrict: [
    { grant: ['READ', 'WRITE'],
      where: 'exists members[userId = $user and role = `Editor`]' }
  ]) {
    members: Association to many Members; /*...*/
  }
  @readonly entity Members {
    key userId : User;
    key role: String enum { Viewer; Editor; };
    /*...*/
  }
}
```

In the `ProjectService` example, only projects for which the current user is a member with role `Editor` are readable and editable. Note that, with the exception of the user ID (`$user`), **all authorization information originates from the business data**.

Supported features of the `exists` predicate:

* Combine with other predicates in the `where` condition (`where: 'exists a1[...] or exists a2[...]'`).
* Define recursively (`where: 'exists a1[exists b1[...]]'`).
* Use target paths (`where: 'exists a1.b1[...]'`).
* Usage of [user attributes](#user-attrs).

::: warning
Paths *inside* the filter (`where: 'exists a1[b1.c = ...]'`) are not yet supported.
:::

The following example demonstrates the last two features:

```cds
service ProductsService @(requires: 'authenticated-user') {
  entity Products @(restrict: [
    { grant: '*', where: 'exists producers.division[$user.division = name]' }
  ]): cuid {
    producers : Association to many ProducingDivisions
                  on producers.product = $self;
  }
  @readonly entity ProducingDivisions {
    key product : Association to Products;
    key division : Association to Divisions;
  }
  @readonly entity Divisions : cuid {
    name : String;
    producedProducts : Association to many ProducingDivisions
                         on producedProducts.division = $self;
  }
}
```

Here, the authorization of `Products` is derived from `Divisions` by leveraging the _n:m relationship_ via the entity `ProducingDivisions`.
Note that the path `producers.division` in the `exists` predicate points to the target entity `Divisions`, where the filter with the user-dependent attribute `$user.division` is applied.

::: warning Consider Access Control Lists
Be aware that deep paths might introduce a performance bottleneck. Access Control List (ACL) tables, managed by the application, allow efficient queries and might be the better option in this case.
:::
### Association Paths { #association-paths}

The `where` condition in a restriction can also contain [CQL path expressions](/cds/cql#path-expressions) that navigate to elements of associated entities:

```cds
service SalesOrderService @(requires: 'authenticated-user') {
  entity SalesOrders @(restrict: [
    { grant: 'READ', where: 'product.productType = $user.productType' }
  ]) {
    product: Association to one Products;
  }
  entity Products {
    productType: String(32); /*...*/
  }
}
```

Paths on 1:n associations (`Association to many`) are only supported _if the condition selects at most one associated instance_. It's highly recommended to use the [exists](#exists-predicate) predicate instead.

::: tip
Be aware of increased execution time when modeling paths in the authorization check of frequently requested entities. Working with materialized views might be an option for performance improvement in this case.
:::

::: warning _Warning_
In Node.js, association paths in `where` clauses are currently only supported when using SAP HANA.
:::

## Best Practices

CAP authorization allows you to control access to your business data on a fine-granular level. But keep in mind that the high flexibility can end up in security vulnerabilities if not applied appropriately. From this perspective, lean and straightforward models are preferred. When modeling your access rules, the following recommendations can support you in designing such models.

### Choose Conceptual Roles

When defining user roles, one of the first options could be to align roles to the available _operations_ on entities, which results in roles such as `SalesOrders.Read`, `SalesOrders.Create`, `SalesOrders.Update`, and `SalesOrders.Delete`. What is the problem with this approach? Think about the resulting number of roles that the user administrator has to handle when assigning them to business users. The administrator would also have to know the domain model precisely and understand the result of combining the roles.
Similarly, assigning roles to operations only (`Read`, `Create`, `Update`, ...) typically doesn't fit your business needs.
We strongly recommend defining roles that describe **how a business user interacts with the system**. Roles like `Vendor`, `Customer`, or `Accountant` can be appropriate. With this approach, the application developers define the set of accessible resources in the CDS model for each role, not the user administrator.

### Prefer Single-Purposed, Use-Case Specific Services { #dedicated-services}

Have a closer look at this example:

```cds
service CatalogService @(requires: 'authenticated-user') {
  entity Books @(restrict: [
    { grant: 'READ' },
    { grant: 'WRITE', to: 'Vendor', where: '$user.publishers = publisher' },
    { grant: 'WRITE', to: 'Admin' }
  ]) as projection on db.Books;
  action doAccounting @(requires: ['Accountant', 'Admin']) ();
}
```

Four different roles (`authenticated-user`, `Vendor`, `Accountant`, `Admin`) *share* the same service - `CatalogService`. As a result, it's confusing how a user can use `Books` or `doAccounting`. Considering the complexity of this small example (4 roles, 1 service, 2 resources), this approach can introduce a security risk, especially if the model is larger and subject to adaptation. Moreover, UIs defined for this service will likely appear unclear as well.
The fundamental purpose of services is to expose business data in a specific way. Hence, the more straightforward approach is to **use a service for each of the roles**: ```cds @path:'browse' service CatalogService @(requires: 'authenticated-user') { @readonly entity Books as select from db.Books { title, publisher, price }; } @path:'internal' service VendorService @(requires: 'Vendor') { entity Books @(restrict: [ { grant: 'READ' }, { grant: 'WRITE', to: 'Vendor', where: '$user.publishers = publisher' } ]) as projection on db.Books; } @path:'accounting' service AccountantService @(requires: 'Accountant') { @readonly entity Books as projection on db.Books; action doAccounting(); } /*...*/ ``` ::: tip You can tailor the exposed data according to the corresponding role, even on the level of entity elements like in `CatalogService.Books`. ::: ### Prefer Dedicated Actions for Specific Use-Cases { #dedicated-actions} In some cases it can be helpful to restrict entity access as much as possible and create actions with dedicated restrictions for specific use cases, like in the following example: ```cds service GitHubRepositoryService @(requires: 'authenticated-user') { @readonly entity Organizations as projection on GitHub.Organizations actions { @(requires: 'Admin') action rename(newName : String); @(requires: 'Admin') action delete(); }; } ``` This service allows querying organizations for all authenticated users. In addition, `Admin` users are allowed to rename or delete. Granting `UPDATE` to `Admin` would allow administrators to change organization attributes that aren't meant to be changed. ### Think About Domain-Driven Authorization { #domain-driven-authorization} Static roles often don't fit into an intuitive authorization model. Instead of making authorization dependent on static properties of the user, it's often more appropriate to derive access rules from the business domain.
For instance, all users assigned to a department (in the domain) are allowed to access the data of the organization comprising the department. Relationships in the entity model (for example, a department's assignment to an organization) influence authorization rules at runtime. In contrast to static user roles, **dynamic roles** are fully domain-driven. Revisit the [ProjectService example](#exists-predicate), which demonstrates how to leverage instance-based authorization to induce dynamic roles. Advantages of dynamic roles are: - The most flexible way to define authorizations - Induced authorizations according to the business domain - Application-specific authorization model and intuitive UIs - Decentralized role management for application users (no central user administrator required) Drawbacks to be considered are: - Additional effort for modeling and designing application-specific role management (entities, services, UI) - Potentially higher security risk due to lower use of the framework functionality - Sharing authorization management with other (non-CAP) applications is harder to achieve - Dynamic role enforcement can introduce a performance penalty ### Control Exposure of Associations and Compositions { #limitation-deep-authorization} Note that exposed associations (and compositions) can disclose unauthorized data. Consider the following scenario: ```cds namespace db; using { cuid } from '@sap/cds/common'; entity Employees : cuid { // autoexposed! name: String(128); team: Association to Teams; contract: Composition of Contracts; } entity Contracts @(requires:'Manager') : cuid { // autoexposed! salary: Decimal; } entity Teams : cuid { members: Composition of many Employees on members.team = $self; } service ManageTeamsService @(requires:'Manager') { entity Teams as projection on db.Teams; } service BrowseEmployeesService @(requires:'Employee') { @readonly entity Teams as projection on db.Teams; // navigate to Contracts! } ``` A team (entity `Teams`) contains members of type `Employees`.
An employee refers to a single contract (entity `Contracts`), which contains sensitive information that should be visible only to `Manager` users. `Employee` users should be able to browse the teams and their members, but aren't allowed to read, let alone edit, their contracts.
As `db.Employees` and `db.Contracts` are auto-exposed, managers can navigate to all instances through the `ManageTeamsService.Teams` service entity (for example, OData request `/ManageTeamsService/Teams?$expand=members($expand=contract)`).
It's important to note that this also holds for an `Employee` user, as **only the target entity** `BrowseEmployeesService.Teams` **has to pass the authorization check in the generic handler, and not the associated entities**.
To solve this security issue, introduce a new service entity `BrowseEmployeesService.Employees` that removes the navigation to `Contracts` from the projection: ```cds service BrowseEmployeesService @(requires:'Employee') { @readonly entity Employees as projection on db.Employees excluding { contract }; // hide the contract! @readonly entity Teams as projection on db.Teams; } ``` Now, an `Employee` user can't expand the contract as the composition isn't reachable anymore from the service. ::: tip Associations without navigation links (for example, when an associated entity isn't exposed) are still critical with regard to security. ::: ### Design Authorization Models from the Start As shown before, defining an adequate authorization strategy has a deep impact on the service model. Apart from the fundamental decision whether to build your authorizations on [dynamic roles](#domain-driven-authorization), authorization requirements can result in rearranging service and entity definitions completely. In the worst case, this means rewriting huge parts of the application (including the UI). For this reason, it's *strongly* recommended to take security design into consideration at an early stage of your project. ### Keep it as Simple as Possible * If different authorizations are needed for different operations, it's easier to have them defined at the service level. If you start defining them at the entity level, all possible operations must be specified; otherwise any operation that isn't mentioned is automatically forbidden. * If possible, try to define your authorizations either on the service or on the entity level. Mixing both variants increases complexity, and not all combinations are supported.
### Separation of Concerns Consider using [CDS Aspects](/cds/cdl#aspects) to separate the actual service definitions from authorization annotations as follows: ::: code-group ```cds [services.cds] service ReviewsService { /*...*/ } service CustomerService { entity Orders {/*...*/} entity Approval {/*...*/} } ``` ::: ::: code-group ```cds [services-auth.cds] service ReviewsService @(requires: 'authenticated-user'){ /*...*/ } service CustomerService @(requires: 'authenticated-user'){ entity Orders @(restrict: [ { grant: ['READ','WRITE'], to: 'admin' }, { grant: 'READ', where: 'buyer = $user' } ]){/*...*/} entity Approval @(restrict: [ { grant: 'WRITE', where: '$user.level > 2' } ]){/*...*/} } ``` ::: This keeps your actual service definitions concise and focused on structure only. It also allows you to give authorization models separate ownership and lifecycle. ## Programmatic Enforcement { #enforcement} The service provider frameworks **automatically enforce** restrictions in generic handlers. They evaluate the annotations in the CDS models and, for example: * Reject incoming requests if static restrictions aren't met. * Add corresponding filters to queries for instance-based authorization, etc. If generic enforcement doesn't fit your needs, you can override or adapt it with **programmatic enforcement** in custom handlers: - [Authorization Enforcement in Node.js](/node.js/authentication#enforcement) - [Enforcement API & Custom Handlers in Java](/java/security#enforcement-api) ## Role Assignments with IAS and AMS The Authorization Management Service (AMS), part of SAP Cloud Identity Services (SCI), provides libraries and services for developers of cloud business applications to declare, enforce, and manage instance-based authorization checks. When used together with CAP, the AMS policies can contain the CAP roles as well as additional filter criteria for instance-based authorizations that can be defined in the CAP model.
The CAP roles and filters are transformed to AMS policies, which are later refined by the customer's user and authorization administrators in the SCI administration console and assigned to business users. ### Use AMS as Authorization Management System on SAP BTP SAP BTP is currently replacing the authorization management done with XSUAA by an integrated solution with AMS. AMS is integrated into SAP Cloud Identity (SCI), which will offer authentication, authorization, user provisioning, and management in one place. For newly built applications, the usage of AMS is generally recommended. The only constraint that comes with the usage of AMS is that customers need to copy their users to the Identity Directory Service as the central place to manage users for SAP BTP applications. This is also the general SAP strategy to simplify user management in the future. ### Case For XSUAA There is one use case where currently an XSUAA-based authorization management is preferable: when XSUAA-based services to be consumed by a CAP application come with their own business user roles and thus make user role assignment in the SAP Cloud Cockpit necessary. This will be resolved in the future when the authorization management is fully based on the SCI Admin console. For example, with SAP Task Center you consume an XSUAA-based service that requires its own end-user role. Apart from this, most services should be technical services that don't bring their own authorization management. [Learn more about using IAS and AMS with CAP Node.js](https://github.com/SAP-samples/btp-developer-guide-cap/blob/main/documentation/xsuaa-to-ams/README.md){.learn-more} ## Role Assignments with XSUAA { #xsuaa-configuration} Information about roles and attributes has to be made available to the UAA platform service. This information enables the respective JWT tokens to be constructed and sent with the requests for authenticated users.
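Conceptually, this is a simple mapping from the role names in the CDS model to scopes and role templates. Here's a toy sketch in Node.js of that mapping (an illustration only, not the actual `cds` implementation):

```javascript
// Toy sketch (NOT the actual cds implementation): derive an xs-security
// fragment from a list of role names found in a CDS model.
function rolesToXsSecurity (roles) {
  return {
    scopes: roles.map(role => ({
      name: `$XSAPPNAME.${role}`,
      description: role
    })),
    'role-templates': roles.map(role => ({
      name: role, // exact name of the CDS role
      'scope-references': [`$XSAPPNAME.${role}`],
      description: 'generated'
    }))
  }
}
```

For instance, `rolesToXsSecurity(['admin'])` yields the `scopes` and `role-templates` entries shown in the generated _xs-security.json_ below.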
In particular, the following happens automatically behind the scenes upon build: ### 1. Roles and Attributes Are Filled into the XSUAA Configuration Derive scopes, attributes, and role templates from the CDS model: ```sh cds add xsuaa --for production ``` This generates an _xs-security.json_ file: ::: code-group ```json [xs-security.json] { "scopes": [ { "name": "$XSAPPNAME.admin", "description": "admin" } ], "attributes": [ { "name": "level", "description": "level", "valueType": "s" } ], "role-templates": [ { "name": "admin", "scope-references": [ "$XSAPPNAME.admin" ], "description": "generated" } ] } ``` ::: For every role name in the CDS model, one scope and one role template are generated with the exact name of the CDS role. ::: tip Re-generate on model changes You can have such a file re-generated via ```sh cds compile srv --to xsuaa > xs-security.json ``` ::: See [Application Security Descriptor Configuration Syntax](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/6d3ed64092f748cbac691abc5fe52985.html) in the SAP HANA Platform documentation for the syntax of the _xs-security.json_ and advanced configuration options. ::: warning Avoid invalid characters in your models Roles modeled in CDS may contain characters considered invalid by the XSUAA service. ::: If you modify the _xs-security.json_ manually, make sure that the scope names in the file exactly match the role names in the CDS model, as these scope names are checked at runtime. ### 2. XSUAA Configuration Is Completed and Published #### Through MTA Build If there's no _mta.yaml_ present, run this command: ```sh cds add mta ``` ::: details See what this does in the background… 1. It creates an _mta.yaml_ file with an `xsuaa` service. 2. The created service is added to the `requires` section of your backend, and possibly other services requiring authentication.
::: code-group
```yaml [mta.yaml]
modules:
  - name: bookshop-srv
    requires:
      - name: bookshop-auth # [!code ++]
resources:
  - name: bookshop-auth # [!code ++]
    type: org.cloudfoundry.managed-service # [!code ++]
    parameters: # [!code ++]
      service: xsuaa # [!code ++]
      service-plan: application # [!code ++]
      path: ./xs-security.json # include cds managed scopes and role templates # [!code ++]
      config: # [!code ++]
        xsappname: bookshop-${org}-${space} # [!code ++]
        tenant-mode: dedicated # 'shared' for multitenant deployments # [!code ++]
```
:::
:::
Inline configuration in the _mta.yaml_ `config` block and the _xs-security.json_ file are merged. If there are conflicts, the [MTA security configuration](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/6d3ed64092f748cbac691abc5fe52985.html) has priority. [Learn more about **building and deploying MTA applications**.](/guides/deployment/){ .learn-more} ### 3. Assembling Roles and Assigning Roles to Users This is a manual step an administrator would do in SAP BTP Cockpit. See [Set Up the Roles for the Application](/node.js/authentication#auth-in-cockpit) for more details. If a user attribute isn't set for a user in the IdP of the SAP BTP Cockpit, this means that the user has no restriction for this attribute. For example, if a user has no value set for an attribute "Country", they're allowed to see data records for all countries. In the _xs-security.json_, each entry in `attributes` has a property `valueRequired` where the developer can specify whether unrestricted access is possible by not assigning a value to the attribute. ### 4. Scopes Are Narrowed to Local Roles Based on this, the JWT token for an administrator contains a scope `my.app.admin`. From within service implementations of `my.app` you can reference the scope: ```js req.user.is("admin") ``` ... and, if necessary, from others by: ```js req.user.is("my.app.admin") ```
> See the following sections for more details: - [Developing Security Artifacts in SAP BTP](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/419ae2ef1ddd49dca9eb65af2d67c6ec.html) - [Maintaining Application Security in XS Advanced](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/35d910ee7c7a445a950b6aad989a5a26.html) # Platform Security This section provides an overview of the security architecture of CAP applications on different platforms. ## Platform Compliance { #platform-compliance } CAP applications run in a certain environment, that is, in the context of some platform framework that has specific characteristics. The underlying framework has a major impact on the security of the application, regardless of whether it runs in a [cloud](#cloud) or [local](#local) environment. Moreover, CAP applications are tightly integrated with [platform services](#btp-services), in particular with the identity and persistence services. ::: warning End-to-end security necessarily requires compliance with all security policies of all involved components. CAP application security requires consistent security configuration of the underlying platform and all consumed services. Consult the relevant security documentation accordingly. ::: ### CAP in Cloud Environment { #cloud } Currently, CAP supports running on two cloud runtimes of [SAP Business Technology Platform](https://help.sap.com/docs/btp): - [SAP BTP, Cloud Foundry Runtime](https://help.sap.com/docs/btp/sap-business-technology-platform/cloud-foundry-environment) - [SAP BTP, Kyma Runtime](https://help.sap.com/docs/btp/sap-business-technology-platform/kyma-environment) Application providers are responsible for ensuring a **secure platform environment**. In particular, this includes *configuring* the [platform services](#btp-services) the application consumes.
For instance, the provider (user) administrator needs to configure the [identity service](#identity-service) to separate platform users from business users that come from different identity providers. Likewise, login policies (for example, multifactor authentication or single sign-on) need to be aligned with company-specific requirements. Note that achieving production-ready security also requires meeting all relevant aspects of the **development process**. For instance, source code repositories need to be protected and may not contain any secrets or personal data. Likewise, the **deployment process** needs to be secured. That includes not only setting up CI/CD pipelines running on technical platform users, but also defining integration tests to ensure properly secured application endpoints. As part of **secure operations**, application providers need to establish patch and vulnerability management, as well as a secure support process. For example, component versions need to be updated and credentials need to be rotated regularly. ::: warning The application provider is responsible for **developing, deploying, and operating the application in a secure platform environment**. CAP offers seamless integration into platform services and tools to help meet these requirements. ::: Find more about BTP platform security here: [SAP BTP Security](https://help.sap.com/docs/btp/sap-business-technology-platform/security-e129aa20c78c4a9fb379b9803b02e5f6){.learn-more} [SAP BTP Security Recommendations](https://help.sap.com/docs/btp/sap-btp-security-recommendations-c8a9bb59fe624f0981efa0eff2497d7d/sap-btp-security-recommendations){.learn-more} [SAP BTP Security (Community)](https://pages.community.sap.com/topics/btp-security){.learn-more}
### CAP in Local Environment { #local } Security not only plays a crucial role in [cloud](#cloud) environments, but also during local development. Of course, the security requirements differ from the cloud scenario, as local endpoints are typically not exposed to remote clients. But there are still a few things to consider, because exploited vulnerabilities could be the basis for attacks on productive cloud services: - Make sure that locally started HTTP endpoints are bound to `localhost`. - In case you run your service in hybrid mode with bindings to cloud service instances, use [cds bind](../../advanced/hybrid-testing) instead of copying bindings manually to a `default-env.json` file. `cds bind` avoids materializing secrets to local disk, which is inherently dangerous. - Don't write sensitive data to application logs, not even via debug logging. - Don't test with real business data, for example, copied from a productive system. ### SAP BTP Services for Security { #btp-services} SAP BTP provides a range of platform services that your CAP applications can utilize to meet production-grade security requirements. To ensure the security of your CAP applications, it's crucial to comply with the service level agreement (SLA) of these platform services. *As the provider of the application, you play a key role in meeting these requirements by correctly configuring and using these services.* ::: tip SAP BTP services and the underlying platform infrastructure hold various certifications and attestations, which can be found under the naming of SAP Cloud Platform in the [SAP Trust Center](https://www.sap.com/about/trust-center/certification-compliance/compliance-finder.html?search=SAP%20Business%20Technology%20Platform%20ISO). ::: The CAP framework offers flexible APIs that you can integrate with various services, including your custom services.
If you replace platform services with custom ones, it's important to ensure that the service level agreements (SLAs) CAP depends on are still met. The most important security services offered by the platform are: [Webcast SAP BTP Cloud Identity and Security Services](https://assets.dm.ux.sap.com/webinars/sap-user-groups-k4u/pdfs/221117_sap_security_webcast_series_sap_btp_cloud_identity_and_security_services.pdf){.learn-more} #### [SAP Cloud Identity Services - Identity Authentication](https://help.sap.com/docs/IDENTITY_AUTHENTICATION) { #identity-service } The Identity Authentication service defines the user base for (CAP) applications and services and allows controlling access to them. Customers can integrate their third-party or on-premise identity provider (IdP) and harden security by defining multifactor authentication or by narrowing client IP ranges. This service helps to introduce a strict separation between platform users (provider) and business users (subscribers), a requirement of CAP. It supports various authentication methods, including SAML 2.0 and [OpenID Connect](https://openid.net/connect/), and allows for the configuration of single sign-on access. [Learn more in the security guide.](https://help.sap.com/docs/IDENTITY_AUTHENTICATION?#discover_task-security){.learn-more} #### [SAP Authorization and Trust Management Service](https://help.sap.com/docs/CP_AUTHORIZ_TRUST_MNG) The service lets customers manage user authorizations in technical roles at application level, which can be aggregated into business-level role collections for large-scale cloud scenarios. Obviously, developers must define application roles carefully, as they form basic access rules to business data. #### [SAP Malware Scanning Service](https://help.sap.com/docs/MALWARE_SCANNING) This service can be used to scan transferred business documents for malware and viruses. Currently, there is no CAP integration. A scan needs to be triggered by the business application explicitly.
[Learn more in the security guide.](https://help.sap.com/docs/btp?#operate_task-security){.learn-more} #### [SAP Credential Store](https://help.sap.com/docs/CREDENTIAL_STORE) Credentials managed by applications need to be stored in a secure way. This service provides a REST API for (CAP) applications to store and retrieve credentials at runtime. [Learn more in the security guide.](https://help.sap.com/docs/CREDENTIAL_STORE?#discover_task-security){.learn-more} #### [SAP BTP Connectivity](https://help.sap.com/docs/CP_CONNECTIVITY) The connectivity service allows SAP BTP applications to securely access remote services that run on the Internet or on-premise. It provides a way to establish a secure communication channel between remote endpoints that are connected via an untrusted network infrastructure. [Learn more in the security guide.](https://help.sap.com/docs/CP_CONNECTIVITY/cca91383641e40ffbe03bdc78f00f681/cb50b6191615478aa11d2050dada467d.html){.learn-more} ## Architecture and Platform Requirements As [pointed out](#platform-compliance), CAP cloud applications run in a specific context that has a major impact on the security [architecture](#architecture-overview). CAP requires a dedicated [platform environment](#platform-environment) to integrate with, in order to ensure end-to-end security. ### Architecture Overview { #architecture-overview } The following diagram provides a high-level overview of the security-relevant aspects of a deployed CAP application in a cloud environment: ![This TAM graphic is explained in the accompanying text.](./assets/cap-security-architecture-overview.png){} To serve a business request, different runtime components are involved: a request, issued by a UI or technical client ([public zone](#public-zone)), is forwarded by a gateway or ingress router to the CAP application. In case of a UI request, an [Application Router](https://help.sap.com/docs/btp/sap-business-technology-platform/application-router) instance acts as a proxy.
The CAP application might make use of a CAP sidecar. All application components ([application zone](#application-zone)) might make use of platform services such as the database or identity service ([platform zone](#platform-zone)). #### Public Zone { #public-zone } From CAP's point of view, all components without specific security requirements belong to the public zone. Therefore, you shouldn't rely on the behavior or structure of consumer components like browsers or technical clients for the security of server components. The platform's gateway provides a single point of entry for any incoming call and defines the API visible to the public zone. As malicious users have free access to the public zone, these endpoints need to be protected carefully. Ideally, you should limit the number of exposed endpoints to a minimum, for example, through proper network configuration. #### Platform Zone { #platform-zone } The platform zone contains all platform components and services that are *configured and maintained* by the application provider. CAP applications consume these low-level [platform services](#btp-services) to handle more complex business requests. For instance, the persistence service (storing business data) and the identity service (authenticating business users) play a fundamental role. The platform zone also includes the gateway, which is the main entry point for external requests. Additionally, it may contain extra ingress routers. #### Application Zone { #application-zone} The application zone comprises all microservices that represent a CAP application. They are tightly integrated and form a unit of trust. The application provider is responsible for *developing, deploying, and operating* these services: - The [Application Router](https://help.sap.com/docs/btp/sap-business-technology-platform/application-router) acts as an optional reverse proxy wrapping the application service and providing business-independent functionality required for UIs.
This includes serving UI content, providing a login flow, as well as managing the session with the browser. It can be deployed as an application (reusable module) or alternatively consumed as a [service](https://help.sap.com/docs/btp/sap-business-technology-platform/managed-application-router). - The CAP application service exposes the API to serve business requests. Usually, it makes use of lower-level platform services. Being built on CAP, a significant number of security requirements are covered either out of the box or by adding minimal configuration. - The optional CAP sidecar (reusable module) is used to outsource application-independent tasks such as providing multitenancy and extension support. Application providers, that is, platform users, have privileged access to the application zone. In contrast, application subscribers, that is, business users, are restricted to a minimal interface. ::: warning ❗ Application providers **must not share any secrets from the application zone**, such as binding information, with other components or persons. In a productive environment, it is recommended to deploy and operate the application on behalf of a technical user. ::: ::: tip Without loss of generality, there may be multiple CAP services or sidecars according to the common [microservice architecture pattern](https://microservices.io/patterns/microservices.html). ::: ### Required Platform Environment { #platform-environment } There are several assumptions that a CAP application needs to make about the platform environment it is deployed to: 1. Application and (platform) service endpoints are exposed externally by the API gateway via TLS protocol. Hence, the **CAP application can offer a pure HTTP endpoint** without having to enforce TLS and to deal with certificates. 2. The server certificates presented by the external endpoints are signed by a trusted certificate authority. This **frees CAP applications from the need to manage trust certificates**.
The underlying runtimes (Java or Node.js) can validate the server certificates by default. 3. **Secrets** that are required to protect the application or to consume other platform services **are injected by the platform** into the application in a secure way. All supported [environments](overview#cloud) fulfill the given requirements. Additional requirements could be added in the future. ::: tip Custom domain certificates need to be signed by a trusted certificate authority. ::: ::: warning ❗ **In general, application endpoints are visible to the public zone**. Hence, CAP can't rely on private endpoints. In particular, an application router does not prevent external access to the CAP application service. As a consequence, **all CAP endpoints must be protected in an appropriate manner**. ::: # Security Aspects This section describes in detail what CAP offers to protect your application. ## Secure Communications { #secure-communications } ### Encrypted Communication Channels { #encrypted-channels } *Integrity* and *confidentiality* of data being transferred between any communication endpoints need to be guaranteed. In particular, this holds true for communication between client and server ([public zone](./overview#public-zone) and [platform zone](./overview#platform-zone)), but also for service-to-service communication (within a platform zone). That means the communication channels are established in a way that rules out undetected data manipulation or disclosure. #### Inbound Communication (Server) { #inbound } [SAP BTP](https://help.sap.com/docs/btp/sap-business-technology-platform/btp-security) exclusively establishes encrypted communication channels based on HTTPS/TLS as shown in the [architecture overview](./overview#architecture-overview) and hence fulfills the requirements out of the box. For all deployed (CAP) applications and platform services, the platform's API gateway or
ingress router provides TLS endpoints that accept incoming requests and forward them to the backing services via HTTP. The HTTP endpoints of microservices are only accessible to the router in terms of network technology (perimeter security) and therefore aren't visible to clients in the public and platform zones. Likewise, microservices can only serve a single network port, which the platform has opened for the hosting container. The router endpoints are configured with an up-to-date TLS protocol version containing a state-of-the-art cipher suite. Server authentication is given by X.509 server certificates signed by a trusted certificate authority. ::: tip It's mandatory for public clients to authenticate the server and to verify the server's identity by matching the target host name with the host name in the server certificate. ::: ::: tip Manually provided certificates for [custom domains](https://help.sap.com/docs/CUSTOM_DOMAINS/6f35a23466ee4df0b19085c9c52f9c29/4f4c3ff62fd2413089dce8a973620167.html) need to be signed by a [trusted certificate authority](https://help.sap.com/docs/btp/sap-business-technology-platform/trusted-certificate-authentication). ::: #### Outbound Communication (Client) { #outbound } As platform services and other applications deployed to BTP are only accessible via exposed TLS router endpoints, outbound connections are automatically secured as well. Consequently, technical clients have to [validate the server certificate](#inbound) for proper server authentication. Here, too, CAP application developers don't need to deal with HTTPS/TLS connection setup, provided the client code is built on CAP offerings such as the SAP HANA Cloud service or SAP Cloud SDK integration.
::: warning The **CAP application needs to ensure adequate protection of secrets** that are injected into CAP microservices, for example: - [mTLS authentication is enabled](https://help.sap.com/docs/btp/sap-business-technology-platform/enable-mtls-authentication-to-sap-authorization-and-trust-management-service-for-your-application) in the XSUAA service instance of your application and also for XSUAA reuse instances of platform services. - Ensure that [service bindings and keys](https://help.sap.com/docs/btp/sap-business-technology-platform/using-services-in-cloud-foundry-environment) aren't compromised (rotate them regularly). - SAP BTP Connectivity services are maintained [securely](https://help.sap.com/docs/connectivity/sap-btp-connectivity-cf/connectivity-security). ::: #### Internal Communication (Client and Server) { #internal } Depending on the target platform, closely coupled microservices of the application zone might also communicate via trusted network channels instead of using [outbound connections](#outbound). For instance, a CAP service could communicate with a CAP sidecar, deployed to the same container, via a localhost HTTP connection. ::: tip CAP allows using alternative communication channels, but application operators are responsible for setting them up in a secure manner. ::: ::: tip CAP applications don't have to deal with TLS, communication encryption, or certificates, for inbound as well as outbound connections. ::: ### Filtering Internet Traffic { #filtering } Reducing the attack surface by filtering communication from or to the public zone increases the overall security protection level. By default, the platform comes with a standard set of services and configurations to protect network communication, building on security features of the underlying hyperscaler. ::: warning Measures to further **restrict web access to your application** can be applied at platform level and aren't offered by CAP.
For instance, the [CF Route Service](https://docs.cloudfoundry.org/services/route-services.html) can be used to implement route-specific restriction rules.
:::

## Secure Authentication { #secure-authentication }

Non-public resources may only be accessed by authenticated users. Hence, authentication plays a key role for product security on different levels:

- **Business users** consume the application via the web interface. In multitenant applications, they come from different subscriber tenants that need to be isolated from each other.
- **Platform users** operate the application and have privileged access to its components on OS level (containers, configurations, logs, etc.). Platform users come from the provider tenant.

Managing user pools, providing a logon flow, and processing authentication are complex and highly security-critical tasks **that shouldn't be tackled by applications**. Instead, applications should rely on an identity service provided by the platform, which is [seamlessly integrated by CAP](#authenticate-requests).

Find more about platform and business users: [SAP BTP User and Member Management](https://help.sap.com/docs/btp/sap-business-technology-platform/user-and-member-management){.learn-more}

### Server Requests { #authenticate-requests }

SAP BTP offers the central identity services [SAP Cloud Identity Services - Identity Authentication](https://help.sap.com/docs/IDENTITY_AUTHENTICATION) and [SAP Authorization and Trust Management Service](https://help.sap.com/docs/CP_AUTHORIZ_TRUST_MNG) for managing and authenticating platform and business users, providing:

- User authentication flows (OpenID Connect), for example, multifactor authentication
- Federation of custom identity providers (IdPs)
- Single sign-on
- Principal propagation
- Password and session policies, etc.
The central platform service provides applications with a large set of industry-proven security features, so applications don't have to develop their own extensions and run the risk of security flaws. CAP doesn't require any specific authentication strategy, but it provides out-of-the-box integration with the platform identity service. Once authentication is configured, *all CAP endpoints are authenticated by default*.

::: warning ❗
**CAP applications need to ensure that an appropriate [authentication method](/guides/security/authorization#prerequisite-authentication) is configured**. It's highly recommended to establish integration tests to safeguard a valid configuration.
:::

Learn more about the user model and identity providers here: [SAP BTP Security](https://help.sap.com/docs/btp/sap-business-technology-platform/btp-security){.learn-more}

### Remote Services { #authenticate-remote }

CAP microservices consume remote services and hence need to be authenticated as technical clients as well. Similar to [request authentication](#authenticate-requests), CAP saves applications from having to implement a secure setup of service-to-service communication:

- CAP interacts with platform services such as [Event Mesh](../messaging/) or the [SaaS Provisioning Service](../deployment/to-cf) on the basis of platform-injected service bindings.
- CAP offers consumption of [Remote Services](../using-services) on the basis of [SAP BTP destinations](../using-services#btp-destinations).

Note that the applied authentication strategy is determined by the server offering and its configuration, and isn't limited by CAP.
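In a CAP Node.js project, for example, the authentication method is typically configured via `cds.requires.auth` in *package.json*. A minimal sketch, assuming XSUAA-based authentication:

```json
{
  "cds": {
    "requires": {
      "auth": "xsuaa"
    }
  }
}
```

With this in place, the runtime authenticates requests to all CAP endpoints against the bound XSUAA instance by default.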
### Maintaining Sessions { #sessions }

CAP microservices require [authentication](#authenticate-requests) of all requests, but they don't support logon flows for UI clients. Being stateless, they also don't establish a session with the client to store login information such as an OAuth 2 token, which needs to be passed in each server request. To close this gap, UI-based CAP applications can use an [Application Router](https://help.sap.com/docs/btp/sap-business-technology-platform/application-router) instance or service as a reverse proxy, as depicted in the [diagram](./overview#architecture-overview). The Application Router redirects the login to the identity service, fetches an OAuth 2 token, and stores it in a secure session cookie.

::: warning ❗
The **Application Router endpoints don't hide the CAP endpoints** in the service backend. Hence, authentication is still mandatory for CAP microservices.
:::

### Maintaining Secrets { #secrets }

To run a CAP application that authenticates users and consumes remote services, **it isn't required to manage any secrets such as keys, tokens, or passwords**. CAP doesn't store any of them either, but relies on platform [injection mechanisms](./overview#platform-environment) or [destinations](../using-services#btp-destinations).

::: tip
In case you still need to store secrets, use a platform service such as [SAP Credential Store](https://help.sap.com/docs/CREDENTIAL_STORE).
:::

## Secure Authorization { #secure-authorization }

According to the segregation of duties paradigm, user administrators need to control how different users may interact with the application. Critical combinations of authorizations must be avoided. Basically, access rules for [business users](#business-authz) differ from those for [platform users](#platform-authz).

### Business Users { #business-authz }

To align with the principle of least privilege, applications need to enforce fine-grained access control for business users from the subscriber tenants.
Depending on the business scenario, users need to be restricted to the operations they may perform on server resources, for example, reading an entity collection. Moreover, they might also be limited to a subset of data entries, that is, they may only operate on a filtered view of the data. The set of rules that apply to a user reflects a specific conceptual role that describes the interaction with the application to fulfill a business scenario. Obviously, the business roles depend on the scenarios and hence *need to be defined by the application developers*.

Enforcing authorization rules at runtime is highly security-critical and shouldn't be implemented by the application, as this would introduce the risk of security flaws. Instead, [CAP authorizations](/guides/security/authorization) follow a declarative approach, allowing applications to design comprehensive access rules in the CDS model. Resources in the model such as services or entities can be restricted to users that fulfill specific conditions as declared in `@requires` or `@restrict` [annotations](/guides/security/authorization#restrictions). According to the declarations, server-side authorization enforcement is guaranteed for all requests. It's executed just before accessing the corresponding resources.

::: warning ❗
**By default, CAP services and entities aren't authorized**. Application developers need to **design and test access rules** according to the business need.
:::

::: tip
To verify CAP authorizations in your model, it's recommended to use [CDS lint rules](../../tools/cds-lint/rules/).
:::

The rules prepared by application developers are applied to business users according to grants given by the subscriber's user administrator, that is, they're applied tenant-specifically.
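Such declarative restrictions look like this in a CDS model. A minimal sketch with illustrative service, entity, and role names:

```cds
// Only authenticated users may access the service at all
service CatalogService @(requires: 'authenticated-user') {

  entity Books @(restrict: [
    { grant: 'READ' },               // any authenticated user may read
    { grant: 'WRITE', to: 'admin' }  // writes require the 'admin' role
  ]) as projection on my.Books;
}
```

The runtime enforces these rules for every request targeting the service or entity, without any custom handler code.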
CAP authorizations can be defined depending on [user claims](/guides/security/authorization#user-claims) such as [XSUAA scopes or attributes](https://help.sap.com/docs/btp/sap-business-technology-platform/application-security-descriptor-configuration-syntax) that are deployed by application developers and granted by the user administrator of the subscriber. Hence, CAP provides a seamless integration of the central identity service without technical lock-in.

::: tip
You can generate the `xs-security.json` [descriptor file](https://help.sap.com/docs/btp/sap-business-technology-platform/protecting-your-application) of the application's XSUAA instance by executing `cds add xsuaa` in the project root folder. The XSUAA scopes, roles, and attributes are derived from the CAP authorization model.
:::

::: warning
CAP authorization enforcement doesn't automatically log successful and unsuccessful authorization checks. Applications need to add corresponding custom handlers to support this.
:::

#### Authorization of CAP Endpoints { #cap-endpoints }

In general, responses created by *standard* CAP handlers and services are created on a need-to-know basis. This means, authorized users only receive server information according to their privileges. Therefore, business users won't gain information about server host names, versions of application server components, generated queries, etc.
Based on the CDS model and configuration of CDS services, the CAP runtime exposes the following endpoints:

| Name              | Configuration    | URL                                   | Authorization                        |
|-------------------|------------------|---------------------------------------|--------------------------------------|
| CDS Service `Foo` | `service Foo {}` | `/<path>/Foo/**` <sup>1</sup>         | `@restrict`/`@requires` <sup>2</sup> |
|                   | OData v2/v4      | `/<path>/Foo/$metadata` <sup>1</sup>  | See [here](/guides/security/authorization#requires) |
| Index page        |                  | `/index.html`                         | None, but disabled in production     |

> <sup>1</sup> See [protocols and paths](../../java/cqn-services/application-services#configure-path-and-protocol)
> <sup>2</sup> No authorization by default

Based on configured features, the CAP runtime exposes additional callback endpoints for specific platform services:
For CAP Java:

| Platform service             | URL                         | Authorization               |
|------------------------------|-----------------------------|-----------------------------|
| Multitenancy (SaaS Registry) | `/mt/v1.0/subscriptions/**` | Technical role `mtcallback` |
For CAP Node.js:

| Platform service             | URL                     | Authorization |
|------------------------------|-------------------------|---------------|
| Multitenancy (SaaS Registry) | None so far for Node.js |               |
Moreover, technical [MTXs CAP services](../multitenancy/mtxs) may be configured, for example, as a sidecar microservice to support higher-level features such as Feature Toggles or Multitenancy:

| CAP service | URL | Authorization |
|-------------|-----|---------------|
| [cds.xt.ModelProviderService](../multitenancy/mtxs#modelproviderservice) | `/-/cds/model-provider/**` | Internal, technical user <sup>1</sup> |
| [cds.xt.DeploymentService](../multitenancy/mtxs#deploymentservice) | `/-/cds/deployment/**` | Internal, technical user <sup>1</sup>, or technical role `cds.Subscriber` |
| [cds.xt.SaasProvisioningService](../multitenancy/mtxs#saasprovisioningservice) | `/-/cds/saas-provisioning/**` | Internal, technical user <sup>1</sup>, or technical roles `cds.Subscriber` or `mtcallback` |
| [cds.xt.ExtensibilityService](../multitenancy/mtxs#extensibilityservice) | `/-/cds/extensibility/**` | Internal, technical user <sup>1</sup>, or technical role `cds.ExtensionDeveloper` |

> <sup>1</sup> The microservice running the MTXS CAP service needs to be deployed to the [application zone](./overview#application-zone) and hence has established trust with the CAP application client, for instance, given by a shared XSUAA instance. Authentication for a CAP sidecar needs to be configured just like for any other CAP application.

::: warning ❗
Ensure that technical roles such as `cds.Subscriber`, `mtcallback`, or `emcallback` **are never included in business roles**.
:::

### Platform Users { #platform-authz }

Similar to [business consumption](#business-authz), different scenarios apply at operator level that need to be separated by dedicated access rules: deployment and configuration, monitoring, support, audit logs, etc. *CAP doesn't cover authorization of platform users*.
Please refer to the security documentation of the underlying SAP BTP runtime environment:

- [Roles in the Cloud Foundry Environment](https://help.sap.com/docs/btp/sap-business-technology-platform/about-roles-in-cloud-foundry-environment)
- [Roles in the Kyma Environment](https://help.sap.com/docs/btp/sap-business-technology-platform/assign-roles-in-kyma-environment)

## Secure Multi-Tenancy { #secure-multitenancy }

Multitenant SaaS applications need to take care of security aspects on a higher level. Different subscriber tenants share the same runtime stack to interact with the CAP application. Ideally, from the perspective of a single tenant, the runtime should look like a self-contained virtual system that doesn't interfere with any other tenant. All directly or indirectly involved services that process a business request need to be isolated along several dimensions:

- No breakout to [persisted data](#isolated-persistent-data)
- No breakout to [transient data](#isolated-transient-data)
- Limited [resource consumption](#limiting-resource-consumption)

The CAP runtime is designed from scratch to support tenant isolation:

### Isolated Persistent Data { #isolated-persistent-data }

Having configured [Multitenancy in CAP](../multitenancy/), when serving a business request, CAP automatically targets an isolated HDI container dedicated to the request's tenant to execute DB statements. Here, CAP's data query API based on [CQN](../../cds/cqn) is orthogonal to multitenancy, that is, custom CAP handlers can be implemented agnostic to MT. During the tenant onboarding process, CAP triggers the HDI container creation via [SAP HANA Cloud Services](https://help.sap.com/docs/HANA_SERVICE_CF/cc53ad464a57404b8d453bbadbc81ceb/f70399be7fca4508aa0e33e138dbd84d.html). The containers have separated DB schemas and dedicated technical DB users for access. CAP guarantees that code for business requests runs on a DB connection opened for the technical user of the tenant's container.
### Isolated Transient Data { #isolated-transient-data }

Although CAP microservices are stateless, the CAP Java runtime (including generic handlers) needs to cache data in-memory for performance reasons. For instance, filters for [instance-based authorization](/guides/security/authorization#instance-based-auth) are constructed only once and are reused in subsequent requests.
To minimize the risk of a data breach by exposing transient data at runtime, the CAP Java runtime explicitly refrains from declaring and using static mutable objects on the Java heap. Instead, request-related data such as the [EventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/EventContext.html) is provided via thread-local storage. Likewise, data is stored in tenant-maps that are transitively referenced by the [CdsRuntime](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/CdsRuntime.html) instance.

::: warning
Make sure that custom code doesn't break tenant data isolation.
:::
In CAP Node.js, request-related data is propagated down the call stack via the continuation-local variable [cds.context](../../node.js/events#cds-context).

::: warning
Make sure that custom code doesn't break tenant data isolation or leak data across concurrent requests.
:::

As a best practice, don't put any non-static variables in the closures of your service implementations.

##### **Bad example:** {.bad}

::: code-group
```js [srv/cat-service.js]
module.exports = srv => {
  let books // <- leaks data across tenants and concurrent requests // [!code error]
  srv.on('READ', 'Books', async function (req, next) {
    if (books) return books
    return books = await next()
  })
}
```
:::
### Limiting Resource Consumption { #limiting-resource-consumption }

Tenant-aware microservices also need to handle the resource consumption of tenants, in particular with regard to CPU, memory, and network connections. Excessive use of resources requested by a single tenant could cause runtime problems for other consumers (noisy neighbor problem). CAP helps to control resource usage:
In CAP Java:

- Business requests run in isolated Java threads, and hence OS thread scheduling ensures fair distribution of CPU shares.
- By default, tenants have dedicated DB connection pools.
In CAP Node.js:

- Fine-granular processing of requests (CAP handlers) avoids disproportionate blocking times of the event loop.
- Tenants have dedicated DB connection pools.
::: tip
Make sure that custom code doesn't introduce excessive memory or CPU consumption within a single request.
:::

Because OS resources are strictly limited in a virtualized environment, a single microservice instance can only handle the load of a limited set of tenants. [**Adequate sizing**](#dos-attacks) of your microservice is mandatory, that is, adjusting memory settings, connection pool sizes, request size limits, etc. according to the business needs. Last but not least, you need to implement a **scaling strategy** to meet increasing load requirements with additional microservice instances.

::: warning ❗
**Sizing and scaling** is up to application developers and operators. CAP default values aren't suitable for all applications.
:::

## Secure Against Untrusted Input { #secure-untrusted-input }

Without protection mechanisms in place, a malicious user could misuse a valid (that is, authenticated) session with the server and attack valuable business assets.

### Injection Attacks { #injection-attacks }

Attackers can send malicious input data in a regular request to make the server perform unintended actions that can lead to serious data exploits.

#### Common Attack Patterns { #common-injection-attacks }

- CAP's intrinsic data querying engine is immune to [SQL injections](https://owasp.org/www-community/attacks/SQL_Injection) introduced by query parameter values derived from malicious user input. [CQL statements](../querying) are transformed into prepared statements that are executed in SQL databases such as SAP HANA. Be aware that injections are still possible even via CQL when the query structure (target entity, columns, and so on) is based on user input:
```java
String entity = ...; // from user input
String column = ...; // from user input
validate(entity, column); // for example, by comparing with a positive list
Select.from(entity).columns(b -> b.get(column));
```
```js
const entity = ... // from user input
const column = ... // from user input
validate(entity, column) // for example, by comparing with a positive list
SELECT.from(entity).columns(column)
```
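The `validate` helper referenced in these snippets is left open; a common approach is a static allowlist check. A sketch with illustrative entity and column names:

```js
// Hypothetical positive list of entities and their queryable columns
const ALLOWED_COLUMNS = {
  Books:   ['ID', 'title', 'stock'],
  Authors: ['ID', 'name']
}

function validate (entity, column) {
  // reject anything not explicitly listed, regardless of its content
  const columns = ALLOWED_COLUMNS[entity]
  if (!columns || !columns.includes(column)) {
    throw new Error('Rejected untrusted entity/column input')
  }
}
```

An allowlist is preferable to escaping here, since entity and column names become part of the query structure, not parameter values.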
::: warning
Be careful with custom code that creates or modifies CQL queries. Additional input validation is needed when the query structure depends on the request's input.
:::

- [Cross-Site Scripting (XSS)](https://owasp.org/www-community/attacks/xss) is used by attackers to inject a malicious script, which is executed in the browser session of an unsuspecting user. By default, there are some protection mechanisms in place. For instance, the CAP OData V4 adapter renders responses with an appropriate HTTP content type such as `application/json`, which prevents the browser from misinterpreting the content. On the client side, SAPUI5 provides input validation for all typed element properties and automatic output encoding in all standard controls.

- Untrusted data being transferred may contain malware. [SAP Malware Scanning Service](https://help.sap.com/docs/MALWARE_SCANNING) can scan provided input streams for viruses and is regularly updated.

::: warning ❗
Currently, CAP applications need to add custom handlers to **scan data being uploaded or downloaded**.
:::

- [Path traversal](https://owasp.org/www-community/attacks/Path_Traversal) attacks aim to access parts of the server's file system outside the web root folder. As part of the [application zone](./overview#application-zone), an Application Router serves the static UI content of the application, so the CAP microservice doesn't need to serve web content from the file system. Apart from that, the used web server frameworks such as Spring or Express already have adequate protection mechanisms in place.

- [CRLF injections](https://owasp.org/www-community/vulnerabilities/CRLF_Injection) or [log injections](https://owasp.org/www-community/attacks/Log_Injection) can occur when untrusted user input is written to log output.
CAP Node.js offers a CRLF-safe [logging API](../../node.js/cds-log#logging-in-production) that should be used for application logs.
::: warning
Currently, CAP Java applications need to take care of escaping user data that is used as input for application logging. It's recommended to use an existing encoder such as OWASP [ESAPI](https://www.javadoc.io/doc/org.owasp.esapi/esapi/2.0.1/org/owasp/esapi/Encoder.html).
:::
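Such escaping can be done with a small helper before handing user data to the logger. A sketch; the replacement strategy is an assumption and should be adapted to your log format:

```js
function sanitizeForLog (value) {
  // replace CR, LF, and other control characters so attacker-controlled
  // input cannot forge additional log lines
  return String(value).replace(/[\x00-\x1f\x7f]/g, ' ')
}

// hypothetical attacker-controlled value
const userName = 'alice\r\n[INFO] forged log entry'
console.log('login failed for user:', sanitizeForLog(userName))
```

This keeps each log statement on a single line, so log parsers and auditors can't be misled by injected line breaks.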
- [Deserialization of untrusted data](https://owasp.org/www-community/vulnerabilities/Deserialization_of_untrusted_data) can lead to serious exploits including remote code execution. The OData adapter converts JSON payload into an object representation. Here, it follows a hardened deserialization process in which the deserializer capabilities (for example, no default types in Jackson) are restricted to a minimum. A strong input validation based on the EDMX model is done as well. Moreover, deserialization errors terminate the request and are tracked in the application log.

#### General Recommendations Against Injections { #general-injection-attacks }

In general, to achieve perfect injection resistance, applications should have input validation, output encoding, and a proper Content Security Policy in place.

- CAP provides built-in support for **input validation**. Developers can use the [`@assert`](../providing-services#input-validation) annotation to define field-specific input checks.

::: warning
Applications need to validate or sanitize all input variables according to the business context.
:::

- With respect to **output encoding**, CAP OData adapters have proper URI encoding for all resource locations in place. Moreover, OData validates the JSON response according to the given EDMX schema. In addition, client-side protection is given by [SAPUI5](https://pages.community.sap.com/topics/ui5) standard controls.

- Applications should meet basic [Content Security Policy (CSP)](https://www.w3.org/TR/CSP2/) compliance rules to further limit the attack vector on the client side. CSP-compatible browsers only load resources from web locations listed in the allowlist defined by the server. The `Content-Security-Policy` header can be set as a route-specific response header in the [Application Router](https://help.sap.com/docs/btp/sap-business-technology-platform/responseheaders).
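A sketch of such a route in *xs-app.json*, assuming an Application Router version that supports route-level `responseHeaders`; the route source, directory, and policy value are illustrative:

```json
{
  "routes": [
    {
      "source": "^/app/(.*)$",
      "localDir": "resources",
      "responseHeaders": [
        {
          "name": "Content-Security-Policy",
          "value": "default-src 'self'; frame-ancestors 'self'"
        }
      ]
    }
  ]
}
```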
SAPUI5 is [CSP-compliant](https://sapui5.hana.ondemand.com/sdk/#/topic/fe1a6dba940e479fb7c3bc753f92b28c.html) as well.

::: warning
Applications have to **configure a Content Security Policy** to meet basic compliance.
:::

### Service Misuse Attacks { #misues-attacks }

- [Server-Side Request Forgery (SSRF)](https://owasp.org/www-community/attacks/Server_Side_Request_Forgery) abuses server functionality to read or update resources from a secondary system. CAP microservices are protected from this kind of attack if they use the [CAP standard mechanisms](#authenticate-remote) for service-to-service communication.

- [Cross-Site Request Forgery (CSRF)](https://owasp.org/www-community/attacks/csrf) attacks make end users execute unwanted actions on the server while having established a valid web session. By default, the Application Router, which manages the session with the client, enforces CSRF token protection (on the basis of `x-csrf-token` headers). Hence, CAP services don't have to deal with CSRF protection as long as they don't maintain sessions with the client. SAPUI5 supports CSRF tokens on the client side out of the box.

- [Clickjacking](https://owasp.org/www-community/attacks/Clickjacking) is a client-side attack where end users are tricked into opening foreign pages. SAPUI5 provides [protection mechanisms](https://sapui5.hana.ondemand.com/sdk/#/topic/62d9c4d8f5ad49aa914624af9551beb7.html) against this kind of attack.

::: warning
To protect SAPUI5 applications against clickjacking, configure `frame options`.
:::

### Denial-of-Service Attacks { #dos-attacks }

[Denial-of-service (DoS)](https://owasp.org/www-community/attacks/Denial_of_Service) attacks attempt to reduce service availability for legitimate users. This can happen through erroneous server behavior upon a single large or a few specially crafted malicious requests that bind an excessive amount of shared OS resources such as CPU, memory, or network connections.
Since OS resource allocations are distributed over the entire request processing, DoS prevention needs to be addressed in all layers of the runtime stack:

#### HTTP Server and CAP Protocol Adapter

The used web server frameworks such as [Spring/Tomcat](https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#appendix.application-properties.server) or [Express](https://expressjs.com/) start with reasonable default limits, for example:

- Maximum size of the HTTP request header
- Maximum size of the HTTP request body
- Maximum queue length for incoming connection requests
- Maximum number of connections that the server accepts and processes at any given time
- Connection timeout

Additional size limits and timeouts (request timeout) are established by the reverse proxy components, API Gateway, and Application Router.

::: tip
If you want to apply an application-specific sizing, consult the corresponding framework documentation. See section [Maximum Request Body Size](../../node.js/cds-server#maximum-request-body-size) to find out how to restrict incoming requests to a CAP Node.js application depending on the body size.
:::

Moreover, CAP adapters automatically introduce query result pagination in order to limit memory peaks (customize with [`@cds.query.limit`](../providing-services#annotation-cds-query-limit)). The total number of requests in OData batches can be limited by application configuration.
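For a CAP Node.js application, for instance, the request body size limit is configured via `cds.server.body_parser.limit`. A sketch in *package.json*; the chosen limit is illustrative:

```json
{
  "cds": {
    "server": {
      "body_parser": {
        "limit": "512kb"
      }
    }
  }
}
```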
In CAP Java, the settings `cds.odataV4.batch.maxRequests` and `cds.odataV2.batch.maxRequests` specify the corresponding limits.
::: warning ❗
CAP applications have to limit the number of `$expand`s per request in a custom handler. Also, the maximum number of requests per `$batch` request needs to be configured:

- Node.js: `cds.odata.batch_limit = <max_requests>`
- Java: `cds.odataV4.batch.maxRequests = <max_requests>`
:::

::: tip
Design your CDS services exposed to web adapters on a need-to-know basis. Be especially careful when exposing associations.
:::

#### CAP Service Runtime

Open transactions are expensive as they bind many resources such as a database connection as well as memory buffers. To minimize the amount of time a transaction must be kept open, the CAP runtime offers an [Outbox Service](../../java/outbox) that allows scheduling asynchronous remote calls in the business transaction. Hence, the time to process a business query that requires a remote call (such as to an audit log server or messaging broker) is minimized and independent of the response time of the remote service.

::: tip
Avoid synchronous requests to remote systems during a transaction.
:::

[See why CPU time is fairly distributed among business requests](#limiting-resource-consumption){.learn-more}

#### Database

As already outlined, database connections are an expensive resource. To limit overall usage, by default, the CAP runtime creates connection pools per subscriber tenant. Similarly, the DB driver settings such as SQL query timeout and buffer size have reasonable values.

::: tip
In case the default settings don't fit, connection pool properties and driver settings can be customized.
:::

::: warning ❗
Applications need to establish an adequate [Workload Management](https://help.sap.com/docs/HANA_CLOUD_DATABASE/f9c5015e72e04fffa14d7d4f7267d897/30f2e9cb92aa4f358dda4ac58e062d83.html) that controls DB resource usage.
:::

#### Supplementary Measures

As outlined before, even a well-sized microservice instance doesn't protect from service downtimes when excessive workload initiated by an attacker exceeds the available capacity. [Rate limiting](https://help.sap.com/docs/btp/developing-resilient-apps-on-sap-btp/rate-limiting-c56d72711eec41118f243054f2e92f94) is a possible countermeasure to restrict the request frequency of a client.

::: warning ❗
Applications need to establish an adequate **rate limiting** strategy.
:::

There's also the possibility to introduce request filtering and rate limiting at platform level via a [Route Service](https://docs.cloudfoundry.org/services/route-services.html). This has the advantage that requests can be controlled centrally before touching application service instances.

In addition, the number of instances needs to be **scaled horizontally** according to current load requirements. This can be achieved automatically by consuming the [Application Autoscaler](https://help.sap.com/docs/Application_Autoscaler).

### Additional Protection Mechanisms { #additional-attacks }

There are additional attack vectors to consider. For instance, naive URL handling in server endpoints frequently introduces security gaps. Luckily, CAP applications don't have to implement HTTP/URL processing on their own, as CAP offers sophisticated [protocol adapters](../../about/features#consuming-services) such as OData V2/V4 that have the necessary security validations in place. The adapters also transform the HTTP requests into corresponding CQN statements. Access control is performed on CQN level according to the CDS model, and hence HTTP verb tampering attacks are avoided.
Also, HTTP method override using `X-Http-Method-Override` or `X-Http-Method` headers isn't accepted by the runtime.

The OData protocol allows encoding field values in query parameters of the request URL or in the response headers. This is, for example, used to specify:

- [Pagination (implicit sort order)](../providing-services#pagination-sorting)
- [Searching Data](../providing-services#searching-data)
- Filtering

::: warning
Applications need to ensure by means of CDS modeling that fields reflecting sensitive data are excluded and don't appear in URLs.
:::

::: tip
It's recommended to serve all application endpoints via CAP adapters. Securing custom endpoints is left to the application.
:::

In addition, CAP runs on a virtual machine with a managed heap that protects from common memory corruption vulnerabilities such as buffer overflows or range overflows. CAP also brings some tools to effectively reduce the attack surface of race condition vulnerabilities. These might be exposed when the state of resources can be manipulated concurrently and a consumer faces an unexpected state. CAP provides basic means of [concurrency control](../providing-services#concurrency-control) on different layers, for example, [ETags](../providing-services#etag) and [pessimistic locks](../providing-services#select-for-update). Moreover, messages received from the [message queue](../messaging/) are always in order.

::: tip
Applications have to ensure consistent data processing taking concurrency into account.
:::
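Optimistic concurrency via ETags, for instance, is enabled by annotating an element in the CDS model with `@odata.etag`. A minimal sketch with illustrative entity and element names:

```cds
entity Books {
  key ID : UUID;
  title  : String;
  @odata.etag
  modifiedAt : Timestamp @cds.on.update : $now;
}
```

Clients then send the current ETag in an `If-Match` header with updates, and stale updates are rejected by the server.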
## Secure by Default and by Design { #secure-by-default }

### Secure Default Configuration { #secure-default }

Where possible, the CAP default configuration matches the secure-by-default principle:

- There's no need to provide any passwords, credentials, or certificates to [protect communication](#secure-communications).
- A CAP application bound to an XSUAA instance authenticates all endpoints [by default](#secure-authentication). Developers have to explicitly configure public endpoints if necessary.
- Isolated multitenancy is provided out of the box.
- Application logging has `INFO` level to avoid potential information disclosure.
- CAP also has first-class support for the [Fiori UI](../../advanced/fiori) framework, which brings a lot of secure-by-default features to the UI client.

Of course, several security aspects need application-specific configuration. For instance, this is true for [authorizations](#secure-authorization) or application [sizing](#dos-attacks).

::: tip
It's recommended to ensure security settings by automated integration tests.
:::

### Fail Securely { #fail-securely }

The CAP runtime differentiates several types of error situations during request processing:

- Exceptions because of invalid user input (HTTP 4xx)
- Exceptions because of unexpected server behavior, for example, network issues
- Unrecoverable errors due to serious issues in the VM (for example, lack of memory) or program flaws

In general, **exceptions immediately stop the execution of the current request**.

In Java, the thrown [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html) is automatically scoped to the current request by means of thread isolation. { .java }

CAP Node.js adds an exception wrapper to ensure that only the failing request is affected by the exception. { .node }

Customers can react in dedicated exception handlers if necessary.
In contrast, **errors stop the overall microservice** to ensure that security measures aren't weakened.

::: tip
Align the exception handling in your custom coding with the provided exception handling capabilities of the CAP runtime.
:::
# Data Protection & Privacy

This section describes how you can make your CAP application compliant with data protection and privacy requirements.

## General Statement { #dpp-statement }

Governments place legal requirements on industry to protect data and privacy.

::: tip
No guide concerning CAP, including this one, attempts to give advice on whether any features and functions are the best method to support company-, industry-, region-, or country-specific requirements. Furthermore, the information provided in this guide does not give any advice or recommendations with regards to additional features that might be required in a particular environment. Decisions related to data protection must be made on a case-by-case basis, under consideration of the given system landscape and the applicable legal requirements.
:::

For general information about data protection and privacy (DPP) on SAP BTP, see the SAP BTP documentation under [Data Protection and Privacy](https://help.sap.com/docs/btp/sap-business-technology-platform/data-protection-and-privacy).

## Data Protection & Privacy in CAP { #dpp-cap }

CAP is a framework that provides modeling and runtime features to enable customers to build business applications on top. As a framework, CAP generally doesn't store or manage any personal data on its own, with some exceptions:

- Application logging on detailed level written by the CAP runtime might contain personal data such as user names and IP addresses. The logs are mandatory to operate the system. Connect an adequate logging service, such as [SAP Application Logging Service](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/sap-application-logging-service-for-cloud-foundry-environment), to meet compliance requirements.
- A draft-enabled service `Foo` has an entity `Foo.DraftAdministrativeData` with fields `CreatedByUser`, `InProcessByUser`, and `LastChangedByUser` containing personal data for all draft entity instances in edit mode.
- Messages temporarily written to the transaction outbox might contain personal data. The entries are mandatory to operate the system. If necessary, applications can process these messages by standard CAP functionality (CDS model `@sap/cds/srv/outbox`).
- Be aware that personal data might be added automatically when using the [managed](../domain-modeling#managed-data) aspect.

Dependent on the business scenario, custom CDS models served by the CAP runtime will most likely contain personal data that is also stored in a backing service. CAP provides a [rich set of tools](aspects) to protect the application from unauthorized access to business data, including personal data. Furthermore, it helps applications to provide [higher-level DPP-related functions](#dpp-support) such as data retrieval.

::: warning ❗
**Applications are responsible for implementing compliance requirements with regards to data protection and privacy according to their specific use case**.
:::

Also refer to the related guides of the most important platform services:

[SAP Cloud Identity Services - Configuring Privacy Policies](https://help.sap.com/docs/IDENTITY_AUTHENTICATION/6d6d63354d1242d185ab4830fc04feb1/ed48466d770f4519aa23bba754851fbd.html){.learn-more}
[SAP HANA Cloud - Data Protection and Privacy](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c82f8d6a84c147f8b78bf6416dae7290/ad9588189e844092910103f2f7b1c968.html){.learn-more}

## Data Protection & Privacy Supported by CAP { #dpp-support }

CAP provides several [features](../data-privacy/) to help applications meet DPP requirements:

- The [Personal Data Management (PDM)](../data-privacy/pdm) integration has a configurable **retrieval function**, which can be used to inform data subjects about personal data stored related to them.
- CAP also provides a *fully model-driven* approach to track **changes in personal data** or **read access to sensitive personal data** in the audit log. Having [declared personal data](../data-privacy/annotations) in your model, CAP automatically triggers corresponding [audit log events](../data-privacy/audit-logging).

::: warning ❗
So far, applications have to integrate [SAP Data Retention Manager](https://help.sap.com/docs/DATA_RETENTION_MANAGER) to implement an adequate **erasure function** for personal data out of the retention period. CAP will cover an out-of-the-box integration in the future.
:::

# Managing Data Privacy

CAP helps application projects to comply with data privacy regulations using SAP Business Technology Platform (BTP) services. Find a step-by-step guide to these hereinafter...

::: warning
SAP does not give any advice on whether the features and functions provided to facilitate meeting data privacy obligations are the best method to support company, industry, regional, or country/region-specific requirements. Furthermore, this information should not be taken as advice or a recommendation regarding additional features that would be required in specific IT environments. Decisions related to data protection must be made on a case-by-case basis, considering the given system landscape and the applicable legal requirements.
:::

## Introduction to Data Privacy

Data protection is associated with numerous legal requirements and privacy concerns, such as the EU's [General Data Protection Regulation](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation). In addition to compliance with general data protection and privacy acts regarding [personal data](https://en.wikipedia.org/wiki/Personal_data), you need to consider compliance with industry-specific legislation in different countries/regions.

CAP supports applications in their obligations to comply with data privacy regulations by automating tedious tasks as much as possible based on annotated models.
That is, CAP provides easy ways to designate personal data, as well as out-of-the-box integration with SAP BTP services, which enable you to fulfill specific data privacy requirements in your application. This relieves application developers of these tedious tasks and related efforts.

![Shows with which solutions CAP annotations can be used out of the box, as described in the following table.](./assets/Data-Privacy.drawio.svg){}

### In a Nutshell

The most essential requests you have to answer are those in the following table. The table also shows the basis of the requirement and the corresponding discipline for the request:

| Question / Request | Obligation | Solution |
| ------------------------------------------- | ----------------------------------------------- | ----------------------------------- |
| *What data about me do you have stored?* | [Right of access](#right-of-access) | [Personal Data Management](pdm.md) |
| *Please delete all personal data about me!* | [Right to be forgotten](#right-to-be-forgotten) | [Data Retention Management](drm.md) |
| *When was personal data stored/changed?* | [Transparency](#transparency) | [Audit Logging](audit-logging.md) |

## Annotating Personal Data

The first and frequently only task to do as an application developer is to identify entities and elements (potentially) holding personal data using `@PersonalData` annotations. These are used to automate CAP-facilitated audit logging, personal data management, and data retention management as much as possible.

[Learn more in the *Annotating Personal Data* chapter](annotations) {.learn-more}

## Automatic Audit Logging {#transparency}

The **Transparency** obligation requires being able to report with whom data stored about an individual is shared and where that data came from (for example, [EU GDPR Article 15(1)(c,g)](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:02016R0679-20160504&qid=1692819634946#tocId22)).
The [SAP Audit Log Service](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-logging-in-cloud-foundry-environment) stores all audit logs for a tenant in a common, compliant data store and allows auditors to search through and retrieve the respective logs when necessary.

[Learn more in the *Audit Logging* guide](audit-logging) {.learn-more}

## Personal Data Management { #right-of-access }

The [**Right of Access** to personal data](https://en.wikipedia.org/wiki/Right_of_access_to_personal_data) "gives people the right to access their personal data and information about how this personal data is being processed".

The [SAP Personal Data Manager](https://help.sap.com/docs/personal-data-manager) allows you to inform individuals about the data you have stored regarding them.

[Learn more in the *Personal Data Management* guide](pdm) {.learn-more}

## Data Retention Management { #right-to-be-forgotten }

The [**Right to be Forgotten**](https://en.wikipedia.org/wiki/Right_to_be_forgotten) gives people "the right to request erasure of personal data related to them on any one of a number of grounds [...]".

The [SAP Data Retention Manager](https://help.sap.com/docs/data-retention-manager) allows you to manage retention and residence rules to block or destroy personal data.

# Annotating Personal Data

In order to automate audit logging, personal data management, and data retention management as much as possible, the first and frequently only task to do as an application developer is to identify entities and elements (potentially) holding personal data using `@PersonalData` annotations.

## Reference App Sample { #annotated-model }

In the remainder of this guide, we use the [Incidents Management reference sample app](https://github.com/cap-js/incidents-app) as the base to add data privacy and audit logging to.
![Shows the connections between the entities in the sample app.](./assets/Incidents-App.drawio.svg){}

So, let's annotate the data model to identify personal data. In essence, in all our entities we search for elements which carry personal data, such as person names, birth dates, and so on, and tag them accordingly. All found entities are classified as either *Data Subjects*, *Data Subject Details*, or *Related Data Objects*.

Following the [best practice of separation of concerns](../domain-modeling#separation-of-concerns), we annotate our domain model in a separate file *srv/data-privacy.cds*, which we add to our project with the following content:

> For the time being, also replace the data in _data/sap.capire.incidents-Customers.csv_.

::: code-group

```cds [srv/data-privacy.cds]
using { sap.capire.incidents as my } from '../db/schema';

extend my.Customers with {
  dateOfBirth : Date;
};

annotate my.Customers with @PersonalData : {
  DataSubjectRole : 'Customer',
  EntitySemantics : 'DataSubject'
} {
  ID           @PersonalData.FieldSemantics: 'DataSubjectID';
  firstName    @PersonalData.IsPotentiallyPersonal;
  lastName     @PersonalData.IsPotentiallyPersonal;
  email        @PersonalData.IsPotentiallyPersonal;
  phone        @PersonalData.IsPotentiallyPersonal;
  dateOfBirth  @PersonalData.IsPotentiallyPersonal;
  creditCardNo @PersonalData.IsPotentiallySensitive;
};

annotate my.Addresses with @PersonalData : {
  EntitySemantics : 'DataSubjectDetails'
} {
  customer      @PersonalData.FieldSemantics: 'DataSubjectID';
  city          @PersonalData.IsPotentiallyPersonal;
  postCode      @PersonalData.IsPotentiallyPersonal;
  streetAddress @PersonalData.IsPotentiallyPersonal;
};

annotate my.Incidents with @PersonalData : {
  EntitySemantics : 'Other'
} {
  customer @PersonalData.FieldSemantics: 'DataSubjectID';
};
```

```csv [data/sap.capire.incidents-Customers.csv]
ID,firstName,lastName,email,phone,dateOfBirth
1004155,Daniel,Watts,daniel.watts@demo.com,+44-555-123,1996-01-01
1004161,Stormy,Weathers,stormy.weathers@demo.com,,1981-01-01
1004100,Sunny,Sunshine,sunny.sunshine@demo.com,+01-555-789,1965-01-01
```

:::

## @PersonalData...

Let's break down the annotations to identify personal data, shown in the sample above. These annotations fall into three categories:

- **Entity-level annotations** signify relevant entities as *Data Subjects*, *Data Subject Details*, or *Related Data Objects* in data privacy terms, as depicted in the graphic below.
- **Key-level annotations** signify object primary keys, as well as references to data subjects (which have to be present on each object).
- **Field-level annotations** identify elements containing personal data.

Learn more about these annotations in the [@PersonalData OData vocabulary](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/PersonalData.md). {.learn-more}

### .EntitySemantics {.annotation}

The entity-level annotation `@PersonalData.EntitySemantics` signifies relevant entities as *Data Subject*, *Data Subject Details*, or *Other* in data privacy terms, as depicted in the following graphic.

![Shows the connections between the entities in the sample app. In addition via color coding it makes clear how entities are annotated: customers are data subject, addresses are data subject details and incidents are other.](./assets/Data-Subjects.drawio.svg){}

The following table provides some further details.

Annotation | Description
--------------------- | -------------
`DataSubject` | The entities of this set describe a data subject (an identified or identifiable natural person), for example, Customer or Vendor.
`DataSubjectDetails` | The entities of this set contain details of a data subject (an identified or identifiable natural person) but do not by themselves identify/describe a data subject, for example, Addresses.
`Other` | Entities containing personal data or references to data subjects, but not representing data subjects or data subject details by themselves. For example, customer quote, customer order, or purchase order with involved business partners. These entities are relevant for audit logging. There are no restrictions on their structure. The properties should be annotated suitably with `FieldSemantics`.

Hence, we annotate our model as follows:

```cds
annotate my.Customers with @PersonalData: {
  EntitySemantics: 'DataSubject' // [!code focus]
};
annotate my.Addresses with @PersonalData: {
  EntitySemantics: 'DataSubjectDetails' // [!code focus]
};
annotate my.Incidents with @PersonalData: {
  EntitySemantics: 'Other' // [!code focus]
};
```

### .DataSubjectRole {.annotation}

Can be added to entities annotated with `@PersonalData.EntitySemantics: 'DataSubject'`. It's a user-chosen string specifying the role name to use. If omitted, the default is the entity name. The use case is similar to providing user-friendly labels for the UI, although in this case there's no i18n.

In our model, we can add the `DataSubjectRole` as follows:

```cds
annotate my.Customers with @PersonalData: {
  EntitySemantics: 'DataSubject',
  DataSubjectRole: 'Customer' // [!code focus]
};
```

### .FieldSemantics: DataSubjectID {.annotation}

Use this annotation to identify the data subject's unique key, or a reference to it. References are commonly associations or foreign keys in subject details entities, or related ones, referring to a subject entity.

- Each `@PersonalData` entity needs to identify the `DataSubjectID` element.
- For entities with `DataSubject` semantics, this is typically the primary key.
- For entities with `DataSubjectDetails` or `Other` semantics, this is usually an association to the data subject.
Hence, we annotate our model as follows:

```cds
annotate my.Customers with {
  ID @PersonalData.FieldSemantics: 'DataSubjectID' // [!code focus]
};
annotate my.Addresses with {
  customer @PersonalData.FieldSemantics: 'DataSubjectID' // [!code focus]
};
annotate my.Incidents with {
  customer @PersonalData.FieldSemantics: 'DataSubjectID' // [!code focus]
};
```

### .IsPotentiallyPersonal {.annotation}

`@PersonalData.IsPotentiallyPersonal` tags which fields are personal and, for example, require audit logs if modified.

```cds
annotate my.Customers with {
  firstName @PersonalData.IsPotentiallyPersonal; // [!code focus]
  lastName  @PersonalData.IsPotentiallyPersonal; // [!code focus]
  email     @PersonalData.IsPotentiallyPersonal; // [!code focus]
  phone     @PersonalData.IsPotentiallyPersonal; // [!code focus]
};
```

### .IsPotentiallySensitive {.annotation}

`@PersonalData.IsPotentiallySensitive` tags which fields are sensitive and, for example, require audit logs in case of access.

```cds
annotate my.Customers with {
  creditCardNo @PersonalData.IsPotentiallySensitive; // [!code focus]
};
```

## Next Steps...

Having annotated your data model with `@PersonalData` annotations, you can now go on to the respective tasks that leverage these annotations to automate as much as possible:

- [*Automated Audit Logging*](audit-logging)
- [*Personal Data Management*](pdm)
- [*Data Retention Management*](drm)

# Audit Logging

The [`@cap-js/audit-logging`](https://www.npmjs.com/package/@cap-js/audit-logging) plugin provides out-of-the-box support for automatic audit logging of data privacy-related events, in particular changes to *personal data* and reads of *sensitive* data. Find here a step-by-step guide on how to use it.

:::warning
_The following is mainly written from a Node.js perspective. For Java's perspective, please see [Java - Audit Logging](../../java/auditlog)._
:::

## Annotate Personal Data

First identify entities and elements (potentially) holding personal data using `@PersonalData` annotations, as explained in detail in the [*Annotating Personal Data* chapter](annotations) of these guides.

> We keep using the [Incidents Management reference sample app](https://github.com/cap-js/incidents-app).

## Add the Plugin { #setup }

To enable automatic audit logging, simply add the [`@cap-js/audit-logging`](https://www.npmjs.com/package/@cap-js/audit-logging) plugin package to your project like so:

```sh
npm add @cap-js/audit-logging
```

::: details Behind the Scenes…

[CDS Plugin Packages](../../node.js/cds-plugins) are self-contained extensions. They not only include the relevant code but also bring their own default configuration. In our case, next to bringing the respective code, the plugin does the following:

1. Sets `cds.requires.audit-log: true`
2. Which in turn activates the effective `audit-log` configuration via **presets**:

```jsonc
{
  "audit-log": {
    "handle": ["READ", "WRITE"],
    "outbox": true,
    "[development]": { "kind": "audit-log-to-console" },
    "[hybrid]": { "kind": "audit-log-to-restv2" },
    "[production]": { "kind": "audit-log-to-restv2" }
  },
  "kinds": {
    "audit-log-to-console": {
      "impl": "@cap-js/audit-logging/srv/log2console"
    },
    "audit-log-to-restv2": {
      "impl": "@cap-js/audit-logging/srv/log2restv2",
      "vcap": { "label": "auditlog" }
    }
  }
}
```

**The individual configuration options are:**

- `impl` — the service implementation to use
- `outbox` — whether to use transactional outbox or not
- `handle` — which events (`READ` and/or `WRITE`) to intercept and generate log messages from

**The preset uses profile-specific configurations** for (hybrid) development and production.
Use the `cds env` command to find out the effective configuration for your current environment:

::: code-group

```sh [w/o profile]
cds env requires.audit-log
```

```sh [production profile]
cds env requires.audit-log --profile production
```

:::

:::

## Test-drive Locally

The previous step is all we need to do to automatically log personal data-related events. Let's see that in action…

1. **Start the server** as usual:

   ```sh
   cds watch
   ```

2. **Send an update** request that changes personal data:

   ::: code-group

   ```http [test/audit-logging.http]
   PATCH http://localhost:4004/admin/Customers(2b87f6ca-28a2-41d6-8c69-ccf16aa6389d) HTTP/1.1
   Authorization: Basic alice:in-wonderland
   Content-Type: application/json

   {
     "firstName": "Jane",
     "lastName": "Doe"
   }
   ```

   :::

   [Find more sample requests in the Incident Management sample.](https://github.com/cap-js/incidents-app/blob/attachments/test/audit-logging.http){.learn-more}

3. **See the audit logs** in the server's console output:

   ```js
   {
     data_subject: {
       id: { ID: '2b87f6ca-28a2-41d6-8c69-ccf16aa6389d' },
       role: 'Customer',
       type: 'AdminService.Customers'
     },
     object: {
       type: 'AdminService.Customers',
       id: { ID: '2b87f6ca-28a2-41d6-8c69-ccf16aa6389d' }
     },
     attributes: [
       { name: 'firstName', old: 'Sunny', new: 'Jane' },
       { name: 'lastName', old: 'Sunshine', new: 'Doe' }
     ],
     uuid: '5cddbc91-8edf-4ba2-989b-87869d94070d',
     tenant: 't1',
     user: 'alice',
     time: 2024-02-08T09:21:45.021Z
   }
   ```

## Use SAP Audit Log Service

While we simply dumped audit log messages to stdout in local development, we'll be using the SAP Audit Log Service on SAP BTP in production. Following is a brief description of the necessary steps for setting this up. A more comprehensive guide, incl. tutorials, is currently under development.

### Setup Instance and Deploy App

For deployment in general, please follow the [deployment guide](../deployment/). Check the rest of this guide before actually triggering the deployment (that is, executing `cf deploy`).
Here's what you need to do additionally to integrate with SAP Audit Log Service:

1. In your space, create a service instance of the _SAP Audit Log Service_ (`auditlog`) service with plan `premium`.
2. Add the service instance as an _existing resource_ to your `mta.yml` and bind it to your application in its _requires_ section. Existing resources are defined like this:

   ```yml
   resources:
     - name: my-auditlog-service
       type: org.cloudfoundry.existing-service
   ```

[Learn more about *Audit Log Write API for Customers*](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-write-api-for-customers?version=Cloud){.learn-more}

### Accessing Audit Logs

There are two options to access audit logs:

1. Create an instance of service `auditlog-management` to retrieve audit logs via REST API, see [Audit Log Retrieval API Usage for the Cloud Foundry Environment](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment).
2. Use the SAP Audit Log Viewer, see [Audit Log Viewer for the Cloud Foundry Environment](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-viewer-for-cloud-foundry-environment).

## Generic Audit Logging

### Behind the Scenes...

For all [defined services](../providing-services#service-definitions), the generic audit logging implementation does the following:

- Intercept all write operations potentially involving personal data.
- Intercept all read operations potentially involving sensitive data.
- Determine the affected fields containing personal data, if any.
- Construct log messages, and send them to the connected audit log service.
- Send all emitted log messages through the [transactional outbox](#transactional-outbox).
- Apply resiliency mechanisms like retry with exponential backoff, and more.
## Custom Audit Logging { #custom-audit-logging }

In addition to the generic audit logging provided out of the box, applications can also log custom events with custom data using the programmatic API.

Connecting to the service:

```js
const audit = await cds.connect.to('audit-log')
```

Sending log messages:

```js
await audit.log('Foo', { bar: 'baz' })
```

::: tip Audit Logging as Just Another CAP Service
The Audit Log Service API is implemented as a CAP service, with the service API defined in CDS as shown in the next section. In effect, the common patterns of [*CAP Service Consumption*](../using-services) apply, as well as all the usual benefits like *mocking*, *late-cut µ services*, *resilience*, and *extensibility*.
:::

### Service Definition

Below is the complete reference modeling as contained in `@cap-js/audit-logging`. The individual operations and events are briefly discussed in the following sections. The service definition declares the generic `log` operation, which is used for all kinds of events, as well as the common type `LogEntry`, which declares the common fields of all log messages. These fields are filled in automatically by the base service, and any values provided by the caller are ignored.

Further, the service has pre-defined event payloads for the four event types:

1. _Log read access to sensitive personal data_
1. _Log changes to personal data_
1. _Security event log_
1. _Configuration change log_

These payloads are based on [SAP Audit Log Service's REST API](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-write-api-for-customers), which maximizes performance by omitting any intermediate data structures.
```cds
namespace sap.auditlog;

service AuditLogService {

  action log(event : String, data : LogEntry);

  event SensitiveDataRead : LogEntry {
    data_subject : DataSubject;
    object       : DataObject;
    attributes   : many { name : String; };
    attachments  : many { id : String; name : String; };
    channel      : String;
  };

  event PersonalDataModified : LogEntry {
    data_subject : DataSubject;
    object       : DataObject;
    attributes   : many Modification;
    success      : Boolean default true;
  };

  event ConfigurationModified : LogEntry {
    object     : DataObject;
    attributes : many Modification;
  };

  event SecurityEvent : LogEntry {
    data : {};
    ip   : String;
  };

}

/** Common fields, filled in automatically */
type LogEntry {
  uuid   : UUID;
  tenant : String;
  user   : String;
  time   : Timestamp;
}

type DataObject {
  type : String;
  id   : {};
}

type DataSubject : DataObject {
  role : String;
}

type Modification {
  name : String;
  old  : String;
  new  : String;
}
```

### Sensitive Data Read

```cds
event SensitiveDataRead : LogEntry {
  data_subject : DataSubject;
  object       : DataObject;
  attributes   : many { name : String; };
  attachments  : many { id : String; name : String; };
  channel      : String;
}

type DataObject {
  type : String;
  id   : {};
}

type DataSubject : DataObject {
  role : String;
}
```

Send `SensitiveDataRead` event log messages like this:

```js
await audit.log('SensitiveDataRead', {
  data_subject: {
    type: 'sap.capire.bookshop.Customers',
    id: { ID: '1923bd11-b1d6-47b6-a91b-732e755fa976' },
    role: 'Customer',
  },
  object: {
    type: 'sap.capire.bookshop.BillingData',
    id: { ID: '399a2704-3d2d-4fa1-9e7d-a4e45c67749b' }
  },
  attributes: [
    { name: 'creditCardNo' }
  ]
})
```

### Personal Data Modified

```cds
event PersonalDataModified : LogEntry {
  data_subject : DataSubject;
  object       : DataObject;
  attributes   : many Modification;
  success      : Boolean default true;
}

type Modification {
  name : String;
  old  : String;
  new  : String;
}
```

Send `PersonalDataModified` event log messages like this:

```js
await audit.log('PersonalDataModified', {
  data_subject: {
    type: 'sap.capire.bookshop.Customers',
    id: { ID: '1923bd11-b1d6-47b6-a91b-732e755fa976' },
    role: 'Customer',
  },
  object: {
    type: 'sap.capire.bookshop.Customers',
    id: { ID: '1923bd11-b1d6-47b6-a91b-732e755fa976' }
  },
  attributes: [
    { name: 'emailAddress', old: 'foo@example.com', new: 'bar@example.com' }
  ]
})
```

### Configuration Modified

```cds
event ConfigurationModified : LogEntry {
  object     : DataObject;
  attributes : many Modification;
}
```

Send `ConfigurationModified` event log messages like this:

```js
await audit.log('ConfigurationModified', {
  object: {
    type: 'sap.common.Currencies',
    id: { ID: 'f79ba248-c348-4962-9fef-680c3b88807c' }
  },
  attributes: [
    { name: 'symbol', old: 'EUR', new: '€' }
  ]
})
```

### Security Events

```cds
event SecurityEvent : LogEntry {
  data : {};
  ip   : String;
}
```

Send `SecurityEvent` log messages like this:

```js
await audit.log('SecurityEvent', {
  data: {
    user: 'alice',
    action: 'Attempt to access restricted service "PDMService" with insufficient authority'
  },
  ip: '127.0.0.1'
})
```

> In the SAP Audit Log Service REST API, `data` is a String. For ease of use, the default implementation stringifies `data`, if it is provided as an object. [Custom implementations](#custom-implementation) should also handle both.
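The stringification mentioned in the note can be sketched as a plain helper (a hypothetical `normalizeSecurityEventData` function, not part of the plugin's API):

```javascript
// Hypothetical helper sketching the behavior described above:
// strings pass through unchanged, objects get stringified.
function normalizeSecurityEventData (data) {
  return typeof data === 'string' ? data : JSON.stringify(data)
}

console.log(normalizeSecurityEventData({ user: 'alice', ip: '127.0.0.1' }))
// prints: {"user":"alice","ip":"127.0.0.1"}
```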
## Custom Implementation { #custom-implementation }

In addition, you can provide your own implementation in the same way as we implement the mock variant:

```js
const { AuditLogService } = require('@cap-js/audit-logging')

class MyAuditLogService extends AuditLogService {
  async init() {
    this.on('*', function (req) { // [!code focus]
      const { event, data } = req
      console.log(`[my-audit-log] - ${event}:`, data)
    })
    return super.init()
  }
}

module.exports = MyAuditLogService
```

As always, custom implementations need to be configured in `cds.requires.<>.impl`:

```json
{
  "cds": {
    "requires": {
      "audit-log": {
        "impl": "lib/MyAuditLogService.js"
      }
    }
  }
}
```

## Transactional Outbox { #transactional-outbox }

By default, all log messages are sent through a transactional outbox. This means, when sent, log messages are first stored in a local outbox table, which acts like a queue for outbound messages. Only when requests are fully and successfully processed, these messages are forwarded to the audit log service.

![This graphic is explained in the accompanying text.](./assets/Transactional-Outbox.drawio.svg)

This provides an ultimate level of resiliency, plus additional benefits:

- **Audit log messages are guaranteed to be delivered** — even if the audit log service should be down for a longer time period.
- **Asynchronous delivery of log messages** — the main thread doesn't wait for requests being sent and successfully processed by the audit log service.
- **False log messages are avoided** — messages are forwarded to the audit log service on successfully committed requests, and skipped in case of rollbacks.

This transparently applies to all implementations, even [custom implementations](#custom-implementation). You can opt out of this default by configuring `cds.requires.audit-log.[development].outbox = false`.

# Personal Data Management

Use the SAP Personal Data Manager (PDM) with a CAP application.

:::warning
To follow this cookbook hands-on, you need an enterprise account. The SAP Personal Data Manager service is currently only available for [enterprise accounts](https://discovery-center.cloud.sap/missiondetail/3019/3297/). An entitlement in trial accounts is not possible.
:::

SAP BTP provides the [*SAP Personal Data Manager (PDM)*](https://help.sap.com/docs/PERSONAL_DATA_MANAGER), which allows administrators to respond to the question "What data of me do you have?". To answer this question, the PDM service needs to fetch all personal data using an OData endpoint. That endpoint has to be provided by the application as follows.

## Annotate Personal Data

First identify entities and elements (potentially) holding personal data using `@PersonalData` annotations, as explained in detail in the [*Annotating Personal Data* chapter](annotations) of these guides.

> We keep using the [Incidents Management reference sample app](https://github.com/cap-js/incidents-app).

## Provide a Service Interface to SAP Personal Data Manager

SAP Personal Data Manager needs to call into your application to read personal data, so you have to define a respective service endpoint, complying with the interface required by SAP Personal Data Manager. Following the CAP principles, we recommend adding a new dedicated CAP service that handles all the personal data manager requirements for you. This keeps the rest of your data model clean and enables reuse, just as CAP promotes it.
### CAP Service Model for SAP Personal Data Manager

Following the [best practice of separation of concerns](../domain-modeling#separation-of-concerns), we create a dedicated service for the integration with SAP Personal Data Manager:

::: code-group

```cds [srv/pdm-service.cds]
using { sap.capire.incidents as db } from '../db/schema';

@requires: 'PersonalDataManagerUser' // security check
service PDMService @(path: '/pdm') {

  // Data Privacy annotations on 'Customers' and 'Addresses' are derived from original entity definitions
  entity Customers as projection on db.Customers;
  entity Addresses as projection on db.Addresses;
  entity Incidents as projection on db.Incidents;

  // create view on Incidents and Conversations as flat projection
  entity IncidentConversationView as
    select from Incidents {
          ID,
          title,
          urgency,
          status,
      key conversation.ID        as conversation_ID,
          conversation.timestamp as conversation_timestamp,
          conversation.author    as conversation_author,
          conversation.message   as conversation_message,
          customer.ID            as customer_ID,
          customer.email         as customer_email
    };

  // annotate new view
  annotate PDMService.IncidentConversationView with @(PersonalData.EntitySemantics: 'Other') {
    customer_ID @PersonalData.FieldSemantics: 'DataSubjectID';
  };

  // annotations for Personal Data Manager - Search Fields
  annotate Customers with @(Communication.Contact: {
    n     : { surname: lastName, given: firstName },
    bday  : dateOfBirth,
    email : [{ type: #preferred, address: email }]
  });

};
```

:::

::: tip
Make sure to have [indicated all relevant entities and elements in your domain model](annotations).
:::

### Provide Flat Projections

As an additional step, you have to create flat projections on the additional business data, like transactional data. In our model, we have `Incidents` and `Conversations`, which are connected via a [composition](https://github.com/SAP-samples/cloud-cap-samples/blob/gdpr/orders/db/schema.cds).
Since SAP Personal Data Manager needs flat structures, we define the helper view `IncidentConversationView` to flatten the composition. We then have to add the data privacy-specific annotations to this new view as well. As transactional data, the `IncidentConversationView` is marked with entity semantics `Other`. In addition, it is important to tag the field that identifies the corresponding data subject, in our case: `customer_ID @PersonalData.FieldSemantics: 'DataSubjectID';`

### Annotating Search Fields

In addition, the most important search fields of the data subject have to be annotated with the `@Communication.Contact` annotation. To perform a valid search in the SAP Personal Data Manager application, you need _Surname_, _Given Name_, and _Email_, or the _Data Subject ID_. Details about this annotation can be found in the [Communication Vocabulary](https://github.com/SAP/odata-vocabularies/blob/main/vocabularies/Communication.md). As an alternative to the tuple _Surname_, _Given Name_, and _Email_, you can also use _Surname_, _Given Name_, and _Birthday_ (called `bday`), if available in your data model. Details can be found in the [SAP Personal Data Manager - Developer Guide](https://help.sap.com/docs/personal-data-manager/4adcd96ce00c4f1ba29ed11f646a5944/v4-annotations?q=Contact&locale=en-US).

### Restrict Access Using the `@requires` Annotation

To restrict access to this sensitive data, the `PDMService` is protected by the `@requires: 'PersonalDataManagerUser'` annotation. Calling the `PDMService` externally without the corresponding permission is forbidden. The Personal Data Manager service calls the `PDMService` with the needed role granted. This is configured in the _xs-security.json_ file, which is explained later.
[Learn more about security configuration and the SAP Personal Data Manager.](https://help.sap.com/docs/PERSONAL_DATA_MANAGER/620a3ea6aaf64610accdd05cca9e3de2/4ee5705b8ded43e68bde610223722971.html#loio8eb6d9f889594a2d98f478bd57412ceb){.learn-more}

At this point, you are done with your application. Let's set up the SAP Personal Data Manager and try it out.

## Connecting SAP Personal Data Manager

Next, we briefly detail the integration with SAP Personal Data Manager. A more comprehensive guide, including tutorials, is currently under development. For further details, see the [SAP Personal Data Manager Developer Guide](https://help.sap.com/docs/personal-data-manager/4adcd96ce00c4f1ba29ed11f646a5944/what-is-personal-data-manager).

### Activate Access Checks in _xs-security.json_

Because we protected the `PDMService`, we need to establish the security check properly. In particular, you need the _xs-security.json_ file to make the security check active. The following _xs-security.json_ is from our sample.

```json
{
  "xsappname": "incidents-mgmt",
  "tenant-mode": "shared",
  "scopes": [
    {
      "name": "$XSAPPNAME.PersonalDataManagerUser",
      "description": "Authority for Personal Data Manager",
      "grant-as-authority-to-apps": [
        "$XSSERVICENAME(pdm)"
      ]
    }
  ]
}
```

Here you define that your Personal Data Manager service instance, called `pdm`, is allowed to access your CAP application by granting the `PersonalDataManagerUser` role.

### Add `@sap/xssec` Library

To make the authentication work, you have to enable the security strategy by installing the `@sap/xssec` package:

```sh
npm install @sap/xssec
```

[Learn more about authorization in CAP using Node.js.](../../node.js/authentication#jwt){.learn-more}

### Build and Deploy Your Application

The Personal Data Manager can't connect to your application running locally. Therefore, you first need to deploy your application. In our sample, we added two manifest files using `cds add cf-manifest` and SAP HANA configuration using `cds add hana`.
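Before deploying, it can be worth a quick check that XSUAA-based authentication is active for production. In a Node.js project this typically surfaces in _package.json_ roughly as follows — a sketch; the exact shape generated by `cds add xsuaa` may differ:

```json
{
  "cds": {
    "requires": {
      "[production]": {
        "auth": "xsuaa"
      }
    }
  }
}
```

With this profile-based configuration, mock authentication stays in place for local development while XSUAA is used in the cloud.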
The general deployment is described in detail in [Deploy Using Manifest Files](../deployment/to-cf).

Make a production build:

```sh
cds build --production
```

Deploy your application:

```sh
cf create-service-push
```

### Subscribe to SAP Personal Data Manager Service

[Subscribe to the service](https://help.sap.com/docs/PERSONAL_DATA_MANAGER/620a3ea6aaf64610accdd05cca9e3de2/ef10215655a540b6ba1c02a96e118d66.html) from the _Service Marketplace_ in the SAP BTP cockpit.

![A screenshot of the tile in the cockpit for the SAP Personal Data Manager service.](assets/pdmCockpitCreate.png)

Follow the wizard to create your subscription.

### Create Role Collections

SAP Personal Data Manager comes with the following roles:

| Role Name | Role Template |
|-----------|---------------|
| PDM_Administrator | PDM_Administrator |
| PDM_CustomerServiceRepresentative | PDM_CustomerServiceRepresentative |
| PDM_OperatorsClerk | PDM_OperatorsClerk |

All of these roles have two different _Application Identifiers_.

::: tip
Application identifiers with **!b** are needed for the UI, and identifiers with **!t** are needed for executing the Postman collection.
:::

[Learn more about defining a role collection in SAP BTP cockpit](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/4b20383efab341f181becf0a947a5498.html){.learn-more}

### Create a Service Instance

You need a configuration file, like the following, to create a service instance for the Personal Data Manager.

`pdm-instance-config.json`

```json
{
  "xs-security": {
    "xsappname": "incidents-mgmt",
    "authorities": ["$ACCEPT_GRANTED_AUTHORITIES"]
  },
  "fullyQualifiedApplicationName": "incidents-mgmt",
  "appConsentServiceEnabled": true
}
```

Create a service instance using the SAP BTP cockpit or execute the following command:

```sh
cf create-service personal-data-manager-service standard incidents-mgmt-pdm -c ./pdm-instance-config.json
```

### Bind the Service Instance to Your Application
With both the application deployed and the SAP Personal Data Manager service set up, you can now bind the service instance of the Personal Data Manager to your application. Use the URL of your application in a configuration file, such as the following example, which you need when binding a service instance.

`pdm-binding-config.json`

```json
{
  "fullyQualifiedApplicationName": "incidents-mgmt",
  "fullyQualifiedModuleName": "incidents-mgmt-srv",
  "applicationTitle": "PDM Incidents",
  "applicationTitleKey": "PDM Incidents",
  "applicationURL": "https://incidents-mgmt-srv.cfapps.eu10.hana.ondemand.com/", // get the URL from the CF CLI command: cf apps
  "endPoints": [
    {
      "type": "odatav4",
      "serviceName": "pdm-service",
      "serviceTitle": "Incidents Management",
      "serviceTitleKey": "IncidentsManagement",
      "serviceURI": "pdm",
      "hasGdprV4Annotations": true,
      "cacheControl": "no-cache"
    }
  ]
}
```

Here the `applicationURL`, the `fullyQualifiedModuleName`, and the `serviceURI` have to be those of your Cloud Foundry deployment and your CAP service definition (_services-manifest.yaml_).

Bind the service instance using the SAP BTP cockpit or execute the following command:

```sh
cf bind-service incidents-mgmt-srv incidents-mgmt-pdm -c ./pdm-binding-config.json
```

## Using the SAP Personal Data Manager Application

Open the SAP Personal Data Manager application from the _Instances and Subscriptions_ page in the SAP BTP cockpit.

![To open the application, open the three dot menu and select "Go to Application".](assets/pdmCockpit.png)

In the Personal Data Manager application, you can search for data subjects with _First Name_, _Last Name_, and _Date of Birth_, or alternatively with their _ID_.

![A screenshot of the SAP Personal Data Manager application.](assets/pdmApplication.png)

# Deployment

Learn here about the deployment options for a CAP application.
# Deploy to Cloud Foundry

A comprehensive guide on deploying applications built with the SAP Cloud Application Programming Model (CAP) to the SAP BTP Cloud Foundry environment.

## Intro & Overview

After completing the functional implementation of your CAP application by following the [Getting Started](../../get-started/in-a-nutshell) or [Cookbook](../) guides, you deploy it to the cloud for production. The essential steps are illustrated in the following graphic:

![First prepare for production (once) and then freeze your dependencies (once and on upgrades). Next build and assemble and then deploy.](assets/deploy-setps.drawio.svg)

First, you apply these steps manually in an ad-hoc deployment, as described in this guide. Then, after successful deployment, you automate them using [CI/CD pipelines](cicd).

## Prerequisites

The following sections are based on a new project that you can create like this:
```sh cds init bookshop --add sample cd bookshop ``` ::: details Alternatively, download or clone the sample repository Exercise the following steps in the `bookshop` subfolder of the [`cloud-cap-samples`](https://github.com/sap-samples/cloud-cap-samples) repo: ```sh git clone https://github.com/sap-samples/cloud-cap-samples samples cd samples/bookshop ``` :::
```sh cds init bookshop --java --add sample cd bookshop ``` > If you want to use a ready-to-be-deployed sample, see our [java/samples](https://github.com/sap-samples/cloud-cap-samples-java). [Learn more about Setting Up Local Development.](../../java/getting-started#local){.learn-more}


In addition, you need to prepare the following: #### 1. SAP BTP with SAP HANA Cloud Database up and Running {#btp-and-hana} - Access to [SAP BTP, for example a trial](https://developers.sap.com/tutorials/hcp-create-trial-account.html) - An [SAP HANA Cloud database running](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-administration-guide/create-sap-hana-database-instance-using-sap-hana-cloud-central) in your subaccount - Entitlement for [`hdi-shared` service plan](https://help.sap.com/docs/hana-cloud/sap-hana-cloud-getting-started-guide/set-up-schema-or-hdi-container-cloud-foundry) for your subaccount - A [Cloud Foundry space](https://help.sap.com/docs/btp/sap-business-technology-platform/create-spaces?version=Cloud) ::: tip Starting the SAP HANA database takes several minutes Therefore, we recommend doing these steps early on. In trial accounts, you need to start the database **every day**. ::: #### 2. Latest Versions of `@sap/cds-dk` {#latest-cds} Ensure you have the latest version of `@sap/cds-dk` installed globally: ```sh npm -g outdated #> check whether @sap/cds-dk is listed npm i -g @sap/cds-dk #> if necessary ```
Likewise, ensure the latest version of `@sap/cds` is installed in your project: ```sh npm outdated #> check whether @sap/cds is listed npm i @sap/cds #> if necessary ```
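The `cds` CLI is only one of the tools used in this guide; the next steps also assume `mbt` and the `cf` CLI. If you prefer a single check, a small preflight script can report which of the required CLIs are on your `PATH` — a sketch, with the tool list assumed from this guide:

```sh
#!/bin/sh
# Report whether each required CLI is installed (tool names as used in this guide).
require_tool() {
  if command -v "$1" >/dev/null 2>&1
  then echo "ok: $1"
  else echo "missing: $1"
  fi
}

for tool in cds mbt cf; do
  require_tool "$tool"
done
```

Any `missing:` line points you to the corresponding installation step below.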
#### 3. Cloud MTA Build Tool {#mbt}

- Run `mbt` in a terminal to check whether you've installed it.
- If not, install it according to the [MTA Build Tool's documentation](https://sap.github.io/cloud-mta-build-tool/download).
- For macOS/Linux machines, it's best to install it using `npm`:
  ```sh
  npm i -g mbt
  ```
- For Windows, [please also install `GNU Make`](https://sap.github.io/cloud-mta-build-tool/makefile/).

#### 4. Cloud Foundry CLI w/ MTA Plugins {#cf-cli}

- Run `cf -v` in a terminal to check whether you've installed version **8** or higher.
- If not, install or update it according to the [Cloud Foundry CLI documentation](https://github.com/cloudfoundry/cli#downloads).
- In addition, ensure to have the [MTA plugin for the Cloud Foundry CLI](https://github.com/cloudfoundry-incubator/multiapps-cli-plugin/tree/master/README.md) installed.

```sh
cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org
cf install-plugin multiapps
cf install-plugin html5-plugin
```

## Prepare for Production {#prepare-for-production}

If you followed CAP's grow-as-you-go approach so far, you've developed your application with an in-memory database and basic/mock authentication. To prepare for production, you need to ensure the respective production-grade choices are configured:

![You need to add SAP HANA Cloud, an App Router and XSUAA.](assets/deploy-overview.drawio.svg)

We'll use the `cds add ` CLI command for that, which ensures the required services are configured correctly and corresponding package dependencies are added to your _package.json_.

### 1. Using SAP HANA Database
While we used SQLite as a low-cost stand-in during development, we're going to use a managed SAP HANA database for production:
While we used SQLite or H2 as a low-cost stand-in during development, we're going to use a managed SAP HANA database for production:
```sh
cds add hana --for production
```

[Learn more about using SAP HANA for production.](../databases-hana){.learn-more}

### 2. Using XSUAA-Based Authentication

Configure your app for XSUAA-based authentication:

```sh
cds add xsuaa --for production
```

::: tip This will also generate an `xs-security.json` file
The roles/scopes are derived from authorization-related annotations in your CDS models. Make sure to rerun `cds compile --to xsuaa`, as documented in the [_Authorization_ guide](/guides/security/authorization#xsuaa-configuration), whenever there are changes to these annotations.
:::

::: details For trial and extension landscapes, OAuth configuration is required
Add the following snippet to your _xs-security.json_ and adapt it to the landscape you're deploying to:

```json
"oauth2-configuration": {
  "redirect-uris": ["https://*.cfapps.us10-001.hana.ondemand.com/**"]
}
```
:::

[Learn more about SAP Authorization and Trust Management/XSUAA.](https://discovery-center.cloud.sap/serviceCatalog/authorization-and-trust-management-service?region=all){.learn-more}

### 3. Using MTA-Based Deployment { #add-mta-yaml}

We'll be using the [Cloud MTA Build Tool](https://sap.github.io/cloud-mta-build-tool/) to execute the deployment. The modules and services are configured in an `mta.yaml` deployment descriptor file, which we generate with:

```sh
cds add mta
```

[Learn more about MTA-based deployment.](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/d04fc0e2ad894545aebfd7126384307c.html?locale=en-US){.learn-more}

### 4. Using App Router as Gateway { #add-app-router}

The _App Router_ acts as a single point-of-entry gateway that routes requests to your services. In particular, it ensures user login and authentication in combination with XSUAA. Two deployment options are available:

- **Managed App Router**: for SAP Build Work Zone, the Managed App Router provided by SAP Fiori Launchpad is available.
See the [end-to-end tutorial](https://developers.sap.com/tutorials/integrate-with-work-zone.html) for the necessary configuration in `mta.yaml` and on each _SAP Fiori application_.
- **Custom App Router**: for scenarios without SAP Fiori Launchpad, the App Router needs to be deployed along with your application. Use the following command to enhance the application configuration:
  ```sh
  cds add approuter
  ```

[Learn more about the SAP BTP Application Router.](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/01c5f9ba7d6847aaaf069d153b981b51.html?locale=en-US){.learn-more}

### 5. User Interfaces { #add-ui }

#### SAP Cloud Portal

If you intend to deploy user interface applications, you also need to set up the [HTML5 Application Repository](https://discovery-center.cloud.sap/serviceCatalog/html5-application-repository-service) in combination with the [SAP Cloud Portal service](https://discovery-center.cloud.sap/serviceCatalog/cloud-portal-service):

```sh
cds add portal
```

#### SAP Build Work Zone, Standard Edition

For **single-tenant applications**, you can use [SAP Build Work Zone, Standard Edition](https://discovery-center.cloud.sap/serviceCatalog/sap-build-work-zone-standard-edition):

```sh
cds add workzone
```

### 6. Optional: Add Multitenancy { #add-multitenancy }

To enable multitenancy for production, run the following command:

```sh
cds add multitenancy --for production
```

> If necessary, this modifies deployment descriptors such as _mta.yaml_ for Cloud Foundry.

[Learn more about MTX services.](../multitenancy/#behind-the-scenes){.learn-more}

::: tip You're set!
The previous steps are required _only once_ in a project's lifetime. With that done, we can repeatedly deploy the application.
:::
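For orientation, the `mta.yaml` generated in step 3 typically has a shape like the following — heavily abridged, with illustrative module and resource names; the exact content depends on your project and the features you added:

```yaml
# Abridged sketch of a generated deployment descriptor (names illustrative)
_schema-version: '3.1'
ID: bookshop
version: 1.0.0
modules:
  - name: bookshop-srv            # the CAP service module
    type: nodejs
    path: gen/srv
    requires:
      - name: bookshop-db
      - name: bookshop-auth
  - name: bookshop-db-deployer    # deploys HDI content to SAP HANA
    type: hdb
    path: gen/db
    requires:
      - name: bookshop-db
resources:
  - name: bookshop-db             # HDI container on SAP HANA Cloud
    type: com.sap.xs.hdi-container
  - name: bookshop-auth           # XSUAA service instance
    type: org.cloudfoundry.managed-service
    parameters:
      service: xsuaa
      service-plan: application
```

The `requires`/`resources` wiring is what later makes Cloud Foundry bind the HDI container and XSUAA instance to your service module.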
### 7. Freeze Dependencies { #freeze-dependencies }
Deployed applications should freeze all their dependencies, including transitive ones. Create a _package-lock.json_ file for that:

```sh
npm update --package-lock-only
```
If you use multitenancy, also freeze dependencies for the MTX sidecar:

```sh
npm update --package-lock-only --prefix mtx/sidecar
```

In addition, you need to install and freeze dependencies for your UI applications:

```sh
npm i --prefix app/browse
npm i --prefix app/admin-books
```

[Learn more about dependency management for Node.js](../../node.js/best-practices#dependencies){.learn-more}

::: tip Regularly update your `package-lock.json` to consume latest versions and bug fixes
Do so by running this command again, for example, each time you deploy a new version of your application.
:::

## Build & Assemble { #build-mta }

### Build Deployables with `cds build`

Run `cds build` to generate additional deployment artifacts and prepare everything for production in a local `./gen` folder as a staging area. While `cds build` is included in the next step `mbt build`, you can also run it selectively as a test, and to inspect what is generated:

```sh
cds build --production
```

[Learn more about running and customizing `cds build`.](custom-builds){.learn-more}

### Assemble with `mbt build`

::: info Prepare monorepo setups
The CAP samples repository on GitHub has a more advanced (monorepo) structure, so tell the `mbt` tool to find the `package-lock.json` on top level:

```sh
ln -sf ../package-lock.json
```
:::

Now, we use the `mbt` build tool to assemble everything into a single `mta.tar` archive:

```sh
mbt build -t gen --mtar mta.tar
```

[Got errors?
See the troubleshooting guide.](../../get-started/troubleshooting#mta){.learn-more}
[Learn how to reduce the MTA archive size during development.](../../get-started/troubleshooting#reduce-mta-size){.learn-more}

## Deploy to Cloud {#deploy}

Finally, we can deploy the generated archive to Cloud Foundry:

```sh
cf deploy gen/mta.tar
```

[You need to be logged in to Cloud Foundry.](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/7a37d66c2e7d401db4980db0cd74aa6b.html){.learn-more}

This process can take some minutes and finally creates log output like this:

```log
[…]
Application "bookshop" started and available at "[org]-[space]-bookshop.landscape-domain.com"
[…]
```

Copy and open this URL in your web browser. It's the URL of your App Router application.

::: tip For multitenant applications, you have to subscribe a tenant first
In this case, the application is accessible via a tenant-specific URL after onboarding.
:::

### Inspect Apps in BTP Cockpit

Visit the "Applications" section in your [SAP BTP cockpit](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/144e1733d0d64d58a7176e817fa6aeb3.html) to see the deployed apps:

![The screenshot shows the SAP BTP cockpit, when a user navigates to their dev space in the trial account and looks at all deployed applications.](./assets/apps-cockpit.png){.mute-dark}

::: tip Assign the _admin_ role
We didn't do the _admin_ role assignment for the `AdminService`. You need to create a role collection and [assign the role and your user](https://developers.sap.com/tutorials/btp-app-role-assignment.html) to get access.
:::

[Got errors? See the troubleshooting guide.](../../get-started/troubleshooting#cflogs-recent){.learn-more}

### Upgrade Tenants {.java}

The CAP Java SDK offers `main` methods for Subscribe/Unsubscribe in the classes `com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe` that can be called from the command line.
This way, you can run the subscribe/unsubscribe procedure for a specified tenant. This also triggers your custom handlers, which is useful for local testing scenarios. In order to register all handlers of the application properly during the execution of a tenant operation `main` method, the component scan package must be configured: set the property `cds.multitenancy.component-scan` to the package name of your application. The handler registration provides additional information that is used for the tenant subscription, for example, messaging subscriptions that are created.

::: warning
You can stop the CAP Java back end when you call this method, but the _MTX Sidecar_ application must be running!
:::

This synchronization can also be automated, for example using [Cloud Foundry Tasks](https://docs.cloudfoundry.org/devguide/using-tasks.html) on SAP BTP and [Module Hooks](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/b9245ba90aa14681a416065df8e8c593.html) in your MTA.

The `main` method optionally takes the tenant ID (string) as the first input argument and tenant options (JSON string) as the second input argument. Alternatively, you can use the environment variables `MTCOMMAND_TENANTS` and `MTCOMMAND_OPTIONS` instead of arguments. The command-line arguments have higher priority, so you can use them to override the environment variables.

The method returns the following exit codes:

| Exit Code | Result |
| --------- | ------ |
| 0 | Tenant subscribed/unsubscribed successfully. |
| 3 | Failed to subscribe/unsubscribe the tenant. Rerun the procedure to make sure the tenant is subscribed/unsubscribed. |

To run this method locally, use the following command where `` is one of your applications:

::: code-group
```sh [>= Spring Boot 3.2.0]
java -cp -Dloader.main=com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe org.springframework.boot.loader.launch.PropertiesLauncher []
```
```sh [< Spring Boot 3.2.0]
java -cp -Dloader.main=com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe org.springframework.boot.loader.PropertiesLauncher []
```
:::

In the SAP BTP, Cloud Foundry environment, it can be tricky to construct such a command. The reason is that the JAR file is extracted by the Java buildpack and the location of the Java executable isn't easy to determine. Also, the location differs between Java versions. Therefore, we recommend adapting the start command that is generated by the buildpack and running the adapted command:

::: code-group
```sh [>= Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.launch.JarLauncher/org.springframework.boot.loader.launch.PropertiesLauncher/g' /home/vcap/staging_info.yml && sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | sed 's/^/ MTCOMMAND_TENANTS=my-tenant [MTCOMMAND_TENANTS=]/' | bash
```
```sh [< Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.JarLauncher/org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | sed 's/^/ MTCOMMAND_TENANTS=my-tenant [MTCOMMAND_TENANTS=]/' | bash
```
```sh [Java 8]
sed -i 's/org.springframework.boot.loader.JarLauncher/-Dloader.main=com.sap.cds.framework.spring.utils.Subscribe/Unsubscribe
org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | sed 's/^/ MTCOMMAND_TENANTS=my-tenant [MTCOMMAND_TENANTS=]/' | bash
```
:::

---

# Appendices

## Deploy using `cf push`

As an alternative to MTA-based deployment, you can choose Cloud Foundry-native deployment using [`cf push`](https://docs.cloudfoundry.org/devguide/push.html), or `cf create-service-push` respectively.

### Prerequisites

Install the [_Create-Service-Push_ plugin](https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin):

```sh
cf install-plugin Create-Service-Push
```

This plugin acts the same way as `cf push`, but extends it such that services are _created_ first. With the plain `cf push` command, this is not possible.

### Add a `manifest.yml` {#add-manifest}

```sh
cds add cf-manifest
```

This creates two files, a _manifest.yml_ and a _services-manifest.yml_, in the project root folder.

- _manifest.yml_ holds the applications. In the default layout, one application is the actual server holding the service implementations, and the other one is a 'DB deployer' application, whose sole purpose is to start the SAP HANA deployment.
- _services-manifest.yml_ defines which Cloud Foundry services shall be created. The services are derived from the service bindings in _package.json_ using the [`cds.requires` configuration](../../node.js/cds-env#services).

::: tip Version-control manifest files
Unlike the files in the _gen_ folders, these manifest files are genuine sources and should be added to the version control system. This way, you can adjust them to your needs as you evolve your application.
:::

### Build the Project

This prepares everything for deployment, and -- by default -- writes the build output, that is the deployment artifacts, to the folder _./gen_ in your project root.
```sh cds build --production ``` [Learn how `cds build` can be configured.](custom-builds#build-config){.learn-more} The `--production` parameter ensures that the cloud deployment-related artifacts are created by `cds build`. See section [SAP HANA database deployment](../databases-hana) for more details. ### Push the Application { #push-the-application} This command creates service instances, pushes the applications and binds the services to the application with a single call: ```sh cf create-service-push ``` During deployment, the plugin reads the _services-manifest.yml_ file and creates the services listed there. It then reads _manifest.yml_, pushes the applications defined there, and binds these applications to service instances created before. If the service instances already exist, only the `cf push` operation will be executed. You can also apply some shortcuts: - Use `cf push` directly to deploy either all applications, or `cf push ` to deploy a single application. - Use `cf create-service-push --no-push` to only create or update service-related data without pushing the applications. In the deployment log, find the application URL in the `routes` line at the end: ```log{3} name: bookshop-srv requested state: started routes: bookshop-srv.cfapps.sap.hana.ondemand.com ``` Open this URL in the browser and try out the provided links, for example, `…/browse/Books`. Application data is fetched from SAP HANA. ::: tip Ensure successful SAP HANA deployment Check the deployment logs of the database deployer application using ```sh cf logs -db-deployer --recent ``` to ensure that SAP HANA deployment was successful. The application itself is by default in state `started` after HDI deployment has finished, even if the HDI deployer returned an error. To save resources, you can explicitly stop the deployer application afterwards. 
:::

::: tip No Fiori preview in the cloud
The [SAP Fiori Preview](../../advanced/fiori#sap-fiori-preview), which you are used to seeing in local development, is only available with the development profile and not available in the cloud. For productive applications, you should add a proper SAP Fiori application.
:::

::: warning
Multitenant applications are not supported yet, as multitenancy-related settings are not added to the generated descriptors. The data has to be entered manually.
:::

[Got errors? See the troubleshooting guide.](../../get-started/troubleshooting#aborted-deployment-with-the-create-service-push-plugin){.learn-more}

# Deploy to Kyma Runtime

You can run your CAP application in the [Kyma Runtime](https://discovery-center.cloud.sap/serviceCatalog/kyma-runtime?region=all). This runtime of the SAP Business Technology Platform is the SAP-managed offering for the [Kyma project](https://kyma-project.io/). This guide helps you to run your CAP applications on SAP BTP Kyma Runtime.

## Overview

Like Kubernetes, Kyma is a platform for running containerized workloads. The service's files are provided as a container image, commonly referred to as a Docker image. In addition, the containers to be run on Kubernetes, their configuration, and everything else that is needed to run them, are described by Kubernetes resources. Consequently, two kinds of artifacts are needed to run applications on Kubernetes:

1. Container images
2. Kubernetes resources

The following diagram shows the steps to run on the SAP BTP Kyma Runtime:

![A CAP Helm chart is added to your project. Then you build your project as container images and push those images to a container registry of your choice. As a last step, the Helm chart is deployed to your Kyma resources, where service instances of SAP BTP services are created and pods pull the previously created container images from the container registry.](assets/deploy-kyma.drawio.svg)

1. [**Add** a Helm chart](#cds-add-helm)
2.
[**Build** container images](#build-images)
3. [**Deploy** your application by applying Kubernetes resources](#deploy-helm-chart)

## Prerequisites {#prerequisites}

+ You prepared your project as described in the [Deploy to Cloud Foundry](to-cf) guide.
+ Use a Kyma-enabled [Trial Account](https://account.hanatrial.ondemand.com/) or [learn how to get access to a Kyma cluster](#get-access-to-a-cluster).
+ You need a [Container Image Registry](#get-access-to-a-container-registry).
+ Get the required SAP BTP service entitlements.
+ Download and install the following command line tools:
  + [`kubectl` command line client](https://kubernetes.io/docs/tasks/tools/) for Kubernetes
  + [Docker Desktop or Docker for Linux](https://docs.docker.com/get-docker/)
  + [`pack` command line tool](https://buildpacks.io/docs/tools/pack/)
  + [`helm` command line tool](https://helm.sh/docs/intro/install/)
  + [`ctz` command line tool](https://www.npmjs.com/package/ctz)

::: warning
Make yourself familiar with Kyma and Kubernetes. CAP doesn't provide consulting on it.
:::

## Prepare for Production

The detailed procedure is described in the [Deploy to Cloud Foundry guide](to-cf#prepare-for-production). Run this command to fast-forward:

```sh
cds add hana,xsuaa --for production
```

## Add Helm Chart {#cds-add-helm}

CAP provides a configurable [Helm chart](https://helm.sh/) for Node.js and Java applications.

```sh
cds add helm
```

This command adds the Helm chart to the _chart_ folder of your project with three files: `values.yaml`, `Chart.yaml`, and `values.schema.json`. During `cds build`, the _gen/chart_ folder is generated. This folder contains all files required to deploy the Helm chart. Files from the _chart_ folder in the root of the project are copied to the folder generated in the _gen_ folder. The files in the _gen/chart_ folder support the deployment of your CAP service, database and UI content, and the creation of instances for BTP services.
[Learn more about the CAP Helm chart.](#about-cap-helm){.learn-more}

## Build Images {#build-images}

We'll be using the [Containerize Build Tool](https://www.npmjs.com/package/ctz/) to build the images. The modules are configured in a `containerize.yaml` descriptor file, which we generate with:

```sh
cds add containerize
```

#### Configure Image Repository

Specify the repository where you want to push the images:

```yaml
...
repository:
```

::: warning
You need to be logged in to this repository to be able to push images to it. You can use `docker login -u ` to log in.
:::

Now, we use the `ctz` build tool to build all the images:

```sh
ctz containerize.yaml
```

> This will start containerizing your modules based on the configuration in the specified file. After it is done, it asks whether you want to push the images. Type `y` and press enter to push your images. You can also use the above command with the `--push` flag to skip this prompt. If you want more logs, you can use the `--log` flag.

[Learn more about the Containerize Build Tool.](https://www.npmjs.com/package/ctz/){.learn-more}

### UI Deployment

For UI access, you can use either the standalone or the managed App Router, as explained in [this blog](https://blogs.sap.com/2021/12/09/using-sap-application-router-with-kyma-runtime/). The `cds add helm` command [supports deployment](#html5-applications) to the [HTML5 application repository](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/f8520f572a6445a7bfaff4a1bbcbe60a.html?locale=en-US&version=Cloud), which can be used with both options. For that, create a container image with your UI files configured with the [HTML5 application deployer](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/9b178ab3388c4647b0c52f2c85641844.html). The `cds add helm` command also supports deployment of a standalone approuter.
To configure backend destinations, have a look at the [approuter configuration section](#configure-approuter-specifications).

## Deploy Helm Chart {#deploy-helm-chart}

Once your Helm chart is created, your container images are uploaded to a registry, and your cluster is prepared, you're almost set for deploying your Kyma application.

### Create Service Instances for SAP HANA Cloud {#hana-cloud-instance}

1. Enable SAP HANA for your project as explained in the [CAP guide for SAP HANA](../databases-hana).
2. Create an SAP HANA database.
3. To create HDI containers from Kyma, you need to [create a mapping between your namespace and SAP HANA Cloud instance](https://blogs.sap.com/2022/12/15/consuming-sap-hana-cloud-from-the-kyma-environment/).

::: warning Set trusted source IP addresses
Make sure that your SAP HANA Cloud instance can be accessed from your Kyma cluster by [setting the trusted source IP addresses](https://help.sap.com/docs/HANA_CLOUD/9ae9104a46f74a6583ce5182e7fb20cb/0610e4440c7643b48d869a6376ccaecd.html).
:::

### Deploy using CAP Helm Chart

Before deployment, you need to set the container image and cluster-specific settings.

#### Configure Access to Your Container Images

Add your container image settings to your _chart/values.yaml_:

```yaml
...
global:
  domain: <your-cluster-domain>
  imagePullSecret:
    name: <your-pull-secret>
  image:
    registry: <your-container-registry>
    tag: latest
```

You can look up the pre-configured domain name of your Kyma cluster:

```sh
kubectl get gateway -n kyma-system kyma-gateway \
        -o jsonpath='{.spec.servers[0].hosts[0]}'
```

To use images on private container registries, you need to [create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). For the image registry, use the same value you specified in `containerize.yaml`.

#### Configure Approuter Specifications

By default, `srv-api` and `mtx-api` (the latter only in multitenant applications) are configured.
If you're using any other destination or your `xs-app.json` file has a different destination, update the destinations under the `backendDestinations` key in the _values.yaml_ file:

```yaml
backendDestinations:
  backend:
    service: srv
```

> `backend` is the name of the destination. `service` points to the deployment name whose URL will be used for this destination.

#### Deploy CAP Helm Chart

1. Execute `cds build --production` to generate the Helm chart in the _gen_ folder.
2. Deploy using the `helm` command:

   ```sh
   helm upgrade --install bookshop ./gen/chart \
        --namespace bookshop-namespace --create-namespace
   ```

   This installs the Helm chart from the _gen/chart_ folder with the release name `bookshop` in the namespace `bookshop-namespace`.

   ::: tip
   With the `helm upgrade --install` command you can install a new chart as well as upgrade an existing chart.
   :::

This process can take a few minutes to complete and creates log output like this:

```log
[…]
The release bookshop is installed in namespace [namespace].

Your services are available at:
  [workload] - https://bookshop-[workload]-[namespace].[configured-domain]
[…]
```

Copy and open this URL in your web browser. It's the URL of your application.

::: info
If a standalone approuter is present, the srv and sidecar workloads aren't exposed and only the approuter URL is logged. If no approuter is present, srv and sidecar are exposed and their URLs are logged as well.
:::

[Learn more about using a private registry with your Kyma cluster.](#setup-your-cluster-for-a-private-container-registry){.learn-more}
[Learn more about the CAP Helm chart settings](#configure-helm-chart){ .learn-more}
[Learn more about using `helm upgrade`](https://helm.sh/docs/helm/helm_upgrade){ .learn-more}

::: tip
Try out the [CAP SFLIGHT](https://github.com/SAP-samples/cap-sflight) and [CAP for Java](https://github.com/SAP-samples/cloud-cap-samples-java) examples on Kyma.
:::

## Customize Helm Chart {#customize-helm-chart}

### About CAP Helm Chart {#about-cap-helm}

The following files are added to a _chart_ folder by executing `cds add helm`:

| File/Pattern | Description |
| --------------------- | ---------------------------------------------------------- |
| _values.yaml_ | [Configuration](#configure-helm-chart) of the chart; the initial configuration is determined from your CAP project. |
| _Chart.yaml_ | Chart metadata that is initially determined from the _package.json_ file |
| _values.schema.json_ | JSON Schema for the _values.yaml_ file |

The following files are added to a _gen/chart_ folder, along with all the files in the _chart_ folder in the root of the project, by executing `cds build` after adding `helm`:

| File/Pattern | Description |
| --------------------- | ---------------------------------------------------------- |
| _templates/*.tpl_ | Template libraries used in the template resources |
| _templates/NOTES.txt_ | Message printed after installing or upgrading the Helm charts |
| _templates/*.yaml_ | Template files for the Kubernetes resources |

[Learn how to create a Helm chart from scratch from the Helm documentation.](https://helm.sh/docs){.learn-more}

### Configure {#configure-helm-chart}

[CAP's Helm chart](#cds-add-helm) can be configured using the settings explained below. Mandatory settings are marked with ✓.

You can change the configuration by editing the _chart/values.yaml_ file. When you call `cds add helm` again, your changes will be persisted and only missing default values are added.

The `helm` CLI also offers you other options to overwrite settings from the _chart/values.yaml_ file:

+ Overwrite properties using the `--set` parameter.
+ Overwrite properties from a YAML file using the `-f` parameter.

::: tip
It is recommended to do the main configuration in the _chart/values.yaml_ file and have additional YAML files for specific deployment types (dev, test, productive) and targets.
:::

#### Global Properties

| Property | Description | Mandatory |
| ---------------------- | ------------------------------------------------------------- | :-------: |
| imagePullSecret → name | Name of secret to access the container registry | (✓)¹ |
| domain | Kubernetes cluster ingress domain (used for application URLs) | ✓ |
| image → registry | Name of the container registry from where images are pulled | ✓ |

¹ Mandatory only for private Docker registries

#### Deployment Properties

The following properties are available for the `srv` key:

| Property | Description | Mandatory |
|------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------:|
| **bindings** | [Service Bindings](#configuration-options-for-service-bindings) | |
| **resources** | [Kubernetes Container resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) | |
| **env** | Map of additional env variables | |
| **health** | [Kubernetes Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) | |
| → liveness → path | Endpoint for liveness and startup probe | |
| → readiness → path | Endpoint for readiness probe | |
| → startupTimeout | Wait time in seconds until the health checks are started | |
| **image** | [Container image](#configuration-options-for-container-images) | |

You can explore more configuration options in the subchart's directory _gen/chart/charts/web-application_.

#### SAP BTP Services

The Helm chart supports creating service instances for commonly used services. Services are pre-populated in the _chart/values.yaml_ file based on the services used in the `requires` section of the CAP configuration (for example, _package.json_).
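For example, after running `cds add hana,xsuaa --for production` as shown earlier, your CAP configuration contains roughly the following `requires` section, from which `cds add helm` derives matching `hana`/`service-manager` and `xsuaa` entries in _chart/values.yaml_ (sketch; your actual configuration may carry more options):

```jsonc
{
  "cds": {
    "requires": {
      "[production]": {
        "db": { "kind": "hana" },     // leads to HDI/service-manager entries
        "auth": { "kind": "xsuaa" }   // leads to an xsuaa entry
      }
    }
  }
}
```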
You can use the following services in your configuration:

| Property | Description | Mandatory |
|----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|:---------:|
| **xsuaa** | Enables the creation of an XSUAA service instance. See details for [Node.js](../../node.js/authentication) and [Java](../../java/security) projects. | |
| parameters → xsappname | Name of the XSUAA application. Overwrites the value from the _xs-security.json_ file. (unique per subaccount) | |
| parameters → HTML5Runtime_enabled | Set to `true` for use with the Launchpad Service | |
| **connectivity** | Enables [on-premise connectivity](#connectivity-service) | |
| **event-mesh** | Enables SAP Event Mesh; [messaging guide](../messaging/), [how to enable the SAP Event Mesh](../messaging/event-mesh) | |
| **html5-apps-repo-host** | HTML5 Application Repository | |
| **hana** | HDI Shared Container | |
| **service-manager** | Service Manager Container | |
| **saas-registry** | SaaS Registry Service | |

[Learn how to configure services in your Helm chart](#configuration-options-for-services){.learn-more}

#### SAP HANA

The deployment job of your database content to an HDI container can be configured using the `hana-deployer` section with the following properties:

| Property | Description | Mandatory |
|---------------|------------------------------------------------------------------------------------------------------------------|:---------:|
| **bindings** | [Service binding](#configuration-options-for-service-bindings) to the HDI container's secret | |
| **image** | [Container image](#configuration-options-for-container-images) of the HDI deployer | |
| **resources** | [Kubernetes Container resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) | |
| **env** | Map of additional environment variables | |

#### HTML5 Applications

The
deployment job of HTML5 applications can be configured using the `html5-apps-deployer` section with the following properties:

[Container image]: #configuration-options-for-container-images
[HTML5 application deployer]: https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/9b178ab3388c4647b0c52f2c85641844.html
[Kubernetes Container resources]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

| Property | Description | Mandatory |
|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------|:---------:|
| **image** | [Container image] of the [HTML5 application deployer] | |
| **bindings** | [Service bindings](#configuration-options-for-service-bindings) to XSUAA, destinations and HTML5 Application Repository Host services | |
| **resources** | [Kubernetes Container resources] | |
| **env** | Map of additional environment variables | |
| → SAP_CLOUD_SERVICE | Name for your business service (unique per subaccount) | ✓ |

::: tip
Run `cds add html5-repo` to automate the setup for HTML5 application deployment.
:::

#### Backend Destinations

Backend destinations may be required for HTML5 applications or for App Router deployment. They can be configured using the `backendDestinations` section with the following properties:

| Property | Description |
|------------------|-----------------------------------------------------|
| (key) | Name of the backend destination |
| service: (value) | Value is the target Kubernetes service (like `srv`) |

If you want to add an external destination, you can do so by providing the `external` property like this:

```yaml
...
backendDestinations:
  srv-api:
    service: srv
  ui5: # [!code ++]
    external: true # [!code ++]
    name: ui5 # [!code ++]
    Type: HTTP # [!code ++]
    proxyType: Internet # [!code ++]
    url: https://ui5.sap.com # [!code ++]
    Authentication: NoAuthentication # [!code ++]
```

> The Helm chart removes the `external` key and adds the remaining keys as-is to the destinations environment variable.

#### Connectivity Service

Use `cds add connectivity` to add a required volume to your `srv` deployment.

::: warning
Create an instance of the SAP BTP Connectivity service with plan `connectivity_proxy` and a service binding, before deploying the first application that requires it. Using this plan, a proxy to the connectivity service gets installed into your Kyma cluster. This may take a few minutes. The connectivity proxy uses the first created instance in a cluster for authentication. This instance must not be deleted as long as connectivity is used.
:::

The volume you've added to your `srv` deployment is needed to provide additional connection information beyond what's available from the service binding.

```yaml
srv:
  ...
  additionalVolumes:
    - name: connectivity-secret
      volumeMount:
        mountPath: /bindings/connectivity
        readOnly: true
      projected:
        sources:
          - secret:
              name:
              optional: false
          - secret:
              name:
              optional: false
              items:
                - key: token_service_url
                  path: url
          - configMap:
              name: "RELEASE-NAME-connectivity-proxy-info"
              optional: false
```

In the added volumes, set the empty secret `name` values to the name of the connectivity binding you created earlier. If the binding was created in a different namespace, you need to create a secret with the details from the binding and use that secret instead.

::: tip
You don't have to edit `RELEASE-NAME` in the `configMap` property. It is passed as a template string and will be replaced with your actual release name by Helm.
:::

#### Arbitrary Service

These are the steps to create and bind to an arbitrary service, using the binding of the feature toggle service to the CAP application as an example:

1.
In the _chart/Chart.yaml_ file, add an entry to the `dependencies` array.

   ```yaml
   dependencies:
     ...
     - name: service-instance
       alias: feature-flags
       version: 0.1.0
   ```

2. Add the service configuration and the binding in the _chart/values.yaml_ file:

   ```yaml
   feature-flags:
     serviceOfferingName: feature-flags
     servicePlanName: lite
   ...
   srv:
     bindings:
       feature-flags:
         serviceInstanceName: feature-flags
   ```

   > The `alias` property in the `dependencies` array must match the property added in the root of _chart/values.yaml_ and the value of `serviceInstanceName` in the binding.

::: warning
There should be at least one service instance created by `cds add helm` if you want to bind an arbitrary service.
:::

#### Configuration Options for Services

_Services have the following configuration options:_

| Property | Type | Description | Mandatory |
|-------------------------|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------:|
| **fullNameOverride** | string | Use instead of the generated name | |
| **serviceOfferingName** | string | Technical service offering name from the service catalog | ✓ |
| **servicePlanName** | string | Technical service plan name from the service catalog | ✓ |
| **externalName** | string | The name for the service instance in SAP BTP | |
| **customTags** | array of string | List of custom tags describing the service instance, will be copied to the `ServiceBinding` secret in the key called `tags` | |
| **parameters** | object | Object with service parameters | |
| **jsonParameters** | string | Some services support the provisioning of additional configuration parameters. For the list of supported parameters, check the documentation of the particular service offering. | |
| **parametersFrom** | array of object | List of secrets from which parameters are populated. | |

The `jsonParameters` key can also be specified using the `--set-file` flag when installing or upgrading a Helm release.
For example, `jsonParameters` for the `xsuaa` property can be defined using the following command:

```sh
helm install bookshop ./chart --set-file xsuaa.jsonParameters=xs-security.json
```

You can explore more configuration options in the subchart's directory _gen/chart/charts/service-instance_.

#### Configuration Options for Service Bindings

| Property | Description | Mandatory |
|-------------------------|--------------------------------------------------|:------------------:|
| (key) | Name of the service binding | |
| secretFrom | Bind to Kubernetes secret | (✓)¹ |
| serviceInstanceName | Bind to service instance within the Helm chart | (✓)¹ |
| serviceInstanceFullname | Bind to service instance using the absolute name | (✓)¹ |
| parameters | Object with service binding parameters | |

¹ Exactly one of these properties needs to be specified

#### Configuration Options for Container Images

| Property | Description | Mandatory |
|------------|-------------------------------------------------|:---------:|
| repository | Full container image repository name | ✓ |
| tag | Container image version tag (default: `latest`) | |

### Modify

Modifying the Helm chart allows you to customize it to your needs. However, this has consequences if you want to update with the latest changes from the CAP template. You can run `cds add helm` again to update your Helm chart. It has the following behavior for modified files:

1. Your changes to the _chart/values.yaml_ and _chart/Chart.yaml_ files are kept. Only new or missing properties are added by `cds add helm`.
2. To modify any of the generated files such as templates or subcharts, copy the files from the _gen/chart_ folder and place them at the same level inside the _chart_ folder. After the next `cds build` execution, the generated chart will contain the modified files.
3.
If you want to have some custom files such as templates or subcharts, you can place them in the _chart_ folder at the same level where you want them to be in the _gen/chart_ folder. They will be copied as is.

### Extend

1. Adding new files to the Helm chart does not conflict with `cds add helm`.
2. A modification-free approach to changing files is to use [Kustomize](https://kustomize.io/) as a [post-processor](https://helm.sh/docs/topics/advanced/#post-rendering) for your Helm chart. This might be usable for small changes if you don't want to branch out from the generated `cds add helm` content.

## Additional Information

### SAP BTP Services and Features

You can find a list of SAP BTP services in the [Discovery Center](https://discovery-center.cloud.sap/viewServices?provider=all&regions=all&showFilters=true). To find out if a service is supported in the Kyma and Kubernetes environment, go to the **Service Marketplace** of your subaccount in the SAP BTP cockpit and select Kyma or Kubernetes in the environment filter.

You can find information about planned SAP BTP, Kyma Runtime features in the [product road map](https://roadmaps.sap.com/board?PRODUCT=73554900100800003012).

### Using Service Instance created on Cloud Foundry

To bind service instances created on Cloud Foundry to a workload (`srv`, `hana-deployer`, `html5-deployer`, `approuter`, or `sidecar`) in the Kyma environment, do the following:

1. In your cluster, create a secret with credentials from the service key of that instance.
2. Use the `fromSecret` property inside the `bindings` key of the workload.

For example, if you want to use an `hdi-shared` instance created on Cloud Foundry:

1. [Create a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret) with the credentials from a service key from the Cloud Foundry account.
2. Add additional properties to the Kubernetes secret.
```yaml
stringData:
  # <…>
  .metadata: |
    {
      "credentialProperties": [
        { "name": "certificate", "format": "text" },
        { "name": "database_id", "format": "text" },
        { "name": "driver", "format": "text" },
        { "name": "hdi_password", "format": "text" },
        { "name": "hdi_user", "format": "text" },
        { "name": "host", "format": "text" },
        { "name": "password", "format": "text" },
        { "name": "port", "format": "text" },
        { "name": "schema", "format": "text" },
        { "name": "url", "format": "text" },
        { "name": "user", "format": "text" }
      ],
      "metaDataProperties": [
        { "name": "plan", "format": "text" },
        { "name": "label", "format": "text" },
        { "name": "type", "format": "text" },
        { "name": "tags", "format": "json" }
      ]
    }
  type: hana
  label: hana
  plan: hdi-shared
  tags: '[ "hana", "database", "relational" ]'
```

::: tip
Update the values of the properties accordingly.
:::

3. In the _chart/values.yaml_ file, change the `serviceInstanceName` property to `fromSecret` in each workload that has that service instance in its `bindings`:

   ::: code-group
   ```yaml [srv]
   …
   srv:
     bindings:
       db:
         serviceInstanceName: # [!code --]
         fromSecret: # [!code ++]
   ```
   ```yaml [hana-deployer]
   …
   hana-deployer:
     bindings:
       hana:
         serviceInstanceName: # [!code --]
         fromSecret: # [!code ++]
   ```
   :::

4. Delete the `hana` property in the _chart/values.yaml_ file.

   ::: code-group
   ```yaml
   …
   hana: # [!code --]
     serviceOfferingName: hana # [!code --]
     servicePlanName: hdi-shared # [!code --]
   …
   ```
   :::

5. Make the following changes to the _chart/Chart.yaml_ file.

   ::: code-group
   ```yaml
   …
   dependencies:
     …
     - name: service-instance # [!code --]
       alias: hana # [!code --]
       version: ">0.0.0" # [!code --]
     …
   ```
   :::

### About Cloud Native Buildpacks

Cloud Native Buildpacks provide advantages such as embracing [best practices](https://buildpacks.io/features/) and secure standards like:

+ Resulting images use an unprivileged user.
+ Builds are [reproducible](https://buildpacks.io/docs/features/reproducibility/).
+ [Software Bill of Materials](https://buildpacks.io/docs/features/bill-of-materials/) (SBoM) for all dependencies baked into the image.
+ Auto-detection: no need to manually select base images.

Additionally, Cloud Native Buildpacks can easily be plugged together to fulfill more complex requirements. For example, the [ca-certificates](https://github.com/paketo-buildpacks/ca-certificates) buildpack enables adding additional certificates to the system trust store at build and runtime. When using Cloud Native Buildpacks, you continuously benefit from the best practices coming from the community without any changes required.

[Learn more about Cloud Native Buildpacks Concepts](https://buildpacks.io/docs/concepts/){ .learn-more}

One way of using Cloud Native Buildpacks in CI/CD is by utilizing the [`cnbBuild`](https://www.project-piper.io/steps/cnbBuild/) step of Project "Piper". This does not require any special setup, like providing a Docker daemon, and works out of the box for Jenkins and Azure DevOps Pipelines.

[Learn more about Support for Cloud Native Buildpacks in Jenkins](https://medium.com/buildpacks/support-for-cloud-native-buildpacks-in-jenkins-656330156e77){ .learn-more}
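As a sketch of how this could look in a Project "Piper" pipeline, a minimal `cnbBuild` configuration might resemble the following; the parameter names are taken from the Piper step documentation, while all values are illustrative assumptions for a bookshop-style project:

```yaml
# .pipeline/config.yml — illustrative sketch, verify against the cnbBuild step docs
steps:
  cnbBuild:
    containerImageName: bookshop-srv
    containerImageTag: latest
    containerRegistryUrl: docker.io/<your-repository>
    path: gen/srv
```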
### Get Access to a Cluster

You can either purchase a Kyma cluster from SAP, create a [personal trial](https://hanatrial.ondemand.com/) account, or sign up for the [free tier](https://www.sap.com/products/business-technology-platform/trial.html#new-customers) offering to get an SAP-managed Kyma Kubernetes cluster.

### Get Access to a Container Registry

SAP BTP doesn't provide a container registry.

You can choose from offerings of hosted open source and private container image registries, as well as solutions that can be run on premise or in your own cloud infrastructure. Keep in mind that the Kubernetes cluster needs to access the container registry from its network.

+ The use of a public container registry gives everyone access to your container images.
+ In a private container registry, your container images are protected. You will need to configure a **pull secret** to allow your cluster to access it.

#### Setup Your Cluster for a Public Container Registry

Make sure that the container registry is accessible from your Kubernetes cluster. No further setup is required.

#### Setup Your Cluster for a Private Container Registry

To use a Docker image from a private repository, you need to [create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) and configure this secret for your containers.

::: warning
It is recommended to use a technical user for this secret that has only read permission, because users with access to the Kubernetes cluster can easily reveal the password from the secret.
:::

# Deploy using CI/CD Pipelines

A comprehensive guide to implementing continuous integration and continuous deployment (CI/CD) for CAP projects using best practices, tools, and services.

## SAP CI/CD Service

SAP Continuous Integration and Delivery is a service on SAP BTP, which lets you configure and run predefined continuous integration and delivery pipelines.
It connects to your Git SCM repository, and in its user interface you can easily monitor the status of your builds and detect errors as soon as possible, which helps you prevent integration problems before completing your development.

SAP Continuous Integration and Delivery has a ready-to-use pipeline for CAP that is applicable to Node.js, Java, and multitarget application (MTA) based projects. It does not require you to host your own Jenkins instance, and it provides an easy, UI-guided way to configure your pipelines.

Try the tutorial [Get Started with SAP Continuous Integration and Delivery](https://developers.sap.com/tutorials/cicd-start-cap.html) to configure a CI/CD pipeline that builds, tests, and deploys your code changes.

[Learn more about SAP Continuous Integration and Delivery.](https://help.sap.com/viewer/SAP-Cloud-Platform-Continuous-Integration-and-Delivery){.learn-more}

## CI/CD Pipelines with SAP Piper

You can set up continuous delivery in your software development project, applicable to both SAP Business Technology Platform (BTP) and SAP on-premise platforms. SAP implements tooling for continuous delivery in project [Piper](https://www.project-piper.io/).

Try the tutorial [Create Automated System Tests for SAP Cloud Application Programming Model Projects](https://developers.sap.com/tutorials/cicd-wdi5-cap.html) to create system tests against a CAP-based sample application and automate your tests through a CI/CD pipeline.

[Learn more about project Piper](https://www.project-piper.io/){.learn-more}

## GitHub Actions

GitHub offers continuous integration workflows using [GitHub Actions](https://docs.github.com/en/actions/automating-builds-and-tests/about-continuous-integration). In our [SFlight sample](https://github.com/SAP-samples/cap-sflight), we use GitHub Actions in two simple workflows to test our samples on [current Node.js and Java versions](https://github.com/SAP-samples/cap-sflight/tree/main/.github/workflows).
We also defined our [own actions](https://github.com/SAP-samples/cap-sflight/tree/main/.github/actions) and use them in a [custom workflow](https://github.com/SAP-samples/cap-sflight/blob/main/.github/workflows/deploy-btp.yml). # Multitenancy ## Introduction & Overview CAP has built-in support for multitenancy with [the `@sap/cds-mtxs` package](https://www.npmjs.com/package/@sap/cds-mtxs). Essentially, multitenancy is the ability to serve multiple tenants through single clusters of microservice instances, while strictly isolating the tenants' data. Tenants are clients using SaaS solutions. In contrast to single-tenant mode, applications wait for tenants to subscribe before serving any end-user requests. [Learn more about SaaS applications.](#about-saas-applications){.learn-more} ## Prerequisites Make sure you have the latest version of `@sap/cds-dk` installed: ```sh npm update -g @sap/cds-dk ``` ## Jumpstart with an application To get a ready-to-use _bookshop_ application you can modify and deploy, run: ::: code-group ```sh [Node.js] cds init bookshop --add sample cd bookshop ``` ```sh [Java] cds init bookshop --java --add tiny-sample cd bookshop ``` ::: ## Enable Multitenancy {#enable-multitenancy} Now, you can run this to enable multitenancy for your CAP application: ```sh cds add multitenancy --for production ```
::: details See what this adds to your Node.js project… 1. Adds package `@sap/cds-mtxs` to your project: ```jsonc { "dependencies": { "@sap/cds-mtxs": "^2" }, } ``` 2. Adds this configuration to your _package.json_ to enable multitenancy with sidecar: ```jsonc { "cds": { "profile": "with-mtx-sidecar", "requires": { "[production]": { "multitenancy": true } } } } ``` 3. Adds a sidecar subproject at `mtx/sidecar` with this _package.json_: ```json { "name": "bookshop-mtx", "dependencies": { "@cap-js/hana": "^2", "@sap/cds": "^8", "@sap/cds-mtxs": "^2", "@sap/xssec": "^4", "express": "^4" }, "devDependencies": { "@cap-js/sqlite": "^1" }, "scripts": { "start": "cds-serve" }, "cds": { "profile": "mtx-sidecar" } } ``` 4. If necessary, modifies deployment descriptors such as `mta.yaml` for Cloud Foundry and Helm charts for Kyma. :::
::: details See what this adds to your Java project…

1. Adds the following to _.cdsrc.json_ in your app:

   ```jsonc
   {
     "profiles": [ "with-mtx-sidecar", "java" ],
     "requires": {
       "[production]": {
         "multitenancy": true
       }
     }
   }
   ```

2. Adds the following dependency to the _srv/pom.xml_ in your app:

   ```xml
   <dependency>
     <groupId>com.sap.cds</groupId>
     <artifactId>cds-feature-mt</artifactId>
     <scope>runtime</scope>
   </dependency>
   ```

3. Adds the following to your _srv/src/main/resources/application.yaml_:

   ```yml
   ---
   spring:
     config.activate.on-profile: cloud
   cds:
     multi-tenancy:
       mtxs.enabled: true
   ```

4. Adds a sidecar subproject at `mtx/sidecar` with this _package.json_:

   ```json
   {
     "name": "bookshop-mtx",
     "dependencies": {
       "@cap-js/hana": "^2",
       "@sap/cds": "^8",
       "@sap/cds-mtxs": "^2",
       "@sap/xssec": "^4",
       "express": "^4"
     },
     "devDependencies": {
       "@cap-js/sqlite": "^1"
     },
     "scripts": {
       "start": "cds-serve",
       "build": "cds build ../.. --for mtx-sidecar --production && npm ci --prefix gen"
     },
     "cds": {
       "profile": "mtx-sidecar"
     }
   }
   ```

:::
::: details Profile-based configuration presets The profiles `with-mtx-sidecar` and `mtx-sidecar` activate pre-defined configuration presets, which are defined as follows: ```js { "[with-mtx-sidecar]": { // [!code focus] requires: { db: { '[development]': { kind: 'sqlite', credentials: { url: 'db.sqlite' }, schema_evolution: 'auto', }, '[production]': { kind: 'hana', 'deploy-format': 'hdbtable', 'vcap': { 'label': 'service-manager' } }, }, "[java]": { "cds.xt.ModelProviderService": { kind: 'rest', model:[] }, "cds.xt.DeploymentService": { kind: 'rest', model:[] }, }, "cds.xt.SaasProvisioningService": false, "cds.xt.DeploymentService": false, "cds.xt.ExtensibilityService": false, } }, "[mtx-sidecar]": { // [!code focus] requires: { db: { "[development]": { kind: 'sqlite', credentials: { url: "../../db.sqlite" }, schema_evolution: 'auto', }, "[production]": { kind: 'hana', 'deploy-format': 'hdbtable', 'vcap': { 'label': 'service-manager' } }, }, "cds.xt.ModelProviderService": { "[development]": { root: "../.." }, // sidecar is expected to reside in ./mtx/sidecar "[production]": { root: "_main" }, "[prod]": { root: "_main" } // for simulating production in local tests }, "cds.xt.SaasProvisioningService": true, "cds.xt.DeploymentService": true, "cds.xt.ExtensibilityService": true, }, "[development]": { server: { port: 4005 } } }, … } ``` ::: tip You can always inspect the _effective_ configuration with `cds env`. ::: ## Install Dependencies
After adding multitenancy, install your application dependencies: ```sh npm i ```
After adding multitenancy, use the Maven build to generate the model-related artifacts:

```sh
mvn install
```
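To double-check that multitenancy is active for the production profile, you can inspect the effective configuration with `cds env`; for example (the exact key path shown here is an assumption, adapt as needed):

```sh
cds env requires.multitenancy --profile production
```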
## Test-Drive Locally {#test-locally} For local testing, create a new profile that contains the multitenancy configuration: ```sh cds add multitenancy --for local-multitenancy ```
For multitenancy, you need additional dependencies in the _pom.xml_ of the `srv` directory. To support mock users in the local test scenario, add `cds-starter-cloudfoundry`:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-cloudfoundry</artifactId>
</dependency>
```

Then add additional mock users to the Spring Boot profile:

::: code-group
```yaml [application.yaml]
---
spring:
  config.activate.on-profile: local-multitenancy
#...
cds:
  multi-tenancy:
    mtxs.enabled: true
  security.mock.users: # [!code focus]
    - name: alice # [!code focus]
      tenant: t1
      roles: [ admin ]
    - name: bob # [!code focus]
      tenant: t1
      roles: [ cds.ExtensionDeveloper ]
    - name: erin # [!code focus]
      tenant: t2
      roles: [ admin, cds.ExtensionDeveloper ]
```
:::

Configure the sidecar to use dummy authentication:

::: code-group
```json [mtx/sidecar/package.json]
{
  "cds": {
    "profile": "mtx-sidecar",
    "[development]": {
      "requires": {
        "auth": "dummy"
      }
    }
  }
}
```
:::
Before deploying to the cloud, you can test-drive common SaaS operations with your app locally, including SaaS startup, subscribing tenants, and upgrading tenants. ::: details Using multiple terminals… In the following steps, we start two servers, the main app and MTX sidecar, and execute some CLI commands. So, you need three terminal windows. ::: ### 1. Start MTX Sidecar ```sh cds watch mtx/sidecar ``` ::: details Trace output explained In the trace output, we see several MTX services being served; most interesting for multitenancy: the _ModelProviderService_ and the _DeploymentService_. ```log [cds] - connect using bindings from: { registry: '~/.cds-services.json' } [cds] - connect to db > sqlite { url: '../../db.sqlite' } [cds] - serving cds.xt.ModelProviderService { path: '/-/cds/model-provider' } // [!code focus] [cds] - serving cds.xt.DeploymentService { path: '/-/cds/deployment' } // [!code focus] [cds] - serving cds.xt.SaasProvisioningService { path: '/-/cds/saas-provisioning' } [cds] - serving cds.xt.ExtensibilityService { path: '/-/cds/extensibility' } [cds] - serving cds.xt.JobsService { path: '/-/cds/jobs' } ``` In addition, we can see a `t0` tenant being deployed, which is used by the MTX services for book-keeping tasks. ```log [cds|t0] - loaded model from 1 file(s): ../../db/t0.cds [mtx|t0] - (re-)deploying SQLite database for tenant: t0 // [!code focus] /> successfully deployed to db-t0.sqlite // [!code focus] ``` With that, the server waits for tenant subscriptions, listening on port 4005 by default in development mode. ```log [cds] - server listening on { url: 'http://localhost:4005' } // [!code focus] [cds] - launched at 3/5/2023, 1:49:33 PM, version: 7.0.0, in: 1.320s [cds] - [ terminate with ^C ] ``` ::: [If you get an error on server start, read the troubleshooting information.](/get-started/troubleshooting#why-do-i-get-an-error-on-server-start){.learn-more} ### 2. Launch App Server
```sh cds watch --profile local-multitenancy ``` ::: details Persistent database The server starts as usual, but automatically uses a persistent database instead of an in-memory one: ```log [cds] - loaded model from 6 file(s): db/schema.cds srv/admin-service.cds srv/cat-service.cds srv/user-service.cds ../../../cds-mtxs/srv/bootstrap.cds ../../../cds/common.cds [cds] - connect using bindings from: { registry: '~/.cds-services.json' } [cds] - connect to db > sqlite { url: 'db.sqlite' } // [!code focus] [cds] - serving AdminService { path: '/odata/v4/admin', impl: 'srv/admin-service.js' } [cds] - serving CatalogService { path: '/odata/v4/catalog', impl: 'srv/cat-service.js' } [cds] - serving UserService { path: '/user', impl: 'srv/user-service.js' } [cds] - server listening on { url: 'http://localhost:4004' } [cds] - launched at 3/5/2023, 2:21:53 PM, version: 6.7.0, in: 748.979ms [cds] - [ terminate with ^C ] ``` :::
```sh cd srv mvn cds:watch -Dspring-boot.run.profiles=local-multitenancy ``` ::: details Persistent database The server starts as usual, with the difference that a persistent database is used automatically instead of an in-memory one: ```log 2023-03-31 14:19:23.987 INFO 68528 --- [ restartedMain] c.s.c.bookshop.Application : The following 1 profile is active: "local-mtxs" ... 2023-03-31 14:19:23.987 INFO 68528 --- [ restartedMain] c.s.c.services.impl.ServiceCatalogImpl : Registered service ExtensibilityService$Default 2023-03-31 14:19:23.999 INFO 68528 --- [ restartedMain] c.s.c.services.impl.ServiceCatalogImpl : Registered service CatalogService 2023-03-31 14:19:24.016 INFO 68528 --- [ restartedMain] c.s.c.f.s.c.runtime.CdsRuntimeConfig : Registered DataSource 'ds-mtx-sqlite'// [!code focus] 2023-03-31 14:19:24.017 INFO 68528 --- [ restartedMain] c.s.c.f.s.c.runtime.CdsRuntimeConfig : Registered TransactionManager 'tx-mtx-sqlite'// [!code focus] 2023-03-31 14:19:24.554 INFO 68528 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http) 2023-03-31 14:19:24.561 INFO 68528 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2023-03-31 14:19:24.561 INFO 68528 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.71] ``` :::
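The `local-multitenancy` profile works because profile-specific blocks override the default configuration when that profile is active — which is how the in-memory database gets swapped for a persistent one. A simplified sketch of such profile merging (an illustration only; the real `cds env` semantics are more involved):

```js
// Simplified profile resolution: a "[profile]" block shallow-merges over
// the defaults when that profile is active. Illustration only — not the
// actual cds-env implementation.
function effectiveConfig(config, profile) {
  const { [`[${profile}]`]: overrides, ...base } = config
  const merged = { ...base }
  for (const [k, v] of Object.entries(overrides ?? {}))
    merged[k] = typeof v === 'object' && !Array.isArray(v) ? { ...base[k], ...v } : v
  // drop any remaining profile blocks
  for (const k of Object.keys(merged)) if (k.startsWith('[')) delete merged[k]
  return merged
}

const cfg = {
  db: { kind: 'sqlite', url: ':memory:' },
  '[local-multitenancy]': { db: { url: 'db.sqlite' } },
}
```

With the profile active, `effectiveConfig(cfg, 'local-multitenancy').db.url` resolves to the persistent `db.sqlite`; without it, the in-memory default stays in effect.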
### 3. Subscribe Tenants In the third terminal, subscribe to two tenants using one of the following methods. ::: code-group ```sh [CLI] cds subscribe t1 --to http://localhost:4005 -u yves: cds subscribe t2 --to http://localhost:4005 -u yves: ``` ```http POST http://localhost:4005/-/cds/deployment/subscribe HTTP/1.1 Content-Type: application/json Authorization: Basic yves: { "tenant": "t1" } ``` ```js [JavaScript] const ds = await cds.connect.to('cds.xt.DeploymentService') await ds.subscribe('t1') ``` ::: > Run `cds help subscribe` to see all available options. ::: details `cds subscribe` explained 1. Be reminded that these commands are only relevant for local testing. For a deployed app, [subscribe to your tenants](#subscribe) through the BTP cockpit. 2. In the CLI commands, we use the pre-defined mock user `yves`, see [pre-defined mock users](../../node.js/authentication#mock-users). 3. The subscription is sent to the MTX sidecar process (listening on port **4005**) 4. The sidecar reacts with trace outputs like this: ```log [cds] - POST /-/cds/deployment/subscribe ... [mtx] - successfully subscribed tenant t1 // [!code focus] ``` 5. In response to each subscription, the sidecar creates a new persistent tenant database per tenant, keeping tenant data isolated: ```log [cds] - POST /-/cds/deployment/subscribe [mtx] - (re-)deploying SQLite database for tenant: t1 // [!code focus] > init from db/init.js // [!code focus] > init from db/data/sap.capire.bookshop-Authors.csv // [!code focus] > init from db/data/sap.capire.bookshop-Books.csv // [!code focus] > init from db/data/sap.capire.bookshop-Books_texts.csv // [!code focus] > init from db/data/sap.capire.bookshop-Genres.csv // [!code focus] /> successfully deployed to ./../../db-t1.sqlite // [!code focus] [mtx] - successfully subscribed tenant t1 ``` 6. To unsubscribe a tenant, run: ```sh cds unsubscribe ‹tenant› --from http://localhost:4005 -u ‹user› ``` > Run `cds help unsubscribe` to see all available options. 
:::

#### Test with Different Users/Tenants {.node}

Open the _Manage Books_ app at and log in with `alice`. Select **Wuthering Heights** to open its details, edit the title, and save your changes. You've changed data in one tenant.

To see requests served in tenant isolation, that is, from different databases, check that the change isn't visible for the other tenant: Open a private/incognito browser window and log in as `erin` to see that the title is still _Wuthering Heights_.

In the following example, _Wuthering Heights (only in t1)_ was changed by _alice_. _erin_ doesn't see it, though.

![A screenshot of the bookshop application showing the effect of tenant isolation logged in as _alice_, as described in the previous sentence.](assets/book-changed-t1.png){}

::: details Use private/incognito browser windows to test with different tenants...

Do this to force new logins with different users, assigned to different tenants:

1. Open a new _private_ / _incognito_ browser window.
2. Open in it → log in as `alice`.
3. Repeat that with `erin`, another pre-defined user, assigned to tenant `t2`.

:::

::: details Note tenants displayed in trace output...

We can see tenant labels in server logs for incoming requests:

```log
[cds] - server listening on { url: 'http://localhost:4004' }
[cds] - launched at 3/5/2023, 4:28:05 PM, version: 6.7.0, in: 736.445ms
[cds] - [ terminate with ^C ]
...
[odata|t1] - POST /adminBooks { '$count': 'true', '$select': '... } // [!code focus]
[odata|t2] - POST /adminBooks { '$count': 'true', '$select': '... } // [!code focus]
...
```

:::

::: details Pre-defined users in `mocked-auth`

How users are assigned to tenants and how tenants are determined at runtime largely depends on your identity providers and authentication strategies. The `mocked` authentication strategy, used by default with `cds watch`, has a few [pre-defined users](../../node.js/authentication#mock-users) configured.
You can inspect these by running `cds env requires.auth`: ```console [bookshop] cds env requires.auth { kind: 'basic-auth', strategy: 'mock', users: { alice: { tenant: 't1', roles: [ 'admin' ] }, bob: { tenant: 't1', roles: [ 'cds.ExtensionDeveloper' ] }, carol: { tenant: 't1', roles: [ 'admin', 'cds.ExtensionDeveloper' ] }, // [!code focus] dave: { tenant: 't1', roles: [ 'admin' ], features: [] }, erin: { tenant: 't2', roles: [ 'admin', 'cds.ExtensionDeveloper' ] }, // [!code focus] fred: { tenant: 't2', features: ... }, me: { tenant: 't1', features: ... }, yves: { roles: [ 'internal-user' ] } '*': true //> all other logins are allowed as well }, tenants: { t1: { features: … }, t2: { features: '*' } } } ``` You can also add or override users or tenants by adding something like this to your _package.json_: ```jsonc "cds":{ "requires": { "auth": { "users": { "u2": { "tenant": "t2" }, // [!code focus] "u3": { "tenant": "t3" } // [!code focus] } } } } ``` ::: ### 4. Upgrade Your Tenant When deploying new versions of your app, you also need to upgrade your tenants' databases. For example, open `db/data/sap.capire.bookshop-Books.csv` and add one or more entries in there. Then upgrade tenant `t1` as follows: ::: code-group ```sh [CLI] cds upgrade t1 --at http://localhost:4005 -u yves: ``` ```http POST http://localhost:4005/-/cds/deployment/upgrade HTTP/1.1 Content-Type: application/json Authorization: Basic yves: { "tenant": "t1" } ``` ```js [JavaScript] const ds = await cds.connect.to('cds.xt.DeploymentService') await ds.upgrade('t1') ``` :::
Now, open or refresh again as _alice_ and _erin_ → the added entries are visible for _alice_, but still missing for _erin_, as `t2` has not yet been upgraded.
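Both subscribe and upgrade are plain REST calls against the sidecar's DeploymentService, as the HTTP variants above show. For local test scripts, you could wrap them in a small helper like this (a hypothetical helper, not part of `@sap/cds-mtxs`; it mirrors the HTTP snippets above):

```js
// Build a request against the local MTX sidecar's DeploymentService.
// Hypothetical helper for local testing, mirroring the HTTP snippets above.
// Defaults assume the sidecar's development port 4005 and mock user `yves`.
function deploymentRequest(event, tenant, { origin = 'http://localhost:4005', user = 'yves' } = {}) {
  return {
    url: `${origin}/-/cds/deployment/${event}`,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // mock basic auth: user with empty password, as in `-u yves:`
      Authorization: 'Basic ' + Buffer.from(`${user}:`).toString('base64'),
    },
    body: JSON.stringify({ tenant }),
  }
}

// Usage with Node 18+'s built-in fetch (requires a running sidecar):
//   const { url, ...init } = deploymentRequest('upgrade', 't1')
//   await fetch(url, init)
```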
## Deploy to Cloud

### Cloud Foundry / Kyma

In order to get your multitenant application deployed, follow this excerpt from the [deployment to CF](../deployment/to-cf) and [deployment to Kyma](../deployment/to-kyma) guides.

Once: Add SAP HANA Cloud, XSUAA, and [App Router](../deployment/to-cf#add-app-router) configuration. The App Router acts as a single point-of-entry gateway that routes incoming requests. In particular, it ensures user login and authentication in combination with XSUAA.

```sh
cds add hana,xsuaa,approuter --for production
```

If you intend to serve UIs, you can easily set up the SAP Cloud Portal service:

```sh
cds add portal
```

Once: add a **deployment descriptor**:

::: code-group
```sh [Cloud Foundry]
cds add mta
```
```sh [Kyma]
cds add helm,containerize
```
:::

::: details Add xsuaa redirect for trial / extension landscapes

Add the following snippet to your _xs-security.json_ and adapt it to the landscape you're deploying to:

```json
"oauth2-configuration": {
  "redirect-uris": ["https://*.cfapps.us10-001.hana.ondemand.com/**"]
}
```

:::

[Learn more about configured BTP services for SaaS applications.](#behind-the-scenes){.learn-more}

[Freeze the `npm` dependencies](../deployment/to-cf#freeze-dependencies) for server and MTX sidecar:

```sh
npm update --package-lock-only
npm update --package-lock-only --prefix mtx/sidecar
```

In addition, you need to install and freeze dependencies for your UI applications:

```sh
npm i --prefix app/browse
npm i --prefix app/admin-books
```

**Build and deploy**:

::: code-group
```sh [Cloud Foundry]
mbt build -t gen --mtar mta.tar
cf deploy gen/mta.tar
```
```sh [Kyma]
# Omit `--push` flag for testing, otherwise `ctz`
# will push images to the specified repository
ctz containerize.yaml --push
helm upgrade --install bookshop ./chart
```
:::

### Subscribe

**Create a BTP subaccount** to subscribe to your deployed application. This subaccount has to be in the same region as the provider subaccount, for example, `us10`.
See the [list of all available regions](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/f344a57233d34199b2123b9620d0bb41.html). {.learn-more} ![Global Account view to create a subaccount.](assets/create-subaccount.png){.mute-dark} In your **subscriber account** go to _Instances and Subscription_ and select _Create_. ![The screenshot is explained in the accompanying text.](assets/sub-account.png){.mute-dark} Select _bookshop_ and use the only available plan _default_. ![The screenshot is explained in the accompanying text.](assets/subscribe-bookshop.png){.mute-dark} [Learn more about subscribing to a SaaS application using the SAP BTP cockpit.](https://help.sap.com/docs/btp/sap-business-technology-platform/subscribe-to-multitenant-applications-using-cockpit?version=Cloud#procedure){.learn-more} [Learn more about subscribing to a SaaS application using the `btp` CLI.](https://help.sap.com/docs/btp/btp-cli-command-reference/btp-subscribe-accounts-subaccount?locale=en-US){.learn-more} You can now access your subscribed application via _Go to Application_. ![The screenshot is explained in the accompanying text.](assets/go-to-app.png){.mute-dark} As you can see, your route doesn't exist yet. You need to create and map it first. > If you're deploying to Kyma, your application will load and you won't get the below error. You can skip the step of exposing the route. ```log 404 Not Found: Requested route ('...') does not exist. ``` > Leave the window open. You need the information to create the route. 
#### Cloud Foundry

Use the following command to create and map a route to your application:

```sh
cf map-route ‹app› ‹paasDomain› --hostname ‹subscriberSubdomain›-‹saasAppName›
```

In our example, let's assume our `saas-registry` is configured in the _mta.yaml_ like this:

```yaml
- name: bookshop-registry
  type: org.cloudfoundry.managed-service
  parameters:
    service: saas-registry
    service-plan: application
    config:
      appName: bookshop-${org}-${space} // [!code focus]
```

Let's also assume we've deployed our app to Cloud Foundry org `myOrg` and space `mySpace`. This would be the full command to create a route for the subaccount with subdomain `subscriber1`:

```sh
cf map-route bookshop cfapps.us10.hana.ondemand.com --hostname subscriber1-myOrg-mySpace-bookshop
```

::: details Learn how to do this in the BTP cockpit instead…

Switch to your **provider account** and go to your space → Routes. Click on _New Route_.

![The screenshot is explained in the accompanying text.](assets/cockpit-routes.png){.mute-dark}

Here, you need to enter a _Domain_ and _Host Name_.

![The screenshot is explained in the accompanying text.](assets/cockpit-routes-new.png){.mute-dark}

Let's use this route as an example: __

- The **Domain** here is _cfapps.us10.hana.ondemand.com_
- The **Host Name** here is _subscriber1-bookshop_

Hit _Save_ to create the route. You can now see the route is created, but not mapped to an application yet.

![The screenshot is explained in the accompanying text.](assets/cockpit-routes-new-overview.png){.mute-dark}

Click on _Map Route_, choose your App Router module, and hit _Save_.

![The screenshot is explained in the accompanying text.](assets/cockpit-routes-new-map.png){.mute-dark}

You should now see the route mapped to your application.

![Overview in your dev space with the newly mapped route.](assets/cockpit-routes-new-mapped-overview.png){.mute-dark}

:::

### Update Database Schema
[Learn best practices for schema updates in the Java Guide](../../java/multitenancy#database-update){.learn-more}
There are several ways to run the update of the database schema. #### MTX Sidecar API Please check the [Upgrade API](./mtxs#upgrade-tenants-→-jobs) to see how the database schema update can be run for single or all tenants using the API endpoint. #### `cds-mtx upgrade` Command The database schema upgrade can also be run using `cds-mtx upgrade `. The command must be run in the MTX sidecar root directory. ##### Run as Cloud Foundry hook Example definition for a [module hook](https://help.sap.com/docs/btp/sap-business-technology-platform/module-hooks): ```yaml hooks: - name: upgrade-all type: task phases: # - blue-green.application.before-start.idle - deploy.application.before-start parameters: name: upgrade memory: 512M disk-quota: 768M command: cds-mtx upgrade '*' ``` [Blue-green deployment strategy for MTAs](https://help.sap.com/docs/btp/sap-business-technology-platform/blue-green-deployment-strategy){.learn-more} ##### Manually run as Cloud Foundry Task You can also invoke the command manually using `cf run-task`: ```sh cf run-task --name "upgrade-all" --command "cds-mtx upgrade '*'" ```
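Whichever way you trigger it, the `'*'` argument tells `cds-mtx upgrade` to process all subscribed tenants, while an explicit tenant ID restricts the upgrade to that tenant. A conceptual sketch of that wildcard handling (an illustration only, not the actual `cds-mtx` implementation):

```js
// Conceptual expansion of the '*' wildcard accepted by `cds-mtx upgrade`:
// '*' means every subscribed tenant; otherwise only the listed tenants
// are upgraded. Illustration only — not the actual implementation.
function tenantsToUpgrade(arg, subscribedTenants) {
  if (arg === '*') return [...subscribedTenants]
  return Array.isArray(arg) ? arg : [arg]
}
```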
### Test-Drive with Hybrid Setup

For faster turnaround cycles in development and testing, you can run the app locally while binding it to remote service instances created by a Cloud Foundry deployment.

To achieve this, bind your SaaS app and the MTX sidecar to their required cloud services, for example:

```sh
cds bind --to-app-services bookshop-srv
```

For testing the sidecar, make sure to run the command there as well:

```sh
cd mtx/sidecar
cds bind --to-app-services bookshop-srv
```

To generate the SAP HANA HDI files for deployment, go to your project root and run the build:

```sh
cds build --production
```

::: warning Run `cds build` after model changes
Each time you update your model or any SAP HANA source file, you need to repeat the build.
:::

> Make sure to stop any running CAP servers left over from local testing.

By passing `--profile hybrid`, you can now run the app with cloud bindings and interact with it as you would while [testing your app locally](#test-locally). Run this in your project root:

```sh
cds watch mtx/sidecar --profile hybrid
```

And in another terminal:
```sh cd srv mvn cds:watch -Dspring-boot.run.profiles=hybrid ```
```sh cds watch --profile hybrid ```
Learn more about [Hybrid Testing](../../advanced/hybrid-testing).{.learn-more} ::: tip Manage multiple deployments Use a dedicated profile for each deployment landscape if you are using several, such as `dev`, `test`, `prod`. For example, after logging in to your `dev` space: ```sh cds bind -2 bookshop-db --profile dev cds watch --profile dev ``` ::: ## SaaS Registry Dependencies {#saas-dependencies} Some of the services your application consumes need to be registered as _reuse services_ to work in multitenant environments. `@sap/cds-mtxs` offers an easy way to integrate these dependencies. It supports some services out of the box and also provides a simple API for plugins. Most notably, you will need such dependencies for the SAP BTP [Audit Log](https://discovery-center.cloud.sap/serviceCatalog/audit-log-service), [Connectivity](https://discovery-center.cloud.sap/serviceCatalog/connectivity-service), [Destination](https://discovery-center.cloud.sap/serviceCatalog/destination), [HTML5 Application Repository](https://discovery-center.cloud.sap/serviceCatalog/html5-application-repository-service), and [Cloud Portal](https://discovery-center.cloud.sap/serviceCatalog/cloud-portal-service) services. All these services are supported natively and can be activated individually by providing configuration in `cds.requires`. In the most common case, you simply activate service dependencies like so: ::: code-group ```json [mtx/sidecar/package.json] "cds": { "requires": { "audit-log": true, "connectivity": true, "destinations": true, "html5-repo": true, "portal": true } } ``` ::: ::: details Defaults provided by `@sap/cds-mtxs`... 
The Boolean values above activate the default configuration in `@sap/cds-mtxs`:

```json
"cds": {
  "requires": {
    "connectivity": {
      // Uses credentials.xsappname
      "vcap": { "label": "connectivity" },
      "subscriptionDependency": "xsappname"
    },
    "portal": {
      "vcap": { "label": "portal" },
      // Uses credentials.uaa.xsappname
      "subscriptionDependency": { "uaa": "xsappname" }
    },
    ...
  }
}
```

:::

::: details If you need additional services...

You can use the `subscriptionDependency` setting to provide a similar dependency configuration in your application or CAP plugin _package.json_:

```json [package.json]
"cds": {
  "requires": {
    "my-service": {
      "subscriptionDependency": "xsappname"
    }
  }
}
```

> The `subscriptionDependency` specifies the property name of the credentials value with the desired `xsappname`, starting from `cds.requires['my-service'].credentials`. Usually it's just `"xsappname"`, but JavaScript objects interpreted as a key path are also allowed, such as `{ "uaa": "xsappname" }` in the example for `portal` above.

Alternatively, overriding the [`dependencies`](./mtxs#get-dependencies) handler gives you full flexibility for any custom implementation.

:::

## Add Custom Handlers

MTX services are implemented as standard CAP services, so you can register for events just as you would for any application service.
### In the Java Main Project {.java}

For Java, you can add custom handlers to the main app as described in the [documentation](/java/multitenancy#custom-logic):

```java
@After
private void subscribeToService(SubscribeEventContext context) {
  String tenant = context.getTenant();
  Map<String, Object> options = context.getOptions();
}

@On
private void upgradeService(UpgradeEventContext context) {
  List<String> tenants = context.getTenants();
  Map<String, Object> options = context.getOptions();
}

@Before
private void unsubscribeFromService(UnsubscribeEventContext context) {
  String tenant = context.getTenant();
  Map<String, Object> options = context.getOptions();
}
```

### In the Sidecar Subproject

You can add custom handlers in the sidecar project, implemented in Node.js:

```js
cds.on('served', () => {
  const { 'cds.xt.DeploymentService': ds } = cds.services
  ds.before('subscribe', async (req) => {
    // HDI container credentials are not yet available here
    const { tenant } = req.data
  })
  ds.before('upgrade', async (req) => {
    // HDI container credentials are not yet available here
    const { tenant } = req.data
  })
  ds.after('deploy', async (result, req) => {
    const { container } = req.data.options
    const { tenant } = req.data
    ...
  })
  ds.after('unsubscribe', async (result, req) => {
    const { container } = req.data.options
    const { tenant } = req.data
  })
})
```

## Configuring the Java Service { #binding-it-together .java}

`cds add multitenancy` added configuration similar to this:

::: code-group
```yaml [mta.yaml (Cloud Foundry)]
modules:
  - name: bookshop-srv
    type: java
    path: srv
    parameters:
      ...
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}
    requires:
      - name: app-api
        properties:
          CDS_MULTITENANCY_APPUI_URL: ~{url}
          CDS_MULTITENANCY_APPUI_TENANTSEPARATOR: "-"
      - name: bookshop-auth
      - name: bookshop-db
      - name: mtx-api
        properties:
          CDS_MULTITENANCY_SIDECAR_URL: ~{mtx-url}
      - name: bookshop-registry
```
```yaml [values.yaml (Kyma)]
...
srv:
  bindings:
    ...
  image:
    repository: bookshop-srv
  env:
    SPRING_PROFILES_ACTIVE: cloud
    CDS_MULTITENANCY_APPUI_TENANTSEPARATOR: "-"
    CDS_MULTITENANCY_APPUI_URL: https://{{ .Release.Name }}-srv-{{ .Release.Namespace }}.{{ .Values.global.domain }}
    CDS_MULTITENANCY_SIDECAR_URL: https://{{ .Release.Name }}-sidecar-{{ .Release.Namespace }}.{{ .Values.global.domain }}
...
```
:::

- `CDS_MULTITENANCY_SIDECAR_URL` sets the application property `cds.multitenancy.sidecar.url`. This URL is required by the CAP Java runtime to connect to the MTX sidecar application and is derived from the property `mtx-url` of the mtx-sidecar module.
- `CDS_MULTITENANCY_APPUI_URL` sets the entry point URL that is shown in the SAP BTP cockpit.
- `CDS_MULTITENANCY_APPUI_TENANTSEPARATOR` is the separator in the generated tenant-specific URLs. Tenant application requests are separated by the tenant-specific app URL:

```http
https://
```

::: tip Use MTA extensions for landscape-specific configuration
You can define the environment variable `CDS_MULTITENANCY_APPUI_TENANTSEPARATOR` in an MTA extension descriptor:

::: code-group
```yaml [mt.mtaext]
_schema-version: "3.1"
extends: my-app
ID: my-app.id
modules:
  - name: srv
    properties:
      CDS_MULTITENANCY_APPUI_TENANTSEPARATOR: "-"
  - name: app
    properties:
      TENANT_HOST_PATTERN: ^(.*)-${default-uri}
```

[Learn more about _Defining MTA Extension Descriptors_](https://help.sap.com/docs/btp/sap-business-technology-platform/defining-mta-extension-descriptors?q=The%20MTA%20Deployment%20Extension%20Descriptor){.learn-more}

:::

#### Option: Provisioning Only { #provisioning-only-mtx-sidecar .java}

Under certain conditions, it makes a lot of sense to use the MTX Sidecar only for tenant provisioning. This configuration is useful in particular when the application doesn't offer (tenant-specific) model extensions and feature toggles. In such cases, business requests can be served by the Java runtime without interaction with the sidecar, for example to fetch an extension model.
Use the following MTX Sidecar configuration to achieve this: ::: code-group ```json [.cdsrc.json] { "requires": { "multitenancy": true, "extensibility": false, // [!code focus] "toggles": false // [!code focus] }, "build": { ... } } ``` ::: In this case, the application can use its static local model without requesting the MTX sidecar for the model. This results in a significant performance gain because CSN and EDMX metadata are loaded from the JAR instead of the MTX Sidecar. To make the Java application aware of this setup as well, set the following properties: ::: code-group ```yaml [application.yaml] cds: model: provider: extensibility: false // [!code focus] toggles: false // [!code focus] ``` ::: ::: tip Enable only the features that you need You can also selectively use these properties to enable only extensibility or feature toggles, thus decreasing the dimensions when looking up dynamic models. :::
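Conceptually, these two flags determine where the runtime gets its model from: only if extensions or feature toggles are possible does it need to ask the sidecar's ModelProviderService; otherwise the static model baked into the application JAR suffices. A simplified decision sketch (an illustration only, not CAP Java code):

```js
// Simplified decision: with extensibility and feature toggles disabled, the
// static model shipped in the application JAR can serve every tenant, so no
// sidecar round trip is needed. Conceptual sketch only — not the CAP Java
// implementation.
function modelSource({ extensibility, toggles }) {
  return extensibility || toggles ? 'mtx-sidecar' : 'static-jar'
}
```

This is why disabling both options yields the performance gain described above: every model lookup is a local JAR read instead of a remote call.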
## Appendix ### About SaaS Applications Software-as-a-Service (SaaS) solutions are deployed once by a SaaS provider, and then used by multiple SaaS customers subscribing to the software. SaaS applications need to register with the [_SAP BTP SaaS Provisioning service_](https://discovery-center.cloud.sap/serviceCatalog/saas-provisioning-service) to handle `subscribe` and `unsubscribe` events. In contrast to [single-tenant deployments](../deployment/to-cf), databases or other _tenant-specific_ resources aren't created and bootstrapped upon deployment, but upon subscription per tenant. CAP includes the **MTX services**, which provide out-of-the-box handlers for `subscribe`/`unsubscribe` events, for example to manage SAP HANA database containers. If everything is set up, the following graphic shows what's happening when a user subscribes to a SaaS application: ![The graphic is explained in the following text.](assets/saas-overview.drawio.svg){} 1. The SaaS Provisioning Service sends a `subscribe` event to the CAP application. 2. The CAP application delegates the request to the MTX services. 3. The MTX services use Service Manager to create the database tenant. 4. The CAP Application connects to this tenant at runtime using Service Manager.
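The four steps above can be sketched as a chain of delegations (a toy illustration with no real services involved):

```js
// Toy walk-through of the subscription flow above, recording each hop:
// SaaS Provisioning → CAP app → MTX services → Service Manager.
// Conceptual illustration only — no real services are involved.
function subscribeFlow(tenant) {
  const trace = []
  const serviceManager = {
    createTenantDb: (t) => { trace.push(`service-manager: create database container for ${t}`); return `db-${t}` },
  }
  const mtx = {
    subscribe: (t) => { trace.push(`mtx: handle subscribe for ${t}`); return serviceManager.createTenantDb(t) },
  }
  const app = {
    onSubscribe: (t) => { trace.push(`app: received subscribe event for ${t}`); return mtx.subscribe(t) },
  }
  // Step 1: the SaaS Provisioning service sends `subscribe` to the app
  const db = app.onSubscribe(tenant)
  return { db, trace }
}
```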
In CAP Java, tenant provisioning is delegated to CAP Node.js based services. This has the following implications: - Java applications need to run and maintain the [_cds-mtxs_ module](../multitenancy/#enable-multitenancy) as a sidecar application (called _MTX sidecar_ in this documentation). - But multitenant CAP Java applications automatically expose the tenant provisioning API called by the SaaS Provisioning service so that [custom logic during tenant provisioning](/java/multitenancy#custom-logic) can be written in Java.
### About Sidecar Setups The SaaS operations `subscribe` and `upgrade` tend to be resource-intensive. Therefore, it's recommended to offload these tasks onto a separate microservice, which you can scale independently of your main app servers. Java-based projects even require such a sidecar, as the MTX services are implemented in Node.js. In these MTX sidecar setups, a subproject is added in _./mtx/sidecar_, which serves the MTX Services as depicted in the illustration below. ![The main app serves the CAP services and the database. The sidecar serves the Deployment service and the Model Provider service. The Deployment service receives upgrade and subscribe request and sends deploy requests to the database of the main app. The Deployment service and the CAP services get the model from the Model Provider service to keep all layers in sync.](./assets/mtx-sidecar.drawio.svg) The main task for the MTX sidecar is to serve `subscribe` and `upgrade` requests. The CAP services runtime requests models from the sidecar only when you apply tenant-specific extensions. For Node.js projects, you have the option to run the MTX services embedded in the main app, instead of in a sidecar. ### Behind the Scenes { #behind-the-scenes} With adding the MTX services, your project configuration is adapted at all relevant places. Configuration and dependencies are added to your _package.json_ and an _xs-security.json_ containing MTX-specific scopes and roles is created. {.node} Configuration and dependencies are added to your _.cdsrc.json_ and an _xs-security.json_ containing MTX-specific scopes and roles is created. {.java} For the MTA deployment service dependencies are added to the _mta.yaml_ file. Each SaaS application will have bindings to at least three SAP BTP service instances. 
| Service | Description | | ------------------------------------------------------------ | ------------------------------------------------------------ | | [Service Manager](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/4e19b11211fe4ca2a266d3fdd4a72188.html) (`service-manager`) | CAP uses this service for creating a new SAP HANA Deployment Infrastructure (HDI) container for each tenant and for retrieving tenant-specific database connections. | | [SaaS Provisioning Service](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/3971151ba22e4faa9b245943feecea54.html) (`saas-registry`) | To make a SaaS application available for subscription to SaaS consumer tenants, the application provider must register the application in the SAP BTP Cloud Foundry environment through the SaaS Provisioning Service. | | [User Account and Authentication Service](https://help.sap.com/docs/CP_AUTHORIZ_TRUST_MNG) (`xsuaa`) | Binding information contains the OAuth client ID and client credentials. The XSUAA service can be used to validate the JSON Web Token (JWT) from requests and to retrieve the tenant context from the JWT.| ## Next Steps - See the [MTX Services Reference](./mtxs) for details on service and configuration options, in particular about sidecar setups. - See our guide on [Extending and Customizing SaaS Solutions](../extensibility/). # MTX Services Reference {{$frontmatter?.synopsis}} ## Introduction & Overview The `@sap/cds-mtxs` package provides a set of CAP services which implement _**multitenancy**_, _[features toggles](../extensibility/feature-toggles)_ and _[extensibility](../extensibility/)_ (_'MTX'_ stands for these three functionalities). 
These services work in concert as depicted in the following diagram: ![The graphic depicting the MTX infrastructure as described in the following guide.](./assets/mtx-overview.drawio.svg) MTX services are implemented in Node.js and can run in the same Node.js server as your application services or in separate micro services called _sidecars_. All services can be consumed via REST APIs. As the services are defined and implemented as standard CAP services, with definitions in CDS and implementations based on the CAP Node.js framework, application projects can hook into all events to add custom logic using CAP Node.js. ## Getting Started… ### Add `@sap/cds-mtxs` Package Dependency ```sh npm add @sap/cds-mtxs ``` ### Enable MTX Functionality Add one or more of the following convenience configuration flags, for example, to your `package.json` in a Node.js-based project: ```json "cds": { "requires": { "multitenancy": true, "extensibility": true, "toggles": true } } ``` [Java-based projects require a sidecar setup.](#sidecars){.learn-more} ### Test-Drive Locally After enabling MTX features, you can test MTX functionality with local development setups and in-memory databases as usual: ```sh cds watch ``` This shows the MTX services being served in addition to your app services: ```log{6-8,11-15} [cds] - loaded model from 6 file(s): db/schema.cds srv/admin-service.cds srv/cat-service.cds ../../db/extensions.cds ../../srv/deployment-service.cds ../../srv/bootstrap.cds [cds] - connect to db > sqlite { url: ':memory:' } [cds] - serving cds.xt.SaasProvisioningService { path: '/-/cds/saas-provisioning' } [cds] - serving cds.xt.DeploymentService { path: '/-/cds/deployment' } [cds] - serving cds.xt.ModelProviderService { path: '/-/cds/model-provider' } [cds] - serving cds.xt.ExtensibilityService { path: '/-/cds/extensibility' } [cds] - serving cds.xt.JobsService { path: '/-/cds/jobs' } [cds] - serving AdminService { path: '/admin' } [cds] - serving CatalogService { path: 
'/browse', impl: 'srv/cat-service.js' } [cds] - server listening on { url: 'http://localhost:4004' } [cds] - launched at 5/6/2023, 9:31:11 AM, in: 863.803ms ``` ## Grow As You Go Follow CAP principles of _'Grow as you go...'_ to minimize complexity of setups, stay in [inner loops](https://www.getambassador.io/docs/telepresence/latest/concepts/devloop) with fast turnarounds, and hence minimize costs and accelerate development. ### Enable MTX Only if Required During development you rarely need to run your servers with MTX functionality enabled. Only do so when you really need it. For example, in certain tests or by using configuration profiles. This configuration would have development not use MTX by default. You could still run with MTX enabled on demand and have it always active in production: ```jsonc "cds": { "requires": { "[local-multitenancy]": { "multitenancy": true, "extensibility": true, "toggles": true }, "[production]": { "multitenancy": true, "extensibility": true, "toggles": true } } } ``` During development you could occasionally run with MTX: ```sh cds watch --profile local-multitenancy ``` ### Testing With Minimal Setup When designing test suites that run frequently in CI/CD pipelines, you can shorten runtimes and reduce costs. First run a set of functional tests which use MTX in minimized setups – that is, with local servers and in-memory databases as introduced in the [_Multitenancy_ guide](../multitenancy/#test-locally). Only in the second and third phases, you would then run the more advanced hybrid tests. These hybrid tests could include testing tenant subscriptions with SAP HANA, or integration tests with the full set of required cloud services. ## Sidecar Setups {#sidecars} In the minimal setup introduced in the _[Getting Started...](#getting-started)_ chapter, we had the MTX services being served embedded with our main app, that is, in the same server as our application services. 
While this is possible for Node.js, and even recommended to reduce complexity during development, we frequently want to run them in a separate micro service. Reasons for that include:

- **For Java-based projects** — As these services are implemented in Node.js, Java-based apps need to run them separately and consume them remotely.
- **To scale independently** — As some operations, especially `upgrade`, are very resource-intensive, we want to scale these services separately from our main application.

As MTX services are built and consumed as CAP services, we benefit from CAP's agnostic design and can easily move them to separate services.

### Create Sidecar as a Node.js Subproject

An MTX sidecar is a standard, yet minimal, Node.js CAP project. By default it's added to a subfolder `mtx/sidecar` within your main project, containing just a _package.json_ file.

::: code-group

```json [mtx/sidecar/package.json]
{
  "name": "bookshop-mtx",
  "version": "0.0.0",
  "dependencies": {
    "@sap/cds": "^7",
    "@sap/cds-hana": "^2",
    "@sap/cds-mtxs": "^1",
    "@sap/xssec": "^4",
    "express": "^4"
  },
  "devDependencies": {
    "@cap-js/sqlite": "^1"
  },
  "scripts": {
    "start": "cds-serve"
  },
  "cds": {
    "profile": "mtx-sidecar"
  }
}
```

:::

The only configuration necessary for the project is the `mtx-sidecar` profile.

::: details Let's have a look at what this profile provides...

#### Required MTX Services

```jsonc
...
"cds": {
  "requires": {
    "cds.xt.ModelProviderService": "in-sidecar",
    "cds.xt.DeploymentService": true,
    "cds.xt.SaasProvisioningService": true,
    "cds.xt.ExtensibilityService": true
    ...
  }
}
```

Here we enable all MTX services in a standard configuration. Of course, you can choose to serve only some of them, according to your needs, using [individual configuration](#conf-individual).

#### Using Shared Database

```jsonc
...
"[development]": {
  "db": { "kind": "sqlite", "credentials": { "url": "../../db.sqlite" }}
}
...
```

With multitenancy, the _[DeploymentService](#deploymentservice)_ needs to deploy the very database instances that are subsequently used by the main application. For local development with SQLite, this setting ensures that the sidecar uses the same database file as the main app.

#### Additional `[development]` Settings

```jsonc
...
"[development]": {
  "requires": { "auth": "mocked" },
  "server": { "port": 4005 }
}
...
```

These additional `[development]` settings support local tests: they set a default server port (`4005`, different from the main app's default port `4004`) and allow mocked authentication in the sidecar (which is secured by default in production).

:::

### Testing Sidecar Setups

With the above setup in place, we can test-drive the sidecar mode locally. To do so, we simply start the sidecar and the main app in separate shells.

1. Run the sidecar in a first shell:

   ```sh
   cds watch mtx/sidecar
   ```

   ::: details You see the sidecar starting on port 4005...

   ```log {28}
   cd mtx/sidecar

   cds serve all --with-mocks --in-memory?
   live reload enabled for browsers

   ___________________________

   [cds] - loaded model from 3 file(s):

     ../cds-mtxs/srv/model-provider.cds
     ../cds-mtxs/srv/deployment-service.cds
     ../cds-mtxs/db/t0.cds

   [cds] - connect using bindings from: { registry: '~/.cds-services.json' }
   [cds] - connect to db > sqlite { url: '../../db.sqlite' }
   [cds] - using authentication: { kind: 'mocked' }
   [cds] - serving cds.xt.ModelProviderService { path: '/-/cds/model-provider' }
   [cds] - serving cds.xt.DeploymentService { path: '/-/cds/deployment' }

   [cds] - loaded model from 1 file(s):

     ../cds-mtxs/db/t0.cds

   [mtx] - (re-)deploying SQLite database for tenant: t0
   /> successfully deployed to db-t0.sqlite

   [cds] - server listening on { url: 'http://localhost:4005' }
   [cds] - launched at 5/6/2023, 1:08:33 AM, version: 7.3.0, in: 772.25ms

   [cds] - [ terminate with ^C ]
   ```

   :::

2.
Run the main app as before in a second shell:

```sh
cds watch
```

#### _ModelProviderService_ serving models from main app

When we use our application, we can see `model-provider/getCsn` requests in the sidecar's trace log. In response to those requests, the sidecar reads and returns the main app's models, that is, the models from two levels up the folder hierarchy, as is the default with the `mtx-sidecar` profile.

#### Note: Service Bindings by `cds watch`

Required service bindings are established automatically by `cds watch`'s built-in runtime service registry. This is how it works:

1. Each server started using `cds watch` registers all served services in `~/.cds-services.json`.
2. Every subsequently started server automatically binds all `required` remote services to equally named services already registered in `~/.cds-services.json`.

In our case: The main app's `ModelProviderService` automatically receives the service binding credentials, for example the `url`, to talk to the one served by the sidecar.

### Build Sidecar for Production

When deployed for production, the sidecar doesn't have access to the main app's models two levels up the deployed folder hierarchy. Instead, we have to prepare deployment by running `cds build` in the project's root:

```sh
cds build
```

One of the build tasks that are executed is the `mtx-sidecar` build task. It generates log output similar to the following:

```log
[cds] - the following build tasks will be executed
{"for":"mtx-sidecar", "src":"mtx/sidecar", "options":...
}
[cds] - done > wrote output to:
   gen/mtx/sidecar/_main/fts/isbn/csn.json
   gen/mtx/sidecar/_main/fts/reviews/csn.json
   gen/mtx/sidecar/_main/resources.tgz
   gen/mtx/sidecar/_main/srv/_i18n/i18n.json
   gen/mtx/sidecar/_main/srv/csn.json
   gen/mtx/sidecar/package.json
   gen/mtx/sidecar/srv/_i18n/i18n.json
   gen/mtx/sidecar/srv/csn.json
[cds] - build completed in 687 ms
```

The outcome of that build task is a compiled and deployable version of the sidecar in the _gen/mtx/sidecar_ staging area:

```zsh{6-17}
bookshop/
├─ _i18n/
├─ app/
├─ db/
├─ fts/
├─ gen/mtx/sidecar/
│  ├─ _main/
│  │   ├── fts/
│  │   │   ├── isbn/
│  │   │   │   └── csn.json
│  │   │   └── reviews/
│  │   │       └── csn.json
│  │   ├── srv/
│  │   │   ├── _i18n
│  │   │   └── csn.json
│  │   └── resources.tgz
│  └─ package.json
├─ mtx/sidecar/
├─ ...
```

In essence, the `mtx-sidecar` build task does the following:

1. It runs a standard Node.js build for the sidecar.
2. It pre-compiles the main app's models, including all features, into respective _csn.json_ files, packaged into the `_main` subfolder.
3. It collects all additional sources required for subsequent deployments, for example _.csv_ and _i18n_ files, into `resources.tgz`.

### Test-Drive Production Locally

We can also test-drive the production-ready variant of the sidecar locally before actual deployment, again using two separate shells.

1. **First, start the sidecar** from `gen/mtx/sidecar` in `prod` simulation mode:

   ```sh
   cds watch gen/mtx/sidecar --profile development,prod
   ```

2.
**Second, start the main app** as usual:

```sh
cds watch
```

#### _ModelProviderService_ serving models from main app

When we use our application again and inspect the sidecar's trace logs, we see that the sidecar now reads and returns the main app's precompiled models from `_main`:

```log
[cds] – POST /-/cds/model-provider/getCsn
[cds] – model loaded from 3 file(s):
  gen/mtx/sidecar/_main/srv/csn.json
  gen/mtx/sidecar/_main/fts/isbn/csn.json
  gen/mtx/sidecar/_main/fts/reviews/csn.json
```

## Configuration {#conf}

### Shortcuts `cds.requires.multitenancy / extensibility / toggles` {#conf-shortcuts}

The easiest way to enable multitenancy, extensibility, and feature toggles is as follows:

```json
"cds": {
  "requires": {
    "multitenancy": true,
    "extensibility": true,
    "toggles": true
  }
}
```

On the one hand, these settings are interpreted by the CAP runtime to support features such as tenant-specific database connection pooling when `multitenancy` is enabled. On the other hand, these flags are checked during server bootstrapping to ensure the required combinations of services are served by default.
The following table shows which services are enabled by each of the shortcuts:

| | `multitenancy` | `extensibility` | `toggles` |
| ----------------------------------------------------- | :------------: | :-------------: | :-------: |
| _[SaasProvisioningService](#saasprovisioningservice)_ | yes | no | no |
| _[DeploymentService](#deploymentservice)_ | yes | no | no |
| _[ExtensibilityService](#extensibilityservice)_ | no | yes | no |
| _[ModelProviderService](#modelproviderservice)_ | yes | yes | yes |

### Configuring Individual Services {#conf-individual}

In addition, or as an alternative, to the convenience shortcuts above, you can configure each service individually, as shown in the following examples:

```jsonc
"cds": {
  "requires": {
    "cds.xt.DeploymentService": true
  }
}
```

The names of the service-individual configuration options follow this pattern:

- `cds/requires/<service name>`

##### Allowed Values

- `false` — deactivates the service selectively
- `true` — activates the service with defaults for embedded usage
- `"<preset name>"` — uses a [preset](#presets), for example, with defaults for sidecar usage
- `{ ...options }` — add/override individual configuration options

##### Common Config Options

- `model` — specifies/overrides the service model to be used
- `impl` — specifies/overrides the service implementation to be used
- `kind` — the kind of service/consumption, for example, `rest` for remote usage

> These options are supported by all services.

#### Combined with Convenience Flags

```json
"cds": {
  "requires": {
    "multitenancy": true,
    "cds.xt.SaasProvisioningService": false,
    "cds.xt.DeploymentService": false,
    "cds.xt.ModelProviderService": { "kind": "rest" }
  }
}
```

This tells the CAP runtime to enable multitenancy, but to serve neither the _DeploymentService_ nor the _SaasProvisioningService_, and to use a remote _ModelProviderService_ via the REST protocol.
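To make the flag semantics concrete, here's a small, hypothetical resolver that derives the set of served MTX services from the convenience flags, mirroring the table above. It is illustrative only, not the actual `@sap/cds-mtxs` bootstrapping code:

```javascript
// Hypothetical sketch: derive the served MTX services from the convenience
// flags, following the table above. Not the real @sap/cds-mtxs bootstrap logic.
function resolveMtxServices({ multitenancy, extensibility, toggles } = {}) {
  const services = []
  if (multitenancy) services.push(
    'cds.xt.SaasProvisioningService',
    'cds.xt.DeploymentService'
  )
  if (extensibility) services.push('cds.xt.ExtensibilityService')
  // ModelProviderService is needed whenever any of the three flags is set
  if (multitenancy || extensibility || toggles) {
    services.push('cds.xt.ModelProviderService')
  }
  return services
}

console.log(resolveMtxServices({ multitenancy: true }))
```

Individual per-service configuration can then still override such a derived default set, as described next.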
#### Individual Configurations Only

We can also use only the individual service configurations:

```json
"cds": {
  "requires": {
    "cds.xt.DeploymentService": true,
    "cds.xt.ModelProviderService": { "root": "../.." }
  }
}
```

In this case, the server does **not** run in multitenancy mode, and extensibility and feature toggles aren't supported either. Yet, the _DeploymentService_ and the _ModelProviderService_ are served selectively. This kind of configuration can be used in [sidecars](#sidecars), for example.

### Using Configuration Presets {#presets}

#### Profile-based configuration

The simplest configuration, sufficient for most projects, is the profile-based one, where just these two entries are necessary:

::: code-group

```json [package.json]
"cds": {
  "profile": "with-mtx-sidecar"
}
```

:::

::: code-group

```json [mtx/sidecar/package.json]
"cds": {
  "profile": "mtx-sidecar"
}
```

:::

#### Preset-based configuration

Some MTX services come with predefined configuration presets, which can easily be used by referring to the preset suffixes. For example, to simplify and standardize sidecar configuration, _[ModelProviderService](#modelproviderservice)_ supports the `in-sidecar` preset, which can be used like this:

```json
"cds": {
  "requires": {
    "cds.xt.ModelProviderService": "in-sidecar"
  }
}
```

These presets are actually configured in `cds.env` defaults like this:

```js
cds: {
  requires: {
    // Configuration Presets (in cds.env.requires.kinds)
    kinds: {
      "cds.xt.ModelProviderService-in-sidecar": {
        "[development]": { root: "../.." },
        "[production]": { root: "_main" },
      },
      "cds.xt.ModelProviderService": {
        model: "@sap/cds/srv/model-provider"
      },
      // ...
} } }
```

[Learn more about `cds.env`](../../node.js/cds-env){.learn-more}

### Inspecting Effective Configuration

You can always inspect the effective configuration by executing this in the _mtx/sidecar_ folder:

```sh
cds env get requires
```

This gives you an output like this:

```js
{
  auth: { strategy: 'dummy', kind: 'dummy' },
  'cds.xt.ModelProviderService': {
    root: '../..',
    model: '@sap/cds/srv/model-provider',
    kind: 'in-sidecar'
  }
}
```

Add the CLI option `--profile` to inspect the configuration in different profiles:

```sh
cds env get requires --profile development
cds env get requires --profile production
```

## Customization

All services are defined and implemented as standard CAP services, with service definitions in CDS and implementations based on the CAP Node.js framework. Thus, you can both adapt service definitions and hook into all events to add custom logic using CAP Node.js.

### Customizing Service Definitions

For example, you could override the endpoints to serve a service:

```cds
using { cds.xt.ModelProviderService } from '@sap/cds-mtxs';
annotate ModelProviderService with @path: '/mtx/mps';
```

For sidecar scenarios, define the annotations in the Node.js sidecar application, not as part of the main application.

### Adding Custom Lifecycle Event Handlers

Register handlers in `server.js` files:

::: code-group

```js [mtx/sidecar/server.js]
const cds = require('@sap/cds')
cds.on('served', ()=>{
  const { 'cds.xt.ModelProviderService': mps } = cds.services
  const { 'cds.xt.DeploymentService': ds } = cds.services
  ds.before ('upgrade', (req) => { ... })
  ds.after ('subscribe', (_,req) => { ... })
  mps.after ('getCsn', (csn) => { ... })
})
```

:::

::: tip Custom hooks for CLI usage
For CLI usage via `cds subscribe|upgrade|unsubscribe`, you can create a `mtx/sidecar/cli.js` file, which works analogously to a `server.js`.
:::

## Consumption

### Via Programmatic APIs

Consume MTX services using standard [Service APIs](../../node.js/core-services).
For example, in `cds repl`: ```js await cds.test() var { 'cds.xt.ModelProviderService': mps } = cds.services var { 'cds.xt.DeploymentService': ds } = cds.services var db = await ds.subscribe ('t1') var csn = await mps.getCsn('t1') cds.context = { tenant:'t1' } await db.run('SELECT type, name from sqlite_master') ``` ### Via REST APIs Common usage of the MTX services is through REST APIs. Here's an example: 1. Start the server ```sh cds watch ``` 2. Subscribe a tenant ```http POST /-/cds/deployment/subscribe HTTP/1.1 Content-Type: application/json { "tenant": "t1" } ``` 3. Get CSN from `ModelProviderService` ```http POST /-/cds/model-provider/getCsn HTTP/1.1 Content-Type: application/json { "tenant": "t1", "toggles": ["*"] } ``` ## ModelProviderService The _ModelProviderService_ serves model variants, which may include tenant-specific extensions and/or feature-toggled aspects. | | | | ----------------------- | ---------------------------------- | | Service Definition | `@sap/cds-mtxs/srv/model-provider` | | Service Definition Name | `cds.xt.ModelProviderService` | | Default HTTP Endpoint | `/-/cds/model-provider` | ### Configuration {#model-provider-config} ```json "cds.xt.ModelProviderService": { "root": "../../custom/path" } ``` - [Common Config Options](#common-config-options) - `root` — a directory name, absolute or relative to the _package.json_'s location, specifying the location to search for models and resources to be served by the model provider services. Default is undefined, for embedded usage of model provider. In case of a sidecar, it refers to the main app's model; usually `"../.."` during development, and `"_main"` in production. ##### Supported Presets {#model-provider-presets} - `in-sidecar` — provides defaults for usage in sidecars - `from-sidecar` — shortcut for `{ "kind": "rest" }` ### `getCsn` _(tenant, toggles) → CSN_ Returns the application's effective CSN document for the given tenant + feature toggles vector. 
CAP runtimes call this method to obtain the effective models to serve.

| Arguments | Description |
| --------- | ----------------------------------------------------------- |
| `tenant`  | A string identifying the tenant |
| `toggles` | An array listing toggled features; `['*']` for all features |

#### Example Usage {#example-get-csn}

```http
POST /-/cds/model-provider/getCsn HTTP/1.1
Content-Type: application/json

{
  "tenant": "t1", "toggles": ["*"]
}
```

The response is a CSN in JSON representation.

[Learn more about **CSN**](../../cds/csn){.learn-more}

### `getEdmx` _(tenant, toggles, service, locale) → EDMX_

Returns the EDMX document for a given service in the context of the given tenant and feature toggles vector. CAP runtimes call this to get the EDMX documents they return in response to OData `$metadata` requests.

| Arguments | Description |
| --------- | ----------------------------------------------------------- |
| `tenant`  | A string identifying the tenant |
| `toggles` | An array listing toggled features; `['*']` for all features |
| `service` | Fully qualified name of a service definition |
| `locale`  | The requested locale, for example, from the `accept-language` header |

#### Example Usage {#example-get-edmx}

```http
POST /-/cds/model-provider/getEdmx HTTP/1.1
Content-Type: application/json

{
  "tenant": "t1", "toggles": ["*"],
  "service": "CatalogService",
  "locale": "en"
}
```

### `getResources` _() → TAR_

Returns a _.tar_ archive containing _.csv_ files, i18n files, and native database artifacts required for deployment to databases. The `DeploymentService` calls this whenever it receives a `subscribe` or `upgrade` event.

### `getExtensions` _(tenant) → CSN_

Returns a _parsed_ CSN document containing all the extensions stored in `cds.xt.Extensions` for the given tenant.
| Arguments | Description |
| --------- | ------------------------------- |
| `tenant`  | A string identifying the tenant |

### `isExtended` _(tenant) → true|false_

Returns `true` if the given `tenant` has extensions applied.

| Arguments | Description |
| --------- | ------------------------------- |
| `tenant`  | A string identifying the tenant |

## ExtensibilityService

The _ExtensibilityService_ allows you to add and activate tenant-specific extensions at runtime.

| | |
| ----------------------- | ----------------------------------------- |
| Service Definition      | `@sap/cds-mtxs/srv/extensibility-service` |
| Service Definition Name | `cds.xt.ExtensibilityService` |
| Default HTTP Endpoint   | `/-/cds/extensibility` |

[See the extensibility guide for more context](../extensibility/customization){.learn-more}

### Configuration {#extensibility-config}

```jsonc
"cds.xt.ExtensibilityService": {
  // fields must start with x_ or xx_
  "element-prefix": ["x_", "xx_"],
  // namespaces starting with com.sap or sap. can't be extended
  "namespace-blocklist": ["com.sap.", "sap."],
  "extension-allowlist": [
    {
      // at most 2 new fields in entities from the my.bookshop namespace
      "for": ["my.bookshop"],
      "kind": "entity",
      "new-fields": 2,
      // allow extensions for field "description" only
      "fields": ["description"]
    },
    {
      // at most 2 new entities in CatalogService
      "for": ["CatalogService"],
      "new-entities": 2,
      // allow @readonly annotations in CatalogService
      "annotations": ["@readonly"]
    }
  ]
}
```

- [Common Config Options](#common-config-options)
- `element-prefix` — restricts field names to the given prefixes
- `namespace-blocklist` — restricts which namespaces can be extended
- `extension-allowlist` — allows certain entities to be extended

> Without `extension-allowlist` configured, extensions are forbidden. Using `"for": ["*"]` applies the rules to all possible values.
See the [list of possible `kind` values](../../cds/csn#def-properties).{.learn-more}

- `new-fields` specifies the maximum number of fields that can be added.
- `fields` lists the fields that are allowed to be extended. If the list is omitted, all fields can be extended.
- `new-entities` specifies the maximum number of entities that can be added to a service.

### GET `Extensions/<ID>` _→ [{ ID, csn, timestamp }]_ {#get-extensions}

Returns a list of all tenant-specific extensions.
#### Request Format

| **Parameters** | Description |
| - | - |
| `ID` | String uniquely identifying the extension |

> Omitting `ID` will return all extensions.
#### Response Format

| **Body** | Description |
| - | - |
| `ID` | String uniquely identifying the extension |
| `csn` | Compiled extension CSN |
| `timestamp` | Timestamp of activation date |
#### Example Request

##### Get a specific extension {#get-extension}

::: code-group

```http [Request]
GET /-/cds/extensibility/Extensions/isbn-extension HTTP/1.1
Content-Type: application/json
```

```json [Response]
{
  "ID": "isbn-extension",
  "csn": "{\"extensions\":[{\"extend\":\"my.bookshop.Books\",\"elements\":{\"Z_ISBN\":{\"type\":\"cds.String\"}}}],\"definitions\":{}}",
  "timestamp": "2023-01-01T01:01:01.111Z"
}
```

:::

##### Get all extensions {#get-all-extensions}

::: code-group

```http [Request]
GET /-/cds/extensibility/Extensions HTTP/1.1
Content-Type: application/json
```

```json [Response]
[
  {
    "ID": "isbn-extension",
    "csn": "{\"extensions\":[{\"extend\":\"my.bookshop.Books\",\"elements\":{\"Z_ISBN\":{\"type\":\"cds.String\"}}}],\"definitions\":{}}",
    "timestamp": "2023-01-01T01:01:01.111Z"
  },
  {
    "ID": "rental-extension",
    "csn": "{\"extensions\":[{\"extend\":\"my.bookshop.Books\",\"elements\":{\"Z_rentalPrice\":{\"type\":\"cds.Integer\"}}}],\"definitions\":{}}",
    "timestamp": "2023-01-01T01:02:01.111Z"
  }
]
```

:::

### PUT `Extensions/<ID>` (\[csn\]) _→ \[{ ID, csn, timestamp }\]_ {#put-extensions}

Creates a new tenant-specific extension.

#### HTTP Request Options

| Request Header | Example Value | Description |
| -------------- | --------------- | ----------------------------------------- |
| `prefer`       | `respond-async` | Trigger asynchronous extension activation |
#### Request Format

| **Parameters** | Description |
| - | - |
| `ID` | String uniquely identifying the extension |

| **Body** | Description |
| - | - |
| `csn` | Array of extension CDL or CSN to apply |
| `i18n` | Texts and translations |
#### Response Format

| **Body** | Description |
| - | - |
| `ID` | String uniquely identifying the extension |
| `csn` | Compiled extension CSN |
| `i18n` | Texts and translations |
| `timestamp` | Timestamp of activation date |
#### Example Request

::: code-group

```http [Request]
PUT /-/cds/extensibility/Extensions/isbn-extension HTTP/1.1
Content-Type: application/json

{
  "csn": ["using my.bookshop.Books from '_base/db/data-model'; extend my.bookshop.Books with { Z_ISBN: String };"],
  "i18n": [{ "name": "i18n.properties", "content": "Books_stock=Stock" }, { "name": "i18n_de.properties", "content": "Books_stock=Bestand" }]
}
```

```json [Response]
{
  "ID": "isbn-extension",
  "csn": "{\"extensions\":[{\"extend\":\"my.bookshop.Books\",\"elements\":{\"Z_ISBN\":{\"type\":\"cds.String\"}}}],\"definitions\":{}}",
  "i18n": "{\"\":{\"Books_stock\":\"Stock\"},\"de\":{\"Books_stock\":\"Bestand\"}}",
  "timestamp": "2023-09-07T22:31:28.246Z"
}
```

:::

The request can also be triggered asynchronously by setting the `Prefer: respond-async` header. You can use the URL returned in the `Location` response header to poll the job status. In addition, you can poll the status for individual tenants using their individual task IDs:

```http
GET /-/cds/jobs/pollTask(ID='') HTTP/1.1
```

The response is similar to the following:

```js
{
  "status": "FINISHED",
  "op": "activateExtension"
}
```

The job and task status can take on the values `QUEUED`, `RUNNING`, `FINISHED`, and `FAILED`.

> By convention, custom (tenant-specific) fields are usually prefixed with `Z_`.

The i18n data can also be passed in JSON format:

```json
"i18n": [{ "name": "i18n.json", "content": "{\"\":{\"Books_stock\":\"Stock\"},\"de\":{\"Books_stock\":\"Bestand\"}}" }]
```

You also get this JSON in the response body of PUT or [GET](#get-extensions) requests. In this example, the text with the key "Books_stock" from the base model is replaced.

### DELETE `Extensions/<ID>` {#delete-extensions}

Deletes a tenant-specific extension.

#### HTTP Request Options

| Request Header | Example Value | Description |
| -------------- | --------------- | --------------------------------------- |
| `prefer`       | `respond-async` | Trigger asynchronous extension deletion |
#### Request Format

| **Parameters** | Description |
| - | - |
| `ID` | String uniquely identifying the extension |
#### Example Usage ```http [Request] DELETE /-/cds/extensibility/Extensions/isbn-extension HTTP/1.1 Content-Type: application/json ``` The request can also be triggered asynchronously by setting the `Prefer: respond-async` header. You can use the URL returned in the `Location` response header to poll the job status. In addition, you can poll the status for individual tenants using its individual task ID: ```http GET /-/cds/jobs/pollTask(ID='') HTTP/1.1 ``` The response is similar to the following: ```js { "status": "FINISHED", "op": "activateExtension" } ``` The job and task status can take on the values `QUEUED`, `RUNNING`, `FINISHED` and `FAILED`. ## DeploymentService The _DeploymentService_ handles `subscribe`, `unsubscribe`, and `upgrade` events for single tenants and single apps or micro services. Actual implementation is provided through internal plugins, for example, for SAP HANA and SQLite. | | | | ----------------------- | -------------------------------------- | | Service Definition | `@sap/cds-mtxs/srv/deployment-service` | | Service Definition Name | `cds.xt.DeploymentService` | | Default HTTP Endpoint | `/-/cds/deployment` | ### Configuration {#deployment-config} ```jsonc "cds.xt.DeploymentService": { "hdi": { "deploy": { ... }, "create": { "database_id": "", ... }, "bind": { ... 
} } }
```

- [Common Config Options](#common-config-options)
- `hdi` — bundles HDI-specific settings
  - `deploy` — [HDI deployment parameters](https://www.npmjs.com/package/@sap/hdi-deploy#supported-features)
  - `create` — tenant creation parameters (≈ [`cf create-service`](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/a36df26b36484129b482ae20c3eb8004.html))
    - `database_id` — SAP HANA Cloud instance ID
  - `bind` — binding parameters (≈ [`cf bind-service`](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/c7b09b79d3bb4d348a720ba27fe9a2d5.html))

##### Supported Presets {#deployment-presets}

- `in-sidecar` — provides defaults for usage in sidecars
- `from-sidecar` — shortcut for `{ "kind": "rest" }`

### `subscribe` _(tenant)_

Received when a new tenant subscribes. The implementations create and initialize the required resources, that is, they create and initialize tenant-specific HDI containers in case of SAP HANA, or tenant-specific databases in case of SQLite.

### `upgrade` _(tenant)_

Used to upgrade a subscribed tenant. Implementations read the latest models and content from the latest deployed version of the application and redeploy that to the tenant's database.

##### Drop-Creating Databases for SQLite

In case of SQLite, especially with in-memory databases, an upgrade simply drops and creates a new tenant-specific database, which means all data is lost.

##### Schema Evolution for SAP HANA

In case of SAP HANA, the delta to the former database layout is determined, and corresponding CREATE TABLE, DROP-CREATE VIEW, and ALTER TABLE statements are executed, without any data loss.

### `unsubscribe` _(tenant)_

Received when a tenant is deleted. The implementations free the required resources, that is, they dispose of tenant-specific HDI containers in case of SAP HANA, or tenant-specific databases in case of SQLite.
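The lifecycle described above, including SQLite's drop-create behavior on `upgrade`, can be sketched as a toy in-memory model. This is a hypothetical illustration only; the real implementation lives in the `@sap/cds-mtxs` plugins:

```javascript
// Toy in-memory model of the DeploymentService lifecycle described above.
// Hypothetical illustration only; not the @sap/cds-mtxs implementation.
class ToyDeploymentService {
  constructor() { this.tenants = new Map() }
  subscribe(tenant) {            // create + initialize a tenant-specific database
    this.tenants.set(tenant, { schemaVersion: 1, data: [] })
  }
  upgrade(tenant, newVersion) {  // SQLite case: drop-create, so all data is lost
    if (this.tenants.has(tenant))
      this.tenants.set(tenant, { schemaVersion: newVersion, data: [] })
  }
  unsubscribe(tenant) {          // dispose the tenant-specific resources
    this.tenants.delete(tenant)
  }
}

const ds = new ToyDeploymentService()
ds.subscribe('t1')
ds.tenants.get('t1').data.push({ ID: 1, title: 'Wuthering Heights' })
ds.upgrade('t1', 2)   // drop-create: schema is new, previous data is gone
ds.unsubscribe('t1')  // tenant resources disposed
```

An SAP HANA implementation would instead compute the schema delta and apply ALTER statements, preserving the tenant's data across the upgrade.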
## SaasProvisioningService The _SaasProvisioningService_ is a façade for the _DeploymentService_ to adapt to the API expected by [SAP BTP's SaaS Provisioning service](https://discovery-center.cloud.sap/serviceCatalog/saas-provisioning-service), hence providing out-of-the-box integration. | | | | ----------------------- | ------------------------------------------------ | | Service Definition | `@sap/cds-mtxs/srv/cf/saas-provisioning-service` | | Service Definition Name | `cds.xt.SaasProvisioningService` | | Default HTTP Endpoint | `/-/cds/saas-provisioning` | ### Configuration {#saas-provisioning-config} ```jsonc "cds.xt.SaasProvisioningService": { "jobs": { "queueSize": 5, // default: 100 "workerSize": 5, // default: 1 "clusterSize": 5, // default: 1 } } ``` - [Common Config Options](#common-config-options) - `jobs` — settings of the built-in job orchestrator - `workerSize` — max number of parallel asynchronous jobs per database - `clusterSize` — max number of database clusters, running `workerSize` jobs each - `queueSize` — max number of jobs waiting to run in the job queue - `dependencies` — SAP BTP SaaS Provisioning service dependencies #### HTTP Request Options | Request Header | Example Value | Description | | ---------------- | -------------------------------------------------------|--------------| | `prefer` | `respond-async` | Trigger subscription, upgrade or unsubscription request asynchronously. | | `status_callback` | `/saas-manager/v1/subscription-callback/123456/result` | Callback path for SAP BTP SaaS Provisioning service. Set automatically if asynchronous subscription is configured for `saas-registry` service. | ::: tip No `prefer: respond-async` needed with callback Requests are implicitly asynchronous when `status_callback` is set. 
:::

##### Example Usage

With `@sap/hdi-deploy` [parameters](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-developer-guide-for-cloud-foundry-multitarget-applications-sap-business-app-studio/deployment-options-in-hdi) `trace` and `version`:

```http
POST /-/cds/saas-provisioning/upgrade HTTP/1.1
Content-Type: application/json

{
  "tenants": ["t1"],
  "options": {
    "_": {
      "hdi": {
        "deploy": {
          "trace": "true",
          "version": "true"
        }
      }
    }
  }
}
```

### GET `tenant/<tenant>` {#get-tenant}

Returns tenant-specific metadata if `<tenant>` is set, and a list of all tenants' metadata if omitted.

| Parameters | Description |
| ---------- | -------------------------------- |
| `tenant`   | A string identifying the tenant. |

#### Example Usage {#example-tenant-metadata}

##### Get Metadata for a Specific Tenant {#example-get-tenant-metadata}

::: code-group

```http [Request]
GET /-/cds/saas-provisioning/tenant/t1 HTTP/1.1
Content-Type: application/json
```

```json [Response]
{
  "subscribedTenantId": "tenant-1",
  "eventType": "CREATE",
  "subscribedSubdomain": "subdomain-1",
  "subscriptionAppName": "app-1",
  "subscribedSubaccountId": "subaccount-1",
  "createdAt": "2023-11-10T14:36:22.639Z",
  "modifiedAt": "2023-12-10T15:16:22.802Z"
}
```

:::

##### Get Metadata for All Tenants

::: code-group

```http [Request]
GET /-/cds/saas-provisioning/tenant HTTP/1.1
Content-Type: application/json
```

```json [Response]
[
  {
    "subscribedTenantId": "tenant-1",
    "eventType": "CREATE",
    "subscribedSubdomain": "subdomain-1",
    "subscriptionAppName": "app-1",
    "subscribedSubaccountId": "subaccount-1",
    "createdAt": "2023-11-10T14:36:22.639Z",
    "modifiedAt": "2023-12-10T15:16:22.802Z"
  },
  {
    "subscribedTenantId": "tenant-2",
    "eventType": "CREATE",
    "subscribedSubdomain": "subdomain-2",
    "subscriptionAppName": "app-2",
    "subscribedSubaccountId": "subaccount-2",
    "createdAt": "2023-11-11T14:36:22.639Z",
    "modifiedAt": "2023-11-12T12:14:45.452Z"
  }
]
```

:::

### PUT `tenant/<tenant>` (...)
{#put-tenant}

Creates tenant resources required for onboarding. Learn about parameters and arguments in the following table:

| Parameters | Description |
| --------------------- | --------------------------------------------------- |
| `tenant`              | A string identifying the tenant |
| **Arguments**         | |
| `subscribedTenantId`  | A string identifying the tenant |
| `subscribedSubdomain` | A string identifying the tenant-specific subdomain |
| `eventType`           | The `saas-registry` event (`CREATE` or `UPDATE`) |

#### Example Usage {#example-post-tenant}

::: code-group

```http [Request]
PUT /-/cds/saas-provisioning/tenant/t1 HTTP/1.1
Content-Type: application/json

{
  "subscribedTenantId": "t1",
  "subscribedSubdomain": "subdomain1",
  "eventType": "CREATE"
}
```

```txt [Response]
https://my.app.url
```

:::

### DELETE `tenant/<tenant>` {#delete-tenant}

Deletes all tenant resources.

### GET `dependencies` _→ [{ xsappname }]_ {#get-dependencies}

Returns configured SAP BTP SaaS Provisioning service dependencies.

[Learn how to configure SaaS dependencies](./#saas-dependencies){.learn-more}

### `upgrade` _[tenants] → Jobs_

Use the `upgrade` endpoint to upgrade tenant base models.
| Arguments | Description |
| --------- | ----------------------------------------------------------- |
| `tenants` | A list of tenants, or `["*"]` for all tenants |
| `options` | Additional options, including HDI deployment options (see [DeploymentService](#deployment-config)), prefixed with `_` |

#### Example Usage {#example-upgrade}

##### Asynchronously Upgrade a List of Tenants

::: code-group
```http [Request]
POST /-/cds/saas-provisioning/upgrade HTTP/1.1
Content-Type: application/json
Prefer: respond-async

{
  "tenants": ["t1", "t2"],
  "options": {
    "_": {
      "hdi": {
        "deploy": {
          "trace": "true",
          "version": "true"
        }
      }
    }
  }
}
```
```json [Response]
{
  "ID": "",
  "createdAt": "",
  "op": "upgrade",
  "tenants": {
    "t1": { "ID": "" }
  }
}
```
:::

##### Asynchronously Upgrade All Tenants

::: code-group
```http [Request]
POST /-/cds/saas-provisioning/upgrade HTTP/1.1
Content-Type: application/json
Prefer: respond-async

{
  "tenants": ["*"]
}
```
```json [Response]
{
  "ID": "",
  "createdAt": "",
  "op": "upgrade",
  "tenants": {
    "t1": { "ID": "" }
  }
}
```
:::

We recommend executing upgrades asynchronously by setting the `Prefer: respond-async` header. You can use the URL returned in the `Location` response header to poll the job status.

In addition, you can poll the status of an individual tenant using its individual task ID:

```http
GET /-/cds/jobs/pollTask(ID='') HTTP/1.1
```

The response is similar to the following:

```json
{
  "status": "FINISHED",
  "op": "upgrade"
}
```

The job and task status can take on the values `QUEUED`, `RUNNING`, `FINISHED`, and `FAILED`.

## [Old MTX Reference](old-mtx-apis) {.toc-redirect}

[See Reference docs for former 'old' MTX Services.](old-mtx-apis){.learn-more}

# Migration from Old MTX {#migration}

Towards new multitenancy capabilities {.subtitle}
::: warning
Make sure that you always use the latest version of the CAP modules; check with `npm outdated`. For Java, also check the versions configured in the `pom.xml` files.
:::

## Functional Differences

Before you start to migrate to `@sap/cds-mtxs`, read about the differences compared to the old MTX.

### Persistence Changes

With `@sap/cds-mtxs`, the persistence has been simplified. A second container (META-tenant) is no longer needed. Instead, tenant-specific metadata, such as extensions, is stored in the same container as the application data.

![This screenshot is explained in the accompanying text.](assets/persistence-overview.drawio.svg)

In addition, `@sap/cds-mtxs` uses a dedicated tenant `t0` to store some runtime data, such as job logs.

### Extensibility

#### Changes of Extension Persistence

In contrast to `@sap/cds-mtx`, with `@sap/cds-mtxs` extensions are no longer stored as sources, but only as compiled `csn` files. Instead of running a build on the server with each extension activation, the build now runs locally _before_ the extension is deployed. The extensions are then stored as `csn` files with a `tag` as key. When using [`cds push`](../extensibility/customization#push-extension), the `tag` is derived from the name of the extension project in `package.json`.

Example `package.json` of an extension project:

```json
{
  "name": "@capire/orders-ext",
  "extends": "@capire/orders",
  ...
}
```

When the extension is pushed, it is stored with the tag `@capire/orders-ext`. Also check the [Push API](mtxs#extensibilityservice).

#### Handling of Extension Sources

As mentioned previously, `cds push` only uploads compiled extensions as CSN files. Thus, it's no longer possible to download the CDS sources from the server. Source control is expected to be done by the SaaS application provider using their own repository.

### Security

Some of the roles have changed with `@sap/cds-mtxs`.
| @sap/cds-mtx      | @sap/cds-mtxs            |
| ----------------- | ------------------------ |
| `ExtendCDS`       | `cds.ExtensionDeveloper` |
| `ExtendCDSdelete` | without replacement      |

## Permanent and Temporary Limitations

### Temporary Limitations

- The Diagnose API isn't available.
- Uploading extensions only works synchronously.

### Permanent Limitations

- Scopes aren't configurable.
- It isn't possible to have tenant-specific model versions.
- SAP HANA `hdbmigrationtable` can only be used for entities that aren't meant to be extended.
- Uploading arbitrary custom files together with extensions is no longer possible.

## Migration Steps

To switch to `@sap/cds-mtxs`, you need to change your project configuration and your custom handlers, and you might need to update the database content.

![A decision tree covering the following steps.](assets/migration-steps.drawio.svg)

### Adapt Project Configuration
#### Switch to `@sap/cds-mtxs` To switch your Node.js project to `@sap/cds-mtxs`, perform the following steps: 1. Remove `@sap/cds-mtx`: ```sh npm remove @sap/cds-mtx ``` 2. Add `@sap/cds-mtxs`: ```sh npm add @sap/cds-mtxs ``` 3. Open your _package.json_ and add the following: ```json "cds": { "requires": { "multitenancy": true } } ``` #### Enable Extensibility If your project supports extensibility, you need to enable extensibility in your configuration. To do so, you only need to add `extensibility: true` to your cds configuration in `.cdsrc.json` or `package.json`. ```json "requires": { "multitenancy": true, "extensibility": true } ```
#### Create New Sidecar and Adapt mta.yaml To create a sidecar based on `@sap/cds-mtxs`, you can use the following command: ```sh cds add multitenancy ``` It creates a new sidecar folder _mtx/sidecar_ and also modifies other files, including _mta.yaml_. Currently, as `cds add multitenancy` is meant to be used with new projects, the best way is to **revert** the changes that have been made to _mta.yaml_ and to make a few manual changes instead. ##### Remove Global Build Section The global build section can be removed. The necessary build script has moved to the sidecar module. ```yaml # build-parameters: # before-all: # - builder: custom # commands: # - npm install --production # - npx -p @sap/cds-dk cds build --production ``` ##### Add MTXS Flag to Java Module To switch the runtime module to `@sap/cds-mtxs`, you need to add the corresponding environment variable: ```yaml requires: ... - name: mtx-sidecar properties: CDS_MULTITENANCY_MTXS_ENABLED: true # Only required for cds-services version 2 CDS_MULTITENANCY_SIDECAR_URL: ~{url} ``` #### Adapt _mta.yaml_ to Use New Sidecar To enable the newly created sidecar, you need to change the path of your existing sidecar to the new path. You only need to adapt the path to `mtx/sidecar` and add a custom build section. ::: code-group ```yaml [mta.yaml] modules: - name: bookshop-mtx type: nodejs path: mtx/sidecar # adapted path build-parameters: # added build section builder: custom build-result: gen commands: - npm run build requires: - name: bookshop-srv parameters: memory: 256M disk-quota: 1G requires: - name: bookshop-auth - name: bookshop-db provides: - name: mtx-api properties: mtx-url: ${default-url} ``` ::: #### Add Workspace for Sidecar in Root package.json To make the `@sap/cds-mtxs` models part of the installation, add a workspace to the root `package.json` to include the sidecar dependencies. 
```json "workspaces": [ "mtx/sidecar" ] ``` ::: tip Freeze Sidecar Dependencies To prepare the build of the MTA archive (`mbt build`), you need to generate a `package-lock.json` for the sidecar by executing this in the project root: ```sh npm i --package-lock-only --prefix mtx/sidecar ``` ::: #### Adapt Build Tasks `cds add multitenancy` also adapts the build tasks in `.cdsrc.json` or `package.json`. You only need to remove the `mtx` build task. If your project uses the default project layout, all build tasks can be removed from the build configuration as follows: ```json { "build": { "target": "." }, "profiles": ["with-mtx-sidecar", "java"], "requires": { "multitenancy": true } } ``` #### Enable Extensibility If your project supports extensibility, you need to enable extensibility in your configuration. To do so, you only need to add `extensibility: true` to your cds configuration in `.cdsrc.json` or `package.json`. ```json "requires": { "multitenancy": true, "extensibility": true } ```
#### Security Adaptations

The scopes needed by extension developers have changed: `ExtendCDS` and `ExtendCDSdelete` have been replaced by `cds.ExtensionDeveloper`. Make sure to adapt all occurrences in your security configuration (`xs-security.json`). Ask customer admins and extension developers to add the new scope to their role collections. Also adjust the documentation of your SaaS application accordingly, if available.
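For orientation, here is a minimal sketch of how the new scope could be declared in `xs-security.json`. The `xsappname`, template name, and descriptions are placeholders, not taken from this guide; adapt them to your existing security descriptor:

```json
{
  "xsappname": "my-saas-app",
  "scopes": [
    {
      "name": "$XSAPPNAME.cds.ExtensionDeveloper",
      "description": "Develop and push extensions for the SaaS application"
    }
  ],
  "role-templates": [
    {
      "name": "ExtensionDeveloper",
      "description": "Extension development",
      "scope-references": ["$XSAPPNAME.cds.ExtensionDeveloper"]
    }
  ]
}
```

If your descriptor already defines role templates for `ExtendCDS`, replace their scope references rather than adding a parallel template.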
#### Handler Registration

A typical handler registration in `server.js` now looks like this:

```js
cds.on('served', async () => {
  const { 'cds.xt.SaasProvisioningService': provisioning } = cds.services
  const { 'cds.xt.DeploymentService': deployment } = cds.services
  await provisioning.prepend(() => {
    provisioning.on('UPDATE', 'tenant', async (req, next) => { ... })
    provisioning.on('dependencies', async (req, next) => { ... })
    ...
  })
  await deployment.prepend(() => {
    // previously this was `upgradeTenant`
    deployment.on('upgrade', async (req) => {
      // HDI container credentials are not yet available here
    })
    // previously this was `deployToDb`
    deployment.on('deploy', async (req) => {
      const { tenant, options: { container } } = req.data
      ...
    })
    ...
  })
})
```

Here's what has changed:

- `ProvisioningService` changed to `cds.xt.SaasProvisioningService`.
- `DeploymentService` changed to `cds.xt.DeploymentService`.
- Use `cds.on('served')` instead of `cds.on('mtx')`.

For Node.js, the `saas-registry` endpoints in `mta.yaml` need to be changed to `.../-/cds/saas-provisioning/...`:

```yaml
parameters:
  service: saas-registry
  config:
    appUrls:
      getDependencies: ~{mtx-api/mtx-url}/-/cds/saas-provisioning/dependencies
      onSubscription: ~{mtx-api/mtx-url}/-/cds/saas-provisioning/tenant/{tenantId}
```
#### Miscellaneous Configuration

`@sap/cds-mtx` offers some additional configuration that you can also set in `@sap/cds-mtxs`.

##### HDI Container Configuration

In `@sap/cds-mtx`, you can configure the HDI container creation as follows:

```json
"mtx": {
  "provisioning": {
    "lazymetadatacontainercreation": true,
    "container": {
      "provisioning_parameters": { "database_id": "" },
      "binding_parameters": { "key": "value" }
    },
    "metadatacontainer": {
      "provisioning_parameters": { "database_id": "" }
    }
  }
}
```

In `@sap/cds-mtxs`, you can do the same configuration for the `cds.xt.DeploymentService`:

```json
"requires": {
  "cds.xt.DeploymentService": {
    "lazyT0": true,
    "hdi": {
      "create": { "database_id": "" },
      "bind": { "key": "value" }
    },
    "for": {
      "t0": {
        "hdi": {
          "create": { "database_id": "" }
        }
      }
    }
  }
}
```

##### Extension Restrictions

This configuration lets you define which extensions are allowed. With `@sap/cds-mtx`:

```json
"mtx": {
  "extension-allowlist": [
    {
      "for": ["my.bookshop.Authors", "my.bookshop.Books"],
      "new-fields": 2
    },
    {
      "for": ["CatalogService"]
    }
  ]
}
```

With `@sap/cds-mtxs`, the same configuration has moved to the `cds.xt.ExtensibilityService` configuration:

```json
"requires": {
  "cds.xt.ExtensibilityService": {
    "extension-allowlist": [
      {
        "for": ["my.bookshop.Authors", "my.bookshop.Books"],
        "new-fields": 2
      },
      {
        "for": ["CatalogService"]
      }
    ]
  }
}
```

### Migrate Tenant Content of Existing Applications

Depending on the MTX features your existing application has used, you need to execute some steps to move your data to the persistence used by `@sap/cds-mtxs`.

#### Multitenancy Only

In case you only used multitenancy features such as subscription/unsubscription, you just need to make the [configuration changes described earlier](#adapt-project-configuration).

::: tip When does this scenario apply?
- Your application doesn't support extensibility.
- You don't need to read all tenant IDs or the tenant metadata using
`GET /-/cds/saas-provisioning/tenant/<tenant>` or
`GET /-/cds/saas-provisioning/tenant`.

The tenant metadata is the data that is sent to the MTX API by the SAP BTP SaaS Provisioning Service on subscription, similar to this:

```json
{
  "subscriptionAppId": "...",
  "subscriptionAppName": "...",
  "subscribedTenantId": "...",
  ...
}
```
:::

See [project configuration](#adapt-project-configuration).

#### Saving Subscription Metadata

If your application needs access to the tenant list or tenant metadata, you need to update this data for `@sap/cds-mtxs`.

::: tip When does this scenario apply?
- Your application doesn't support extensibility.
- Your application needs to read all tenant IDs or the tenant metadata using
`GET /-/cds/saas-provisioning/tenant/<tenant>` or
`GET /-/cds/saas-provisioning/tenant`.
:::

In order to copy the metadata from existing subscriptions to the new persistence of `@sap/cds-mtxs`, you need to run [a migration script](#run-the-migration-script) that comes with `@sap/cds-mtxs`.

#### Migration of Extensions

If your application supports extensibility, you also need to update the existing extensions for `@sap/cds-mtxs`. You can do this with the [same migration script](#run-the-migration-script).

::: tip When does this scenario apply?
- Your application supports extensibility.
:::

#### Run the Migration Script

The migration script is part of `@sap/cds-mtxs`. You can run it locally or during application deployment. Before running the script, you need to make the [configuration changes](#adapt-project-configuration) mentioned earlier.

##### Run the Migration Script Locally

The script has to run in the (Node.js) application environment resulting from `cds build --production` to correctly simulate the execution in the deployment environment. For Node.js applications, this is the `gen/srv` folder generated in the application root; for Java applications, it's the `gen` folder of the new `@sap/cds-mtxs` sidecar (`mtx/sidecar/gen`).

It also needs access to the application bindings. That means, when running locally, it has to [run in hybrid mode](../../advanced/hybrid-testing#run-with-service-bindings). You also need to add the `production` profile to ensure that the models are resolved correctly.

::: tip
Make sure that the sources you want to migrate have the exact same version on your local machine as the sources that are deployed to the `@sap/cds-mtx` application.
:::

Example:

```sh
cds migrate "*" --dry --profile hybrid,production --resolve-bindings
```

##### Options

To run the migration for all or a set of tenants, run:

```sh
cds migrate <tenant>[,<tenant>]|"*"
```

The option `--dry` allows you to perform a dry run, without changing the database content.
Keep in mind that, depending on the number of tenants, the script requires some time to run. This is important when you consider running it in combination with the application deployment.

If the migration was successful, tenants are marked as migrated. When running the migration a second time, these tenants are ignored. If you want to rerun the migration for already migrated tenants as well, use the `--force` parameter.

##### Save Existing Extension Projects

You can use the migration script to save the content of the subscribers' extension projects. With the `-d` parameter, you can specify a directory the script uses to store the existing, migrated extension projects.

```sh
cds migrate <tenant>[,<tenant>]|"*" -d <directory>
```

To access the saved extension projects, you need access to the file system, so the easiest way is to run the script locally.

##### Add the Migration Script as Cloud Foundry Task to mta.yaml

You can add the migration script as a `hook` to your Node.js server module (application or sidecar) in _mta.yaml_. For that, you can use the script `cds-mtx-migrate`, which also comes with `@sap/cds-mtxs` but doesn't require `@sap/cds-dk` to be installed.

Example:

```yaml
- name: bookshop-mt-sidecar
  type: nodejs
  path: mtx/sidecar
  ...
  hooks:
    - name: migrate-tenants
      type: task
      phases:
        # - blue-green.application.before-start.idle
        - deploy.application.before-start
      parameters:
        name: migration
        memory: 512M
        disk-quota: 768M
        command: cds-mtx-migrate "*"
```

See also [Module Hooks](https://help.sap.com/docs/btp/sap-business-technology-platform/module-hooks).

::: warning
In case you already run an upgrade as a task and your project supports extensions, make sure that the upgrade is run **AFTER** the migration. Otherwise, the content of extended tables can get lost.
:::

##### Advanced: Separate Extensions Based on Extension File Names

The concept of extensions has slightly changed with `@sap/cds-mtxs`.
Extension sources are no longer stored in the backend. Instead, each extension gets a _tag_, and the extension is stored as `csn` with the _tag_ as key.

When running the migration script, all extension files are compiled to one `csn` and stored with a default _tag_: `migrated`. You can change the _default tag_ by passing your _own tag_ using the `--tag` parameter:

```sh
cds migrate "*" -d migrated_projects --tag "mytag"
```

In addition, you can separate your extensions into several `csn` files with different tags. For example, if your original extension files follow a naming pattern, you can do so by passing the `--tagRule` parameter with a regular expression. Let's use the following extension project structure:

```zsh
old-bookshop-ext/
├── db/
│   ├── extension_id_1.cds
│   ├── extension_id_2.cds
│   ├── order_ext_id_1.cds
│   └── order_ext_id_2.cds
├── srv/
└── package.json
```

You can split your extensions as follows:

```sh
cds migrate "*" -d migrated_projects --tagRule "(?:ext_|extension_)(.*)\.cds"
```

As a result, you get two extensions with tags `id_1` and `id_2`. The _tag_ is taken from the first captured group of the regular expression.

::: tip Find the right regular expression
To verify that the result meets your expectations, you can make a dry run:

```sh
cds migrate "*" -d migrated_projects --tagRule "(?:ext_|extension_)(.*)\.cds" --dry
```

You can find the result in the folder _migrated_projects_.
:::

### Check Migration Result

To verify the result of the migration script, check the tenant's content in the HDI container. You can use any database client that can access SAP HANA databases.

#### Check Content Using SAP HANA Database Explorer

To see the content of an HDI container, you can [add the tenant container to the SAP HANA Database Explorer](https://help.sap.com/docs/HANA_CLOUD/a2cea64fa3ac4f90a52405d07600047b/4e2e8382f8484edba31b8b633005e937.html). You can find the migrated extensions in table `CDS_XT_EXTENSIONS`.
The table contains:

- the extensions as parsed `csn` strings in column **csn**
- the key column **tag**

![The screenshot shows the table that is explained in the accompanying text.](assets/db_explorer.png){.adapt}

## Migrated Extension Projects

As mentioned in [Save Existing Extension Projects](#save-existing-extension-projects), you can store existing extension projects locally. We recommend uploading the projects to a source repository (for example, GitHub), because with `@sap/cds-mtxs` the content of extension projects is no longer stored in the tenant database. With that setup, you can change and push the extension again later.

The content of extension projects is usually the property of the customer (subscriber). So, alternatively, the customer can [download](#download-of-migrated-extension-projects) the extension projects themselves and upload them to their own source repository.

### Adapt for Streamlined MTX

As described in the [extensibility guide](../extensibility/customization#start-ext-project), you usually start with an empty extension project and pull the base model of the application using [`cds pull`](../extensibility/customization#pull-base). When starting with a migrated extension project, you need to make some adaptations after running [`cds pull`](../extensibility/customization#pull-base).

Previously, extension projects used the full set of CDS files, whereas extension projects based on `@sap/cds-mtxs` use a compiled `index.csn` of the base model. This affects the references in the extension sources of the migrated project, so these references need to be adapted.

Recommended steps:

- Run [`cds pull`](../extensibility/customization#pull-base) to fetch the latest version of the base model as `index.csn`.
- Fix the references in your extension sources. All references to the base model must use the name specified in the `cds.extends` entry of the extension _package.json_, omitting any additional subfolders.
Example: `using sap.capire.bookshop from '_base/db/schema';` must be replaced by `using sap.capire.bookshop from 'base-model';`
You can see all broken references as error messages when using the CDS Editor. ### Download of Migrated Extension Projects As long as the metadata containers (`TENANT--META`) created by `@sap/cds-mtx` still exist, the customer extension projects can be downloaded using the CDS client. The [user](../extensibility/customization#cds-login) running the download command needs to have the scope `cds.ExtensionDeveloper` assigned: ```sh cds extend --download-migrated-projects ``` The command downloads an archive named `migrated_projects.tgz` that contains the existing extensions that are ready to be used with `@sap/cds-mtxs`.
# Extensibility

Learn here about intrinsic capabilities to extend your applications in verticalization and customization scenarios.

Extensibility of CAP applications is greatly fueled by **CDS Aspects**, which allow you to easily extend existing models with new fields, entities, relationships, or new or overridden annotations.

[→ Learn more about using CDS Aspects in the Domain Modeling guide](../domain-modeling#separation-of-concerns)

![This screenshot is explained in the accompanying text.](assets/extensibility.drawio.svg)

As illustrated in the graphic above, different parties can build and deploy CDS Aspects-based extensions:

- **Customizations** – Customers/subscribers of SaaS solutions need options to tailor these to their needs, again using CDS Aspects to add custom fields and entities.
- **Toggled Features** – SaaS providers can offer pre-built enhancement features, which can be switched on selectively per tenant using feature toggles, for example, specializations for selected industries.
- **Composition** – 3rd parties can provide pre-built extension packages for reuse, which customers can pick and compose into their own solutions.
- **Verticalization** – 3rd parties can provide verticalized versions of a given base application, which they can in turn operate as verticalized SaaS apps.
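To give a flavor of what such a CDS Aspects-based extension looks like, here is a minimal sketch. The entity and field names are made up for illustration and don't refer to a concrete application:

```cds
// Hypothetical base entity from the application's base model
using { my.bookshop.Books } from 'base-model';

// Add custom fields via an extension (CDS Aspects)
extend Books with {
  x_rating : Integer;
  x_notes  : String;
}

// Add or override annotations without touching the base model
annotate Books:x_rating with @title: 'Rating';
```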
The following guides give detailed information about each of these options.

# Extending SaaS Applications

## Introduction & Overview

Subscribers (customers) of SaaS solutions frequently need to tailor these to their specific needs, for example, by adding specific extension fields and entities. All CAP-based applications intrinsically support such **SaaS extensions** out of the box. The overall process is depicted in the following figure:

![The graphic shows the three parts that are also discussed in this guide. Each part has its steps. The first part is the one of the SaaS provider. As SaaS provider, you need to deploy an extensible application and provide a guide that explains how to extend your application. In addition, the SaaS provider should provide a project template for extension projects. The next part is for the SaaS customer. In this role, you need to set up a tenant landscape for your extension, subscribe to the application you want to extend, and authorize the extension developers. The last part is for the extension developer. As such, you start an extension project, develop and test your extension, and then activate it.](assets/process_SAP_BTP.drawio.svg)

In this guide, you will learn the following:

- How to enable extensibility as a **SaaS provider**.
- How to develop SaaS extensions as a **SaaS customer**.

## Prerequisites {#prerequisites}

Before we start, you'll need a **CAP-based [multitenant SaaS application](../multitenancy/)** that you can modify and deploy.
::: tip Jumpstart
You can download the ready-to-use [Orders Management application](https://github.com/SAP-samples/cloud-cap-samples/tree/main/orders):

```sh
git clone https://github.com/SAP-samples/cloud-cap-samples
cd cloud-cap-samples/orders
cds add multitenancy
```

Also, ensure you have the latest version of `@sap/cds-dk` installed globally:

```sh
npm update -g @sap/cds-dk
```
:::

## As a SaaS Provider { #prep-as-provider }

CAP provides intrinsic extensibility, which means all your entities and services are extensible by default. Your SaaS app becomes the **base app** for extensions by your customers, and your data model the **base model**.

### 1. Enable Extensibility

Extensibility is enabled by running this command in your project root:

```sh
cds add extensibility
```

::: details Essentially, this automates the following steps…

1. It adds an `@sap/cds-mtxs` package dependency:

   ```sh
   npm add @sap/cds-mtxs
   ```

2. It switches on `cds.requires.extensibility: true` in your _package.json_:

   ```json [package.json]
   {
     "name": "@capire/orders",
     "version": "1.0.0",
     "dependencies": {
       "@capire/common": "*",
       "@sap/cds": ">=5",
       "@sap/cds-mtxs": "^1"
     },
     "cds": {
       "requires": {
         "extensibility": true // [!code focus]
       }
     }
   }
   ```
:::

If `@sap/cds-mtxs` is newly added to your project, install the dependencies:

```sh
npm i
```

### 2. Restrict Extension Points { #restrictions }

Normally, you'll want to restrict which services or entities your SaaS customers are allowed to extend, and to what degree they may do so. Take a look at the following configuration:

::: code-group
```jsonc [mtx/sidecar/package.json]
{
  "cds": {
    "requires": {
      "cds.xt.ExtensibilityService": {
        "element-prefix": ["x_"],
        "extension-allowlist": [
          {
            "for": ["sap.capire.orders"],
            "kind": "entity",
            "new-fields": 2
          },
          {
            "for": ["OrdersService"],
            "new-entities": 2
          }
        ]
      }
    }
  }
}
```
:::

This enforces the following restrictions:

- All new elements have to start with `x_` → to avoid naming conflicts.
- Only entities in the namespace `sap.capire.orders` can be extended, with a maximum of 2 new fields allowed.
- Only the `OrdersService` can be extended, with a maximum of 2 new entities allowed.

[Learn more about extension restrictions.](../multitenancy/mtxs#extensibility-config){.learn-more}

### 3. Provide Template Projects {#templates}

To jumpstart your customers with extension projects, it's beneficial to provide a template project. Including this template with your application and making it available as a downloadable archive not only simplifies their work but also enhances their experience.

#### Create an Extension Project (Template)

Extension projects are standard CAP projects extending the SaaS application. Create one for your SaaS app following these steps:

1. Create a new CAP project — `orders-ext` in our walkthrough:

   ```sh
   cd ..
   cds init orders-ext
   code orders-ext # open in VS Code
   ```

2. Add this to your _package.json_:

   ::: code-group
   ```jsonc [package.json]
   {
     "name": "@capire/orders-ext",
     "extends": "@capire/orders",
     "workspaces": [ ".base" ]
   }
   ```
   :::

   - `name` identifies the extension within a SaaS subscription; extension developers can choose the value freely.
   - `extends` is the name by which the extension model will refer to the base model. This must be a valid npm package name, as it will be used by `cds pull` as a package name for the base model. It doesn't have to be a unique name, nor does it have to exist in a package registry like npmjs, as it will only be used locally.
   - `workspaces` is a list of folders, including the one where the base model is stored. `cds pull` will add this property automatically if not already present.

::: details Uniqueness of base-model name…
You use the `extends` property as the name of the base model in your extension project. Currently, it's not an issue if the base model name isn't unique. However, to prevent potential conflicts, we recommend using a unique name for the base model.
:::

#### Add Sample Content

Create a new file _app/extensions.cds_ and fill in this content:

::: code-group
```cds [app/extensions.cds]
namespace x_orders.ext; // only applies to new entities defined below

using { OrdersService, sap.capire.orders.Orders } from '@capire/orders';

extend Orders with {
  x_new_field : String;
}

// -------------------------------------------
// Fiori Annotations

annotate Orders:x_new_field with @title: 'New Field';
annotate OrdersService.Orders with @UI.LineItem: [
  ... up to { Value: OrderNo },
  { Value: x_new_field },
  ...
];
```
:::

The name of the _.cds_ file can be freely chosen. Yet, for the build system to work out of the box, it must be in either the `app`, `srv`, or `db` folder.

[Learn more about project layouts.](../../get-started/#project-structure){.learn-more}

::: tip Keep it simple
We recommend putting all extension files into `./app` and removing `./srv` and `./db` from extension projects. You may want to consider [separating concerns](../domain-modeling#separation-of-concerns) by putting all Fiori annotations into a separate _./app/fiori.cds_.
:::

#### Add Test Data

To support [quick-turnaround tests of extensions](#test-locally) using `cds watch`, add some test data. In your template project, create a file _test/data/sap.capire.orders-Orders.csv_ like this:

::: code-group
```csv [test/data/sap.capire.orders-Orders.csv]
ID;createdAt;buyer;OrderNo;currency_code
7e2f2640-6866-4dcf-8f4d-3027aa831cad;2019-01-31;john.doe@test.com;1;EUR
64e718c9-ff99-47f1-8ca3-950c850777d4;2019-01-30;jane.doe@test.com;2;EUR
```
:::

#### Add a Readme

Include additional documentation for the extension developer in a _README.md_ file inside the template project.

::: code-group
```md [README.md]
# Getting Started

Welcome to your extension project to `@capire/orders`.
It contains these folders and files, following our recommended project layout:

| File or Folder | Purpose                        |
|----------------|--------------------------------|
| `app/`         | all extensions content is here |
| `test/`        | all test content is here       |
| `package.json` | project configuration          |
| `readme.md`    | this getting started guide     |

## Next Steps

- `cds pull` the latest models from the SaaS application
- edit [`./app/extensions.cds`](./app/extensions.cds) to add your extensions
- `cds watch` your extension in local test-drives
- `cds push` your extension to a **test** tenant
- `cds push` your extension to a **prod** tenant

## Learn More

Learn more at https://cap.cloud.sap/docs/guides/extensibility/customization.
```
:::

### 4. Provide Extension Guides {#guide}

You should provide documentation that guides your customers through the steps to add extensions. This guide should provide application-specific information along the lines of the walkthrough steps presented in this guide. Here's a rough checklist of what this guide should cover:

- [How to set up test tenants](#prepare-an-extension-tenant) for extension projects
- [How to assign requisite roles](#prepare-an-extension-tenant) to extension developers
- [How to start extension projects](#start-ext-project) from [provided templates](#templates)
- [How to find deployed app URLs](#pull-base) of test and prod tenants
- [What can be extended?](#about-extension-models) → which services, entities, ...
- [With enclosed documentation](../../cds/cdl#doc-comment) for the models of these services and entities.

### 5. Deploy Application

Before deploying your SaaS application to the cloud, you can [test-drive it locally](../multitenancy/#test-locally). Prepare this by going back to your app with `cd orders`.

With your application enabled and prepared for extensibility, you are ready to deploy the application as described in the [Deployment Guide](../deployment/).
## As a SaaS Customer {#prep-as-operator}

The following sections provide step-by-step instructions on adding extensions. All steps are based on our Orders Management sample, which can be [started locally for testing](../multitenancy/#test-locally).

::: details On BTP…
To extend a SaaS app deployed to BTP, you'll need to subscribe to it [through the BTP cockpit](../multitenancy/#subscribe). Refer to the [Deployment Guide](../deployment/to-cf) for more details on remote deployments. Also, you have to replace the local URLs used in `cds` commands later with the URL of the deployed App Router. Use a passcode to authenticate and authorize yourself. Refer to the section on [`cds login`](#cds-login) for a simplified workflow.
:::

### 1. Subscribe to SaaS App

It all starts with a customer subscribing to a SaaS application. In a productive application, this is usually triggered by the platform to which the customer is logged on. The platform uses a technical user to call the application's subscription API. In your local setup, you can simulate this with a [mock user](../../node.js/authentication#mock-users) `yves`.

1. In a new terminal, subscribe as tenant `t1`:

   ```sh
   cds subscribe t1 --to http://localhost:4005 -u yves:
   ```

   Note that the URL used for the subscription command is the sidecar URL, if a sidecar is used.

   Learn more about tenant subscriptions [via the MTX API for local testing](../multitenancy/mtxs#put-tenant).{.learn-more}

2. Verify that it worked by opening the [Orders Management Fiori UI](http://localhost:4004/orders/index.html#manage-orders) in a **new private browser window** and logging in as `carol`, who is assigned to tenant `t1`.

   ![A screenshot of an SAP Fiori UI on the orders management example. It shows a table with the columns order number, customer, currency and date. The table contains two orders.](assets/image-20221004054556898.png){.mute-dark}

### 2.
Prepare an Extension Tenant {#prepare-an-extension-tenant} In order to test-drive and validate the extension before activating to production, you'll first need to set up a test tenant. This is how you simulate it in your local setup: 1. Set up a **test tenant** `t1-ext` ```sh cds subscribe t1-ext --to http://localhost:4005 -u yves: ``` 2. Assign **extension developers** for the test tenant. > As you're using mocked auth, simulate this step by adding the following to the SaaS app's _package.json_, assigning user `bob` as extension developer for tenant `t1-ext`: ::: code-group ```json [package.json] { "cds": { "requires": { "auth": { "users": { "bob": { "tenant": "t1-ext", "roles": ["cds.ExtensionDeveloper"] } } } } } } ``` ::: ### 3. Start an Extension Project {#start-ext-project} Extension projects are standard CAP projects extending the subscribed application. SaaS providers usually provide **application-specific templates**, which extension developers can download and open in their editor. You can therefore use the extension template created in your walkthrough [as SaaS provider](#templates). Open the `orders-ext` folder in your editor. Here's how you do it using VS Code: ```sh code ../orders-ext ``` ![A screenshot of a readme.md file as it's described in the previous "Add a readme" section of this guide.](assets/orders-ext.png){.ignore-dark} ### 4. Pull the Latest Base Model {#pull-base} Next, you need to download the latest base model. ```sh cds pull --from http://localhost:4005 -u bob: ``` > Run `cds help pull` to see all available options. This downloads the base model as a package into an npm workspace folder `.base`. The actual folder name is taken from the `workspaces` configuration. It also prepares the extension _package.json_ to reference the base model, if the extension template does not already do so. ::: details See what `cds pull` does… 1. Gets the base-model name from the extension _package.json_, property `extends`. 
If the previous value is not a valid npm package name, it gets changed to `"base-model"`. In this case, existing source files may have to be manually adapted. `cds pull` will notify you in such cases. 2. It fetches the base model from the SaaS app. 3. It saves the base model in a subdirectory `.base` of the extension project. This includes file _.base/package.json_ describing the base model as an npm package, including a `"name"` property set to the base-model name. 4. In the extension _package.json_: - It configures `.base` as an npm workspace folder. - It sets the `extends` property to the base-model name. ::: ### 5. Install the Base Model To make the downloaded base model ready for use in your extension project, install it as a package: ```sh npm install ``` This will link the base model in the workspace folder to the subdirectory `node_modules/@capire/orders` (in this example). ### 6. Write the Extension {#write-extension } Edit the file _app/extensions.cds_ and replace its content with the following: ::: code-group ```cds [app/extensions.cds] namespace x_orders.ext; // for new entities like SalesRegion below using { OrdersService, sap, sap.capire.orders.Orders } from '@capire/orders'; extend Orders with { // 2 new fields.... x_priority : String enum {high; medium; low} default 'medium'; x_salesRegion : Association to x_SalesRegion; } entity x_SalesRegion : sap.common.CodeList { // Value Help key code : String(11); } // ------------------------------------------- // Fiori Annotations annotate Orders:x_priority with @title: 'Priority'; annotate x_SalesRegion:name with @title: 'Sales Region'; annotate OrdersService.Orders with @UI.LineItem: [ ... up to { Value: OrderNo }, { Value: x_priority }, { Value: x_salesRegion.name }, ... 
]; ``` ::: [Learn more about what you can do in CDS extension models](#about-extension-models){.learn-more} ::: tip Make sure **no syntax errors** are shown in the [CDS editor](../../tools/cds-editors#vscode) before going on to the next steps. ::: ### 7. Test-Drive Locally {#test-locally } To conduct an initial test of your extension, run it locally with `cds watch`: ```sh cds watch --port 4006 ``` > This starts a local Node.js application server serving your extension along with the base model and supplied test data stored in an in-memory database.
> It does not include any custom application logic though.

#### Add Local Test Data

To improve local test drives, you can add _local_ test data for extensions. Edit the template-provided file `test/data/sap.capire.orders-Orders.csv` and add data for the new fields as follows:

::: code-group
```csv [test/data/sap.capire.orders-Orders.csv]
ID;createdAt;buyer;OrderNo;currency_code;x_priority;x_salesRegion_code
7e2f2640-6866-4dcf-8f4d-3027aa831cad;2019-01-31;john.doe@test.com;1;EUR;high;EMEA
64e718c9-ff99-47f1-8ca3-950c850777d4;2019-01-30;jane.doe@test.com;2;EUR;low;APJ
```
:::

Create a new file `test/data/x_orders.ext-x_SalesRegion.csv` with this content:

::: code-group
```csv [test/data/x_orders.ext-x_SalesRegion.csv]
code;name;descr
AMER;Americas;North, Central and South America
EMEA;Europe, the Middle East and Africa;Europe, the Middle East and Africa
APJ;Asia Pacific and Japan;Asia Pacific and Japan
```
:::

#### Verify the Extension

Verify that your extensions are applied correctly by opening the [Orders Fiori Preview](http://localhost:4006/$fiori-preview/OrdersService/Orders#preview-app) in a **new private browser window**, logging in as `bob`, and checking that the columns _Priority_ and _Sales Region_ are filled as in the following screenshot:

![This screenshot is explained in the accompanying text.](assets/image-20221004080722532.png){.mute-dark}

> Note: The screenshot includes the local test data added as explained above. This test data is only deployed to the local sandbox and is not processed during activation to the productive environment.

### 8. Push to Test Tenant {#push-extension}

Let's push your extension to the deployed application in your test tenant for final verification before pushing to production:

```sh
cds push --to http://localhost:4005 -u bob:
```

::: tip
`cds push` runs a `cds build` on your extension project automatically.
:::

::: details Prepacked extensions
To push a ready-to-use extension archive (.tar.gz or .tgz), run `cds push `.
The argument can be a local path to the archive or a URL to download it from. Run `cds help push` to see all available options.
:::

> You pushed the extension with user `bob`, which in your local setup ensures it is sent to your test tenant `t1-ext`, not the production tenant `t1`.

::: details Building extensions
`cds build` compiles the extension model and validates the constraints defined by the SaaS application, for example, it checks whether the entities are extendable. It fails in case of compilation or validation errors, which in turn aborts `cds push`. _Warning_ messages related to the SaaS application's base model are reclassified as _info_ messages and are therefore not shown by default. Execute `cds build --log-level info` to display all messages, though they are usually not of interest to the extension developer.
:::

#### Verify the Extension {#test-extension}

Verify that your extensions are applied correctly by opening the [Order Management UI](http://localhost:4004/orders/index.html#manage-orders) in a **new private browser window**, logging in as `bob`, and checking that the columns _Priority_ and _Sales Region_ are displayed as in the following screenshot. Also, check that there's content with a proper label in the _Sales Region_ column.

![The screenshot is explained in the accompanying text.](assets/image-20221004081826167.png){.mute-dark}

### 9. Add Data {#add-data}

After pushing your extension, you have seen that the column for _Sales Region_ was added, but is not filled. To change this, you need to provide initial data with your extension. Copy the data file that you created before from `test/data/` to `db/data/` and push the extension again.

[Learn more about adding data to extensions](#add-data-to-extensions) {.learn-more}

### 10. Activate the Extension {#push-to-prod}

Finally, after all tests, verifications, and approvals are in place, you can push the extension to your production tenant:

```sh
cds push --to http://localhost:4005 -u carol:
```

> You pushed the extension with [mock user](../../node.js/authentication#mock-users) `carol`, which in your local setup ensures it is sent to your **production** tenant `t1`.

::: tip Simplify your workflow with `cds pull` and `cds push`
Particularly when extending deployed SaaS apps, refer to [`cds login`](#cds-login) to save project settings and authentication data for later reuse.
:::

# Appendices

## Configuring App Router {#app-router}

In a deployed multitenant SaaS application, you need to set up the App Router correctly. This setup lets the CDS command-line utilities connect to the MTX Sidecar without needing to authenticate again. If you haven't used both the `cds add multitenancy` and `cds add approuter` commands, it's likely that you'll need to tweak the App Router configuration. You can do this by adding a route to the MTX Sidecar:

```json [app/router/xs-app.json]
{
  "routes": [
    {
      "source": "^/-/cds/.*",
      "destination": "mtx-api",
      "authenticationType": "none"
    }
  ]
}
```

This ensures that the App Router doesn't try to authenticate requests to the MTX Sidecar, which would fail. Instead, the Sidecar authenticates requests itself.

## About Extension Models

This section explains in detail the possibilities that the _CDS_ language provides for extension models. All names are subject to [extension restrictions defined by the SaaS app](../multitenancy/mtxs#extensibility-config).

### Extending the Data Model

Using [the extend directive](../../cds/cdl#extend), it is straightforward to extend the application with the following new artifacts:

- Extend existing entities with new (simple) fields.
- Create new entities.
- Extend existing entities with new associations.
- Add compositions to existing or new entities.
- Supply new or existing fields with default values, range checks, or value list (enum) checks. - Define a mandatory check on new or existing fields. - Define new unique constraints on new or existing entities. ```cds using {sap.capire.bookshop, sap.capire.orders} from '@capire/fiori'; using { cuid, managed, Country, sap.common.CodeList } from '@sap/cds/common'; namespace x_bookshop.extension; // extend existing entity extend orders.Orders with { x_Customer : Association to one x_Customers; x_SalesRegion : Association to one x_SalesRegion; x_priority : String @assert.range enum {high; medium; low} default 'medium'; x_Remarks : Composition of many x_Remarks on x_Remarks.parent = $self; } // new entity - as association target entity x_Customers : cuid, managed { email : String; firstName : String; lastName : String; creditCardNo : String; dateOfBirth : Date; status : String @assert.range enum {platinum; gold; silver; bronze} default 'bronze'; creditScore : Decimal @assert.range: [ 1.0, 100.0 ] default 50.0; PostalAddresses : Composition of many x_CustomerPostalAddresses on PostalAddresses.Customer = $self; } // new unique constraint (secondary index) annotate x_Customers with @assert.unique: { email: [ email ] } { email @mandatory; // mandatory check } // new entity - as composition target entity x_CustomerPostalAddresses : cuid, managed { Customer : Association to one x_Customers; description : String; street : String; town : String; country : Country; } // new entity - as code list entity x_SalesRegion: CodeList { key regionCode : String(11); } // new entity - as composition target entity x_Remarks : cuid, managed { parent : Association to one orders.Orders; number : Integer; remarksLine : String; } ``` ::: tip This example provides annotations for business logic handled automatically by CAP as documented in [_Providing Services_](../providing-services#input-validation). 
:::

Learn more about the [basic syntax of the `annotate` directive](../../cds/cdl#annotate) {.learn-more}

### Extending the Service Model

In the existing `OrdersService`, the new entities `x_CustomerPostalAddresses` and `x_Remarks` are automatically included, since they are targets of the corresponding _compositions_. The new entities `x_Customers` and `x_SalesRegion` are [autoexposed](../providing-services#auto-exposed-entities) in a read-only way as [CodeLists](../../cds/common#aspect-codelist). Only if you wanted to _change_ this would you need to expose them explicitly:

```cds
using { OrdersService } from '@capire/fiori';

extend service OrdersService with {
  entity x_Customers   as projection on extension.x_Customers;
  entity x_SalesRegion as projection on extension.x_SalesRegion;
}
```

### Extending UI Annotations

The following snippet demonstrates which UI annotations you need in order to expose your extensions to the SAP Fiori elements UI. Add UI annotations for the completely new entities `x_Customers, x_CustomerPostalAddresses, x_SalesRegion, x_Remarks`:

```cds
using { OrdersService } from '@capire/fiori';

// new entity -- draft enabled
annotate OrdersService.x_Customers with @odata.draft.enabled;

// new entity -- titles
annotate OrdersService.x_Customers with {
  ID           @( UI.Hidden, Common : {Text : email} );
  firstName    @title : 'First Name';
  lastName     @title : 'Last Name';
  email        @title : 'Email';
  creditCardNo @title : 'Credit Card No';
  dateOfBirth  @title : 'Date of Birth';
  status       @title : 'Status';
  creditScore  @title : 'Credit Score';
}

// new entity -- titles
annotate OrdersService.x_CustomerPostalAddresses with {
  ID          @( UI.Hidden, Common : {Text : description} );
  description @title : 'Description';
  street      @title : 'Street';
  town        @title : 'Town';
  country     @title : 'Country';
}

// new entity -- titles
annotate x_SalesRegion : regionCode with @(
  title : 'Region Code',
  Common: { Text: name, TextArrangement: #TextOnly }
);

// new entity in service -- UI
annotate
OrdersService.x_Customers with @(UI : { HeaderInfo : { TypeName : 'Customer', TypeNamePlural : 'Customers', Title : { Value : email} }, LineItem : [ {Value : firstName}, {Value : lastName}, {Value : email}, {Value : status}, {Value : creditScore} ], Facets : [ {$Type: 'UI.ReferenceFacet', Label: 'Main', Target : '@UI.FieldGroup#Main'}, {$Type: 'UI.ReferenceFacet', Label: 'Customer Postal Addresses', Target: 'PostalAddresses/@UI.LineItem'} ], FieldGroup #Main : {Data : [ {Value : firstName}, {Value : lastName}, {Value : email}, {Value : status}, {Value : creditScore} ]} }); // new entity -- UI annotate OrdersService.x_CustomerPostalAddresses with @(UI : { HeaderInfo : { TypeName : 'CustomerPostalAddress', TypeNamePlural : 'CustomerPostalAddresses', Title : { Value : description } }, LineItem : [ {Value : description}, {Value : street}, {Value : town}, {Value : country_code} ], Facets : [ {$Type: 'UI.ReferenceFacet', Label: 'Main', Target : '@UI.FieldGroup#Main'} ], FieldGroup #Main : {Data : [ {Value : description}, {Value : street}, {Value : town}, {Value : country_code} ]} }) {}; // new entity -- UI annotate OrdersService.x_SalesRegion with @( UI: { HeaderInfo: { TypeName : 'Sales Region', TypeNamePlural : 'Sales Regions', Title : { Value : regionCode } }, LineItem: [ {Value: regionCode}, {Value: name}, {Value: descr} ], Facets: [ {$Type: 'UI.ReferenceFacet', Label: 'Main', Target: '@UI.FieldGroup#Main'} ], FieldGroup#Main: { Data: [ {Value: regionCode}, {Value: name}, {Value: descr} ] } } ) {}; // new entity -- UI annotate OrdersService.x_Remarks with @( UI: { HeaderInfo: { TypeName : 'Remark', TypeNamePlural : 'Remarks', Title : { Value : number } }, LineItem: [ {Value: number}, {Value: remarksLine} ], Facets: [ {$Type: 'UI.ReferenceFacet', Label: 'Main', Target: '@UI.FieldGroup#Main'} ], FieldGroup#Main: { Data: [ {Value: number}, {Value: remarksLine} ] } } ) {}; ``` #### Extending Array Values Extend the existing UI annotation of the existing `Orders` entity 
with new extension fields and new facets, using the special [syntax for array-valued annotations](../../cds/cdl#extend-array-annotations).

```cds
// extend existing entity Orders with new extension fields and new composition
annotate OrdersService.Orders with @(
  UI: {
    LineItem: [
      ... up to { Value: OrderNo },  // head
      {Value: x_Customer_ID, Label:'Customer'},                //> extension field
      {Value: x_SalesRegion.regionCode, Label:'Sales Region'}, //> extension field
      {Value: x_priority, Label:'Priority'},                   //> extension field
      ...,  // rest
    ],
    Facets: [...,
      {$Type: 'UI.ReferenceFacet', Label: 'Remarks', Target: 'x_Remarks/@UI.LineItem'} // new composition
    ],
    FieldGroup#Details: {
      Data: [...,
        {Value: x_Customer_ID, Label:'Customer'},                // extension field
        {Value: x_SalesRegion.regionCode, Label:'Sales Region'}, // extension field
        {Value: x_priority, Label:'Priority'}                    // extension field
      ]
    }
  }
);
```

The advantage of this syntax is that you don't have to replicate the complete array content of the existing UI annotation; you only have to add the delta.

#### Semantic IDs

Finally, exchange the display ID (which by default is a GUID) of the new `x_Customers` entity with a human-readable text, which in your case is given by the unique property `email`.
```cds
// new field in existing service -- exchange ID with text
annotate OrdersService.Orders:x_Customer with @(
  Common: {
    // show email, not ID, for Customer in the context of Orders
    Text: x_Customer.email, TextArrangement: #TextOnly,
    ValueList: {
      Label: 'Customers',
      CollectionPath: 'x_Customers',
      Parameters: [
        { $Type: 'Common.ValueListParameterInOut',
          LocalDataProperty: x_Customer_ID, ValueListProperty: 'ID' },
        { $Type: 'Common.ValueListParameterDisplayOnly',
          ValueListProperty: 'email' }
      ]
    }
  }
);
```

### Localizable Texts

To externalize translatable texts, use the same approach as for standard applications, that is, create an _i18n/i18n.properties_ file:

::: code-group
```properties [i18n/i18n.properties]
SalesRegion_name_col = Sales Region
Orders_priority_col = Priority
...
```
:::

Then replace texts with the corresponding `{i18n>...}` keys from the properties file, and make sure to run `cds build` again. Properties files must be placed in the `i18n` folder. If an entry with the same key exists in the SaaS application, the translation of the extension takes precedence.

> This feature is available with `@sap/cds` 6.3.0 or higher.

[Learn more about localization](../i18n){.learn-more}

## Simplify Your Workflow With `cds login` {#cds-login}

As a SaaS extension developer, you have the option to log in to the SaaS app and thus authenticate only once. This allows you to re-run `cds pull` and `cds push` against the app without repeating the same options over and over again – and you can avoid generating a passcode every time. Achieve this by running `cds login` once. This command fetches tokens using OAuth2 from XSUAA and saves them for later use. For convenience, further settings for the current project are also stored, so you don't have to provide them again (such as the app URL and tenant subdomain).

### Where Tokens Are Stored

Tokens are saved in the desktop keyring by default (libsecret on Linux, Keychain Access on macOS, or Credential Vault on Windows).
Using the keyring is more secure because, depending on the platform, you can lock and unlock it, and data saved by `cds login` may be inaccessible to other applications you run.

> For details, refer to the documentation of the keyring implementation used on your development machine.

`cds login` therefore uses the keyring by default. To enable this, you need to install an additional Node.js module, [_keytar_](https://www.npmjs.com/package/keytar):

```sh
npm i -g keytar
```

If you decide against using the keyring, you can request `cds login` to write to a plain-text file by appending `--plain`.

::: tip Switching to and from plain-text
Once usage of the `--plain` option changes for a given SaaS app, `cds login` migrates pre-existing authentication data from the previous storage to the new storage.
:::

::: warning Handle secrets with caution
Local storage of authentication data incurs a security risk: a potentially malicious local process might be able to perform actions you're authorized for with the SaaS app, as your tenant.
:::

> In SAP Business Application Studio, plain-text storage is enforced when using `cds login`, since no desktop keyring is available. The plain-text file resides in encrypted storage.

### How to Login

If you work with Cloud Foundry (CF) and have the `cf` client installed, you can call `cds login` with just a passcode. The command runs the `cf` client to determine suitable apps from the org and space that you're logged in to. This allows you to interactively choose the login target from a list of apps and their respective URLs. To log in to the SaaS app in this way, first change to the folder you want to use for your extension project.
Then run the following command (the one-time passcode will be prompted interactively if omitted): ```sh cds login [-p ] ``` :::details Advanced options If you need to call `cds login` automatically without user interaction, you may use the [Client Credentials](https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/) grant, which does not require a passcode. You can then omit the `-p ` option but will instead have to provide the Client ID and a specific form of client secret to authenticate. Obtain these two from the `VCAP_SERVICES` environment variable in your deployed MTX server (`@sap/cds-mtxs`). In the JSON value, navigate to `xsuaa[0].credentials`. - If you find a `key` property (Private Key of the Client Certificate), XSUAA is configured to use X.509 (mTLS). Use this Private Key by specifying `cds login … -m [:key]`. - Otherwise, find the Client Secret in the `clientsecret` property and use `cds login … -c [:]` in an analogous way. **Note:** The `key` and `clientsecret` properties are secrets that should not be stored in an unsafe location in productive scenarios! [Learn more about environment variables / `VCAP_Services`.](/node.js/cds-connect#bindings-in-cloud-platforms){.learn-more} If you leave out the respective secret (enclosed in square brackets above), you will be prompted to enter it interactively. This can be used to feed the secret from the environment to `cds login` via standard input, like so: ```sh echo $MY_KEY | cds login … -m ``` ::: For a synopsis of all options, run `cds help login`. :::details Login without CF CLI If you don't work with CF CLI, additionally provide the application URL and the subdomain as these can't be determined automatically: ```sh cds login [] -s ``` The `` is the URL that you get in your subscriber account when you subscribe to an application. 
You find the `` in the overview page of your subaccount in the SAP BTP Cockpit: ![Simplified UI, showing where to find the subdomain in the SAP BTP cockpit.](assets/subdomain-cockpit-sui.png) ::: ::: tip Multiple targets Should you later want to extend other SaaS applications, you can log in to them as well, and it won't affect your other logins. Logins are independent of each other, and `cds pull` etc. will be authenticated based on the requested target. ::: ### Simplified Workflow Once you've logged in to the SaaS app, you can omit the passcode, the app URL, and the tenant subdomain, so in your development cycle you can run: ```sh cds pull # develop your extension cds push # develop your extension cds push # … ``` ::: tip Override saved values with options For example, run `cds push -s -p ` to activate your extension in another subdomain. This usage of `cds push` may be considered a kind of cross-client transport mechanism. ::: ### Refreshing Tokens Tokens have a certain lifespan, after which they lose validity. To save you the hassle, `cds login` also stores the refresh token sent by XSUAA alongside the token (depending on configuration) and uses it to automatically renew the token after it has expired. By default, refresh tokens expire much later than the token itself, allowing you to work without re-entering passcodes for multiple successive days. ### Cleaning Up To remove locally saved authentication data and optionally, the project settings, run `cds logout` inside your extension project folder. Append `--delete-settings` to include saved project settings for the current project folder as well. `cds help logout` is available for more details. ::: tip When your role-collection assignments have changed, run `cds logout` followed by `cds login` in order to fetch a token containing the new set of scopes. ::: ### Debugging In case something unexpected happens, set the variable `DEBUG=cli` in your shell environment before re-running the corresponding command. 
::: code-group
```sh [Mac/Linux]
export DEBUG="cli"
```
```cmd [Windows]
set DEBUG=cli
```
```powershell [Powershell]
Set-Variable -Name "DEBUG" -Value "cli"
```
:::

## Add Data to Extensions

As described in [Add Data](#add-data), you can provide local test data and initial data for your extension. In this guide, you copied local data from the `test/data` folder into the `db/data` folder. When using SQLite, this step can be further simplified. For `sap.capire.orders-Orders.csv`, just add the _new_ columns along with the primary key:

::: code-group
```csv [sap.capire.orders-Orders.csv]
ID;x_priority;x_salesRegion_code
7e2f2640-6866-4dcf-8f4d-3027aa831cad;high;EMEA
64e718c9-ff99-47f1-8ca3-950c850777d4;low;APJ
```
:::

::: warning _❗ Warning_
Adding data only for the missing columns doesn't work when using SAP HANA as a database. With SAP HANA, you always have to provide the full set of data.
:::

# Feature Toggles

{{$frontmatter?.synopsis}}

## Introduction and Overview

CAP feature-toggled aspects allow SaaS providers to create pre-built features as CDS models, extending the base models with new fields, entities, as well as annotations for SAP Fiori UIs. These features can be assigned to individual SaaS customers (tenants), users, and requests, and are then activated dynamically at runtime, as illustrated in the following figure.

![This graphic shows an inbound request passing authentication and then the CAP runtime queries the database as well as the model provider service to know which features belong to the inbound request.](./assets/feature-toggles.drawio.svg)

### Get `cloud-cap-samples-java` for Step-by-Step Exercises {.java}

The following steps will extend the [CAP samples for Java](https://github.com/SAP-samples/cloud-cap-samples-java) app to demonstrate how features can extend data models, services, as well as SAP Fiori UIs.
If you want to exercise these steps, get [cloud-cap-samples-java](https://github.com/SAP-samples/cloud-cap-samples-java) before, and prepare to extend the *Fiori* app:
```sh git clone https://github.com/SAP-samples/cloud-cap-samples-java cd cloud-cap-samples-java mvn clean install ```
Now, open the app in your editor, for example, for VS Code type:

```sh
code .
```

### Get `cap/samples` for Step-By-Step Exercises {.node}

The following steps will extend the [cap/samples/fiori](https://github.com/sap-samples/cloud-cap-samples/blob/main/fiori) app to demonstrate how features can extend data models, services, as well as SAP Fiori UIs. If you want to exercise these steps, get [cap/samples](https://github.com/sap-samples/cloud-cap-samples) before, and prepare to extend the *fiori* app:

```sh
git clone https://github.com/sap-samples/cloud-cap-samples samples
cd samples
npm install
```

Now, open the `fiori` app in your editor, for example, like this if you're using VS Code on macOS:

```sh
code fiori
```

## Enable Feature Toggles {.node}

### Add `@sap/cds-mtxs` Package Dependency

For example, like this:

```sh
npm add @sap/cds-mtxs
```

### Switch on `cds.requires.toggles`

Switch on feature toggle support by adding `cds.requires.toggles: true` to your configuration.

## Adding Features in CDS

Add a subfolder per feature to the *fts* folder and put `.cds` files into it. The name of the folder is the name you later use in feature toggles to switch the feature on/off. In our samples app, we add two features `isbn` and `reviews` as depicted in the following screenshot:

![This screenshot is explained in the accompanying text.](./assets/image-20220628101642511.png){.ignore-dark}

> The name of the *.cds* files within the *fts/* subfolders can be freely chosen. All *.cds* files found in there will be served, with special handling for *index.cds* files, as usual.

### Feature *fts/isbn*

Create a file *fiori/fts/isbn/schema.cds* with this content:

```cds
using { CatalogService, sap.capire.bookshop.Books } from '../../app/browse/fiori-service';

// Add new field `isbn` to Books
extend Books with {
  isbn : String @title:'ISBN';
}

// Display that new field in list on Fiori UI
annotate CatalogService.Books with @(
  UI.LineItem: [... up to {Value:author}, {Value:isbn}, ...]
); ``` This feature adds a new field `isbn` to entity `Books` and extends corresponding SAP Fiori annotations to display this field in the *Browse Books* list view. ::: tip Note that all features will be deployed to each tenant database in order to allow toggling per user/request. ::: ### Feature *fts/reviews* Create a file *fiori/fts/reviews/schema.cds* with this content: ```cds using { CatalogService } from '../../app/browse/fiori-service'; // Display existing field `rating` in list on Fiori UI annotate CatalogService.Books with @( UI.LineItem: [... up to {Value:author}, {Value:rating}, ...] ); ``` This feature extends corresponding SAP Fiori annotations to display already existing field `rating` in the *Browse Books* list view. ### Limitations ::: warning Note the following limitations for `.cds` files in features: - no `.cds` files in subfolders, for example, `fts/isbn/sub/file.cds` - no `using` dependencies between features - further limitations re `extend aspect` → to be documented ::: ## Toggling Features In principle, features can be toggled per request, per user, or per tenant; most commonly they'll be toggled per tenant, as demonstrated in the following. ### In Development
CAP Node.js' `mocked-auth` strategy has built-in support for toggling features per tenant, per user, or per request. To demonstrate toggling features per tenant or user, you can add these lines of configuration to the `package.json` of the SAP Fiori app:

```json
{"cds":{
  "requires": {
    "auth": {
      "users": {
        "carol": { "tenant": "t1" },
        "erin":  { "tenant": "t2" },
        "fred":  { "tenant": "t2", "features": [] }
      },
      "tenants": {
        "t1": { "features": ["isbn"] },
        "t2": { "features": "*" }
      }
    }
  }
}}
```
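To illustrate the semantics of such a configuration, here's a minimal sketch of how an effective feature set per user could be resolved. This is plain Node.js for illustration only, not CAP's actual implementation; the `ALL_FEATURES` list and the rule that a user-level `features` array overrides the tenant-level one are assumptions derived from the example above:

```javascript
// Illustrative sketch only — not CAP's internal code.
const config = {
  users: {
    carol: { tenant: 't1' },
    erin:  { tenant: 't2' },
    fred:  { tenant: 't2', features: [] }
  },
  tenants: {
    t1: { features: ['isbn'] },
    t2: { features: '*' }
  }
}

// assumed list of all deployed features in this sample
const ALL_FEATURES = ['isbn', 'reviews']

function effectiveFeatures (userName) {
  const user = config.users[userName]
  if (!user) return []
  // a user-level features list takes precedence over the tenant-level one
  const toggles = user.features ?? config.tenants[user.tenant]?.features ?? []
  return toggles === '*' ? ALL_FEATURES : toggles
}

console.log(effectiveFeatures('carol')) // → [ 'isbn' ]
console.log(effectiveFeatures('erin'))  // → [ 'isbn', 'reviews' ]
console.log(effectiveFeatures('fred'))  // → []
```

Note how `fred`'s empty `features` array switches all features off for him, even though his tenant `t2` enables all of them.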
CAP Java's [Mock User Authentication with Spring Boot](../../java/security#mock-users) allows you to assign feature toggles to users based on the mock user configuration. To demonstrate toggling features per user, you can add these lines to the mock user configuration in the `srv/src/main/resources/application.yaml` file:

```yaml
cds:
  security.mock.users:
    - name: carol
      features:
        - isbn
    - name: erin
      features:
        - isbn
        - reviews
    - name: fred
      features:
```
As a result, the feature `isbn` is enabled for user `carol`, the features `isbn` and `reviews` are enabled for `erin`, and all features are disabled for user `fred`.

### In Production
::: warning No feature toggling for production yet
Note that the previous sample is only for demonstration purposes. As user and tenant management is outside of CAP's scope, there's no out-of-the-box feature-toggle provider for production yet. → Learn more about that in the following section [*Feature Vector Providers*](#feature-vector-providers).
:::
For productive use, the mock user configuration must not be used. The set of active features is determined per request by the [Feature Toggles Info Provider](../../java/reflection-api#feature-toggles-info-provider). You can register a [Custom Implementation](../../java/reflection-api#custom-implementation) as a Spring bean that computes the active feature set based on the request's `UserInfo` and `ParameterInfo`.
## Test-Drive Locally {.node}

To test feature toggles, just run your CAP server as usual, then log on with different users, assigned to different tenants, to see the effects.

### Run `cds watch`

Start the CAP server with `cds watch` as usual:

```sh
cds watch
```

→ in the log output, note the line reporting:

```js
[cds] - serving cds.xt.ModelProviderService {
  path: '/-/cds/model-provider', impl: '@sap/cds/srv/model-provider.js'
}
```

> The `ModelProviderService` is used by the runtime to get feature-enhanced models.

### See Effects in SAP Fiori UIs {#test-fiori-node}

To see the effects in the UIs, open three anonymous browser windows, one for each user to log in, and:

1. [Open the SAP Fiori app in the browser](http://localhost:4004/fiori-apps.html) and go to [Browse Books](http://localhost:4004/fiori-apps.html#Books-display).
2. Log in as `carol` and see the `ISBN` column in the list.
3. Log in as `erin` and see the `Ratings` and `ISBN` columns in the list.
4. Log in as `fred` and see no features at all, even though *Fred* is in the same tenant as *Erin*.

For example, the displayed UI for `erin` should look like this:

![A standard SAP Fiori UI including the new columns ratings and isbn that are available to erin.](assets/image-20220630132726831.png)

## Model Provider in Sidecar

The `ModelProviderService`, which is used for toggling features, is implemented in Node.js only. To use it with CAP Java apps, you run it in a so-called *MTX sidecar*. For a CAP Node.js project, this service always runs embedded with the main application.

### Create Sidecar as Node.js Project

An MTX sidecar is a standard, yet minimalistic Node.js CAP project.
By default it's added to a subfolder *mtx/sidecar* within your main project, containing just a *package.json* file: ::: code-group ```json [mtx/sidecar/package.json] { "name": "mtx-sidecar", "version": "0.0.0", "dependencies": { "@sap/cds": "^7", "@sap/cds-mtxs": "^1", "express": "^4" }, "cds": { "requires": { "cds.xt.ModelProviderService": "in-sidecar" }, "[development]": { "requires": { "auth": "dummy" }, "server": { "port": 4005 } } } } ``` ::: [Learn more about setting up **MTX sidecars**.](../multitenancy/mtxs#sidecars){.learn-more} ### Add Remote Service Link to Sidecar
::: tip
In Node.js apps you usually don't consume services from the sidecar. The *ModelProviderService* is served both embedded in the main app and in the sidecar. The following is documented for the sake of completeness only...
:::

You can use the `from-sidecar` preset to tell the CAP runtime to use the remote model provider from the sidecar:

```json
"cds": {
  "requires": {
    "toggles": true,
    "cds.xt.ModelProviderService": "from-sidecar"
  }
}
```

[Learn more about configuring ModelProviderService.](../multitenancy/mtxs#model-provider-config){.learn-more}
You need to configure the CAP Java application to request the CDS model from the Model Provider Service. This is done in the `application.yaml` file of your application. To enable the Model Provider Service for local development, add the following configuration to the `default` profile: ```yaml cds: model: provider: url: http://localhost:4005 # remove, in case you need tenant extensibility extensibility: false ```
### Test-Drive Sidecar Locally

With this setup in place, you can run the main app locally with the Model Provider as sidecar. Simply start the main app and the sidecar in two separate shells:

**First, start the sidecar**, as the main app now depends on it:

```sh
cds watch mtx/sidecar
```

**Then, start the main app** in the second shell:
```sh cds watch ```
```sh mvn spring-boot:run ```
#### Remote `getCsn()` Calls to Sidecar at Runtime {.node}

When you now run and use your application again as described in the previous section [See Effects in SAP Fiori UIs](#test-fiori-node), you can see in the trace logs that the main app sends `getCsn` requests to the sidecar, which in response reads and returns the main app's models, that is, the models from two levels up the folder hierarchy, as configured by `root: ../..` for development.

### See Effects in SAP Fiori UIs {#test-fiori-java .java}

To see the effects in the UIs, open three anonymous browser windows, one for each user to log in, and:

1. [Open the SAP Fiori app in the browser](http://localhost:8080/fiori.html) and go to [Browse Books](http://localhost:8080/fiori.html#browse-books).
2. Log in as `carol` and see the `ISBN` column in the list.
3. Log in as `erin` and see the `Ratings` and `ISBN` columns in the list.
4. Log in as `fred` and see no features enabled, even though *Fred* is in the same tenant as *Erin*.

For example, for `erin` the displayed UI should look like this:

![A standard SAP Fiori UI including the new columns ratings and isbn that are available to erin.](assets/image-20220630132726831.png)

## Feature Vector Providers {.node}

In principle, features can be toggled *per request* using the `req.features` property (`req` being the standard HTTP request object here, not the CAP runtime's `req` object). This property is expected to contain one of the following:

- An array with feature names, for example, `['isbn','reviews']`.
- A string with comma-separated feature names, for example, `'isbn,reviews'`.
- An object with keys being feature names, for example, `{isbn:true,reviews:true}`.

So, to add support for a specific feature toggles management, you can add a simple Express.js middleware as follows, for example, in your `server.js`:

```js
const cds = require ('@sap/cds')
cds.on('bootstrap', app => app.use ((req,res,next) => {
  req.features = req.headers.features || 'isbn'
  next()
}))
```

## Feature-Toggled Custom Logic
[Evaluate the `FeatureTogglesInfo` in custom code](../../java/reflection-api#using-feature-toggles-in-custom-code) to check if a feature is enabled: ```java @Autowired FeatureTogglesInfo features; ... if (features.isEnabled("discount")) { // specific coding when feature 'discount' is enabled... } ```
Within your service implementations, you can react on feature toggles by inspecting `cds.context.features` like so: ```js const { features } = cds.context if ('isbn' in features) { // specific coding when feature 'isbn' is enabled... } if ('reviews' in features) { // specific coding when feature 'reviews' is enabled... } // common coding... ``` Or alternatively: ```js const { isbn, reviews } = cds.context.features if (isbn) { // specific coding when feature 'isbn' is enabled... } if (reviews) { // specific coding when feature 'reviews' is enabled... } // common coding... ```
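As described in *Feature Vector Providers*, a feature vector can arrive as an array, a comma-separated string, or an object. A hypothetical helper, not part of CAP and shown for illustration only, that normalizes all three shapes into the plain-object form used in the checks above could look like this:

```js
// Hypothetical helper (not part of CAP): normalize the three accepted
// shapes of a feature vector (array, comma-separated string, or object)
// into a plain object, so checks like `'isbn' in features` work uniformly.
function normalizeFeatures (features) {
  if (Array.isArray (features)) return Object.fromEntries (features.map (f => [f, true]))
  if (typeof features === 'string') return Object.fromEntries (features.split(',').map (f => [f.trim(), true]))
  return features || {}
}

// All three shapes yield the same result:
normalizeFeatures (['isbn','reviews'])        //> { isbn: true, reviews: true }
normalizeFeatures ('isbn, reviews')           //> { isbn: true, reviews: true }
normalizeFeatures ({isbn:true, reviews:true}) //> { isbn: true, reviews: true }
```

Within a custom middleware, such a helper would give you a uniform object regardless of how the feature vector was provided.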
# Reuse and Compose {{$frontmatter?.synopsis}} ## Introduction and Overview CAP promotes reuse and composition by importing content from reuse packages. Reused content, shared and imported that way, can comprise models, code, initial data, and i18n bundles. ### Usage Scenarios By applying CAP's techniques for reuse, composition, and integration, you can address several different usage scenarios, as depicted in the following illustration. ![This graphic is explained in the accompanying text.](assets/scenarios.drawio.svg) 1. **Verticalized/Composite Solutions** — Pick one or more reuse packages/services. Enhance them, mash them up into a composite solution, and offer this as a new packaged solution to clients. 2. **Prebuilt Extension Packages** — Instead of offering a new packaged solution, you could also just provide your enhancements as a prebuilt extension package, for example, for **verticalization**, which you in turn offer to others as a reuse package. 3. **Prebuilt Integration Packages** — Prebuilt extension packages could also involve prefabricated integrations to services in back-end systems, such as S/4HANA and SAP SuccessFactors. 4. **Prebuilt Business Data Packages** — A variant of prebuilt integration packages, in which you would provide a reuse package that provides initial data for certain entities, like a list of *Languages*, *Countries*, *Regions*, *Currencies*, etc. 5. **Customizing SaaS Solutions** — Customers, who are subscribers of SaaS solutions, can apply the same techniques to adapt SaaS solutions to their needs. They can use prebuilt extension or business data packages, or create their own custom-defined ones. 
### Examples from [cap/samples](https://github.com/sap-samples/cloud-cap-samples)

In the following sections, we frequently refer to examples from [cap/samples](https://github.com/sap-samples/cloud-cap-samples):

![The screenshot is explained in the following text.](assets/cap-samples.drawio.svg)

- **[@capire/bookshop](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookshop)** provides a basic bookshop app and **reuse services**.
- **[@capire/common](https://github.com/sap-samples/cloud-cap-samples/tree/main/common)** is a **prebuilt extension** and **business data** package for *Countries*, *Currencies*, and *Languages*.
- **[@capire/reviews](https://github.com/sap-samples/cloud-cap-samples/tree/main/reviews)** provides an independent **reuse service**.
- **[@capire/orders](https://github.com/sap-samples/cloud-cap-samples/tree/main/orders)** provides another independent **reuse service**.
- **[@capire/bookstore](https://github.com/sap-samples/cloud-cap-samples/tree/main/bookstore)** combines all of the above into a **composite application**.

### Preparation for Exercises

If you want to exercise the code snippets in the following sections, do the following:

**1)**   Get cap/samples:

```sh
git clone https://github.com/sap-samples/cloud-cap-samples samples
cd samples
npm install
```

**2)**   Start a sample project:

```sh
cds init sample
cd sample
npm i
# ... run the upcoming commands in here
```

## Importing Reuse Packages { #import}

CAP and CDS promote reuse of prebuilt content based on `npm` or `Maven` techniques. The following figure shows the basic procedure for `npm`.

![This graphic shows how packages are used, provided, and deployed. All the details around that are explained in the following sections.](assets/reuse-overview.drawio.svg)

> We use `npm` and `Maven` as package managers simply because we didn't want to reinvent the wheel here.
### Using `npm add/install` from _npm_ Registries

Use _`npm add/install`_ to import reuse packages to your project, like so:

```sh
npm add @capire/bookshop @capire/common
```

This installs the content of these packages into your project's `node_modules` folder and adds corresponding dependencies:

::: code-group
```json [package.json]
{
  "name": "sample",
  "version": "1.0.0",
  "dependencies": {
    "@capire/bookshop": "^1.0.0",
    "@capire/common": "^1.0.0",
    ...
  }
}
```
:::

> These dependencies allow you to use `npm outdated`, `npm update`, and `npm install` later to get the latest versions of imported packages.

### Importing from Other Sources

In addition to importing from _npm_ registries, you can also import from local sources. These can be other CAP projects that you have access to, or tarballs of reuse packages, for example, downloaded from some marketplace.

```sh
npm add ~/Downloads/@capire-bookshop-1.0.0.tgz
npm add ../bookshop
```

> You can use `npm pack` to create tarballs from your projects if you want to share them with others.

### Importing from Maven Dependencies

Add the dependency to the reuse package to your `pom.xml`:

::: code-group
```xml [pom.xml]
<dependency>
  <groupId>com.sap.capire</groupId>
  <artifactId>bookshop</artifactId>
  <version>1.0.0</version>
</dependency>
```
:::

As Maven dependencies are, in contrast to `npm` packages, downloaded into a global cache, you need to make the artifacts from the reuse package available in your project locally. The CDS Maven Plugin provides a simple goal named `resolve`, which performs this task for you and extracts reuse packages into the `target/cds/` folder of the Maven project. Include this goal in the `pom.xml`, if not already present:

::: code-group
```xml [pom.xml]
<plugin>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-maven-plugin</artifactId>
  <version>${cds.services.version}</version>
  <executions>
    ...
    <execution>
      <id>cds.resolve</id>
      <goals>
        <goal>resolve</goal>
      </goals>
    </execution>
    ...
  </executions>
</plugin>
```
:::

### Embedding vs. Integrating Reuse Services { #embedding-vs-integration}

By default, when importing reuse packages, all imported content becomes an integral part of your project; it literally becomes **embedded** in your project.
This applies to all the things an imported package can contain, such as: - Domain models - Service definitions - Service implementations - i18n bundles - Initial data [See an example for a data package for `@sap/cds/common`](../../cds/common#prebuilt-data){ .learn-more} However, you decide which parts to actually use and activate in your project by means of model references as shown in the following sections. Instead of embedding reuse content, you can also **integrate** with remote services, deployed as separate microservices as outlined in [*Service Integration*](#service-integration). ## Reuse & Extend Models {#reuse-models} Even though all imported content is embedded in your project, you decide which parts to actually use and activate by means of model references. For example, if an imported package comes with three service definitions, it's still you who decides which of them to serve as part of your app, if any. The rule is: ::: tip Active by Reachability Everything that you are referring to from your own models is served. Everything outside of your models is ignored. ::: ### Via `using from` Directives Use the definitions from imported models through [`using` directives](../../cds/cdl#model-imports) as usual. For example, like in [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/srv/mashup.cds), simply add all: ::: code-group ```cds [bookstore/srv/mashup.cds] using from '@capire/bookshop'; using from '@capire/common'; ``` ::: The `cds` compiler finds the imported content in `node_modules` when processing imports with absolute targets as shown previously. ### Using _index.cds_ Entry Points {#index-cds} The above `using from` statements assume that the imported packages provide _index.cds_ in their roots as [public entry points](#entry-points), which they do. 
For example see [@capire/bookshop/index.cds](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookshop/index.cds): ::: code-group ```cds [bookshop/index.cds] // exposing everything... using from './db/schema'; using from './srv/cat-service'; using from './srv/admin-service'; ``` ::: This _index.cds_ imports and therefore activates everything. Running `cds watch` in your project would show you this log output, indicating that all initial data and services from your imported packages are now embedded and served from your app: ```log [cds] - connect to db > sqlite { database: ':memory:' } > filling sap.common.Currencies from common/data/sap.common-Currencies.csv > filling sap.common.Currencies_texts from common/data/sap.common-Currencies_texts.csv > filling sap.capire.bookshop.Authors from bookshop/db/data/sap.capire.bookshop-Authors.csv > filling sap.capire.bookshop.Books from bookshop/db/data/sap.capire.bookshop-Books.csv > filling sap.capire.bookshop.Books_texts from bookshop/db/data/sap.capire.bookshop-Books_texts.csv > filling sap.capire.bookshop.Genres from bookshop/db/data/sap.capire.bookshop-Genres.csv /> successfully deployed to sqlite in-memory db [cds] - serving AdminService { at: '/admin', impl: 'bookshop/srv/admin-service.js' } [cds] - serving CatalogService { at: '/browse', impl: 'bookshop/srv/cat-service.js' } ``` ### Using Different Entry Points If you don't want everything, but only a part, you can change your `using from` directives like this: ```cds using { CatalogService } from '@capire/bookshop/srv/cat-service'; ``` The output of `cds watch` would reduce to: ```log [cds] - connect to db > sqlite { database: ':memory:' } > filling sap.capire.bookshop.Authors from bookshop/db/data/sap.capire.bookshop-Authors.csv > filling sap.capire.bookshop.Books from bookshop/db/data/sap.capire.bookshop-Books.csv > filling sap.capire.bookshop.Books_texts from bookshop/db/data/sap.capire.bookshop-Books_texts.csv > 
filling sap.capire.bookshop.Genres from bookshop/db/data/sap.capire.bookshop-Genres.csv
/> successfully deployed to sqlite in-memory db
[cds] - serving CatalogService { at: '/browse', impl: 'bookshop/srv/cat-service.js' }
```

> Only the `CatalogService` is served now.

::: tip
Check the _readme_ files that come with reuse packages for information about which entry points are safe to use.
:::

### Extending Imported Definitions

You can freely use all definitions from the imported models in the same way as you use definitions from your own models. This includes using declared types, adding associations to imported entities, building views on top of imported entities, and so on. You can even extend imported definitions, for example, add elements to imported entities, or add/override annotations, without limitations. Here's an example from [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/srv/mashup.cds):

::: code-group
```cds [bookstore/srv/mashup.cds]
using { sap.capire.bookshop.Books } from '@capire/bookshop';
using { ReviewsService.Reviews } from '@capire/reviews';

// Extend Books with access to Reviews and average ratings
extend Books with {
  reviews : Composition of many Reviews on reviews.subject = $self.ID;
  rating  : Decimal;
}
```
:::

## Reuse & Extend Code {#reuse-code}

Service implementations, in particular custom coding, are also imported and served in embedding projects. Follow these instructions if you need to add your own custom handlers.

### In Node.js

One way to add your own implementations is to replace the service implementation as follows:

1. Add/override the `@impl` annotation:

```cds
using { CatalogService } from '@capire/bookshop';
annotate CatalogService with @impl:'srv/my-cat-service-impl';
```

2. Place your implementation in `srv/my-cat-service-impl.js`:

::: code-group
```js [srv/my-cat-service-impl.js]
module.exports = cds.service.impl (function(){
  this.on (...)
  // add your event handlers
})
```
:::

3. If the imported package already had a custom implementation, you can include that as follows:

::: code-group
```js [srv/my-cat-service-impl.js]
const base_impl = require ('@capire/bookshop/srv/cat-service')
module.exports = cds.service.impl (async function(){
  this.on (...) // add your event handlers
  await base_impl.call (this,this)
})
```
:::

> Make sure to invoke the base implementation exactly like that, with `await`. And check the imported package's readme to see whether access to that implementation module is safe.

### In Java

You can provide your own implementation in the same way as you do for your own services:

1. Import the service in your CDS files:

```cds
using { CatalogService } from 'com.sap.capire/bookshop';
```

2. Add your own implementation next to your other event handler classes:

```java
@Component
@ServiceName("CatalogService")
public class CatalogServiceHandler implements EventHandler {

  @On(/* ... */)
  void myHandler(EventContext context) {
    // ...
  }
}
```

## Reuse & Extend UIs {#reuse-uis}

If imported packages provide UIs, you can also serve them as part of your app, for example, using standard [express.js](https://expressjs.com) middleware in Node.js.
The *@capire/bookstore* app has this [in its `server.js`](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/server.js) to serve [the Vue.js app imported with *@capire/bookshop*](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookshop/app/vue) using the `app.serve().from()` method:

::: code-group
```js [bookstore/server.js]
const express = require('express')
const cds = require('@sap/cds')

// Add routes to UIs from imported packages
cds.once('bootstrap',(app)=>{
  app.serve ('/bookshop') .from ('@capire/bookshop','app/vue')
  app.serve ('/reviews') .from ('@capire/reviews','app/vue')
  app.serve ('/orders') .from('@capire/orders','app/orders')
})
```
:::

[More about Vue.js in our _Getting Started in a Nutshell_](../../get-started/in-a-nutshell#uis){.learn-more}
[Learn more about serving Fiori UIs.](../../advanced/fiori){.learn-more}

This ensures all static content for the app is served from the imported package. In all cases, dynamic requests to the service endpoints still reach the embedded services, which are automatically served at the same endpoints as in the original apps. In case of Fiori elements-based UIs, the reused UIs can be extended by [extending their models as described above](#reuse-models), in this case overriding or adding Fiori annotations.

## Service Integration

Instead of embedding and serving imported services as part of your application, you can decide to integrate with them, having them deployed and run as separate microservices.

### Import the Remote Service's APIs

This is described in the [Import Reuse Packages section](#import) → for example using `npm add`.
Here's the effect of this step in [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/package.json):

::: code-group
```json [bookstore/package.json]
"dependencies": {
  "@capire/bookshop": "^1.0.0",
  "@capire/reviews": "^1.0.0",
  "@capire/orders": "^1.0.0",
  "@capire/common": "^1.0.0",
  ...
},
```
:::

### Configuring Required Services

To configure required remote services in Node.js, simply add the respective entries to the [`cds.requires` config option](../../node.js/cds-env). You can see an example in [@capire/bookstore/package.json](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/package.json), which integrates [@capire/reviews](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/reviews) and [@capire/orders](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/orders) as remote services:

::: code-group
```json [bookstore/package.json]
"cds": {
  "requires": {
    "ReviewsService": {
      "kind": "odata", "model": "@capire/reviews"
    },
    "OrdersService": {
      "kind": "odata", "model": "@capire/orders"
    }
  }
}
```
:::

> Essentially, this tells the service loader to not serve that service as part of your application, but to expect a service binding at runtime in order to connect to the external service provider.

#### Restricted Reuse Options

Because models of integrated services only serve as imported APIs, you're restricted in how you can use them. For example, adding fields to remote entities, or cross-service navigation and expands, aren't possible out of the box. Yet, there are options to make some of these work programmatically.
This is explained in the [next section](#delegating-calls) based on the integration of [@capire/reviews](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/reviews) in [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore).

### Delegating Calls to Remote Services { #delegating-calls}

Let's start from the following use case: The bookshop app exposed through [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore) will allow end users to see the top 10 book reviews in the details page. To avoid [CORS issues](https://developer.mozilla.org/de/docs/Web/HTTP/CORS), the request from the UI goes to the main `CatalogService` serving the end user's UI and is delegated from there to the remote `ReviewsService`, as shown in this sequence diagram:

![This TAM graphic shows how the requests are routed between services.](assets/delegate-requests.drawio.svg)

And this is how we do that in [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/srv/mashup.js):

::: code-group
```js [bookstore/srv/mashup.js]
const CatalogService = await cds.connect.to ('CatalogService')
const ReviewsService = await cds.connect.to ('ReviewsService')
CatalogService.prepend (srv => srv.on ('READ', 'Books/reviews', (req) => {
  console.debug ('> delegating request to ReviewsService')
  const [id] = req.params, { columns, limit } = req.query.SELECT
  return ReviewsService.tx(req).read ('Reviews',columns).limit(limit).where({subject:String(id)})
}))
```
:::

Let's look at that step by step:

1. We connect to both the `CatalogService` (local) and the `ReviewsService` (remote) to mash them up.
2. We register an `.on` handler with the `CatalogService`, which delegates the incoming request to the `ReviewsService`.
3.
We wrap that into a call to `.prepend` because the `.on` handler needs to supersede the default generic handlers provided by the CAP runtime → see [ref docs for `srv.prepend`.](../../node.js/core-services#srv-prepend)

### Running with Mocked Remote Services {#mocking-required-services}

If you start [@capire/bookstore](https://github.com/SAP-samples/cloud-cap-samples/tree/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore) locally with `cds watch`, all [required services](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/package.json#L15-L22) are automatically mocked, as you can see in the log output when the server starts:

```log
[cds] - serving AdminService { at: '/admin', impl: 'bookshop/srv/admin-service.js' }
[cds] - serving CatalogService { at: '/browse', impl: 'bookshop/srv/cat-service.js' }
[cds] - mocking OrdersService { at: '/orders', impl: 'orders/srv/orders-service.js' }
[cds] - mocking ReviewsService { at: '/reviews', impl: 'reviews/srv/reviews-service.js' }
```

> → `OrdersService` and `ReviewsService` are mocked, that is, served in the same process, in the same way as the local services.

This allows developing and testing functionality with minimum complexity and overhead in fast, closed-loop dev cycles. As all services are co-located in the same process, sharing the same database, you can send requests like this, which join/expand across *Books* and *Reviews*:

```http
GET http://localhost:4004/browse/Books/201?
&$expand=reviews
&$select=ID,title,rating
```

### Testing Remote Integration Locally {#testing-locally}

As a next step, following CAP's [Grow-as-you-go](../../about/#grow-as-you-go) philosophy, we can run the services as separate processes to test the remote integration, but still locally in a low-complexity setup. We use the [_automatic binding by `cds watch`_](#bindings-via-cds-watch) as follows:

1.
Start the three servers separately, each in a separate shell (from within the root folder in your cloned _[cap/samples]( https://github.com/sap-samples/cloud-cap-samples)_ project): ```sh cds watch orders --port 4006 ``` ```sh cds watch reviews --port 4005 ``` ```sh cds watch bookstore --port 4004 ``` 2. Send a few requests to the reviews service (port 4005) to add `Reviews`: ```http POST http://localhost:4005/Reviews Content-Type: application/json;IEEE754Compatible=true Authorization: Basic itsme:secret {"subject":"201", "title":"boo", "rating":3 } ``` 3. Send a request to bookshop (port 4004) to fetch reviews via `CatalogService`: ```http GET http://localhost:4004/browse/Books/201/reviews? &$select=rating,date,title &$top=3 ``` > You can find a script for this in [@capire/bookstore/test/requests.http](https://github.com/SAP-samples/cloud-cap-samples/blob/7b7686cb29aa835e17a95829c56dc3285e6e23b5/bookstore/test/requests.http). ### Binding Required Services Service bindings provide the details about how to reach a required service at runtime, that is, providing the necessary credentials, most prominently the target service's `url`. #### Basic Mechanism Using `cds.env` and Process env Variables {#bindings-via-cds-env} At the end of the day, the CAP Node.js runtime expects to find the service bindings in the respective entries in `cds.env.requires`: 1. Configured required services constitute endpoints for service bindings: ::: code-group ```json [package.json] "cds": { "requires": { "ReviewsService": {...}, } } ``` ::: 2. These are made available to the runtime via `cds.env.requires`. ```js const { ReviewsService } = cds.env.requires ``` 3. Service bindings essentially fill in `credentials` to these entries. 
```js
const { ReviewsService } = cds.env.requires
//> ReviewsService.credentials = {
//>   url: "http://localhost:4005/reviews"
//> }
```

While you could do the latter in test suites, you would never provide credentials in a hard-coded way like that in productive code. Instead, you'd use one of the options presented in the following sections.

#### Automatic Bindings by `cds watch` {#bindings-via-cds-watch}

When running separate services locally as described [in the previous section](#testing-locally), this is done automatically by `cds watch`, as indicated by this line in the bootstrapping log output:

```log
[cds] - using bindings from: { registry: '~/.cds-services.json' }
```

You can cmd/ctrl-click or double-click on that to see the file's content, and find something like this:

::: code-group
```json [~/.cds-services.json]
{
  "cds": {
    "provides": {
      "OrdersService": {
        "kind": "odata",
        "credentials": { "url": "http://localhost:4006/orders" }
      },
      "ReviewsService": {
        "kind": "odata",
        "credentials": { "url": "http://localhost:4005/reviews" }
      },
      "AdminService": {
        "kind": "odata",
        "credentials": { "url": "http://localhost:4004/admin" }
      },
      "CatalogService": {
        "kind": "odata",
        "credentials": { "url": "http://localhost:4004/browse" }
      }
    }
  }
}
```
:::

Whenever you start a CAP server with `cds watch`, this is what happens automatically:

1. For all *provided* services, corresponding entries are written to _~/.cds-services.json_ with respective `credentials`, namely the `url`.
2. For all *required* services, corresponding entries are fetched from _~/.cds-services.json_. If found, the `credentials` are filled into the respective entry in `cds.env.requires` [as introduced previously](#bindings-via-cds-env).

In effect, all the services that you start locally in separate processes automatically receive their required bindings so they can talk to each other out of the box.
#### Through Process Environment Variables {#bindings-via-process-env}

You can pass credentials as process environment variables, for example in ad-hoc tests from the command line:

```sh
export cds_requires_ReviewsService_credentials_url=http://localhost:4005/reviews
cds watch bookstore
```

... or add them to a local `.env` file for repeated local tests:

::: code-group
```properties [.env]
cds.requires.ReviewsService.credentials = { "url": "http://localhost:4005/reviews" }
```
:::

> Note: never check in or deploy these `.env` files!

#### Through `VCAP_SERVICES` {#bindings-via-vcap_services}

When deploying to Cloud Foundry, service bindings are provided in `VCAP_SERVICES` process environment variables [as documented here](../../node.js/cds-connect#vcap_services).

#### In Target Cloud Environments {#bindings-in-cloud-environments}

Find information about how to do so in different environments under these links:

- [Deploying Services using MTA Deployer](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c2b99f19e9264c4d9ae9221b22f6f589/33548a721e6548688605049792d55295.html)
- [Service Bindings in SAP BTP Cockpit](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/0e6850de6e7146c3a17b86736e80ee2e.html)
- [Service Bindings using the Cloud Foundry CLI](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/296cd5945fd84d7d91061b2b2bcacb93.html)
- [Service Binding in Kyma](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/d1aa23c492694d669c89a8d214f29147.html)

## Providing Reuse Packages

In general, every CAP-based product can serve as a reuse package consumed by others. There's actually not much to do. Just create models and implementations as usual. The following sections are about additional things to consider as a provider of a reuse package.
### Considerations for Maven-Based Reuse Packages

When providing your reuse package as a Maven dependency, you need to ensure that the CDS, CSV, and i18n files are included in the JAR. Place them in a `cds` folder in your `resources` folder under a unique module directory (for example, leveraging group ID and artifact ID):

```txt
src/main/resources/cds/
  com.sap.capire/bookshop/
    index.cds
    CatalogService.cds
    data/
      com.sap.capire.bookshop-Books.csv
    i18n/
      i18n.properties
```

This structure ensures that the CDS Maven Plugin `resolve` goal extracts these files correctly to the `target/cds/` folder.

> Note that `com.sap.capire/bookshop` is used when importing the models with a `using` directive.

### Provide Public Entry Points {#entry-points}

Following the Node.js approach, there's no public/private mechanism in CDS. Instead, it's good and proven practice to add an _index.cds_ in the root folder of reuse packages, similar to the use of _index.js_ files in Node. For example:

::: code-group
```cds [provider/index.cds]
namespace my.reuse.package;
using from './db/schema';
using from './srv/cat-service';
using from './srv/admin-service';
```
:::

This allows your users to refer to your models in `using` directives using just the package name, like so:

::: code-group
```cds [consumer/some.cds]
using { my.thing } from 'my-reuse-package';
```
:::

In addition, you might want to provide other entry points to ease partial usage options. For example, you could provide a _schema.cds_ file in your root, to allow using the domain model without services:

::: code-group
```cds [consumer/more.cds]
using { my.domain.entity } from 'my-reuse-package/schema';
using { my.service } from 'my-reuse-package/services';
```
:::

### Provide Custom Handlers

#### In Node.js

In general, custom handlers can be placed in files matching the naming of the _.cds_ files they belong to. In a reuse package, you have to use the `@impl` annotation to make it explicit which custom handler to use.
In addition, you need to use the fully qualified module path inside the `@impl` annotation. Imagine that our bookshop is an _@sap_-scoped reuse module and the _CatalogService_ has a custom handler. This is how the service definition would look:

::: code-group
```cds [bookshop/srv/cat-service.cds]
service CatalogService @(impl: '@sap/bookshop/srv/cat-service.js') {...}
```
:::

#### In Java

If your reuse project is independent of Spring Boot, register your custom event handler classes in a `CdsRuntimeConfiguration`:

::: code-group
```java [src/main/java/com/sap/capire/bookshop/BookshopConfiguration.java]
package com.sap.capire.bookshop;

public class BookshopConfiguration implements CdsRuntimeConfiguration {
  @Override
  public void eventHandlers(CdsRuntimeConfigurer configurer) {
    configurer.eventHandler(new CatalogServiceHandler());
  }
}
```
:::

Additionally, register the `CdsRuntimeConfiguration` class in a `src/main/resources/META-INF/services/com.sap.cds.services.runtime.CdsRuntimeConfiguration` file to be detected by CAP Java:

::: code-group
```txt [src/main/resources/META-INF/services/com.sap.cds.services.runtime.CdsRuntimeConfiguration]
com.sap.capire.bookshop.BookshopConfiguration
```
:::

Alternatively, if your reuse project is Spring Boot-based, define your event handler classes as Spring beans. Then use Spring Boot's [auto-configuration mechanism](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.developing-auto-configuration) to ensure that your classes are registered automatically when importing the reuse package as a dependency.

### Add a Readme

You should inform potential consumers about the recommended ways to reuse content provided by your package.
At least provide information about:

- What is provided – schemas, services, data, and so on
- What are the recommended, stable entry points

### Publish/Share with Consumers

The preferred way to share reuse packages is by publishing to registries, like _npmjs.org_, _pkg.github.com_, or _Maven Central_. This allows consumers to apply proper version management. However, at the end of the day, any other way to share packages, created with `npm pack` or `mvn package`, would work as well.

## Customizing SaaS Usage

Subscribers of SaaS solutions can use the same *reuse and extend* techniques to tailor the application to their requirements, for example by:

- Adding/overriding annotations
- Adding custom fields and entities
- Adding custom data
- Adding custom i18n bundles
- Importing prebuilt extension packages

The main difference is how and from where the import happens:

1. The reuse package, in this case, is the subscribed SaaS application.
2. The import happens via `cds pull`.
3. The imported package is named according to the `cds.extends` entry in _package.json_.
4. The extensions are applied via `cds push`.

[Learn more in the **SaaS Extensibility** guide.](customization){.learn-more}

# Core Data Services (CDS)

Language Reference Documentation {.subtitle}

CDS is the backbone of the SAP Cloud Application Programming Model (CAP). It provides the means to declaratively capture service definitions and data models, queries, and expressions. The CDS toolkit allows you to parse from a variety of source languages into a uniform format and to compile it into various target languages.

!["The graphic is explained in the accompanying text."](./assets/csn.drawio.svg)

At runtime, CDS models are plain JavaScript objects complying with the _[Core Schema Notation (CSN)](./csn)_, an open specification derived from [JSON Schema](https://json-schema.org/). You can easily create or interpret these models, which fosters extensions by 3rd-party contributions.
Models are processed dynamically at runtime and can also be created dynamically.

> We use the terms _CDS_ or _CDS models_ as synonyms for your models written in CDL.

[See the Nature of Models for more details](models){.learn-more}
# Conceptual Definition Language (CDL)

The *Conceptual Definition Language (CDL)* is a human-readable language for defining CDS models. Sources are commonly provided in files with `.cds` extensions and get compiled into [CSN representations](csn). The following sections provide a reference of all language constructs in CDL, which also serves as a reference of all corresponding CDS concepts and features.

## Language Preliminaries

- [Keywords & Identifiers](#keywords-identifiers)
- [Built-in Types](#built-in-types)
- [Literals](#literals)
- [Model Imports](#model-imports)
- [Namespaces](#namespaces)
- [Comments](#comments)

### Keywords & Identifiers

*Keywords* in CDL are used to introduce statements, such as imports and namespace directives, as well as entity and type declarations. *Identifiers* are used to refer to definitions.

```cds
namespace capire.bookshop;
using { managed } from '@sap/cds/common';
aspect entity : managed { key ID: Integer }
entity Books : entity {
  title  : String;
  author : Association to Authors;
}
entity Authors : entity {
  name : String;
}
```

::: details Noteworthy...
In the example above, `entity` shows up as a keyword, as well as an identifier of an aspect declaration and references to that. As indicated by the syntax coloring, `Association` is not a keyword, but a type name identifier, similar to `String`, `Integer`, `Books`, and `Authors`.
:::

Keywords are *case-insensitive*, but are most commonly used in lowercase notation. Identifiers are *case-significant*, that is, `Foo` and `foo` would identify different things. Identifiers have to comply with `/[$A-Za-z_]\w*/` or be enclosed in `![`...`]` like this:

```cds
type ![Delimited Identifier] : String;
```

::: warning Avoid using delimited identifiers
Delimited identifiers in general, but in particular non-ASCII characters or keywords used as identifiers, should be avoided as much as possible, for reasons of interoperability.
:::

### Built-in Types

The following table lists the built-in types available to all CDS models. They can be used to define entity elements or custom types as follows:

```cds
entity Books {
  key ID : UUID;
  title  : String(111);
  stock  : Integer;
  price  : Price;
}
type Price : Decimal;
```

These types are used to define the structure of entities and services, and are mapped to respective database types when the model is deployed.

| CDS Type | Remarks | ANSI SQL (1) |
| --- | --- | --- |
| `UUID` | CAP generates [RFC 4122](https://tools.ietf.org/html/rfc4122)-compliant UUIDs (2) | _NVARCHAR(36)_ |
| `Boolean` | Values: `true`, `false`, `null`, `0`, `1` | _BOOLEAN_ |
| `Integer` | Same as `Int32` by default | _INTEGER_ |
| `Int16` | Signed 16-bit integer, range *[ -2<sup>15</sup> ... +2<sup>15</sup> )* | _SMALLINT_ |
| `Int32` | Signed 32-bit integer, range *[ -2<sup>31</sup> ... +2<sup>31</sup> )* | _INTEGER_ |
| `Int64` | Signed 64-bit integer, range *[ -2<sup>63</sup> ... +2<sup>63</sup> )* | _BIGINT_ |
| `UInt8` | Unsigned 8-bit integer, range *[ 0 ... 255 ]* | _TINYINT_ (3) |
| `Decimal` (`prec`, `scale`) | A *decfloat* type is used if arguments are omitted | _DECIMAL_ |
| `Double` | Floating point with binary mantissa | _DOUBLE_ |
| `Date` | e.g. `2022-12-31` | _DATE_ |
| `Time` | e.g. `23:59:59` | _TIME_ |
| `DateTime` | _sec_ precision | _TIMESTAMP_ |
| `Timestamp` | _µs_ precision, with up to 7 fractional digits | _TIMESTAMP_ |
| `String` (`length`) | Default *length*: 255; on HANA: 5000 (4) | _NVARCHAR_ |
| `Binary` (`length`) | Default *length*: 255; on HANA: 5000 (5) | _VARBINARY_ |
| `LargeBinary` | Unlimited data, usually streamed at runtime | _BLOB_ |
| `LargeString` | Unlimited data, usually streamed at runtime | _NCLOB_ |
| `Map` | Mapped to *NCLOB* for HANA | *JSON* type |
| `Vector` (`dimension`) | Requires SAP HANA Cloud QRC 1/2024, or later | _REAL_VECTOR_ |

> (1) Concrete mappings to specific databases may differ.
>
> (2) See also [Best Practices](../guides/domain-modeling#don-t-interpret-uuids).
>
> (3) Not available on PostgreSQL and H2.
>
> (4) Configurable through `cds.cdsc.defaultStringLength`.
>
> (5) Configurable through `cds.cdsc.defaultBinaryLength`.

#### See also...

[Additional Reuse Types and Aspects by `@sap/cds/common`](common) {.learn-more}
[Mapping to OData EDM types](../advanced/odata#type-mapping) {.learn-more}
[HANA-native Data Types](../advanced/hana#hana-types){.learn-more}

### Literals

The following literals can be used in CDL (mostly as in JavaScript, Java, and SQL):

```cds
true , false , null        // as in all common languages
11 , 2.4 , 1e3, 1.23e-11   // for numbers
'A string''s literal'      // for strings
{ foo:'boo', bar:'car' }   // for records
[ 1, 'two', {three:4} ]    // for arrays
```

[Learn more about literals and their representation in CSN.](./csn#literals) {.learn-more}

#### Date & Time Literals

In addition, type-keyword-prefixed strings can be used for date & time literals:

```cds
date'2016-11-24'
time'16:11:32'
timestamp'2016-11-24T12:34:56.789Z'
```

#### Multiline String Literals {#multiline-literals}

Use string literals enclosed in **single or triple backticks** for multiline strings:

````cds
@escaped: `OK Emoji: \u{1f197}`
@multiline: ```
  This is a CDS multiline string.
  - The indentation is stripped.
  - \u{0055}nicode escape sequences are possible,
    just like common escapes from JavaScript
    such as \r \t \n and more!
  ```
@data: ```xml
  The tag is ignored by the core-compiler but may be used
  for syntax highlighting, similar to markdown.
  ```
entity DocumentedEntity {
  // ...
}
````

Within those strings, escape sequences from JavaScript, such as `\t` or `\u0020`, are supported. Line endings are normalized. If you don't want a line ending at that position, end a line with a backslash (`\`). For string literals inside triple backticks, indentation is stripped and tagging is possible.

### Model Imports

#### The `using` Directive {#using}

The `using` directive allows you to import definitions from other CDS models. As shown in the third line below, you can specify aliases to be used subsequently. You can import single definitions as well as several ones with a common namespace prefix. Optionally, choose a local alias.

::: code-group
```cds [using-from.cds]
using foo.bar.scoped.Bar from './contexts';
using foo.bar.scoped.nested from './contexts';
using foo.bar.scoped.nested as specified from './contexts';

entity Car : Bar {}            //> : foo.bar.scoped.Bar
entity Moo : nested.Zoo {}     //> : foo.bar.scoped.nested.Zoo
entity Zoo : specified.Zoo {}  //> : foo.bar.scoped.nested.Zoo
```
:::

Multiple named imports through ES6-like destructuring:

```cds
using { Foo as Moo, sub.Bar } from './base-model';
entity Boo : Moo { /*...*/ }
entity Car : Bar { /*...*/ }
```

> Also in the destructuring variant of `using` shown in the previous example, specify fully qualified names.

#### Model Resolution

Imports in `cds` work very much like [`require` in Node.js](https://nodejs.org/api/modules.html#requireid) and `import`s in [ES6](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import). In fact, we reuse **[Node's module loading mechanisms](https://nodejs.org/api/modules.html#modules_all_together)**. Hence, the same rules apply:

- Relative path resolution
Names starting with `./` or `../` are resolved relative to the current model. - Resolving absolute references
Names starting with `/` are resolved absolute to the file system. - Resolving module references
Names starting with neither `.` nor `/`, such as `@sap/cds/common`, are searched for in `node_modules` folders:
  - Files having _.cds_, _.csn_, or _.json_ as suffixes, tried in that order
  - Folders, resolved from either the file set in `cds.main` in the folder's _package.json_ or an `index.<suffix>` file

::: tip
To allow for loading from precompiled _.json_ files, it's recommended to **omit _.cds_ suffixes** in import statements, as shown in the provided examples.
:::

### Namespaces

#### The `namespace` Directive

To prefix the names of all subsequent definitions, place a `namespace` directive at the top of a model. This is comparable to other languages, like Java.

::: code-group
```cds [namespace.cds]
namespace foo.bar;
entity Foo {}        //> foo.bar.Foo
entity Bar : Foo {}  //> foo.bar.Bar
```
:::

A namespace is not an object of its own. There is no corresponding definition in CSN.

#### The `context` Directive {#context}

Use `context` directives for nested namespace sections.

::: code-group
```cds [contexts.cds]
namespace foo.bar;
entity Foo {}          //> foo.bar.Foo
context scoped {
  entity Bar : Foo {}  //> foo.bar.scoped.Bar
  context nested {
    entity Zoo {}      //> foo.bar.scoped.nested.Zoo
  }
}
```
:::

#### Scoped Definitions {#scoped-names}

You can define types and entities with other definitions' names as prefixes:

```cds
namespace foo.bar;
entity Foo {}        //> foo.bar.Foo
entity Foo.Bar {}    //> foo.bar.Foo.Bar
type Foo.Bar.Car {}  //> foo.bar.Foo.Bar.Car
```

#### Fully Qualified Names

A model ultimately is a collection of definitions with unique, fully qualified names.
For example, the second model above would compile to this [CSN](./csn): ::: code-group ```json [contexts.json] {"definitions":{ "foo.bar.Foo": { "kind": "entity" }, "foo.bar.scoped": { "kind": "context" }, "foo.bar.scoped.Bar": { "kind": "entity", "includes": [ "foo.bar.Foo" ] }, "foo.bar.scoped.nested": { "kind": "context" }, "foo.bar.scoped.nested.Zoo": { "kind": "entity" } }} ``` ::: ### Comments CDL supports line-end, block comments, and *doc* comments as in Java and JavaScript: ```cds // line-end comment /* block comment */ /** doc comment */ ``` #### Doc Comments {#doc-comment} A multi-line comment of the form `/** … */` at an [annotation position](#annotation-targets) is considered a *doc comment*: ```cds /** * I am the description for "Employee" */ entity Employees { key ID : Integer; /** * I am the description for "name" */ name : String; } ``` The text of a doc comment is stored in CSN in the property `doc`. When generating OData EDM(X), it appears as value for the annotation `@Core.Description`. When generating output for deployment to SAP HANA, the first paragraph of a doc comment is translated to the HANA `COMMENT` feature for tables, table columns, and for views (but not for view columns): ```sql CREATE TABLE Employees ( ID INTEGER, name NVARCHAR(...) COMMENT 'I am the description for "name"' ) COMMENT 'I am the description for "Employee"' ``` ::: tip Propagation of doc comments can be stopped via an empty one: `/** */`. ::: In CAP Node.js, doc comments need to be switched on when calling the compiler: ::: code-group ```sh [CLI] cds compile foo.cds --docs ``` ```js [JavaScript] cds.compile(..., { docs: true }) ``` ::: ::: tip Doc comments are enabled by default in CAP Java. In CAP Java, doc comments are automatically enabled by the [CDS Maven Plugin](../java/developing-applications/building#cds-maven-plugin). 
In generated interfaces they are [converted to corresponding Javadoc comments](../java/assets/cds-maven-plugin-site/generate-mojo.html#documentation).
:::

## Entities & Type Definitions

- [Entity Definitions](#entity-definitions)
- [Type Definitions](#type-definitions)
- [Structured Types](#structured-types)
- [Arrayed Types](#arrayed-types)
- [Virtual Elements](#virtual-elements)
- [Calculated Elements](#calculated-elements)
- [Default Values](#default-values)
- [Type References](#type-references)
- [Constraints](#constraints)
- [Enums](#enums)

### Entity Definitions {#entities}

Entities are structured types with named and typed elements, representing sets of (persisted) data that can be read and manipulated using usual CRUD operations. They usually contain one or more designated primary key elements:

```cds
define entity Employees {
  key ID   : Integer;
  name     : String;
  jobTitle : String;
}
```

> The `define` keyword is optional, that is, `define entity Foo` is equivalent to `entity Foo`.

### Type Definitions {#types}

You can declare custom types to reuse later on, for example, for elements in entity definitions. Custom-defined types can be simple, that is, derived from one of the predefined types, structured types, or [Associations](#associations).

```cds
define type User : String(111);
define type Amount {
  value    : Decimal(10,3);
  currency : Currency;
}
define type Currency : Association to Currencies;
```

> The `define` keyword is optional, that is, `define type Foo` is equivalent to `type Foo`.

[Learn more about **Definitions of Named Aspects**.](#aspects){.learn-more}

### Structured Types

You can declare and use custom struct types as follows:

```cds
type Amount {
  value    : Decimal(10,3);
  currency : Currency;
}
entity Books {
  price : Amount;
}
```

Elements can also be specified with anonymous inline struct types.
For example, the following is equivalent to the definition of `Books` above:

```cds
entity Books {
  price : {
    value    : Decimal(10,3);
    currency : Currency;
  };
}
```

### Arrayed Types

Prefix a type specification with `array of` or `many` to signify array types.

```cds
entity Foo { emails : many String; }
entity Bar { emails : many { kind:String; address:String; }; }
entity Car { emails : many EmailAddress; }
entity Cab { emails : EmailAddresses; }
type EmailAddresses : many { kind:String; address:String; }
type EmailAddress : { kind:String; address:String; }
```

> Keywords `many` and `array of` are mere syntax variants with identical semantics and implementations.

When deployed to SQL databases, such fields are mapped to [LargeString](types) columns and the data is stored denormalized as a JSON array. With OData V4, arrayed types are rendered as `Collection` in the EDM(X).

::: warning
Filter expressions, [instance-based authorization](../guides/security/authorization#instance-based-auth) and [search](../guides/providing-services#searching-data) are not supported on arrayed elements.
:::

#### Null Values

For arrayed types, the `null` and `not null` constraints apply to the _members_ of the collections. The default is `not null`, indicating that the collections can't hold `null` values.

::: warning
An empty collection is represented by an empty JSON array. A `null` value is invalid for an element with arrayed type.
:::

In the following example, the collection `emails` may hold members that are `null`. It may also hold a member where the element `kind` is `null`. The collection `emails` itself must not be `null`!

```cds
entity Bar {
  emails : many {
    kind    : String null;
    address : String not null;
  } null;  //> the collection emails may hold null values, overriding the default
}
```

### Virtual Elements

An element definition can be prefixed with the modifier keyword `virtual`.
This keyword indicates that the element isn't added to persistent artifacts, that is, tables or views in SQL databases. Virtual elements are part of OData metadata.

By default, virtual elements are annotated with `@Core.Computed: true`, are not writable for clients, and are [silently ignored](../guides/providing-services#readonly). This also means that they're not accessible in custom event handlers. If you want to make virtual elements writable for clients, you need to explicitly annotate these elements with `@Core.Computed: false`. Still, those elements are not persisted and therefore, for example, not sortable or filterable.

```cds
entity Employees {
  [...]
  virtual something : String(11);
}
```

### Calculated Elements

Elements of entities and aspects can be specified with a calculation expression, in which you can refer to other elements of the same entity/aspect. This can be either a value expression or an expression that resolves to an association. Calculated elements with a value expression are read-only; no value can be provided for them in a write operation. When reading such a calculated element, the result of the expression is returned.

They come in two variants: "on-read" and "on-write". The difference between them is the point in time when the expression is evaluated.

#### On-read

```cds
entity Employees {
  firstName : String;
  lastName  : String;
  name : String = firstName || ' ' || lastName;
  name_upper = upper(name);
  addresses : Association to many Addresses;
  city = addresses[kind='home'].city;
}
```

For a calculated element with "on-read" semantics, the calculation expression is evaluated when reading an entry from the entity. Using such a calculated element in a query or view definition is equivalent to writing the expression directly into the query, both with respect to semantics and to performance. In CAP, it is implemented by replacing each occurrence of a calculated element in a query by the respective expression.
Entity using calculated elements: ```cds entity EmployeeView as select from Employees { name, city }; ``` Equivalent entity: ```cds entity EmployeeView as select from Employees { firstName || ' ' || lastName as name : String, addresses[kind='home'].city as city }; ``` Calculated elements "on-read" are a pure convenience feature. Instead of having to write the same expression several times in queries, you can define a calculated element **once** and then simply refer to it. In the _definition_ of a calculated element "on-read", you can use almost all expressions that are allowed in queries. Some restrictions apply: * Subqueries are not allowed. * Nested projections (inline/expand) are not allowed. * A calculated element can't be key. A calculated element can be *used* in every location where an expression can occur. A calculated element can't be used in the following cases: * in the ON condition of an unmanaged association * as the foreign key of a managed association * in a query together with nested projections (inline/expand) ::: warning For the Node.js runtime, only the new database services under the _@cap-js_ scope support this feature. ::: #### On-write Calculated elements "on-write" (also referred to as "stored" calculated elements) are defined by adding the keyword `stored`. A type specification is mandatory. ```cds entity Employees { firstName : String; lastName : String; name : String = (firstName || ' ' || lastName) stored; } ``` For a calculated element "on-write", the expression is already evaluated when an entry is written into the database. The resulting value is then stored/persisted like a regular field, and when reading from the entity, it behaves like a regular field as well. Using a stored calculated element can improve performance, in particular when it's used for sorting or filtering. This is paid for by higher memory consumption. 
While calculated elements "on-read" are handled entirely by CAP, the "on-write" variant is implemented by using the corresponding feature for database tables. The previous entity definition results in the following table definition: ```sql -- SAP HANA syntax -- CREATE TABLE Employees ( firstName NVARCHAR, lastName NVARCHAR, name NVARCHAR GENERATED ALWAYS AS (firstName || ' ' || lastName) ); ``` For the definition of calculated elements on-write, all the on-read variant's restrictions apply and referencing localized elements isn't allowed. In addition, there are restrictions that depend on the particular database. Currently all databases supported by CAP have a common restriction: The calculation expression may only refer to fields of the same table row. Therefore, such an expression must not contain subqueries, aggregate functions, or paths with associations. No restrictions apply for reading a calculated element on-write. #### Association-like calculated elements {#association-like-calculated-elements} A calculated element can also define a filtered association/composition using infix filters: ```cds entity Employees { addresses : Association to many Addresses; homeAddress = addresses [1: kind='home']; } ``` For such a calculated element, no explicit type can be specified. Only a single association or composition can occur in the expression, and a filter must be specified. The effect essentially is like [publishing an association with an infix filter](#publish-associations-with-filter). ### Default Values As in SQL you can specify default values to fill in upon INSERTs if no value is specified for a given element. 
```cds entity Foo { bar : String default 'bar'; boo : Integer default 1; } ``` Default values can also be specified in custom type definitions: ```cds type CreatedAt : Timestamp default $now; type Complex { real : Decimal default 0.0; imag : Decimal default 0.0; } ``` ### Type References If you want to base an element's type on another element of the same structure, you can use the `type of` operator. ```cds entity Author { firstname : String(100); lastname : type of firstname; // has type "String(100)" } ``` For referencing elements of other artifacts, you can use the element access through `:`. Element references with `:` don't require `type of` in front of them. ```cds entity Employees { firstname: Author:firstname; lastname: Author:lastname; } ``` ### Constraints Element definitions can be augmented with constraint `not null` as known from SQL. ```cds entity Employees { name : String(111) not null; } ``` ### Enums You can specify enumeration values for a type as a semicolon-delimited list of symbols. For string types, declaration of actual values is optional; if omitted, the actual values are the string counterparts of the symbols. ```cds type Gender : String enum { male; female; non_binary = 'non-binary'; } entity Order { status : Integer enum { submitted = 1; fulfilled = 2; shipped = 3; canceled = -1; }; } ``` To enforce your _enum_ values during runtime, use the [`@assert.range` annotation](../guides/providing-services#assert-range). For localization of enum values, model them as [code list](./common#adding-own-code-lists).
## Views & Projections {#views} Use `as select from` or `as projection on` to derive new entities from existing ones by projections, very much like views in SQL. When mapped to relational databases, such entities are in fact translated to SQL views but they're frequently also used to declare projections without any SQL views involved. The entity signature is inferred from the projection. - [The `as select from` Variant](#as-select-from) - [The `as projection on` Variant](#as-projection-on) - [Views with Inferred Signatures](#views-with-inferred-signatures)
- [Views with Parameters](#views-with-parameters) ### The `as select from` Variant {#as-select-from} Use the `as select from` variant to use all possible features an underlying relational database would support using any valid [CQL](./cql) query including all query clauses. ```cds entity Foo1 as select from Bar; //> implicit {*} entity Foo2 as select from Employees { * }; entity Foo3 as select from Employees LEFT JOIN Bar on Employees.ID=Bar.ID { foo, bar as car, sum(boo) as moo } where exists ( SELECT 1 as anyXY from SomeOtherEntity as soe where soe.x = y ) group by foo, bar order by moo asc; ``` ### The `as projection on` Variant {#as-projection-on} Use the `as projection on` variant instead of `as select from` to indicate that you don't use the full power of SQL in your query. For example, having a restricted query in an entity allows us to serve such an entity from external OData services. ```cds entity Foo as projection on Bar {...} ``` Currently the restrictions of `as projection on` compared to `as select from` are: - no explicit, manual `JOINs` - no explicit, manual `UNIONs` - no sub selects in from clauses Over time, we can add additional checks depending on specific outbound protocols. ### Views with Inferred Signatures By default views inherit all properties and annotations from their primary underlying base entity. Their [`elements`](./csn#structured-types) signature is **inferred** from the projection on base elements. Each element inherits all properties from the respective base element, except the `key` property. The `key` property is only inherited if all of the following applies: - No explicit `key` is set in the query. - All key elements of the primary base entity are selected (for example, by using `*`). - No path expression with a to-many association is used. - No `union`, `join` or similar query construct is used. 
For example, the following definition: ```cds entity SomeView as select from Employees { ID, name, job.title as jobTitle }; ``` Might result in this inferred signature: ```cds entity SomeView { key ID: Integer; name: String; jobTitle: String; }; ``` Note: CAP does **not** enforce uniqueness for key elements of a view or projection. Use a CDL cast to set an element's type, if one of the following conditions apply: + You don't want to use the inferred type. + The query column is an expression (no inferred type is computed). ```cds entity SomeView as select from Employees { ID : Integer64, name : LargeString, 'SAP SE' as company : String }; ``` ::: tip By using a cast, annotations and other properties are inherited from the provided type and not the base element, see [Annotation Propagation](#annotation-propagation) :::
### Views with Parameters You can equip views with parameters that are passed in whenever that view is queried. Default values can be specified. Refer to these parameters in the view's query using the prefix `:`. ```cds entity SomeView ( foo: Integer, bar: Boolean ) as SELECT * from Employees where ID=:foo; ``` When selecting from a view with parameters, the parameters are passed by name. In the following example, `UsingView` also has a parameter `bar` that is passed down to `SomeView`. ```cds entity UsingView ( bar: Boolean ) as SELECT * from SomeView(foo: 17, bar: :bar); ``` For Node.js, there's no programmatic API yet. You need to provide a [CQN snippet](/cds/cqn#select). In CAP Java, run a select statement against the view with named [parameter values](/java/working-with-cql/query-execution#querying-views): ::: code-group ```js [Node] SELECT.from({ ref: [{ id: 'UsingView', args: { bar: { val: true }}} ]} ) ``` ```Java [Java] var params = Map.of("bar", true); Result result = service.run(Select.from("UsingView"), params); ``` ::: [Learn more about how to expose views with parameters in **Services - Exposed Entities**.](#exposed-entities){ .learn-more} [Learn more about views with parameters for existing HANA artifacts in **Native SAP HANA Artifacts**.](../advanced/hana){ .learn-more} ## Associations Associations capture relationships between entities. They are like forward-declared joins added to a table definition in SQL. - [Unmanaged Associations](#unmanaged-associations) - [Managed Associations](#managed-associations) - [To-many Associations](#to-many-associations) - [Many-to-many Associations](#many-to-many-associations) - [Compositions](#compositions) - [Managed Compositions](#managed-compositions) ### Unmanaged Associations Unmanaged associations specify arbitrary join conditions in their `on` clause, which refer to available foreign key elements. 
The association's name (`address` in the following example) is used as the alias for the to-be-joined target entity.

```cds
entity Employees {
  address : Association to Addresses on address.ID = address_ID;
  address_ID : Integer;  //> foreign key
}
```

```cds
entity Addresses {
  key ID : Integer;
}
```

### Managed (To-One) Associations {#managed-associations}

For to-one associations, CDS can automatically resolve and add requisite foreign key elements from the target's primary keys and implicitly add respective join conditions.

```cds
entity Employees {
  address : Association to Addresses;
}
```

This example is equivalent to the [unmanaged example above](#unmanaged-associations), with the foreign key element `address_ID` being added automatically upon activation to a SQL database. The names of the automatically added foreign key elements cannot be changed.

> Note: For adding foreign key constraints on database level, see [Database Constraints](../guides/databases#database-constraints).

If the target has a single primary key, a default value can be provided. This default applies to the generated foreign key element `address_ID`:

```cds
entity Employees {
  address : Association to Addresses default 17;
}
```

### To-many Associations

For to-many associations, specify an `on` condition following the canonical expression pattern `<assoc>.<backlink> = $self`, as in this example:

```cds
entity Employees {
  key ID : Integer;
  addresses : Association to many Addresses on addresses.owner = $self;
}
```

```cds
entity Addresses {
  owner : Association to Employees;  //> the backlink
}
```

> The backlink can be any managed to-one association on the _many_ side pointing back to the _one_ side.

### Many-to-many Associations

For many-to-many associations, follow the common practice of resolving logical many-to-many relationships into two one-to-many associations using a link entity to connect both. For example:

```cds
entity Employees {
  [...]
addresses : Association to many Emp2Addr on addresses.emp = $self; } entity Emp2Addr { key emp : Association to Employees; key adr : Association to Addresses; } ``` [Learn more about **Managed Compositions for Many-to-many Relationships**.](#for-many-to-many-relationships){.learn-more}
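Following the same backlink pattern, the link entity can also be published on the `Addresses` side, making the relationship navigable from both ends. This is a sketch; the element name `employees` is illustrative and not part of the example above:

```cds
entity Addresses {
  key ID : Integer;
  // mirrored to-many association into the same link entity
  employees : Association to many Emp2Addr on employees.adr = $self;
}
```

Navigating `employees.emp` then yields the employees related to an address, just as `addresses.adr` yields the addresses of an employee.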
### Compositions Compositions constitute document structures through _contained-in_ relationships. They frequently show up in to-many header-child scenarios. ```cds entity Orders { key ID: Integer; //... Items : Composition of many Orders.Items on Items.parent = $self; } entity Orders.Items { key pos : Integer; key parent : Association to Orders; product : Association to Products; quantity : Integer; } ``` ::: info Contained-in relationship Essentially, Compositions are the same as _[associations](#associations)_, just with the additional information that the association represents a _contained-in_ relationship, so the same syntax and rules apply in their base form. ::: ::: warning Limitations of Compositions of one Using compositions of one is discouraged. They often add no value, as the information can just as well be placed in the root entity. Compositions of one have the following limitations: - Very limited draft support. SAP Fiori elements doesn't support compositions of one unless you take care of their creation in a custom handler. - No extensive support for modifications over paths if compositions of one are involved. You must fill in foreign keys manually in a custom handler. ::: ### Managed Compositions of Aspects {#managed-compositions} Use the managed compositions variant to nicely reflect document structures in your domain models, without the need for separate entities, reverse associations, and unmanaged `on` conditions. #### With Inline Targets ```cds entity Orders { key ID: Integer; //...
Items : Composition of many { key pos : Integer; product : Association to Products; quantity : Integer; } }; ``` Managed Compositions are mostly syntactical sugar: Behind the scenes, they are unfolded to the [unmanaged equivalent as shown above](#compositions) by automatically adding a new entity, the name of which is constructed as a [scoped name](#scoped-names) from the name of the parent entity, followed by the name of the composition element, that is, `Orders.Items` in the previous example. You can safely use this name in other places, for example to define an association to the generated child entity: ```cds entity Orders { // … specialItem : Association to Orders.Items; }; ``` #### With Named Targets Instead of anonymous target aspects, you can also specify named aspects, which are unfolded the same way as anonymous inner types, as shown in the previous example: ```cds entity Orders { key ID: Integer; //... Items : Composition of many OrderItems; } aspect OrderItems { key pos : Integer; product : Association to Products; quantity : Integer; } ``` #### Default Target Cardinality If not otherwise specified, a managed composition of an aspect has the default target cardinality *to-one*. #### For Many-to-many Relationships Managed Compositions are handy for [many-to-many relationships](#many-to-many-associations), where a link table usually is private to one side. ```cds entity Teams { [...] members : Composition of many { key user: Association to Users; } } entity Users { [...] teams: Association to many Teams.members on teams.user = $self; } ``` And here's an example of an attributed many-to-many relationship: ```cds entity Teams { [...] members : Composition of many { key user : Association to Users; role : String enum { Lead; Member; Collaborator; } } } entity Users { ... } ``` To navigate between _Teams_ and _Users_, you have to follow two associations: `members.user` or `teams.up_`.
In OData, to get all users of all teams, use a query like the following: ```cds GET /Teams?$expand=members($expand=user) ``` ### Publish Associations in Projections {#publish-associations} As associations are first-class citizens, you can put them into the select list of a view or projection ("publish") like regular elements. A `select *` includes all associations. If you need to rename an association, you can provide an alias. ```cds entity P_Employees as projection on Employees { ID, addresses } ``` The effective signature of the projection contains an association `addresses` with the same properties as association `addresses` of entity `Employees`. #### Publish Associations with Infix Filter {#publish-associations-with-filter} When publishing an unmanaged association in a view or projection, you can add a filter condition. The ON condition of the resulting association is the ON condition of the original association plus the filter condition, combined with `and`. ```cds entity P_Authors as projection on Authors { *, books[stock > 0] as availableBooks }; ``` In this example, in addition to `books`, projection `P_Authors` has a new association `availableBooks` that points only to those books where `stock > 0`. If the filter condition effectively reduces the cardinality of the association to one, you should make this explicit in the filter by adding a `1:` before the condition: ```cds entity P_Employees as projection on Employees { *, addresses[1: kind='home'] as homeAddress // homeAddress is to-one } ``` Filters are usually provided only for to-many associations, which usually are unmanaged. Thus, publishing with a filter is almost exclusively used for unmanaged associations. Nevertheless, you can also publish a managed association with a filter. This automatically turns the resulting association into an unmanaged one. You must ensure that all foreign key elements needed for the ON condition are explicitly published.
```cds entity P_Books as projection on Books { author.ID as authorID, // needed for ON condition of deadAuthor author[dateOfDeath is not null] as deadAuthor // -> unmanaged association }; ``` Publishing a _composition_ with a filter is similar, with an important difference: in a deep Update, Insert, or Delete statement the respective operation does not cascade to the target entities. Thus, the type of the resulting element is set to `cds.Association`. [Learn more about `cds.Association`.](/cds/csn#associations){.learn-more} In [SAP Fiori Draft](../advanced/fiori#draft-support), it behaves like an "enclosed" association, that is, it points to the target draft entity. In the following example, `singleItems` has type `cds.Association`. In draft mode, navigating along `singleItems` doesn't leave the draft tree. ```cds @odata.draft.enabled entity P_orders as projection on Orders { *, Items[quantity = 1] as singleItems } ``` ## Annotations This section describes how to add annotations to model definitions written in CDL, focusing on the common syntax options and fundamental concepts. Find additional information in the [OData Annotations](../advanced/odata#annotations) guide. - [Annotation Syntax](#annotation-syntax) - [Annotation Targets](#annotation-targets) - [Annotation Values](#annotation-values) - [Expressions as Annotation Values](#expressions-as-annotation-values) - [Records as Syntax Shortcuts](#records-as-syntax-shortcuts) - [Annotation Propagation](#annotation-propagation) - [The `annotate` Directive](#annotate) - [Extend Array Annotations](#extend-array-annotations) ### Annotation Syntax Annotations in CDL are prefixed with an `@` character and can be placed before a definition, after the defined name, or at the end of simple definitions.
```cds @before entity Foo @inner { @before simpleElement @inner : String @after; @before structElement @inner { /* elements */ } } ``` Multiple annotations can be placed in each spot, either separated by whitespace or enclosed in `@(...)` and separated by commas. The following are equivalent: ```cds entity Foo @( my.annotation: foo, another.one: 4711 ) { /* elements */ } ``` ```cds @my.annotation:foo @another.one: 4711 entity Foo { /* elements */ } ``` For an `@inner` annotation, only the syntax `@(...)` is available. #### Using `annotate` Directives Instead of interspersing annotations with definitions, you can also use the `annotate` directive to add annotations to existing definitions. ```cds annotate Foo with @( my.annotation: foo, another.one: 4711 ); ``` [Learn more about the `annotate` directive in the _Aspects_ chapter below.](#annotate){.learn-more} ### Annotation Targets You can basically annotate any named thing in a CDS model, such as: Contexts and services: ```cds @before context foo.bar @inner { ... } @before service Sue @inner { ... } ``` Definitions and elements with simple or struct types: ```cds @before type Foo @inner : String @after; @before entity Foo @inner { @before key ID @inner : String @after; @before title @inner : String @after; @before struct @inner { ...elements...
}; } ``` Enums: ```cds … status : String @inner enum { open @after; closed @after; cancelled @after; accepted @after; rejected @after; } ``` Columns in a view definition's query: ```cds … as select from Foo { @before expr as alias @inner : String, … } ``` Parameters in view definitions: ```cds … with parameters ( @before param @(inner) : String @after ) … ``` Actions/functions including their parameters and result: ```cds @before action doSomething @inner ( @before param @(inner) : String @after ) returns @before resultType; ``` Or in case of a structured result: ```cds action doSomething() returns @before { @before resultElem @inner : String @after; }; ``` ### Annotation Values Values can be literals, references, or expressions. Expressions are explained in more detail in the next section. If no value is given, the default value is `true` as for `@aFlag` in the following example: ```cds @aFlag //= true, if no value is given @aBoolean: false @aString: 'foo' @anInteger: 11 @aDecimal: 11.1 @aSymbol: #foo @aReference: foo.bar @anArray: [ /* can contain any kind of value */ ] @anExpression: ( foo.bar * 17 ) // expression, see next section ``` As described in the [CSN spec](./csn#literals), the previously mentioned annotations would compile to CSN as follows: ```jsonc { "@aFlag": true, "@aBoolean": false, "@aString": "foo", "@anInteger": 11, "@aDecimal": 11.1, "@aSymbol": {"#":"foo"}, "@aReference": {"=":"foo.bar"}, "@anArray": [ /* … */ ], "@anExpression": { /* see next section */ } } ``` ::: tip In contrast to references in [expressions](#expressions-as-annotation-values), plain references aren't checked or resolved by CDS parsers or linkers. They're interpreted and evaluated only on consumption-specific modules. For example, for SAP Fiori models, it's the _4odata_ and _2edm(x)_ processors. ::: ### Records as Syntax Shortcuts Annotations in CDS are flat lists of key-value pairs assigned to a target. 
The record syntax - that is, `{key:, ...}` - is a shortcut notation that applies a common prefix to nested annotations. For example, the following are equivalent: ```cds @Common.foo.bar @Common.foo.car: 'wheels' ``` ```cds @Common: { foo.bar, foo.car: 'wheels' } ``` ```cds @Common.foo: { bar } @Common.foo.car: 'wheels' ``` ```cds @Common.foo: { bar, car: 'wheels' } ``` and they would show up as follows in a parsed model (→ see [CSN](./csn)): ```json { "@Common.foo.bar": true, "@Common.foo.car": "wheels" } ``` ### Annotation Propagation {#annotation-propagation} Annotations are inherited from types and base types to derived types, entities, and elements as well as from elements of underlying entities in case of views. For example, given this view definition: ```cds using Books from './bookshop-model'; entity BooksList as select from Books { ID, genre : Genre, title, author.name as author }; ``` * `BooksList` would inherit annotations from `Books` * `BooksList:ID` would inherit from `Books:ID` * `BooksList:author` would inherit from `Books:author.name` * `BooksList.genre` would inherit from type `Genre` The rules are: 1. Entity-level properties and annotations are inherited from the **primary** underlying source entity — here `Books`. 2. Each element that can **unambiguously** be traced back to a single source element, inherits that element's properties. 3. An explicit **cast** in the select clause cuts off the inheritance, for example, as for `genre` in our previous example. ::: tip Propagation of annotations can be stopped via value `null`, for example, `@anno: null`. ::: ### Expressions as Annotation Values {#expressions-as-annotation-values} In order to use an expression as an annotation value, it must be enclosed in parentheses: ```cds @anExpression: ( foo.bar * 11 ) ``` Syntactically, the same expressions are supported as in a select item or in the where clause of a query, except subqueries. 
The expression can of course also be a single reference or a simple value: ```cds @aRefExpr: ( foo.bar ) @aValueExpr: ( 11 ) ``` Some advantages of using expressions as "first class" annotation values are: * syntax and references are checked by the compiler * code completion * [automatic path rewriting in propagated annotations](#propagation) * [automatic translation of expressions in OData annotations](#odata-annotations) ::: info Limitations Elements that are not available to the compiler, for example the OData draft decoration, can't be used in annotation expressions. ::: #### Name resolution Each path in the expression is checked: * For an annotation assigned to an entity, the first path step is resolved as element of the entity. * For an annotation assigned to an entity element, the first path step is resolved as the annotated element or its siblings. * If the annotation is assigned to a subelement of a structured element, the top level elements of the entity can be accessed via `$self`. * A parameter `par` can be accessed via `:par`, just like parameters of a parametrized entity in queries. * For an annotation assigned to a bound action or function, elements of the respective entity can be accessed via `$self`. * The draft-specific elements `IsActiveEntity`, `HasActiveEntity`, and `HasDraftEntity` can be referred to with respective magic variables `$draft.IsActiveEntity`, `$draft.HasActiveEntity`, and `$draft.HasDraftEntity`. During draft augmentation, `$draft.<...>` is rewritten to `$self.<...>` for all draft enabled entities (root and sub nodes, but not for named types or entity parameters). * If a path can't be resolved successfully, compilation fails with an error. In contrast to `@aReference: foo.bar`, a single reference written as expression `@aRefExpr: ( foo.bar )` is checked by the compiler. 
```cds @MyAnno: (a) // reference to element entity Foo (par: Integer) { key ID : Integer; @MyAnno: (:par) // reference to entity parameter a : Integer; @MyAnno: (a) // reference to sibling element b : Integer; s { @MyAnno: (y) // reference to sibling element x : Integer; @MyAnno: ($self.a) // reference to top level element y : Integer; } } actions { @MyAnno: ($self.a) action A () } ``` #### CSN Representation In CSN, the expression is represented as a record with two properties: * A string representation of the expression is stored in property `=`. * A tokenized representation of the expression is stored in one of the properties `xpr`, `ref`, `val`, `func`, etc. (as if the expression were written in a query). ```json { "@anExpression": { "=": "foo.bar * 11", "xpr": [ {"ref": ["foo", "bar"]}, "*", {"val": 11} ] }, "@aRefExpr": { "=": "foo.bar", "ref": ["foo", "bar"] }, "@aValueExpr": { "=": "11", "val": 11 } } ``` Note the different CSN representations for a [plain value](#annotation-values) `"@anInteger": 11` and a value written as expression `@aValueExpr: ( 11 )`, respectively. #### Propagation [Annotations are propagated](#annotation-propagation) in views/projections, via includes, and along type references. If the annotation value is an expression, it's sometimes necessary to adapt references inside the expression during propagation, for example, when a referenced element is renamed in a projection. The compiler automatically takes care of the necessary rewriting. When a reference in an annotation expression is rewritten, the `=` property is set to `true`. Example: ```cds entity E { @Common.Text: (text) code : Integer; text : String; } entity P as projection on E { code, text as descr } ``` When propagated to element `code` of projection `P`, the annotation is automatically rewritten to `@Common.Text: (descr)`. ::: details Resulting CSN ```jsonc { "definitions": { "E": { // ... "elements": { // ...
"code": { // original annotation "@Common.Text": { "=": "text", "ref": ["text"] }, "type": "cds.Integer" }, "text": {"type": "cds.String"} } }, "P": { // ... "elements": { // ... "code": { // propagated annotation, reference adapted "@Common.Text": { "=": true, "ref": ["descr"] }, "type": "cds.Integer" }, "descr": {"type": "cds.String"} } } } } ``` ::: ::: info There are situations where automatic rewriting doesn't work, resulting in the compiler error [`anno-missing-rewrite`](https://cap.cloud.sap/docs/cds/compiler/messages#anno-missing-rewrite). Some of these situations are going to be addressed in upcoming releases. ::: #### CDS Annotations Using an expression as annotation value only makes sense if the evaluator of the annotation is prepared to deal with the new CSN representation. Currently, the CAP runtimes only support expressions in the `where` property of the `@restrict` annotation. ```cds entity Orders @(restrict: [ { grant: 'READ', to: 'Auditor', where: (AuditBy = $user.id) } ]) {/*...*/} ``` More annotations are going to follow in upcoming releases. Of course, you can also use this feature in your custom annotations, where you control the code that evaluates the annotations. #### OData Annotations The OData backend of the CAP CDS compiler supports expression-valued annotations. See [Expressions in OData Annotations](../advanced/odata#expression-annotations). ### Extend Array Annotations {#extend-array-annotations} Usually, the annotation value provided in an `annotate` directive overwrites an already existing annotation value. If the existing value is an array, the *ellipsis* syntax allows you to insert new values **before** or **after** the existing entries, instead of overwriting the complete array. The ellipsis represents the already existing array entries. Of course, this works with any kind of array entries.
This is a sample of an existing array: ```cds @anArray: [3, 4] entity Foo { /* elements */ } ``` This shows how to extend the array: ```cds annotate Foo with @anArray: [1, 2, ...]; //> prepend new values: [1, 2, 3, 4] annotate Foo with @anArray: [..., 5, 6]; //> append new values: [3, 4, 5, 6] annotate Foo with @anArray: [1, 2, ..., 5, 6]; //> prepend and append ``` It's also possible to insert new entries at **arbitrary positions**. For this, use `... up to` with a *comparator* value that identifies the insertion point. ```cds [... up to <comparator>, <new entries>, ...] ``` `... up to` represents the existing entries of the array from the current position up to and including the first entry that matches the comparator. New entries are then inserted behind the matched entry. If there's no match, new entries are appended at the end of the existing array. This is a sample of an existing array: ```cds @anArray: [1, 2, 3, 4, 5, 6] entity Bar { /* elements */ } ``` This shows how to insert values after `2` and `4`: ```cds annotate Bar with @anArray: [ ... up to 2, // existing entries 1, 2 2.1, 2.2, // insert new entries 2.1, 2.2 ... up to 4, // existing entries 3, 4 4.1, 4.2, // insert new entries 4.1, 4.2 ... // remaining existing entries 5, 6 ]; ``` The resulting array is: ```js [1, 2, 2.1, 2.2, 3, 4, 4.1, 4.2, 5, 6] ``` If your array entries are objects, you have to provide a *comparator object*. It matches an existing entry if all attributes provided in the comparator match the corresponding attributes in an existing entry. The comparator object doesn't have to contain all attributes that the existing array entries have; simply choose those attributes that sufficiently characterize the array entry after which you want to insert. Only simple values are allowed for the comparator attributes. Example: Insert a new entry after `BeginDate`.
```cds @UI.LineItem: [ { $Type: 'UI.DataFieldForAction', Action: 'TravelService.acceptTravel', Label: '{i18n>AcceptTravel}' }, { Value: TravelID, Label: 'ID' }, { Value: BeginDate, Label: 'Begin' }, { Value: EndDate, Label: 'End' } ] entity TravelService.Travel { /* elements */ } ``` For this, you provide a comparator object with the attribute `Value`: ```cds annotate TravelService.Travel with @UI.LineItem: [ ... up to { Value: BeginDate }, // ... up to with comparator object { Value: BeginWeekday, Label: 'Day of week' }, // new entry ... // remaining array entries ]; ``` ::: tip Only direct annotations can be extended using `...`. It's not supported to extend propagated annotations, for example, from aspects or types. :::
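Per the matching rules above, the `BeginWeekday` entry from the previous example lands right after the line item whose `Value` is `BeginDate`, so the effective annotation after applying the `annotate` directive is:

```cds
@UI.LineItem: [
  { $Type: 'UI.DataFieldForAction', Action: 'TravelService.acceptTravel', Label: '{i18n>AcceptTravel}' },
  { Value: TravelID,     Label: 'ID' },
  { Value: BeginDate,    Label: 'Begin' },
  { Value: BeginWeekday, Label: 'Day of week' }, //> inserted after the matched entry
  { Value: EndDate,      Label: 'End' }
]
```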
## Aspects CDS's aspects allow you to flexibly extend definitions with new elements, as well as to override properties and annotations. They're based on a mixin approach as known from aspect-oriented programming. - [The `extend` Directive](#extend) - [The `annotate` Directive](#annotate) - [Named Aspects](#named-aspects) - [Shortcut Syntax `:`](#includes) - [Extending Views / Projections](#extend-view) - See also: [Aspect-oriented Modelling](aspects) ### The `extend` Directive {#extend} Use `extend` to add extension fields or to add/override metadata of existing definitions, for example, annotations, as follows: ```cds extend Foo with @title:'Foo'; extend Bar with @title:'Bar' { newField : String; extend nestedStructField { newField : String; extend existingField @title:'Nested Field'; } } ``` ::: details Note the nested `extend` for existing fields Make sure that you prepend the `extend` keyword to nested elements if you want to modify them. Without it, a new field with that name would be added. If you only want to add annotations to an existing field, you can use [the `annotate` directive](#annotate) instead. ::: You can also directly extend a single element: ```cds extend Foo:nestedStructField with { newField : String; } ``` With `extend` you can enlarge the *length* of a String or the *precision* and *scale* of a Decimal: ```cds extend User with (length:120); extend Books:price.value with (precision:12,scale:3); ``` The extended type or element itself must directly have the respective property. For multiple conflicting `extend` statements, the last `extend` wins. That means, in three files `a.cds <- b.cds <- c.cds`, where `<-` means `using from`, the `extend` from `c.cds` is applied, as it is the last in the dependency chain. ### The `annotate` Directive {#annotate} The `annotate` directive allows you to annotate already existing definitions that may have been [imported](#model-imports) from other files or projects.
```cds annotate Foo with @title:'Foo'; annotate Bar with @title:'Bar' { nestedStructField { existingField @title:'Nested Field'; } } ``` ::: details `annotate` is a shortcut for `extend` ... The `annotate` directive is essentially a shortcut variant of the [`extend` directive](#extend), with the default mode being switched to `extend`ing existing fields instead of adding new ones. For example, the following is equivalent to the previous example: ```cds extend Foo with @title:'Foo'; extend Bar with @title:'Bar' { extend nestedStructField { extend existingField @title:'Nested Field'; } } ``` ::: You can also directly annotate a single element: ```cds annotate Foo:existingField @title: 'Simple Field'; annotate Foo:nestedStructField.existingField @title:'Nested Field'; ``` ### Named Aspects You can use `extend` or `annotate` with predefined aspects, to apply the same extensions to multiple targets: ```cds aspect SomeAspect { created { at: Timestamp; _by: User; } } ``` ```cds extend Foo with SomeAspect; extend Bar with SomeAspect; ``` If you use `extend`, all nested fields in the named aspect are interpreted as being extension fields. If you use `annotate`, the nested fields are interpreted as existing fields and the annotations are copied to the corresponding target elements. The named extension can be anything, for example, including other `types` or `entities`. Use keyword `aspect` as shown in the example to declare definitions that are only meant to be used in such extensions, not as types for elements. ### Includes -- `:` as Shortcut Syntax {#includes} You can use an inheritance-like syntax option to extend a definition with one or more [named aspects](#named-aspects) as follows: ```cds define entity Foo : SomeAspect, AnotherAspect { key ID : Integer; name : String; [...] 
} ``` This is syntactical sugar and equivalent to using a sequence of [extends](#extend) as follows: ```cds define entity Foo {} extend Foo with SomeAspect; extend Foo with AnotherAspect; extend Foo with { key ID : Integer; name : String; [...] } ``` You can apply this to any definition of an entity or a structured type. ### Extending Views and Projections { #extend-view} Use the `extend with columns` variant to extend the select list of a projection or view entity and do the following: * Include more elements existing in the underlying entity. * Add new calculated fields. * Add new unmanaged associations. ```cds extend SomeView with columns { foo as moo @woo, 1 + 1 as two, bar : Association to Bar on bar.ID = moo } ``` Enhancing nested structs isn't supported. Furthermore, the table alias of the view's data source is not accessible in such an extend. You can use the common [`annotate` directive](#annotate) to just add/override annotations of a view's elements.
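To illustrate the effect of the `extend SomeView with columns` statement above: the extended view behaves as if the added columns had been part of its select list from the start. The following is a sketch of that effective shape; `SomeSource` and `someColumn` are illustrative placeholders for the view's actual source and original select list:

```cds
entity SomeView as select from SomeSource {
  someColumn,                                // original select list (illustrative)
  foo as moo @woo,                           // element from the source, renamed and annotated
  1 + 1 as two,                              // new calculated field
  bar : Association to Bar on bar.ID = moo   // new unmanaged association
};
```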
## Services - [Service Definitions](#service-definitions) - [Exposed Entities](#exposed-entities) - [(Auto-) Redirected Associations](#auto-redirect) - [Auto-exposed Targets](#auto-expose) - [Custom Actions/Functions](#actions) - [Custom-defined Events](#events) - [Extending Services](#extend-service) ### Service Definitions CDS allows you to define service interfaces as collections of exposed entities enclosed in a `service` block, which essentially is and acts the same as [`context`](#context): ```cds service SomeService { entity SomeExposedEntity { ... }; entity AnotherExposedEntity { ... }; } ``` The endpoint of the exposed service is constructed from its name, following some conventions (the string `service` is dropped and kebab-case is enforced). If you want to overwrite the path, you can add the `@path` annotation as follows: ```cds @path: 'myCustomServicePath' service SomeService { ... } ``` [Watch a short video by DJ Adams on how the `@path` annotation works.](https://www.youtube.com/shorts/Q_PipD_7yBs){.learn-more} ### Exposed Entities The entities exposed by a service are most frequently projections on entities from underlying data models. Standard view definitions, using [`as select from`](#views) or [`as projection on`](#as-projection-on), can be used for exposing entities. ```cds service CatalogService { entity Product as projection on data.Products { *, created.at as since } excluding { created }; } service MyOrders { //> $user only implemented for SAP HANA entity Order as select from data.Orders { * } where buyer=$user.id; entity Product as projection on CatalogService.Product; } ``` ::: tip You can optionally add annotations such as `@readonly` or `@insertonly` to exposed entities, which will be enforced by the CAP runtimes in Java and Node.js.
::: Entities can also be exposed as views with parameters: ```cds service MyOrders { entity OrderWithParameter( foo: Integer ) as select from data.Orders where id=:foo; } ``` A parametrized view as modeled in the section on [views with parameters](#views-with-parameters) can be exposed as follows: ```cds service SomeService { entity ViewInService( p1: Integer, p2: Boolean ) as select from data.SomeView(foo: :p1, bar: :p2) {*}; } ``` OData requests for views with parameters then look like this: ```cds GET: /OrderWithParameter(foo=5)/Set or GET: /OrderWithParameter(5)/Set GET: /ViewInService(p1=5, p2=true)/Set ``` An entity doesn't need to be lexically enclosed in a service definition to be exposed by it. An entity's affiliation to a service is established using its fully qualified name, so you can also use one of the following options: - Add a namespace. - Use the service name as prefix. In the following example, all entities belong to/are exposed by the same service: ::: code-group ```cds [myservice.cds] service foo.MyService { entity A { /*...*/ }; } entity foo.MyService.B { /*...*/ }; ``` ::: ::: code-group ```cds [another.cds] namespace foo.MyService; entity C { /*...*/ }; ``` ::: ### (Auto-) Redirected Associations {#auto-redirect} When exposing related entities, associations are automatically redirected. This ensures that clients can navigate between projected entities as expected. For example: ```cds service AdminService { entity Books as projection on my.Books; entity Authors as projection on my.Authors; //> AdminService.Authors.books refers to AdminService.Books } ``` #### Resolving Ambiguities Auto-redirection fails if a target can't be resolved unambiguously, that is, when there is more than one projection with the same minimal 'distance' to the source.
For example, compiling the following model with two projections on `my.Books` would produce this error: ::: danger Target "Books" is exposed in service "AdminService" by multiple projections "AdminService.ListOfBooks", "AdminService.Books" - no implicit redirection. ::: ```cds service AdminService { entity ListOfBooks as projection on my.Books; entity Books as projection on my.Books; entity Authors as projection on my.Authors; //> which one should AdminService.Authors.books refer to? } ``` #### Using `redirected to` with Projected Associations You can use `redirected to` to resolve the ambiguity as follows: ```cds service AdminService { entity ListOfBooks as projection on my.Books; entity Books as projection on my.Books; entity Authors as projection on my.Authors { *, // [!code focus] books : redirected to Books //> resolved ambiguity // [!code focus] }; } ``` #### Using `@cds.redirection.target` Annotations Alternatively, you can use the boolean annotation `@cds.redirection.target` with value `true` to make an entity a preferred redirection target, or with value `false` to exclude an entity as target for auto-redirection. ```cds service AdminService { @cds.redirection.target: true // [!code focus] entity ListOfBooks as projection on my.Books; // [!code focus] entity Books as projection on my.Books; entity Authors as projection on my.Authors; } ``` ### Auto-Exposed Entities {#auto-expose} Annotate entities with `@cds.autoexpose` to automatically expose them in services containing entities with associations referring to them. For example, given the following entity definitions: ```cds // schema.cds namespace schema; entity Bar @cds.autoexpose { key id: Integer; } using { sap.common.CodeList } from '@sap/cds/common'; entity Car : CodeList { key code: Integer; } //> inherits @cds.autoexpose from sap.common.CodeList ``` ... a service definition like this: ```cds using { schema as my } from './schema.cds'; service Zoo { entity Foo { //... 
bar : Association to my.Bar; car : Association to my.Car; } } ``` ... would result in the service being automatically extended like this: ```cds extend service Zoo with { // auto-exposed entities: @readonly entity Foo_bar as projection on Bar; @readonly entity Foo_car as projection on Car; } ``` You can still expose such entities explicitly, for example, to make them read-write: ```cds service Sue { entity Foo { /*...*/ } entity Bar as projection on my.Bar; } ``` [Learn more about **CodeLists in `@sap/cds/common`**.](./common#code-lists){.learn-more} ### Custom Actions and Functions {#actions} Within service definitions, you can additionally specify `actions` and `functions`. Use a comma-separated list of named and typed inbound parameters (optional) and a response type (optional for actions), which can be either a: - [Predefined Type](#types) - [Reference to a custom-defined type](#types) - [Inline definition of an anonymous structured type](#structured-types) ```cds service MyOrders { entity Order { /*...*/ }; // unbound actions / functions type cancelOrderRet { acknowledge: String enum { succeeded; failed; }; message: String; } action cancelOrder ( orderID:Integer, reason:String ) returns cancelOrderRet; function countOrders() returns Integer; function getOpenOrders() returns array of Order; } ``` ::: tip The notion of actions and functions in CDS adopts that of [OData](https://docs.oasis-open.org/odata/odata/v4.0/os/part1-protocol/odata-v4.0-os-part1-protocol.html#_Toc372793737); actions and functions on service-level are _unbound_ ones. ::: #### Bound Actions and Functions { #bound-actions} Actions and functions can also be bound to individual entities of a service, enclosed in an additional `actions` block as the last clause in an entity/view definition. ```cds service CatalogService { entity Products as projection on data.Products { ... 
} actions { // bound actions/functions action addRating (stars: Integer); function getViewsCount() returns Integer; } } ``` Bound actions and functions have a binding parameter that is usually implicit. It can also be modeled explicitly: the first parameter of a bound action or function is treated as binding parameter, if it's typed by `[many] $self`. Use Explicit Binding to control the naming of the binding parameter. Use the keyword `many` to indicate that the action or function is bound to a collection of instances rather than to a single one. ```cds service CatalogService { entity Products as projection on data.Products { ... } actions { // bound actions/functions with explicit binding parameter action A1 (prod: $self, stars: Integer); action A2 (in: many $self); // bound to collection of Products } } ``` Explicitly modelled binding parameters are ignored for OData V2. ### Custom-Defined Events {#events} Similar to [Actions and Functions](../cds/cdl#actions) you can declare `events`, which a service emits via messaging channels. Essentially, an event declaration looks very much like a type definition, specifying the event's name and the type structure of the event messages' payload. ```cds service MyOrders { ... event OrderCanceled { orderID: Integer; reason: String; } } ``` An event can also be defined as projection on an entity, type, or another event. Only the effective signature of the projection is relevant. ```cds service MyOrders { ... 
event OrderCanceledNarrow : projection on OrderCanceled { orderID } } ``` ### Extending Services {#extend-service} You can [extend](#extend) services with additional entities and actions much as you would add new entities to a context: ```cds extend service CatalogService with { entity Foo {}; function getRatings() returns Integer; } ``` Similarly, you can [extend](#extend) entities with additional actions as you would add new elements: ```cds extend entity CatalogService.Products with actions { function getRatings() returns Integer; } ```
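Mechanically, such an extension amounts to merging additional members into the target definition's model representation. The following plain JavaScript sketch illustrates that idea on a simplified, CSN-like model object; it is an illustration only, not how the CDS compiler actually applies extensions:

```javascript
// A simplified, CSN-like model with one entity definition (illustration only)
const model = {
  definitions: {
    'CatalogService.Products': { kind: 'entity', actions: {} }
  }
}

// "extend entity ... with actions {...}" conceptually merges the new
// bound actions/functions into the target's `actions` dictionary
function extendWithActions (model, target, actions) {
  Object.assign(model.definitions[target].actions, actions)
}

extendWithActions(model, 'CatalogService.Products', {
  getRatings: { kind: 'function', returns: { type: 'cds.Integer' } }
})
```

A real implementation would additionally have to check for name clashes between existing and newly added members.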
[JSON Schema]: https://json-schema.org
[OpenAPI]: https://www.openapis.org

# Core Schema Notation (CSN)

CSN (pronounced as "_Season_") is a notation for compact representations of CDS models — tailored to serve as an optimized format to share and interpret models with minimal footprint and dependencies.

It's similar to [JSON Schema] but goes beyond JSON's abilities, in order to capture full-blown _Entity-Relationship Models_ and [Extensions](#aspects). This makes CSN models a perfect source to generate target models, such as [OData/EDM](../advanced/odata) or [OpenAPI] interfaces, as well as persistence models for SQL or NoSQL databases.

## Anatomy

A CSN model in **JSON**:

```json
{
  "requires": [ "@sap/cds/common", "./db/schema" ],
  "definitions": {
    "some.type": { "type": "cds.String", "length": 11 },
    "another.type": { "type": "some.type" },
    "structured.type": { "elements": {
      "foo": { "type": "cds.Integer" },
      "bar": { "type": "cds.String" }
    }}
  },
  "extensions": [
    { "extend": "Foo", "elements": {
      "bar": { "type": "cds.String" }
    }}
  ]
}
```

The same model in **YAML**:

```yaml
requires:
  - '@sap/cds/common'
  - ./db/schema
definitions:
  some.type: {type: cds.String, length: 11}
  another.type: {type: some.type}
  structured.type:
    elements:
      foo: {type: cds.Integer}
      bar: {type: cds.String}
extensions:
  - extend: Foo
    elements:
      bar: {type: cds.String}
```

The same model as a **plain JavaScript** object:

```js
({
  requires: [ '@sap/cds/common', './db/schema' ],
  definitions: {
    'some.type': { type:"cds.String", length:11 },
    'another.type': { type:"some.type" },
    'structured.type': { elements: {
      'foo': { type:"cds.Integer" },
      'bar': { type:"cds.String" }
    }}
  },
  extensions: [
    { extend:'Foo', elements:{
      'bar': { type:"cds.String" }
    }}
  ],
})
```

For the remainder of this spec, you see examples in plain JavaScript representation with the following **conventions**:

```js
({property:...})  // a CSN-specified property name
({'name':...})    // a definition's declared name
"value"           // a string value, including referred names
11, true          // number and boolean literal values
```

#### Properties

* [`requires`](#imports) – an array listing [imported models](#imports)
* [`definitions`](#definitions) – a dictionary of named [definitions](#definitions)
* [`extensions`](#aspects) – an array of unnamed [aspects](#aspects)
* [`i18n`](#i18n) – a dictionary of dictionaries of [text translations](#i18n)

> [!TIP] All properties are optional
> For example, one model could contain a few definitions, while another one only contains some extensions.

> [!NOTE] References are case-sensitive
> All references in properties like `type` or `target` use exactly the same notation regarding casing as their targets' names. To avoid problems when translating models to case-insensitive environments like SQL databases, avoid case-significant names and references. For example, avoid two different definitions in the same scope whose names only differ in casing, such as `foo` and `Foo`.

## Literals

There are several places where literals can show up in models, such as in SQL expressions, calculated fields, or annotations.

Standard literals are represented as in JSON:

| Kind                   | Example                 |
|------------------------|-------------------------|
| Globals                | `true`, `false`, `null` |
| Numbers<sup>1</sup>    | `11` or `2.4`           |
| Strings                | `"foo"`                 |
| Dates<sup>2</sup>      | `"2016-11-24"`          |
| Times<sup>2</sup>      | `"16:11Z"`              |
| DateTimes<sup>2</sup>  | `"2016-11-24T16:11Z"`   |
| Records                | `{"foo":..., ...}`      |
| Arrays                 | `[..., ...]`            |

In addition, CSN specifies these special forms for references, expressions, and `enum` symbols:

| Kind                      | Example               |
|---------------------------|-----------------------|
| Unparsed Expressions      | `{"=":"foo.bar < 9"}` |
| Enum symbols<sup>3</sup>  | `{"#":"asc"}`         |

#### Remarks

> <sup>1</sup> This is as in JSON and shares the same issues when decimals are mapped to doubles with potential rounding errors. The same applies to Integer64. Use strings to avoid that, if applicable.
>
> <sup>2</sup> Also, as in JSON, dates and times are represented just as strings as specified in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601); consumers are assumed to know the types and handle the values correctly.
>
> <sup>3</sup> As enum symbols are equal to their values, it frequently suffices to just provide them as strings. Similar to times and dates in CSN and JSON, the consumers are assumed to know the types and handle the values correctly. The `{"#":...}` syntax option serves cases where you have to distinguish the kind based on the provided value alone, for example, in untyped annotations.

## Definitions

Each entry in the `definitions` dictionary is essentially a type definition. The name is the absolute, fully qualified name of the definition, and the value is a record with the definition details.

#### Example

```js
({definitions:{
  'Name': {type:"cds.String"},
  'Currency': {type:"cds.String", length:3},
  'USD': {type:"Currency"},
  'Amount': {elements:{
    'value': {type:"cds.Decimal", precision:11, scale:3},
    'currency': {type:"Currency"},
  }},
  'SortOrder':{enum:{ 'asc':{}, 'desc':{} }}
}})
```

The __name__ of a definition is its key in the enclosing dictionary, like in `definitions` for top-level entries or in `elements` for structured types and entities. Names **must**:

* Be nonempty strings.
* Neither start nor end with `.` or `::`.
* Not contain substrings `..` or `:::`.
* Not contain the substring `::` more than once.

#### Properties {#def-properties}

* `kind` – one of `context`, `service`, `entity`, `type`, `action`, `function`, or `annotation`
* `type` – an optional base type that this definition is derived from
* [`elements`][elements] – optional dictionary of [_elements_][elements] in case of structured types

Property `kind` is always omitted for [elements] and can be omitted for top-level [type definitions].
These examples are semantically equivalent:

```js
Foo1 = { type:"cds.String" }
Foo2 = { type:"cds.String", kind:"type" }
```

## Type Definitions
[type definitions]: #type-definitions

Custom-defined types are entries in [`definitions`](#definitions) with an optional property `kind`=`"type"` and the following properties.

| Property   | Used for                                                              |
|------------|-----------------------------------------------------------------------|
| `type`     | [Scalar Types](#scalar-types), [Structured Types](#structured-types), and [Associations](#associations) |
| `elements` | [Structured Types](#structured-types)                                 |
| `items`    | [Arrayed Types](#arrayed-types)                                       |
| `enum`     | [Enumeration Types](#enumeration-types)                               |

#### Example

```js
({definitions: {
  'scalar.type':  {type:"cds.String", length:3 },
  'struct.type':  {elements:{'foo': {type:"cds.Integer"}}},
  'arrayed.type': {items:{type:"cds.Integer"}},
  'enum.type':    {enum:{ 'asc':{}, 'desc':{} }}
}})
```

#### Properties

* `kind` – omitted or _`"type"`_
* `type` – the base type this definition is derived from
* [`elements`][elements] – optional element definitions for [_structured types_][struct].
* [`items`][arrays] – optional definition of item types for [_arrayed types_][arrays].
* [`enum`][enum] – an optional dictionary of enum members for [_enumeration types_][enum].
* `value` – a constant [literal value](#literals) or calculation expression
* `default` – a default [value or expression](#literals)
* `localized` _= true_ if this type was declared like _foo : localized String_
* `...` – other type-specific properties, for example, a String's `length`

### Scalar Types

Scalar types always have property `type` specified, plus optional type-specific parameter properties.

```js
({definitions:{
  'scalar.type': {type:"cds.String", length:3 },
}})
```

See the [CDL reference docs](types) for an overview of CDS' built-in types.
While in [CDS sources](cdl) you can refer to these types without prefix, they always have to be specified with their **fully qualified names in CSN**, for example:

```js
({definitions: {
  'Foo': { type:"cds.Integer" },
  'Bar': { type:"cds.Decimal", precision:11, scale:3 },
}})
```

### Structured Types
[struct]: #structured-types
[elements]: #structured-types
[Structured Types]: #structured-types

Structured types are signified by the presence of an `elements` property. The value of `elements` is a dictionary of element definitions: each key is the local name of an element, and each value is in turn a [Type Definition](#type-definitions).

The optional property `includes` contains a list of fully qualified entity-, aspect-, or type-names. Elements, actions, and annotations from those definitions are then copied into the structured type.

```js
({definitions:{
  'structured.type': {elements:{
    'foo': {type:"cds.Integer"},
    'bar': {type:"cds.String"}
  }}
}})
```

### Arrayed Types
[arrays]: #arrayed-types

Arrayed types are signified by the presence of a property `items`, whose value is in turn a [type definition](#type-definitions) that specifies the arrayed items' type.

```js
({definitions:{
  'arrayed.type': {items:{type:"cds.Integer"}}
}})
```

### Enumeration Types
[enum]: #enumeration-types

The `enum` property is a dictionary of enum member elements, with the name being the enum symbol and the value being a [CQN literal value expression](cxn#literal-values). The literal expression optionally specifies a constant `val` as a [literal](#literals) plus optional annotations. An enumeration type can specify an explicit `type` (for example, _Decimal_), but can also omit it; the type is then inferred from the given enumeration values, with _String_ as the default.
```js
({definitions:{
  'Gender': {enum:{
    'male':{}, 'female':{}, 'non_binary': { val: 'non-binary' }
  }},
  'Status': {enum:{
    'submitted': {val:1}, 'fulfilled': {val:2}
  }},
  'Rating': {type:"cds.Decimal", enum:{
    'low':    {val:0},
    'medium': {val:50},
    'high':   {val:100}
  }}
}})
```

## Entity Definitions
[entities]: #entity-definitions
[entity]: #entity-definitions

Entities are [structured types](#structured-types) with **_kind_** =`'entity'`. In addition, one or more elements usually have property `key` set to true, to flag the entity's primary key.

#### Example

```js
({definitions:{
  'Products': {kind:"entity", elements:{
    'ID':    {type:"cds.Integer", key:true},
    'title': {type:"cds.String", notNull:true},
    'price': {type:"Amount", virtual:true},
  }}
}})
```

#### Properties

* `kind` – is always _`"entity"`_
* `elements` – as in [Structured Types], optionally equipped with one or more of these boolean properties:
  * `key` – signifies that the element is (part of) the primary key
  * `virtual` – excludes this element from generic persistence mapping
  * `notNull` – the _not null_ constraint as in SQL
* `includes` – as in [Structured Types]

### View Definitions
[views]: #view-definitions
[view]: #view-definitions

Views are entities defined as projections on underlying entities. In CSN, views are signified by the presence of property `query`, which captures the projection as a [CQN](cqn) expression.
#### Example

```js
({definitions:{
  'Foo': { kind:"entity", query: { SELECT:{
    from: {ref:['Bar']},
    columns: [ {ref:['title']}, {ref:['price']} ]
  } }}
}})
```

#### Properties

* `kind` – mandatory; always _`"entity"`_
* `query` – the parsed query in [CQN](cqn) format
* `elements` – optional [elements signature](#views-with-declared-signatures); usually omitted and inferred
* `params` – optional [parameters](#views-with-parameters)

### Views with Declared Signatures

Views with declared signatures have the additional property `elements` filled in as in [entities](cdl#entities):

```js
({definitions:{
  'with.declared.signature': {kind:"entity",
    elements: {
      'title': {type:"cds.String"},
      'price': {type:"Amount"}
    },
    query: { SELECT:{...} },
  }
}})
```

### Views with Parameters

Views with parameters have an additional property `params` – an optional dictionary of parameter [type definitions](#type-definitions):

```js
({definitions:{
  'with.params': {kind:"entity",
    params: { 'ID': { type: 'cds.Integer' } },
    query: { SELECT:{...} },
  }
}})
```

### Projections

Use the `projection` property for views if you don't need the full power of SQL. See `as projection on` in [CDL](./cdl#as-projection-on) for restrictions.

```js
({ definitions: {
  'Foo': { kind: "entity",
    projection: {
      from: { ref: ['Bar'] },
      columns: [ '*' ]
    }
  }
}})
```

#### Properties

* `kind` – mandatory; always _`"entity"`_
* `projection` – the parsed query; equivalent to `query.SELECT`, see [CQN](cqn)
* `elements` – optional [elements signature](#views-with-declared-signatures); usually omitted and inferred

## Associations

Associations are like [scalar type definitions](#scalar-types) with `type` being `cds.Association` or `cds.Composition`, plus additional properties specifying the association's `target` and optional information like `on` conditions or foreign `keys`.
### Basic to-one Associations

The basic form of associations are *to-one* associations to a designated target:

```js
({definitions:{
  'Books': { kind:"entity", elements:{
    'author': { type:"cds.Association", target:"Authors" },
  }},
  //> an association type-def
  'Currency': { type:"cds.Association", target:"Currencies" },
}})
```

### With Specified `cardinality` {#assoc-card}

Add property `cardinality` to explicitly specify a *to-one* or *to-many* relationship:

```js
({definitions:{
  'Authors': { kind:"entity", elements:{
    'books': { type:"cds.Association", target:"Books", cardinality:{max:"*"} },
  }},
}})
```

Property `cardinality` is an object `{src?,min?,max}` with...

* `src` set to `1` giving a hint to database optimizers that a source entity always exists
* `min` specifying the target's minimum cardinality – default: `0`
* `max` specifying the target's maximum cardinality – default: `1`

In summary, the default cardinality is _[0..1]_, which means *to-one*.

### With Specified `on` Condition {#assoc-on}

So-called *unmanaged* associations have an explicitly specified `on` condition:

```js
({definitions:{
  'Authors': { kind:"entity", elements:{
    'books': { type:"cds.Association", target:"Books", cardinality:{max:"*"},
      on: [{ref:['books', 'author']}, '=', {ref:['$self']}] },
  }}
}})
```

### With Specified `keys` {#assoc-keys}

Managed to-one associations automatically use the target's designated primary `key` elements. You can overrule this by explicitly specifying alternative target properties to be used in the `keys` property:

```js
({definitions:{
  'Books': {kind:"entity", elements:{
    'genre': {type:"cds.Association", target:"Genres", keys:[
      {ref:["category"], as:"cat"},
      {ref:["name"]},
    ]},
  }},
}})
```

Property `keys` has the format and mechanisms of [CQN projections](cqn#select).

## Annotations

Annotations are represented as properties, prefixed with `@`. This format applies to type/entity-level annotations as well as to element-level ones.
#### Example

```js
({definitions:{
  'Employees': {kind:"entity",
    '@title':"Mitarbeiter", '@readonly':true,
    elements:{
      'firstname': {type:"cds.String", '@title':"Vorname"},
      'surname':   {type:"cds.String", '@title':"Nachname"},
    }
  },
}})
```

Annotations are used to add custom information to definitions; the prefixed `@` acts as a protection against conflicts with built-in/standard properties. They're flat lists of key-value pairs, with keys being fully qualified property names and values being represented as introduced in the section [Literals and Expressions](#literals).

## Aspects

In parsed-only models, the top-level property `extensions` holds an array of unapplied extensions or annotations (→ see also [Aspects in CDL](cdl#aspects)). The entries are of this form:

```js
ext = { extend|annotate: <name>, <property>: <value>, … }
```

with:

- `extend` or `annotate` referring to the definition to be extended or annotated
- `<property>` being the property that should be extended, for example, `elements` if an entity should be extended with further elements

### Extend with Named Aspects

The most basic form allows to express an extension of a named definition with another named definition (→ see [Named Aspects](cdl#named-aspects)):

```js
csn = { extensions:[
  { extend:"TargetDefinition", includes:["NamedAspect"] }
]}
```

### Extend with New Elements

The form `{ extend:<name>, <property>:<value>, … }` allows to add elements to an existing [struct] definition as well as to add or override annotations of the target definition:

```js
csn = { extensions:[
  // extend Foo with @foo { ..., bar: String; }
  { extend: "Foo", '@foo': true,
    elements: {
      // adds a new element 'bar'
      bar: { type: "cds.String", '@bar': true },
    }
  },
]}
```

### Annotate with Annotations

The form `{ annotate:<name>, <property>:<value>, … }` allows to add or override annotations of the target definition as well as those of nested elements:

```js
csn = {extensions:[
  // annotate Foo with @foo;
  { annotate:"Foo", '@foo':true },
  // annotate Foo with @foo { boo @boo }
  { annotate:"Foo", '@foo':true, elements: {
    // annotates existing element 'boo'
    boo: {'@boo':true },
  }},
]}
```

## Services

Services are definitions with _kind =`'service'`_:

```js
({definitions:{
  'MyOrders': {kind:"service"}
}})
```

### Actions / Functions

Entity definitions (for _bound_ actions/functions) can have an additional property `actions`. The keys of these `actions` are the (local) names of actions/functions. _Unbound_ actions/functions of a service are represented as top-level definitions.

Example:

```js
({definitions:{
  'OrderService': {kind:"service"},
  'OrderService.Orders': {kind:"entity", elements:{...}, actions:{
    'validate': {kind:"function",
      returns: {type: "cds.Boolean"}
    }
  }},
  'OrderService.cancelOrder': {kind:"action",
    params:{
      'orderID': {type:"cds.Integer"},
      'reason':  {type:"cds.String"},
    },
    returns: {elements:{
      'ack': {enum:{ 'succeeded':{}, 'failed':{} }},
      'msg': {type:"cds.String"},
    }}
  }
}})
```

#### Properties

* `kind` – either `"action"` or `"function"` as in _OData_
* `params` – a dictionary with the values being [Type Definitions](#type-definitions)
* `returns` – a [Type Definition](#type-definitions) describing the response

> Note: The definition of the response can be a reference to a declared type or the inline definition of a new (structured) type.

## Imports

The `requires` property lists other models to import definitions from. It is the CSN equivalent of the CDL [`using` directive](./cdl#using).

#### Example

```js
({
  requires: [ '@sap/cds/common', './db/schema' ],
  // [...]
})
```

As in Node.js, the filenames are either absolute module names or relative filenames, starting with `./` or `../`.

## i18n

A CSN may optionally contain a top-level `i18n` property, which can contain translated texts. The expected structure is as follows:

```js
({
  i18n: {
    'language-key': {
      'text-key': "some string"
    }
  }
})
```

This data must be written and handled by the application; there's no out-of-the-box support for this by CAP.
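For illustration, here is a minimal sketch of such an application-side lookup in plain JavaScript; the language keys and text keys are made-up examples:

```javascript
// A CSN with a filled-in `i18n` section (made-up keys and texts)
const csn = {
  i18n: {
    de: { Books_title: 'Titel' },
    fr: { Books_title: 'Titre' }
  }
}

// resolve a text key for a given language, falling back to the key itself
const localized = (lang, key) => csn.i18n[lang]?.[key] ?? key

localized('de', 'Books_title') //> 'Titel'
localized('en', 'Books_title') //> 'Books_title' (no translation available)
```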
# Query Notation (CQN)

## Introduction

CQN is a canonical plain object representation of CDS queries. Such query objects can be obtained by parsing [CQL](./cql), by using the [query builder APIs](../node.js/cds-ql), or by simply constructing respective objects directly in your code.

For example, the following three snippets all construct the same query object:

```js
// Parsing CQL tagged template strings
let query = cds.ql `SELECT from Foo`
```

```js
// Query building
let query = SELECT.from (ref`Foo`)
```

```js
// Constructing plain CQN objects
let query = {SELECT:{from:{ref:['Foo']}}}
```

Such queries can be [executed with `cds.run`](../node.js/core-services#srv-run-query):

```js
let results = await cds.run (query)
```

Following is a detailed specification of CQN as [TypeScript declarations](https://www.typescriptlang.org/docs/handbook/declaration-files/introduction.html), including all query types and their properties, as well as the fundamental expression types. Find the [full CQN type definitions in the appendix below](#full-cqn-d-ts-file).

## SELECT

Following is the TypeScript declaration of `SELECT` query objects:

```tsx
class SELECT { SELECT: {
  distinct? : true
  count?    : true
  one?      : true
  from      : source
  columns?  : column[]
  where?    : xo[]
  having?   : xo[]
  groupBy?  : expr[]
  orderBy?  : order[]
  limit?    : { rows: val, offset: val }
}}
```

> Using:
> [`source`](#source),
> [`column`](#column),
> [`xo`](#xo),
> [`expr`](#expr),
> [`order`](#order),
> [`val`](#val)

CQL SELECT queries enhance SQL's SELECT statements with these noteworthy additions:

- The `from` clause supports [`{ref}`](#ref) paths with *[infix filters](#infix)*.
- The `columns` clause supports deeply *[nested projections](#expand)*.
- The `count` property requests the total count, similar to OData's `$count`.
- The `one` property causes a single row object to be read instead of an array.
Also, `SELECT` statements with `from` as the only mandatory property are allowed; such a statement is equivalent to SQL's `SELECT * from ...`.

### `.from`
###### source

Property `from` specifies the source of the query, which can be a table, a view, or a subquery. It is specified with type `source` as follows:

```tsx
class SELECT { SELECT: {
  //...
  from : source // [!code focus]
}}
```

```tsx
type source = ref &as | SELECT | {
  join : 'inner' | 'left' | 'right'
  args : [ source, source ]
  on?  : expr
}
```

> Using:
> [`ref`](#ref),
> [`as`](#as),
> [`expr`](#expr)
>
> Used in:
> [`SELECT`](#select)

### `.columns`
###### column
###### as
###### cast
###### infix
###### expand

Property `columns` specifies the columns to be selected, projected, or aggregated, and is specified as an array of `column`s:

```tsx
class SELECT { SELECT: {
  //...
  columns : column[] // [!code focus]
}}
```

```tsx
type column = '*' | expr &as &cast | ref &as &(
  { expand?: column[] } |
  { inline?: column[] }
) &infix
```

```tsx
interface as { as?: name }
interface cast { cast?: {type:name} }
interface infix {
  orderBy? : order[]
  where?   : expr
  limit?   : { rows: val, offset: val }
}
```

> Using:
> [`expr`](#expr),
> [`name`](#name),
> [`ref`](#ref),
>
> Used in:
> [`SELECT`](#select)

### `.where`
### `.having`
### `.search`

Properties `where` and `having` specify the filter predicates to be applied to the rows selected or grouped, respectively. Property `search` is of the same kind and is used for full-text search.

```tsx
class SELECT { SELECT: {
  where  : xo[] // [!code focus]
  having : xo[] // [!code focus]
  search : xo[] // [!code focus]
}}
```

### `.orderBy`
###### order

```tsx
class SELECT { SELECT: {
  //...
  orderBy : order[] // [!code focus]
}}
```

```tsx
type order = expr & {
  sort  : 'asc' | 'desc'
  nulls : 'first' | 'last'
}
```

> Using:
> [`expr`](#expr)
>
> Used in:
> [`SELECT`](#select)
>

## INSERT
## UPSERT

CQN representations for `INSERT` and `UPSERT` are essentially identical:

```tsx
class INSERT { INSERT: UPSERT['UPSERT'] }
class UPSERT { UPSERT: {
  into     : ref
  entries? : data[]
  columns? : string[]
  values?  : scalar[]
  rows?    : scalar[][]
  from?    : SELECT
}}
```

```tsx
interface data { [elm:string]: scalar | data | data[] }
```

> Using:
> [`ref`](#ref),
> [`expr`](#expr)
> [`scalar`](#scalar),
> [`SELECT`](#select)
>
> See also:
> [`UPDATE.data`](#data),

Data to be inserted can be specified in one of the following ways:

* Using [`entries`](#entries) as an array of records with name-value pairs.
* Using [`values`](#values) as in SQL's _values_ clauses.
* Using [`rows`](#rows) as an array of one or more `values`.

The latter two options require a `columns` property to specify the names of the columns to be filled with the values in the same order.

### `.entries`

Allows input data to be specified as records with name-value pairs, including _deep_ inserts.

```js
let q = {INSERT:{ into: { ref: ['Books'] }, entries: [
  { ID:201, title:'Wuthering Heights' },
  { ID:271, title:'Catweazle' }
]}}
```

```js
let q = {INSERT:{ into: { ref: ['Authors'] }, entries: [
  { ID:150, name:'Edgar Allen Poe', books: [
    { ID:251, title:'The Raven' },
    { ID:252, title:'Eleonora' }
  ]}
]}}
```

[See definition in `INSERT` summary](#insert) {.learn-more}

### `.values` {#scalar}

Allows input data to be specified as a single array of values, as in SQL.

```js
let q = {INSERT:{ into: { ref: ['Books'] },
  columns: [ 'ID', 'title', 'author_id', 'stock' ],
  values: [ 201, 'Wuthering Heights', 101, 12 ]
}}
```

[See definition in `INSERT` summary](#insert) {.learn-more}

### `.rows`

Allows input data for multiple rows to be specified as arrays of values.
```js
let q = {INSERT:{ into: { ref: ['Books'] },
  columns: [ 'ID', 'title', 'author_id', 'stock' ],
  rows: [
    [ 201, 'Wuthering Heights', 101, 12 ],
    [ 252, 'Eleonora', 150, 234 ]
  ]
}}
```

[See definition in `INSERT` summary](#insert) {.learn-more}

## UPDATE

```tsx
class UPDATE { UPDATE: {
  entity : ref
  where? : expr
  data   : data
  with   : changes
}}
```

> Using:
> [`ref`](#ref),
> [`expr`](#expr),
> [`data`](#data),
> [`changes`](#changes)

### `.data`

Data to be updated can be specified in property `data` as records with name-value pairs, same as in [`INSERT.entries`](#entries).

```tsx
interface data { [element:name]: scalar | data | data[] }
```

> Using:
> [`name`](#name),
> [`scalar`](#scalar)

### `.with`
###### changes

Property `with` specifies the changes to be applied to the data; it is very similar to property [`data`](#data), with the difference that it also allows [expressions](#expressions) as values.

```tsx
interface changes { [element:name]: scalar | expr | changes | changes[] }
```

> Using:
> [`name`](#name),
> [`expr`](#expr),
> [`scalar`](#scalar)

## DELETE

```tsx
class DELETE { DELETE: {
  from   : ref
  where? : expr
}}
```

> Using:
> [`ref`](#ref),
> [`expr`](#expr)

## Expressions
###### expr
###### ref
###### val
###### xpr
###### list
###### func
###### param
###### xo
###### name
###### scalar

Expressions can be entity or element references, query parameters, literal values, lists of all the former, function calls, sub selects, or compound expressions.

```tsx
type expr = ref | val | xpr | list | func | param | SELECT
```

```tsx
type ref   = { ref: ( name | { id:name &infix })[] }
type val   = { val: scalar }
type xpr   = { xpr: xo[] }
type list  = { list: expr[] }
type func  = { func: string, args: expr[] }
type param = { ref: [ '?' | number | string ], param: true }
```

```tsx
type xo = expr | keyword | operator
type operator = '=' | '==' | '!=' | '<' | '<=' | '>' | '>='
type keyword  = 'in' | 'like' | 'and' | 'or' | 'not'
type scalar   = number | string | boolean | null
type name     = string
```

>[!note]
> CQN by intent does not _understand_ expressions and therefore
> keywords and operators are just represented as plain strings in flat
> `xo` sequences. This allows us to translate to and from any other query languages,
> including support for native SQL features.
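To make the flat `xo` representation concrete, here is a hand-written `where` clause for the condition `stock < 10 and title like '%Raven%'`; the entity and element names are made up:

```javascript
// a flat sequence of expression objects, keywords, and operators
const where = [
  { ref: ['stock'] }, '<', { val: 10 },
  'and',
  { ref: ['title'] }, 'like', { val: '%Raven%' }
]

// embedded into a complete SELECT query object
const query = { SELECT: { from: { ref: ['Books'] }, where } }
```

Because the sequence stays unparsed, consumers such as database layers can translate it token by token into their native dialect.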
## Full `cqn.d.ts` File ::: code-group ```tsx [cqn.d.ts] /** * `INSERT` and `UPSERT` queries are represented by the same internal * structures. The `UPSERT` keyword is used to indicate that the * statement should be updated if the targeted data exists. * The `into` property specifies the target entity. * * The data to be inserted or updated can be specified in different ways: * * - in the `entries` property as deeply nested records. * - in the `columns` and `values` properties as in SQL. * - in the `columns` and `rows` properties, with `rows` being array of `values`. * - in the `from` property with a `SELECT` query to provide the data to be inserted. * * The latter is the equivalent of SQL's `INSERT INTO ... SELECT ...` statements. */ export class INSERT { INSERT: UPSERT['UPSERT'] } export class UPSERT { UPSERT: { into : ref entries? : data[] columns? : string[] values? : scalar[] rows? : scalar[][] from? : SELECT }} /** * `UPDATE` queries are used to capture modifications to existing data. * They support a `where` clause to specify the rows to be updated, * and a `with` clause to specify the new values. Alternatively, the * `data` property can be used to specify updates with plain data only. */ export class UPDATE { UPDATE: { entity : ref where? : expr data : data with : changes }} /** * `DELETE` queries are used to remove data from a target datasource. * They support a `where` clause to specify the rows to be deleted. */ export class DELETE { DELETE: { from : ref where? : expr }} /** * `SELECT` queries are used to retrieve data from a target datasource, * and very much resemble SQL's `SELECT` statements, with these noteworthy * additions: * * - The `from` clause supports `{ref}` paths with infix filters. * - The `columns` clause supports deeply nested projections. * - The `count` property requests the total count, similar to OData's `$count`. * - The `one` property indicates that only a single record object shall be * returned instead of an array. 
* * Also, CDS, and hence CQN, supports minimalistic `SELECT` statements with a `from` * as the only mandatory property, which is equivalent to SQL's `SELECT * from ...`. */ export class SELECT { SELECT: { distinct? : true count? : true one? : true from : source columns? : column[] where? : xo[] having? : xo[] groupBy? : expr[] orderBy? : order[] limit? : { rows: val, offset: val } }} type source = OneOf< ref &as | SELECT | { join : 'inner' | 'left' | 'right' args : [ source, source ] on? : expr }> type column = OneOf< '*' | expr &as &cast | ref &as & OneOf<( { expand?: column[] } | { inline?: column[] } )> &infix > type order = expr & { sort : 'asc' | 'desc' nulls : 'first' | 'last' } interface changes { [elm:string]: OneOf< scalar | expr | changes | changes[] >} interface data { [elm:string]: OneOf< scalar | data | data[] >} interface as { as?: name } interface cast { cast?: {type:name} } interface infix { orderBy? : order[] where? : expr limit? : { rows: val, offset: val } } /** * Expressions can be entity or element references, query parameters, * literal values, lists of all the former, function calls, sub selects, * or compound expressions. */ export type expr = OneOf< ref | val | xpr | list | func | param | SELECT > export type ref = { ref: OneOf< name | { id:name &infix } >[] } export type val = { val: scalar } export type xpr = { xpr: xo[] } export type list = { list: expr[] } export type func = { func: string, args: expr[] } export type param = { ref: [ '?' | number | string ], param: true } /** * This is used in `{xpr}` objects as well as in `SELECT.where` clauses to * represent compound expressions as flat `xo` sequences. * Note that CQN by intent does not _understand_ expressions and therefore * keywords and operators are just represented as plain strings. * This allows us to translate to and from any other query languages, * including support for native SQL features. 
*/ type xo = OneOf< expr | keyword | operator > type operator = '=' | '==' | '!=' | '<' | '<=' | '>' | '>=' type keyword = 'in' | 'like' | 'and' | 'or' | 'not' type scalar = number | string | boolean | null type name = string // --------------------------------------------------------------------------- // maybe coming later... declare class CREATE { CREATE: {} } declare class DROP { DROP: {} } // --------------------------------------------------------------------------- // internal helpers... type OneOf = Partial<(U extends any ? (k:U) => void : never) extends (k: infer I) => void ? I : never> ``` ::: # Common Types and Aspects _@sap/cds/common_ {.subtitle}
CDS ships with a prebuilt model *`@sap/cds/common`* that provides common types and aspects for reuse.

[ISO 3166]: https://en.wikipedia.org/wiki/ISO_3166
[ISO 3166-1]: https://en.wikipedia.org/wiki/ISO_3166-1
[ISO 3166-2]: https://en.wikipedia.org/wiki/ISO_3166-2
[ISO 3166-3]: https://en.wikipedia.org/wiki/ISO_3166-3
[ISO 4217]: https://en.wikipedia.org/wiki/ISO_4217
[ISO/IEC 15897]: https://en.wikipedia.org/wiki/ISO/IEC_15897
[tzdata]: https://en.wikipedia.org/wiki/Tz_database
[localized data]: ../guides/localized-data
[temporal data]: ../guides/temporal-data

## Why Use _@sap/cds/common_?

It's recommended that all applications use the common types and aspects provided through _@sap/cds/common_ to benefit from these features:

* **Concise** and **comprehensible** models → see also [Conceptual Modeling](../guides/domain-modeling)
* **Foster interoperability** between all applications
* **Proven best practices** captured from real applications
* **Streamlined** data models with **minimal entry barriers**
* **Optimized** implementations and runtime performance
* **Automatic** support for [localized](../guides/localized-data) code lists and [value helps](../advanced/fiori#pre-defined-types-in-sap-cds-common)
* **Extensibility** using [Aspects](../guides/domain-modeling#aspect-oriented-modeling)
* **Verticalization** through third-party extension packages

For example, usage is as simple as shown in the following sample:

```cds
using { Country } from '@sap/cds/common';
entity Addresses {
  street  : String;
  town    : String;
  country : Country; //> using reuse type
}
```

### Outcome = Optimized Best Practice

The final outcome in terms of modeling patterns, persistence structures, and implementations is essentially the same as you would have achieved with native means, provided you had collected the same design experience from prior solutions as we did.

::: tip
All the common reuse features of _@sap/cds/common_ are provided only through this ~100-line .cds model.
Additional runtime support isn't required. _@sap/cds/common_ merely uses basic CDS modeling features as well as generic features like [localized data] and [temporal data] (which only need minimal runtime support with minimal overhead).
:::

In effect, the results are **straightforward**, capturing **best practices** we learned from real business applications, with **minimal footprint**, **optimized performance**, and **maximized adaptability** and **extensibility**.

## Common Reuse Aspects

_@sap/cds/common_ defines the following [aspects](cdl#aspects) for use in your entity definitions. They give you shortcuts for concise and comprehensible models, interoperability, and out-of-the-box runtime features connected to them.

### Aspect `cuid`

Use `cuid` as a convenient shortcut to add canonical, universally unique primary keys to your entity definitions. These examples are equivalent:

```cds
entity Foo : cuid {...}
```

```cds
entity Foo {
  key ID : UUID;
  [...]
}
```

> The service provider runtimes automatically fill in UUID-typed keys like these with auto-generated UUIDs.

[Learn more about **canonical keys** and **UUIDs**.](../guides/domain-modeling#prefer-canonic-keys){ .learn-more}

### Aspect `managed`

Use `managed` to add four elements capturing _created by/at_ and latest _modified by/at_ management information for records. The following examples are equivalent:

```cds
entity Foo : managed {...}
```

```cds
entity Foo {
  createdAt  : Timestamp @cds.on.insert : $now;
  createdBy  : User      @cds.on.insert : $user;
  modifiedAt : Timestamp @cds.on.insert : $now  @cds.on.update : $now;
  modifiedBy : User      @cds.on.insert : $user @cds.on.update : $user;
  [...]
}
```

::: tip
`modifiedAt` and `modifiedBy` are set whenever the respective row was modified, that is, also during `CREATE` operations.
:::

The annotations `@cds.on.insert/update` are handled by generic service providers to fill in those fields automatically.
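How the generic providers apply these annotations can be sketched in plain JavaScript. This is a conceptual model only, not the actual CAP runtime code; the function name and event handling are made up for illustration:

```javascript
// Conceptual sketch (NOT the CAP runtime): apply the @cds.on.insert and
// @cds.on.update semantics of the `managed` aspect to a record before writing.
function applyManagedFields(record, { event, user, now = new Date().toISOString() }) {
  if (event === 'CREATE') {
    record.createdAt = now   // @cds.on.insert : $now
    record.createdBy = user  // @cds.on.insert : $user
  }
  // modifiedAt/modifiedBy are set on updates AND on creates,
  // since creating a row also counts as modifying it
  record.modifiedAt = now    // @cds.on.insert / @cds.on.update : $now
  record.modifiedBy = user   // @cds.on.insert / @cds.on.update : $user
  return record
}

const row = applyManagedFields({ title: 'Foo' }, { event: 'CREATE', user: 'alice' })
// row now has createdAt/createdBy as well as modifiedAt/modifiedBy filled in
```

A subsequent call with `event: 'UPDATE'` would leave `createdAt`/`createdBy` untouched and only refresh the `modified*` pair, mirroring the behavior described above.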
[Learn more about **generic service features**.](../guides/domain-modeling#managed-data){ .learn-more}

### Aspect `temporal`

This aspect adds two canonical elements, `validFrom` and `validTo`, to an entity. It also adds a tag annotation that connects the entity to the CDS compiler's and runtime's built-in support for _[Temporal Data](../guides/temporal-data)_. This built-in support covers handling date-effective records and time slices, including time travel. All you have to do is add the `temporal` aspect to the respective entities as follows:

```cds
entity Contract : temporal {...}
```

[Learn more about **temporal data**.][temporal data]{ .learn-more}

## Common Reuse Types {#code-types}

_@sap/cds/common_ provides predefined easy-to-use types for _Countries_, _Currencies_, and _Languages_. Use these types in all applications to foster interoperability.

### Type `Country`

[`Country`]: #country

The reuse type `Country` is defined in _@sap/cds/common_ as a simple managed [Association](cdl#associations) to the [code list](#code-lists) for [countries](#entity-countries) as follows:

```cds
type Country : Association to sap.common.Countries;
```

Here's an example of how you would use that reuse type:

```cds
using { Country } from '@sap/cds/common';
entity Addresses {
  street  : String;
  town    : String;
  country : Country; //> using reuse type
}
```

The [code lists](#code-lists) define a key element `code`, which results in a foreign key column `country_code` in your SQL table for Addresses. For example:

```sql
CREATE TABLE Addresses (
  street NVARCHAR(5000),
  town NVARCHAR(5000),
  country_code NVARCHAR(3) -- foreign key
);
```

[Learn more about **managed associations**.](cdl#associations){ .learn-more}

### Type `Currency`

The type for an association to [Currencies](#entity-currencies).
```cds
type Currency : Association to sap.common.Currencies;
```

[It's the same as for `Country`.](#type-country){ .learn-more}

### Type `Language`

The type for an association to [Languages](#entity-languages).

```cds
type Language : Association to sap.common.Languages;
```

[It's the same as for `Country`.](#type-country){ .learn-more}

### Type `Timezone`

The type for an association to [Timezones](#entity-timezones).

```cds
type Timezone : Association to sap.common.Timezones;
```

[It's the same as for `Country`.](#type-country){ .learn-more}

## Common Code Lists { #code-lists}

As seen in the previous section, the reuse types `Country`, `Currency`, and `Language` are defined as associations to respective code list entities. They act as code list tables for respective elements in your domain model.

> Note: You rarely have to refer to the code lists directly in consuming models; instead, you refer to them transitively by using the corresponding reuse types [as shown previously](#code-types).

#### Namespace: `sap.common`

The following definitions are within namespace `sap.common`...

### Aspect `CodeList`

This is the base definition for the code list entities in _@sap/cds/common_. It can also be used for your own code lists.

```cds
aspect sap.common.CodeList {
  name  : localized String(255);
  descr : localized String(1000);
}
```

[Learn more about the **localized** keyword.](../guides/localized-data){ .learn-more}

### Entity `Countries`

The code list entity for countries is meant to be used with **[ISO 3166-1] two-letter alpha codes** as primary keys. For example, `'GB'` for the United Kingdom. Nevertheless, it's defined as `String(3)` to allow you to fill in three-letter codes, if needed.

```cds
entity sap.common.Countries : CodeList {
  key code : String(3); //> ISO 3166-1 alpha-2 codes (or alpha-3)
}
```

### Entity `Currencies`

The code list entity for currencies is meant to be used with **[ISO 4217] three-letter alpha codes** as primary keys, for example, `'USD'` for US Dollar.
In addition, it provides elements to hold the minor unit fractions and common currency symbols.

```cds
entity sap.common.Currencies : CodeList {
  key code  : String(3); //> ISO 4217 alpha-3 codes
  symbol    : String(5); //> for example, $, €, £, ₪, ...
  minorUnit : Int16;     //> for example, 0 or 2
}
```

### Entity `Languages`

The code list entity for languages is meant to be used with POSIX locales as defined in **[ISO/IEC 15897]** as primary keys. For example, `'en_GB'` for British English.

```cds
entity sap.common.Languages : CodeList {
  key code : sap.common.Locale; //> for example, en_GB
}
```

[Learn more on **normalized locales**.](../guides/i18n#normalized-locales){ .learn-more}

### Entity `Timezones`

The code list entity for time zones is meant to be used with primary keys like _Area/Location_, as defined in the [IANA time zone database][tzdata]. Examples are `America/Argentina/Buenos_Aires`, `Europe/Berlin`, or `Etc/UTC`.

```cds
entity sap.common.Timezones : CodeList {
  key code : String(100); //> for example, Europe/Berlin
}
```

[Learn more about time zones in JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) {.learn-more}
[Learn more about time zones in Java](https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/time/ZoneId.html) {.learn-more}

### SQL Persistence

The following table definition represents the resulting SQL persistence of the `Countries` code list; the ones for `Currencies` and `Languages` look alike:

```sql
-- the basic code list table
CREATE TABLE sap_common_Countries (
  name NVARCHAR(255),
  descr NVARCHAR(1000),
  code NVARCHAR(3),
  PRIMARY KEY(code)
);
```

### Minimalistic Design

The models for code lists are intentionally minimalistic to keep the entry barriers as low as possible, focusing on the bare minimum of what all applications generally need: a unique code and localizable fields for name and full name or descriptions.
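For example, the `code`, `symbol`, and `minorUnit` elements of `Currencies` are all a client needs to render amounts. A minimal sketch in plain JavaScript, with the code list rows inlined as hypothetical sample data:

```javascript
// Render an amount using a currency's symbol and minorUnit, as modeled in
// sap.common.Currencies. The rows below are hypothetical sample data; in an
// application they would be read from the Currencies code list.
const currencies = {
  USD: { symbol: '$', minorUnit: 2 },
  JPY: { symbol: '¥', minorUnit: 0 },
}

function formatAmount(amount, code) {
  const { symbol, minorUnit } = currencies[code]
  // minorUnit tells us how many fraction digits the currency uses
  return symbol + amount.toFixed(minorUnit)
}

formatAmount(1234.5, 'USD')  // '$1234.50'
formatAmount(1980, 'JPY')    // '¥1980'
```

In production code you would rather use `Intl.NumberFormat` with the ISO code; the point here is only that the minimalistic model already carries the needed information.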
**ISO alpha codes** for languages, countries, and currencies were chosen because they:

1. Are most common (most projects would choose them)
2. Are most efficient (as these codes are also frequently displayed on UIs)
3. Guarantee minimal entry barriers (contributing to point 1 above)
4. Guarantee best support (for example, by readable foreign keys)

The assumption is that ~80% of all apps don't need more than what is already covered in this minimalistic model. Yet, in case you need more, you can easily leverage CDS standard features to adapt and extend these base models to your needs, as demonstrated in the section [Adapting to your needs](#adapting-to-your-needs).

## Aspects for Localized Data

The following are types and aspects mostly used behind the scenes for [localized data](../guides/localized-data).
For example, given this entity definition:

```cds
entity Foo {
  key ID : UUID;
  name  : localized String;
  descr : localized String;
}
```

When unfolding the `localized` fields, we essentially add `.texts` entities in these steps:

1. Add a new entity `Foo.texts` which inherits from `TextsAspect`:

   ```cds
   entity Foo.texts : sap.common.TextsAspect { ... }
   ```

   which in turn unfolds to:

   ```cds
   entity Foo.texts {
     key locale : sap.common.Locale;
   }
   ```

2. Add the primary key of the main entity `Foo`:

   ```cds
   entity Foo.texts {
     key locale : sap.common.Locale;
     key ID : UUID; // [!code focus]
   }
   ```

3. Add the localized fields:

   ```cds
   entity Foo.texts {
     key locale : sap.common.Locale;
     key ID : UUID;
     name  : String; // [!code focus]
     descr : String; // [!code focus]
   }
   ```

#### Namespace: `sap.common`

The following definitions are within namespace `sap.common`...

### Aspect `TextsAspect` {#texts-aspects}

This aspect is used when generating `.texts` entities for the unfolding of localized elements. It can be extended, which effectively extends all generated `.texts` entities.

```cds
aspect sap.common.TextsAspect {
  key locale: sap.common.Locale;
}
```

[Learn more about **Extending .texts entities**.](../guides/localized-data#extending-texts-entities){ .learn-more}

### Type `Locale` {#locale-type}

```cds
type sap.common.Locale : String(14) @title: '{i18n>LanguageCode}';
```

The reuse type `sap.common.Locale` is used when generating `.texts` entities for the unfolding of *localized* elements.
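The unfolded `.texts` entities enable reading localized texts with a fallback to the default text, which the generated database views implement with `COALESCE`. A plain-JavaScript sketch of that resolution, with hypothetical in-memory rows standing in for `Foo` and `Foo.texts`:

```javascript
// Sketch of localized-text resolution: join a main entity with its .texts
// entity and fall back to the default text where no translation exists.
// The rows are hypothetical sample data, not read from a real database.
const Foo = [{ ID: 1, name: 'Book', descr: 'A printed book' }]
const Foo_texts = [{ ID: 1, locale: 'de', name: 'Buch', descr: 'Ein gedrucktes Buch' }]

function localized(rows, texts, locale) {
  return rows.map(row => {
    const t = texts.find(x => x.ID === row.ID && x.locale === locale)
    return {
      ...row,
      name: t?.name ?? row.name,    // like COALESCE(localized.name, name)
      descr: t?.descr ?? row.descr, // like COALESCE(localized.descr, descr)
    }
  })
}

localized(Foo, Foo_texts, 'de')[0].name  // 'Buch'
localized(Foo, Foo_texts, 'fr')[0].name  // 'Book' (falls back to the default)
```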
[Learn more about **localized data**.](../guides/localized-data){ .learn-more}

### SQL Persistence

In addition to the base entity, these tables and views are generated behind the scenes to efficiently deal with translations:

```sql
-- _texts table for translations
CREATE TABLE Foo_texts (
  ID NVARCHAR(36),
  locale NVARCHAR(14),
  name NVARCHAR(255),
  descr NVARCHAR(1000),
  PRIMARY KEY(ID, locale)
);
```

```sql
-- view to easily read localized texts with automatic fallback
CREATE VIEW localized_Foo AS SELECT
  ID,
  COALESCE (localized.name, name) AS name,
  COALESCE (localized.descr, descr) AS descr
FROM Foo LEFT JOIN Foo_texts AS localized
  ON localized.ID = Foo.ID
  AND localized.locale = SESSION_CONTEXT('locale')
```

[Learn more about **localized data**.](../guides/localized-data){ .learn-more}

## Providing Initial Data

You can provide initial data for the code lists by placing CSV files in a folder called `data` next to your data models. The following is an example of a `csv` file to provide data for countries:

::: code-group
```csv [db/data/sap.common-Countries.csv]
code;name;descr
AU;Australia;Commonwealth of Australia
CA;Canada;Canada
CN;China;People's Republic of China (PRC)
FR;France;French Republic
DE;Germany;Federal Republic of Germany
IN;India;Republic of India
IL;Israel;State of Israel
MM;Myanmar;Republic of the Union of Myanmar
GB;United Kingdom;United Kingdom of Great Britain and Northern Ireland
US;United States;United States of America (USA)
EU;European Union;European Union
```
:::

[Learn more about the database aspects of **Providing Initial Data**.](../guides/databases#providing-initial-data){ .learn-more}

### Add Translated Texts

In addition, you can provide translations for the `sap.common.Countries_texts` table as follows:

::: code-group
```csv [db/data/sap.common-Countries_texts.csv]
code;locale;name;descr
AU;de;Australien;Commonwealth Australien
CA;de;Kanada;Canada
CN;de;China;Volksrepublik China
FR;de;Frankreich;Republik Frankreich
DE;de;Deutschland;Bundesrepublik Deutschland
IN;de;Indien;Republik Indien
IL;de;Israel;Staat Israel
MM;de;Myanmar;Republik der Union Myanmar
GB;de;Vereinigtes Königreich;Vereinigtes Königreich Großbritannien und Nordirland
US;de;Vereinigte Staaten;Vereinigte Staaten von Amerika
EU;de;Europäische Union;Europäische Union
```
:::

[Learn more about **Localization/i18n**.](../guides/localized-data){ .learn-more}

### Using Tools like Excel

You can use Excel or similar tools to maintain these files. For example, the following screenshot shows how we maintained the above two files in Numbers on a Mac:

![This screenshot is explained in the accompanying text.](./assets/csv-numbers.png)

### Using Prebuilt Content Package {#prebuilt-data}

Package [@sap/cds-common-content](https://www.npmjs.com/package/@sap/cds-common-content) provides prebuilt data for the entities `Countries`, `Currencies`, `Languages`, and `Timezones`. Add it to your project:

```sh
npm add @sap/cds-common-content --save
```

Use it in your `cds` files:

```cds
using from '@sap/cds-common-content';
```

[Learn more about integrating reuse packages](../guides/extensibility/composition){.learn-more}

## Adapting to Your Needs

As stated, the predefined definitions are minimalistic by intent. Yet, as _@sap/cds/common_ is also just a CDS model, you can apply all the standard features provided by [CDS](./cdl), especially CDS [Aspects](./cdl#aspects), to adapt and extend these definitions to your needs. Let's look at a few examples of what could be done. You can combine several such extensions in one effective model.

::: tip
You can do such extensions in the models of your project. You can also collect your extensions into reuse packages and share them as common definitions with several consuming projects, similar to _@sap/cds/common_ itself.
:::

[Learn more about providing reuse packages.](../guides/extensibility/composition){ .learn-more}

### Adding Detailed Fields as of [ISO 3166-1]

```cds
using { sap.common.Countries } from '@sap/cds/common';
extend Countries {
  numcode      : Integer;      //> ISO 3166-1 three-digit numeric codes
  alpha3       : String(3);    //> ISO 3166-1 three-letter alpha codes
  alpha4       : String(4);    //> ISO 3166-3 four-letter alpha codes
  independent  : Boolean;
  status       : String(111);
  statusRemark : String(1111);
  remarkPart3  : String(1111);
}
```

> Value lists in SAP Fiori automatically search in the new text fields as well.

### Protecting Certain Entries

Some application logic might have to be hard-coded against certain entries in code lists. Therefore, these entries have to be protected against changes and removal. For example, let's assume a code list for payment methods defined as follows:

```cds
entity PaymentMethods : sap.common.CodeList {
  key code : String(11);
}
```

Let's further assume the entries with code `Main` and `Travel` are required by implementations and hence must not be changed or removed. Have a look at a possible solution.
#### Programmatic Solution

A fallback, and at the same time the most open and flexible approach, is to use a custom handler to assert that. For example, in Node.js:

```js
srv.on ('DELETE', 'PaymentMethods', req => {
  // assumes a query like DELETE ... where code = <value>,
  // i.e. a CQN where clause of the form [ {ref}, '=', {val} ]
  const entry = req.query.DELETE.where[2].val
  if (['Main','Travel'].includes(entry))
    return req.reject(403, 'these entries must not be deleted')
})
```

### Using Different Foreign Keys

Let's assume you prefer foreign keys that keep referring to the current code list entries, without having to adjust them when alpha codes change. This can be achieved by adding and using numeric ISO codes for foreign keys instead of the alpha codes.

::: code-group
```cds [your-common.2.cds]
namespace your.common;
using { sap.common.Countries } from '@sap/cds/common';

// Extend Countries code list with fields for numeric codes
extend Countries {
  numcode : Integer; //> ISO 3166-1 three-digit numeric codes
}

// Define an own Country type using numcodes for foreign keys
type Country : Association to Countries { numcode };
```
:::

You can use your own definition of `Country` instead of the one from _@sap/cds/common_ in your models as follows:

```cds
using { your.common.Country } from './your-common.2';
entity Addresses {
  //...
  country : Country;
}
```

### Mapping to SAP S/4HANA or ABAP Table Signatures

```cds
using { sap.common.Countries } from '@sap/cds/common';
entity Countries4GFN as projection on Countries {
  code as CountryCodeAlpha2,
  name as CountryShortName,
  // ...
}
entity Countries4ABAP as projection on Countries {
  code as LAND,
  // ...
}
```

These views are updatable on SAP HANA and many other databases. You can also use CDS to expose them through corresponding OData services in order to ease integration with SAP S/4HANA or older ABAP backends.

## Adding Own Code Lists

As another example of adaptation, let's add support for subdivisions, that is, regions as of [ISO 3166-2], to countries.
### Defining a New Code List Entity

::: code-group
```cds [your-common.4.1.cds]
using sap from '@sap/cds/common';

// new code list for regions
entity Regions : sap.common.CodeList {
  key code : String(5); // ISO 3166-2 alpha5 codes, like DE-BW
  country  : Association to sap.common.Countries;
}

// bi-directionally associate Regions with Countries
extend sap.common.Countries {
  regions : Composition of many Regions on regions.country = $self;
}
```
:::

`Regions` is a new, custom-defined code list entity, defined in the same way as the predefined ones in _@sap/cds/common_. In particular, it inherits all elements and annotations from the base definition [`sap.common.CodeList`](#code-lists). For example, it inherits the `@cds.autoexpose` annotation, which ensures that `Regions` is auto-exposed in any OData service that has exposed entities with associations to it. The localization of the predefined elements `name` and `descr` is also inherited.

[Learn in our sample how an own code list can be used to localize `enum` values.](https://github.com/SAP-samples/cap-sflight/blob/236de55b58fd0620dcd1d4f043779a7c632391b1/db/schema.cds#L60){.learn-more}

### Defining a New Reuse Type

Following the pattern of _@sap/cds/common_ a bit further, you can also define a reuse type for regions as a managed association:

::: code-group
```cds [your-common.4.2.cds]
using { Regions } from './your-common.4.1'; /*>skip<*/

// Define an own reuse type referring to Regions
type Region : Association to Regions;
```
:::

### Using the New Reuse Type and Code List

This finally allows you to add respective elements the same way you do with the predefined reuse types. These elements receive the same support from built-in generic features.
For example:

```cds
using { Country, Region } from './your-common.4.2';
entity Addresses {
  street  : String;
  town    : String;
  country : Country; //> pre-defined reuse type
  region  : Region;  //> your custom reuse type
}
```

## Code Lists with Validity

Even ISO codes may change over time, and you may have to react to that in your applications, for example, when Burma was renamed to Myanmar in 1989. Let's investigate strategies for reflecting such changes in our code lists.

### Accommodating Changes

The renaming of Burma to Myanmar in 1989 was reflected in [ISO 3166] as follows (_the alpha-4 codes as specified in [ISO 3166-3] signify entries officially deleted from [ISO 3166-1] code lists_):

| Name    | Alpha-2 | Alpha-3 | Alpha-4 | Numeric |
|---------|---------|---------|---------|---------|
| Burma   | BU      | BUR     | BUMM    | 104     |
| Myanmar | MM      | MMR     |         | 104     |

By default, and with the given default definitions in _@sap/cds/common_, this would have been reflected as a new entry for Myanmar, and you'd have the following choices on what to do with the existing records in your data:

* **(a)** Adjust foreign keys for records so that they always reflect the current state.
* **(b)** Keep foreign keys as is for cases where the old records reflect the state effective at the time they were created or valid.

### Exclude Outdated Entries from Pick Lists (Optional)

Although outdated entries like the one for Burma have to remain in the code lists as targets for references from historic records in other entities, you would certainly want to exclude them from all pick lists used in UIs when entering new data. This is how you could achieve that:

#### 1. Extend the Common Code List Entity

```cds
using { sap.common.Countries } from '@sap/cds/common';
extend Countries with {
  validTo : Date default '9999-12-31';
}
```

#### 2. Fill Validity Boundaries in Code Lists

| code | name    | validTo    |
|------|---------|------------|
| BU   | Burma   | 1989-06-18 |
| MM   | Myanmar | 9999-12-31 |

#### 3.
Model Pick List Entity

Add the following line to your service definition:

```cds
entity CountriesPickList as projection on sap.common.Countries where validTo >= $now;
```

Basically, the entity `Countries` serves all standard requests, and the new entity `CountriesPickList` is built for the value help only. This entity is a projection that gives you only those records that are valid right now.

#### 4. Include Pick List with Validity on the UI

This snippet equips UI fields for a `country` association with a value help from the `CountriesPickList` entity.

```cds
annotate YourService.EntityName with {
  country @(
    Common: {
      Text: country.name,
      // TextArrangement: #TextOnly,
      ValueList: {
        Label: 'Country Value Help',
        CollectionPath: 'CountriesPickList',
        Parameters: [
          {
            $Type: 'Common.ValueListParameterInOut',
            LocalDataProperty: country_code,
            ValueListProperty: 'code'
          },
          {
            $Type: 'Common.ValueListParameterDisplayOnly',
            ValueListProperty: 'name'
          }
        ]
      }
    }
  );
}
```

# Compiler Messages

This page lists selected error messages and explanations on how to fix them. It is not a complete list of all compiler messages.

::: warning Note on message IDs
Message IDs are not finalized yet. They can change at short notice.
:::

## anno-duplicate-unrelated-layer

An annotation is assigned multiple times through unrelated layers.

A _layer_ can be seen as a group of connected sources, for example CDL files. They form a cyclic connection through their dependencies (for example, `using` in CDL). If there are no cyclic dependencies, a single CDL file is equivalent to a layer.
#### Example

Erroneous code example using four CDS files:

```cds
// (1) Base.cds: Contains the artifact that should be annotated
entity FooBar { }

// (2) FooAnnotate.cds: First unrelated layer to Base.cds
using from './Base';
annotate FooBar with @Anno: 'Foo';

// (3) BarAnnotate.cds: Second unrelated layer to Base.cds
using from './Base';
annotate FooBar with @Anno: 'Bar';

// (4) All.cds: Combine all files ❌
using from './FooAnnotate';
using from './BarAnnotate';
```

In (4) the compiler will warn that there are duplicate annotations in unrelated layers. That is because (2) and (3) are unrelated, i.e. they do not have a connection. Due to these unrelated layers, the compiler can't decide in (4) which annotation should be applied first.

Instead of passing (4) to the compiler, you can also pass (2) and (3) to it. Because there are no cyclic dependencies between the files, each file represents one layer.

#### How to Fix

Remove one of the duplicate annotations. Chances are that only one was intended to begin with. For the erroneous example above, remove the annotation from (3).

Alternatively, add an annotation assignment to (4). This annotation has precedence and the error will vanish. For the example above, (4) will look like this:

```cds
// (4) All.cds: Combine all files
using from './FooAnnotate';
using from './BarAnnotate';

// This annotation has precedence.
annotate FooBar with @Anno: 'Bar';
```

You can also make (3) depend on (2) so that they are no longer in unrelated layers and the compiler can determine which annotation to apply.

```cds
// (3) BarAnnotate.cds: Now depends on (2)
using from './FooAnnotate';
annotate FooBar with @Anno: 'Bar';
```

This works because there is now a defined dependency order.

## anno-missing-rewrite

A propagated annotation containing expressions can't be rewritten and would end up with invalid paths.

While propagating annotations containing expressions such as `@anno: (path)`, the compiler ensures that the path remains valid.
If necessary, the paths have to be rewritten, e.g. when being propagated to projections that rename their source's elements. If rewriting is not possible, this error is emitted.

#### Example

Erroneous code example:

```cds
type T : {
  @anno: (sibling)
  elem: String;
  sibling: String;
};
type TString : T:elem; // ❌ there is no `sibling`
```

The annotation `@anno` would be propagated to `TString`. However, because its path refers to an element that is not reachable at `TString`, the path can't be rewritten and compilation fails.

#### How to Fix

Explicitly override the annotation. Either remove it by setting its value to `null`, or use another value.

```cds
// (1) direct annotation
@anno: null
type TString : T:elem;

// (2) annotate statement
type TString : T:elem;
annotate TString with @(anno: null);
```

Variant (1) may not always be applicable, e.g. if annotations in a structured type would need to be overridden. In those cases, use variant (2) and assign annotations via the `annotate` statement.

## check-proper-type-of

An element in a `type of` expression doesn't have proper type information.

The message's severity is `Info` but may be raised to `Error` in the SQL, SAP HANA, and OData backends. These backends require elements to have a type. Otherwise, they aren't able to render elements (for example, to SQL columns).

#### Example

Erroneous code example:

```cds
entity Foo {
  key id : Integer;
};
view ViewFoo as select from Foo {
  1+1 as calculatedField @(anno)
};
entity Bar {
  // ❌ `e` has no proper type but has the annotation `@anno`.
  e : ViewFoo:calculatedField;
};
```

`ViewFoo:calculatedField` is a calculated field without an explicit type. `type of` is used in `Bar:e`'s type specification. You would expect the element to have a proper type. However, because the referenced element is calculated, the compiler isn't able to determine the correct type.
The element still inherits `ViewFoo:calculatedField`'s annotations and other properties but won't have a proper type, which is required by some backends.

#### How to Fix

Assign an explicit type to `ViewFoo:calculatedField`.

```cds
view ViewFoo as select from Foo {
  1+1 as calculatedField @(anno) : Integer
};
```

#### Related Messages

- [`def-missing-type`](#def-missing-type)

## def-duplicate-autoexposed

Two or more entities with the same name can't be auto-exposed in the same service.

Auto-exposure is a compiler feature which makes it easier for developers to write services. Auto-exposure uses the name of the entity to expose it in the service. It ignores the entity's namespace and context. This may lead to name collisions.

The message's severity is `Error` and is raised by the compiler. You need to adapt your model to fix the error.

#### Example

Erroneous code example:

```cds
// (1)
entity ns.first.Foo {
  key parent : Association to one ns.Base;
};
// (2)
entity ns.second.Foo {
  key parent : Association to one ns.Base;
};
// (3)
entity ns.Base {
  key id : UUID;
  to_first  : Composition of many ns.first.Foo;
  to_second : Composition of many ns.second.Foo;
}
service ns.MyService {
  // (4) ❌
  entity BaseView as projection on ns.Base;
};
```

Both (1) and (2) define an entity `Foo`, but in different namespaces. For example, they could be located in different files with a `namespace` statement. (3) contains compositions of both `first.Foo` and `second.Foo`.

In (4), a projection on `Base` is exposed in service `MyService`. Both composition targets are auto-exposed. However, because the namespaces of (1) and (2) are ignored, a name collision happens.

#### How to Fix

You need to explicitly expose one or more entities under a name that does not exist in the service yet.
For the erroneous example above, you could add these two lines to the service `ns.MyService`:

```cds
entity first.Foo  as projection on ns.first.Foo;  // (5)
entity second.Foo as projection on ns.second.Foo; // (6)
```

Here we reuse the namespaces `first` and `second`. We don't use `ns` because it's the common namespace. But you can choose any other name. The compiler will pick up both manually exposed entities and will correctly redirect all associations.

_Note:_ For the example, it is sufficient to expose only one entity. If you remove (6), you still get two projections:

- `ns.MyService.first.Foo` for (5)
- `ns.MyService.Foo` for the auto-exposed `ns.second.Foo`, with its name chosen by the compiler

#### Notes on auto-exposure

You may wonder why the compiler does not reuse the namespace when auto-exposing entities. The reason is that the resulting auto-exposed names could become _long_ names that seem neither natural nor intuitive. We chose to expose the entity name because that's what most developers want to do when they manually expose entities.

#### Other Notes

This message was called `duplicate-autoexposed` in cds-compiler v3 and earlier.

## def-missing-type

A type artifact doesn't have proper type information.

The message's severity is `Info` but may be raised to `Error` in the SQL, SAP HANA, and OData backends. These backends require types to have type information. Otherwise, they aren't able to render elements that use this type (for example, to SQL columns).

#### Example

Erroneous code example:

```json
{
  "definitions": {
    "MainType": { "kind": "type" }
  }
}
```

`MainType` is of kind "type" but has no further type information.

#### How to Fix

Add explicit type information to `MainType`, for example, add an `elements` property to make it a structured type.
```json
{
  "definitions": {
    "MainType": {
      "kind": "type",
      "elements": {
        "id": { "type": "cds.String" }
      }
    }
  }
}
```

#### Related Messages

- [`check-proper-type-of`](#check-proper-type-of)

## extend-repeated-intralayer

The order of elements of an artifact may not be stable due to multiple extensions in the same layer (for example, in the same file).

A _layer_ can be seen as a group of connected sources, for example, CDL files. They form a cyclic connection through their dependencies (for example, `using` in CDL).

#### Example

Erroneous code example with multiple CDL files:

```cds
// (1) Definition.cds
using from './Extension.cds';
entity FooBar { };
extend FooBar { foo: Integer; }; // ❌

// (2) Extension.cds
using from './Definition.cds';
extend FooBar { bar: Integer; }; // ❌
```

Here we have a cyclic dependency between (1) and (2). Together they form one layer with multiple extensions. As a consequence, the element order isn't stable.

#### How to Fix

Move extensions for the same artifact into the same extension block:

```cds
// (1) Definition.cds : No extension block
using from './Extension.cds';
entity FooBar { }

// (2) Extension.cds : Now contains both extensions
using from './Definition.cds';
extend FooBar {
  foo : Integer;
  bar : Integer;
}
```

#### Related Messages

- [`extend-unrelated-layer`](#extend-unrelated-layer)

## extend-unrelated-layer

Unstable element order due to extensions for the same artifact in unrelated layers.

A _layer_ can be seen as a group of connected sources, for example, CDL files. They form a cyclic connection through their dependencies (for example, `using` in CDL).
#### Example

Erroneous code example using four CDS files:

```cds
// (1) Base.cds: Contains the artifact that should be extended
entity FooBar { }

// (2) FooExtend.cds: First unrelated layer to Base.cds
using from './Base';
extend FooBar { foo : Integer; }

// (3) BarExtend.cds: Second unrelated layer to Base.cds
using from './Base';
extend FooBar { bar : Integer; }

// (4) ❌ All.cds: Combine all files
using from './FooExtend';
using from './BarExtend';
```

In (4), the compiler warns that the element order of `FooBar` is unstable. That is because the extensions in (2) and (3) are in different layers, and when they're combined in (4), it isn't guaranteed which extension is applied first. Instead of passing (4) to the compiler, you can also pass (2) and (3) to it. Because there are no cyclic dependencies between the files, each file represents one layer.

#### How to Fix

Move extensions for the same artifact into the same layer, that is, the same file. For the erroneous example above, remove the extension from (3) and move it to (2):

```cds
// (2) FooExtend.cds
using from './Base';
extend FooBar {
  foo : Integer;
  bar : Integer;
}
```

#### Related Messages

- [`extend-repeated-intralayer`](#extend-repeated-intralayer)

## redirected-to-ambiguous

The redirected target originates more than once from the original target through direct or indirect sources of the redirected target.

The message's severity is `Error` and it is raised by the compiler. The error happens due to an ill-formed redirection, which requires changes to your model.

#### Example

Erroneous code example:

```cds
entity Main {
  key id : Integer;
  toTarget : Association to Target;
}
entity Target {
  key id : Integer;
}
view View as select from Main, Target, Target as Duplicate {
  // ❌ This redirection can't be resolved:
  Main.toTarget : redirected to View
};
```

Entity `Target` exists more than once in `View` under different table aliases. In the previous example, this happens through the *direct* sources in the select clause.
Because the original target exists twice in the redirected target, the compiler isn't able to correctly resolve the redirection due to ambiguities.

This can also happen through *indirect* sources. For example, if entity `Main` were to include `Target`, then selecting from `Target` just once would be enough to trigger this error.

#### How to Fix

You must have the original target only once in your direct and indirect sources. The previous example can be fixed by removing `Duplicate` from the select clause.

```cds
view View as select from Main, Target {
  Main.toTarget : redirected to View
};
```

If this isn't feasible, then you have to redefine the association using a mixin clause.

```cds
view View as select from Main, Target
mixin {
  toMain : Association to View on Main.id = Target.id;
} into {
  Main.id as mainId,
  Target.id as targetId,
  toMain
};
```

#### Related Messages

- [`redirected-to-unrelated`](#redirected-to-unrelated)
- [`redirected-to-complex`](#redirected-to-complex)

## redirected-to-complex

The redirected target is a complex view, for example, it contains a JOIN or UNION.

The message's severity is `Info` and it is raised by the compiler. It is emitted to help developers identify possible modeling issues.

#### Example

Erroneous code example:

```cds
entity Main {
  key id : Integer;
  // self association for example purpose only
  toMain : Association to one Main;
}
entity Secondary {
  content: String;
};
entity CrossJoin as SELECT from Main, Secondary;
entity RedirectToComplex as projection on Main {
  id,
  toMain: redirected to CrossJoin, // ❌
};
```

`Main:toMain` is a to-one association. Since `Main` contains a single key, which is used in the managed association, we know that following the association returns a single result. The cross join in the view `CrossJoin` results in multiple rows with the same `id`. Following the redirected association now returns multiple results, effectively making the to-one association a to-many association.
Visualizing the tables with a bit of data, this issue becomes obvious:

```markdown
Main                 Secondary
| id  | toMain_id |  | content |
|-----|-----------|  |---------|
| 1   | 2         |  | 'Hello' |
| 2   | 1         |  | 'World' |

CrossJoin
| id  | toMain_id | content |
|-----|-----------|---------|
| 1   | 2         | 'Hello' |
| 1   | 2         | 'World' |
| 2   | 1         | 'Hello' |
| 2   | 1         | 'World' |
```

#### How to Fix

First, ensure that the redirected association points to an entity that is a reasonable redirection target. That means the redirection target shouldn't accidentally turn it into a to-many association. Then add an explicit ON-condition or explicit foreign keys to the redirected association. That will silence the compiler message.

#### Related Messages

- [`redirected-to-ambiguous`](#redirected-to-ambiguous)
- [`redirected-to-unrelated`](#redirected-to-unrelated)

## redirected-to-unrelated

The redirected target doesn't originate from the original target.

The message's severity is `Error` and it is raised by the compiler. The error happens due to an ill-formed redirection, which requires changes to your model.

#### Example

Erroneous code example:

```cds
entity Main {
  key id : Integer;
  // self association for example purpose only
  toMain : Association to Main;
}
entity Secondary {
  key id : Integer;
}
entity InvalidRedirect as projection on Main {
  id,
  // ❌ Invalid redirection
  toMain: redirected to Secondary,
};
```

Projection `InvalidRedirect` tries to redirect `toMain` to `Secondary`. However, that entity doesn't have any connection to the original target `Main`, that is, it doesn't originate from `Main`. While this example may be clear, your model may have multiple redirections that make the error less obvious.
Erroneous code example with multiple redirections:

```cds
entity Main {
  key id : Integer;
  toMain : Association to Main;
}
entity FirstRedirect as projection on Main {
  id,
  toMain: redirected to FirstRedirect,
}
entity SecondRedirect as projection on FirstRedirect {
  id,
  // Invalid redirection
  toMain: redirected to Main,
}
```

The intent of the example above is to redirect `toMain` to its original target in `SecondRedirect`. But because `SecondRedirect` uses `toMain` from `FirstRedirect`, the original target is `FirstRedirect`. And `Main` doesn't originate from `FirstRedirect`, but only vice versa.

#### How to Fix

You must redirect the association to an entity that originates from the original target. In the second example above, you could redirect `SecondRedirect:toMain` to `SecondRedirect`. However, if that isn't feasible, then you have to redefine the association using a mixin clause.

```cds
view SecondRedirect as select from FirstRedirect
mixin {
  toMain : Association to Main on id = $self.id;
} into {
  FirstRedirect.id as id,
  toMain
};
```

#### Related Messages

- [`redirected-to-ambiguous`](#redirected-to-ambiguous)
- [`redirected-to-complex`](#redirected-to-complex)

## rewrite-not-supported

The compiler isn't able to rewrite ON conditions for some associations. They have to be explicitly defined by the user.

The message's severity is `Error`.
#### Example

Erroneous code example:

```cds
entity Base {
  key id : Integer;
  primary : Association to Primary on primary.id = primary_id;
  primary_id : Integer;
}
entity Primary {
  key id : Integer;
  secondary : Association to Secondary on secondary.id = secondary_id;
  secondary_id : Integer;
}
entity Secondary {
  key id : Integer;
  text : LargeString;
}
entity View as select from Base {
  id,
  primary.secondary // ❌ The ON condition isn't rewritten here
};
```

In the previous example, the ON condition of `secondary` in `View` can't be rewritten automatically, because the associations are unmanaged and the compiler can't determine how to properly rewrite them for `View`.

#### How to Fix

You have to provide an explicit ON condition. This can be achieved by using the `redirected to` statement:

```cds
entity View as select from Base {
  id,
  primary.secondary_id,
  primary.secondary: redirected to Secondary on secondary.id = secondary_id
};
```

In the corrected view above, the association `secondary` gets an explicit ON condition. For this to work, it is necessary to add `secondary_id` to the selection list, that is, we have to use the foreign key explicitly.

#### Related Messages

- [`rewrite-undefined-key`](#rewrite-undefined-key)

## rewrite-undefined-key

The compiler isn't able to rewrite an association's foreign keys, because the redirected target is missing elements to match them.

The message's severity is `Error`.

#### Example

Erroneous code example:

```cds
entity model.Base {
  key ID : UUID;
  toTarget : Association to model.Target; // (1)
}
entity model.Target {
  key ID : UUID; // (2)
  field : String;
}
service S {
  entity Base as projection on model.Base; // ❌ (3) Can't redirect 'toTarget'
  entity Target as projection on model.Target {
    field, // (4) No 'ID'
  };
}
```

In the example, the projected association `toTarget` at (3) in entity `S.Base` can't be redirected to `S.Target`, because `S.Target` does not project element `ID` (4).
`toTarget` (1) is a managed association, hence foreign keys are inferred for it. The compiler generates a foreign key `ID`, which corresponds to element `ID` of `model.Target` (2).

As both entities are exposed in service `S`, the compiler tries to redirect `S.Base:toTarget` to an entity inside the same service, to create a "self-contained" service. It notices, however, that `S.Target` does not have element `ID`, and therefore can't match the foreign key to a target element, and emits this error message.

#### How to Fix

If you don't need to expose association `toTarget` in `S.Base`, you can exclude it in the projection via an `excluding` clause.

```cds
service S {
  entity Base as projection on model.Base excluding { toTarget };
  // ...
}
```

If the association is required in the service, you need to either project element `ID` in `S.Target`, or redirect the association explicitly. The easiest fix is to select `ID` explicitly:

```cds
service S {
  // ...
  entity Target as projection on model.Target {
    field,
    ID, // Explicitly select element ID
  };
}
```

However, if you don't want to expose `ID`, redirect association `toTarget` explicitly, matching the foreign key to another element:

```cds
service S {
  entity Base as projection on model.Base {
    ID,
    toTarget : redirected to Target { fakeID as ID }, // (1)
  };
  entity Target as projection on model.Target {
    calculateKey() as fakeID : UUID, // (2)
    field,
  };
}
```

Note that at (1), we use element `fakeID` of `S.Target` as foreign key `ID`. That changes its semantic meaning and may not be feasible in all cases! In the example, we assume at (2) that a key can be calculated.

#### Related Messages

- [`rewrite-not-supported`](#rewrite-not-supported)

## syntax-expecting-unsigned-int

The compiler expects a safe non-negative integer here. The largest safe integer is `2^53 - 1`, that is, `9007199254740991`. A safe integer is an integer that fulfills all of the following:

- Can be exactly represented as an IEEE-754 double precision number.
- The IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation.

The message's severity is `Error`.

#### Example

Erroneous code example:

```cds
type LengthIsUnsafe : String(9007199254740992); // ❌
type NotAnInteger   : String(42.1);             // ❌
```

In the erroneous example, the string length of the type `LengthIsUnsafe` is not a safe integer: it is too large. Likewise, the string length of the type `NotAnInteger` is a decimal.

#### How to Fix

You have to provide a safe integer:

```cds
type LengthIsSafe : String(9007199254740991);
type AnInteger    : String(42);
```

In other places, using unsafe integers (or non-integer numbers) is allowed:

- Annotation values: The value is then simply a string.
- Expressions: The `val` property in the CSN contains a string having a sibling `literal: 'number'`.

## type-missing-enum-value

An enum definition is missing explicit values for one or more of its entries. Enum definitions that aren't based on string types don't get implicit values; they therefore have to be defined explicitly in the model.

The message's severity is `Warning` and it is raised by the compiler. You need to adapt your model to fix the warning.

#### Example

Erroneous code example:

```cds
entity Books {
  // …
  category: Integer enum {
    Fiction; // ❌
    Action;  // ❌
    // …
  } default #Action;
};
```

Both entries `#Fiction` and `#Action` of the enum `category` are missing an explicit value. Because the base type `Integer` is not a string, no implicit values are defined for them.

#### How to Fix

Explicitly assign a value, or change the type to a string if the values are not important in your model. The erroneous example above can be changed to:

```cds
entity Books {
  // …
  category: Integer enum {
    Fiction = 1;
    Action  = 2;
    // …
  } default #Action;
};
```

#### Background

Many languages support implicit values for integer-like enums.
However, CAP CDS does not have this feature: if implicit values were persisted, adding a new entry in between existing ones would lead to issues during deserialization later on.

Assume that CAP would assign implicit values for integer enums. If a new value were to be added between `Fiction` and `Action` in the erroneous example above, then the generated SQL statement for entity `Books` would change: instead of default value `2`, value `3` would be persisted. Without data migration, existing action books would have changed their category. To avoid this scenario, always add explicit values to enums.

## type-unexpected-foreign-keys

Foreign keys were specified in a composition-of-aspect. Compositions of aspects are managed by the compiler; specifying a foreign key list is not supported. If you need to specify foreign keys, use a composition of an entity instead.

The message's severity is `Error`.

#### Example

Erroneous code example:

```cds
aspect Item {
  key ID : UUID;
  field : String;
};
entity Model {
  key ID : UUID;
  Item : Composition of Item { ID }; // ❌
};
```

`Item` is an aspect. Because an explicit list of foreign keys is specified, the compiler rejects this CDS snippet. With an explicit foreign key list, only entities can be used, but not aspects.

#### How to Fix

Either remove the explicit list of foreign keys and let the compiler handle the composition, or use a composition of an entity instead.

```cds
aspect Item {
  key ID : UUID;
  field : String;
};
entity Model {
  key ID : UUID;
  Item : Composition of Model.Item { ID }; // ok
};
entity Model.Item : Item { };
```

The snippet uses a user-defined entity that includes the aspect.

#### Related Messages

- [`type-unexpected-on-condition`](#type-unexpected-on-condition)

## type-unexpected-on-condition

An ON-condition was specified in a composition-of-aspect. Compositions of aspects are managed by the compiler; specifying an ON-condition is not supported.
If you need to specify an ON-condition, use a composition of an entity instead.

The message's severity is `Error`.

#### Example

Erroneous code example:

```cds
aspect Item {
  key ID : UUID;
  field : String;
};
entity Model {
  key ID : UUID;
  Item : Composition of Item on Item.ID = ID; // ❌
};
```

`Item` is an aspect. Because an ON-condition is specified, the compiler rejects this CDS snippet. With an ON-condition, only entities can be used, but not aspects.

#### How to Fix

Either remove the ON-condition and let the compiler handle the composition, or use a composition of an entity instead.

```cds
aspect Item {
  key ID : UUID;
  field : String;
};
entity Model {
  key ID : UUID;
  Item : Composition of Model.Item on Item.ID = ID; // ok
};
entity Model.Item : Item { };
```

The snippet uses a user-defined entity that includes the aspect.

#### Related Messages

- [`type-unexpected-foreign-keys`](#type-unexpected-foreign-keys)

## wildcard-excluding-one

You're replacing an element in your projection that is already included via the wildcard `*`.

The message's severity is `Info`.

#### Example

Erroneous code example:

```cds
entity Book {
  key id : String;
  isbn : String;
  content : String;
};
entity IsbnBook as projection on Book {
  *,
  isbn as id, // ❌
};
```

`IsbnBook:id` replaces `Book:id`, which was included in `IsbnBook` through the wildcard `*`.

#### How to Fix

Add the replaced element to the list of wildcard excludes:

```cds
entity IsbnBook as projection on Book {
  *,
  isbn as id
} excluding { id };
```

# On The Nature of Models

Introduces the fundamental principles of CDS models.

## Metaphysics of Languages

A *model* is a *thing* that describes *something*. For example, a *data model* describes the type structure (commonly also called *schema*) of *data*.

### Languages

### Representations

Models can come in different *representations*, which follow different *syntaxes*.
For example, we use the *CDL* syntax for *human-readable* representations of CDS models, while CSN is an *object notation*, i.e. a special form of *syntax*, used for *machine-readable* representations of CDS models.

::: details On CSN representations...
We can go one meta-level further and distinguish between different representations of CSN representations: in a Node.js process at runtime they are just native in-memory JavaScript objects; when shared they are serialized to JSON format, which can in turn be translated to YAML, and so forth. When we create CSN objects at runtime, they could be plain JavaScript code.
:::

### Reflections

CDS models can be compiled to other languages that play in the same fields, yet don't cover the same information — some information is lost. We call these '*reflections*'. Examples are:

- SQL DDL covers the persistence model interface only → only flat tables and views
- OData EDMX covers the service interfaces only → queryable entities still exist, with implicit features
- GraphQL also covers service interfaces → queryable entities still exist, but with fewer features
- OpenAPI also covers the service interfaces → queryable entities are 'flattened' to paths with input and output types

---

The above principles apply not only to CDS models, but also to Queries:

- CQL is a syntax for human-readable representations
- CQN is an object notation for machine-readable representations

And for Expressions:

- CXL is a syntax for human-readable representations
- CXN is an object notation for machine-readable representations

...

## What is a CDS Model?

Models in `cds` are plain JavaScript objects conforming to the _[Core Schema Notation (CSN)](./csn)_. They can be parsed from [_.cds_ sources](./cdl), read from _.json_ or _.yaml_ files, or dynamically created in code at runtime.
The following ways and examples of creating models are equivalent:

### In Plain Coding at Runtime

```js
const cds = require('@sap/cds')

// define the model
var model = {definitions:{
  Products: {kind:'entity', elements:{
    ID: {type:'Integer', key:true},
    title: {type:'String', length:11, localized:true},
    description: {type:'String', localized:true},
  }},
  Orders: {kind:'entity', elements:{
    product: {type:'Association', target:'Products'},
    quantity: {type:'Integer'},
  }},
}}

// do something with it
console.log (cds.compile.to.yaml (model))
```

### Parsed at Runtime

```js
const cds = require('@sap/cds')

// define the model
var model = cds.parse (`
  entity Products {
    key ID: Integer;
    title: localized String(11);
    description: localized String;
  }
  entity Orders {
    product: Association to Products;
    quantity: Integer;
  }
`)

// do something with it
console.log (cds.compile.to.yaml (model))
```

### From _.cds_ Source Files

```cds
// some.cds source file
entity Products {
  key ID: Integer;
  title: localized String(11);
  description: localized String;
}
entity Orders {
  product: Association to Products;
  quantity: Integer;
}
```

Read/parse it, and do something with it, for example:

```js
const cds = require('@sap/cds')
cds.get('./some.cds')
  .then (cds.compile.to.yaml)
  .then (console.log)
```

> Which is equivalent to: `cds ./some.cds -2 yaml` using the CLI

### From _.json_ Files

```json
{"definitions": {
  "Products": { "kind": "entity", "elements": {
    "ID": { "type": "Integer", "key": true },
    "title": { "type": "String", "length": 11, "localized": true },
    "description": { "type": "String", "localized": true }
  }},
  "Orders": { "kind": "entity", "elements": {
    "product": { "type": "Association", "target": "Products" },
    "quantity": { "type": "Integer" }
  }}
}}
```

```js
const cds = require('@sap/cds')
cds.get('./some.json')
  .then (cds.compile.to.yaml)
  .then (console.log)
```
### From Other Frontends

You can add any other frontend instead of using [CDL](./cdl); it's just about generating the respective [CSN](./csn) structures, most easily as _.json_. For example, different parties already added these frontends:

* ABAP CDS 2 csn
* OData EDMX 2 csn
* Fiori annotation.xml 2 csn
* i18n properties files 2 csn
* Java/JPA models 2 csn

## Processing Models

All model processing and compilation steps that can be applied subsequently work on the basis of plain CSN objects. There's no assumption about, and no lock-in to, a specific source format.

# CAP Service SDK for Node.js

Reference Documentation { .subtitle}

As an application developer, you'd primarily use the Node.js APIs documented herein to implement **domain-specific custom logic** along these lines:

1. Define services in CDS → see [Cookbook > Providing & Consuming Services](../guides/providing-services#service-definitions)
2. Add service implementations → [`cds.Service` > Implementations](./core-services#implementing-services)
3. Register custom event handlers in them → [`srv.on`/`before`/`after`](./core-services#srv-on-before-after)
4. Read/write data from other services in these handlers → [`srv.run`](./core-services#srv-run-query) + [`cds.ql`](./cds-ql)
5. ..., that is, from your primary database → [`cds.DatabaseService`](./databases)
6. ..., that is, from other connected services → [`cds.RemoteService`](./remote-services)
7. Emit and handle asynchronous events → [`cds.MessagingService`](./messaging)

All the rest is largely handled by the CAP runtime framework behind the scenes. This especially applies to bootstrapping the [`cds.server`](./cds-serve) and the generic features provided through [`cds.ApplicationService`](./app-services).

# The *cds* Façade Object {#title}

The `cds` facade object provides access to all CAP Node.js APIs.
Use it like this:

```js
const cds = require('@sap/cds')
let csn = cds.compile(`entity Foo {}`)
```

::: tip Use `cds repl` to try out things
For example, like this to get the compiled CSN for an entity `Foo`:

```js
[dev] cds repl
Welcome to cds repl v 7.3.0
> cds.compile(`entity Foo { key ID : UUID }`)
{ definitions: { Foo:
   { kind: 'entity',
     elements: { ID: { key: true, type: 'cds.UUID' } } } }}
```
:::

## Refs to Submodules

Many properties of `cds` are references to submodules, which are lazy-loaded on first access to minimize bootstrapping time and memory consumption. The submodules are documented in separate documents.

- [cds. model](cds-facade#cds-model) {.property}
- [cds. resolve()](cds-compile#cds-resolve) {.method}
- [cds. load()](cds-compile#cds-load) {.method}
- [cds. parse()](cds-compile#cds-parse) {.method}
- [cds. compile](cds-compile) {.method}
- [cds. linked()](cds-reflect) {.method}
- [cds. server](cds-server) {.property}
- [cds. serve()](cds-serve) {.method}
- cds. services {.property}
- cds. middlewares {.property}
- cds. protocols {.property}
- cds. auth {.property}
- [cds. connect](cds-connect) {.property}
- [cds. ql](cds-ql) {.property}
- [cds. tx()](cds-tx) {.method}
- [cds. log()](cds-log) {.method}
- [cds. env](cds-env) {.property}
- [cds. auth](authentication) {.property}
- [cds. i18n](cds-i18n) {.property}
- [cds. test](cds-test) {.property}
- [cds. utils](cds-utils) {.property}
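The lazy-loading of such refs can be pictured as properties with caching getters. The following is an illustrative sketch of that pattern in plain JavaScript — not the actual `@sap/cds` implementation; `facade` and the stand-in loader are made up for the example:

```javascript
// Sketch: define a property that loads its value on first access, then caches it
function lazy (target, name, load) {
  Object.defineProperty(target, name, {
    configurable: true,
    get () {
      const value = load()                            // load the submodule only now ...
      Object.defineProperty(target, name, { value })  // ... and replace the getter with a cached value
      return value
    }
  })
}

// Hypothetical usage with a stand-in submodule loader:
const facade = {}
let loads = 0
lazy(facade, 'utils', () => { loads++; return { upper: s => s.toUpperCase() } })

console.log(loads)                    //> 0 — nothing loaded yet
console.log(facade.utils.upper('a'))  //> 'A' — loaded on first access
facade.utils; facade.utils            // further accesses hit the cache
console.log(loads)                    //> 1
```

The payoff of this design is that merely requiring the facade stays cheap; the cost of a submodule is only paid by processes that actually use it.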
Import classes and functions through the facade object only:

##### **Good:** {#import-good .good}

```ts
const { Request } = require('@sap/cds') // [!code ++]
```

##### **Bad:** {#import-bad .bad}

Never code against paths inside `@sap/cds/`:

```ts
const Request = require('@sap/cds/lib/.../Request') // [!code --]
```

## Builtin Types & Classes

The following properties provide access to the classes and prototypes of [linked CSNs](cds-reflect).

### [cds. builtin .types](cds-reflect#cds-builtin-types) {.property}

### [cds. linked .classes](cds-reflect#cds-linked-classes) {.property}

The following top-level properties are convenience shortcuts to their counterparts in `cds.linked.classes`.
For example:

```js
cds.entity === cds.linked.classes.entity
```

- [cds. Association](cds-reflect#cds-association) {.property}
- [cds. Composition](cds-reflect#cds-linked-classes) {.property}
- [cds. entity](cds-reflect#cds-entity) {.property}
- [cds. event](cds-reflect#cds-linked-classes) {.property}
- [cds. type](cds-reflect#cds-linked-classes) {.property}
- [cds. array](cds-reflect#cds-linked-classes) {.property}
- [cds. struct](cds-reflect#cds-struct) {.property}
- [cds. service](cds-reflect#cds-struct) {.property}

## Core Classes

### [cds. Service](core-services#core-services) {.class}

- [cds. ApplicationService](app-services) {.class}
- [cds. RemoteService](remote-services) {.class}
- [cds. MessagingService](messaging) {.class}
- [cds. DatabaseService](databases) {.class}
- [cds. SQLService](databases) {.class}

### [cds. EventContext](events#cds-event-context) {.class}

### [cds. Event](events#cds-event) {.class}

### [cds. Request](events#cds-request) {.class}

### [cds. User](authentication#cds-user) {.class}

## Properties

The following properties are not references to submodules.

### cds. version {.property}

Returns the version of the `@sap/cds` package from which the current instance of the `cds` facade module was loaded. For example, use that to write version-specific code:

```js
const [major, minor] = cds.version.split('.').map(Number)
if (major < 6) // code for pre cds6 usage
```

### cds. home {.property}

Returns the pathname of the `@sap/cds` installation folder from which the current instance of the `cds` facade module was loaded.

```js
[dev] cds repl
> cds.home // [!code focus]
~/.npm/lib/node_modules/@sap/cds
```

### cds. root {.property}

Returns the project root that is used by all CAP runtime file access as the root directory. By default this is `process.cwd()`, but it can be set to a different root folder. It's guaranteed to be an absolute folder name.
```js
// Print current project's package name
const path = require('path')
let package_json = path.join (cds.root,'package.json') // [!code focus]
let { name, description } = require(package_json)
console.log ({ name, description })
```

### cds. cli {.property}

Provides access to the parsed effective `cds` CLI command and arguments. Example: If you add respective log output in a project-local `server.js` and start your server with `cds watch`, you'd see output like this:

```js
Trace : {
  command: 'serve',
  argv: [ 'all' ],
  options: { 'with-mocks': true, 'in-memory?': true }
}
```

For example, [`cds-plugins`](cds-serve) can use that to plug into different parts of the framework for different commands being executed. Known values for `cds.cli.command` are `add`, `build`, `compile`, `deploy`, `import`, `init`, `serve`. `cds watch` is normalized to `serve`.

### cds. entities {.property}

Is a shortcut to `cds.db.entities`. Used as a function, you can [specify a namespace](/node.js/cds-reflect#entities).

### cds. env {.property}

Provides access to the effective configuration of the current process, transparently assembled from various sources, including the local _package.json_ or _.cdsrc.json_, service bindings, and process environments.

```js
[dev] cds repl
> cds.env.requires.auth // [!code focus]
{
  kind: 'basic-auth',
  strategy: 'mock',
  users: {
    alice: { tenant: 't1', roles: [ 'admin' ] },
    bob: { tenant: 't1', roles: [ 'cds.ExtensionDeveloper' ] },
    // ...,
    '*': true
  },
  tenants: {
    t1: { features: [ 'isbn' ] },
    t2: { features: '*' }
  }
}
```

[Learn more about `cds.env`](cds-env){.learn-more}

### cds. requires {.property}

... is an overlay and convenience shortcut to [`cds.env.requires`](#cds-env), with additional entries for services with names different from the service definition's name in cds models. For example, given this service definition:

```cds
service ReviewsService {}
```

...
and this configuration:

```jsonc
{
  "cds": {
    "requires": {
      "db": "sqlite",
      "reviews": {                    // lookup name
        "service": "ReviewsService"   // service definition's name
      }
    }
  }
}
```

You can access the entries as follows:

```js
[dev] cds repl
> cds.env.requires.db              //> the effective config for db
> cds.env.requires.reviews         //> the effective config for reviews
> cds.env.requires.ReviewsService  //> undefined
```

```js
[dev] cds repl
> cds.requires.db                  //> the effective config for db
> cds.requires.reviews             //> the effective config for reviews
> cds.requires.ReviewsService      //> same as cds.requires.reviews
```

The additional entries are useful for code that needs to securely access the service by its cds definition name.

Note: as `cds.requires` is an overlay over `cds.env.requires`, it inherits all properties from there via the prototype chain. In effect, operations that only look at *own* properties, like `Object.keys()`, behave differently than for `cds.env.requires`:

```js
[dev] cds repl
> Object.keys(cds.env.requires)  //> [ 'db', 'reviews' ]
> Object.keys(cds.requires)      //> [ 'ReviewsService' ]
```

### cds. services {.property}

A dictionary and cache of all instances of [`cds.Service`](core-services) constructed through [`cds.serve()`](cds-serve), or connected to by [`cds.connect()`](cds-connect).

It's an *iterable* object, so it can be accessed in the following ways:

```js
let { CatalogService, db } = cds.services
let all_services = [ ...cds.services ]
for (let k in cds.services) //... k is a service's name
for (let s of cds.services) //... s is an instance of cds.Service
```

### cds. context {.property}

Provides access to common event context properties like `tenant`, `user`, `locale`, as well as the current root transaction for automatically managed transactions.

[Learn more about that in the reference docs for `cds.tx`.](./cds-tx){.learn-more}

### cds. model {.property}

The effective [CDS model](../cds/csn) loaded during bootstrapping, which contains all service and entity definitions, including required services. Many framework operations use that as a default where models are required. It is loaded in the built-in `server.js` like so:

```js
cds.model = await cds.load('*')
```

[Learn more about bootstrapping in `cds.server`.](./cds-serve){.learn-more}

### cds. app {.property}

The [express.js Application object](https://expressjs.com/de/4x/api.html#app) constructed during bootstrapping. Several framework operations use that to add express handlers or middlewares. It is initialized in the built-in `server.js` like so:

```js
cds.app = require('express')()
```

[Learn more about bootstrapping in `cds.server`.](./cds-serve){.learn-more}

### cds. db {.property}

A shortcut to [`cds.services.db`](#cds-services), the primary database connected to during bootstrapping. Many framework operations use that to address and interact with the primary database. In particular, that applies to the global [`cds.ql`](cds-ql) statement objects. For example:

```js
let books = await SELECT.from(Books)
// is a shortcut for:
let books = await cds.db.run ( SELECT.from(Books) )
```

It is initialized in the built-in `server.js` like so:

```js
cds.db = await cds.connect.to('db')
```

[Learn more about bootstrapping in `cds.server`.](./cds-serve){.learn-more}

## Methods

### cds. error() {.method}

```ts
function cds.error (
  message : string | object,
  details? : object,
  caller? : function
)
```

This is a helper to construct new errors in various ways:

```js
let e = new cds.error ('message')
let e = new cds.error ('message', { code, ... })
let e = new cds.error ({ message, code, ...
}) ``` If called without `new`, the error is thrown immediately, allowing code like this: ```js let e = foo || cds.error (`Expected 'foo' to be truthy, but got: ${foo}`) ``` You can also use `cds.error` with tagged template strings: ```js let e = foo || cds.error `Expected 'foo' to be truthy, but got: ${foo}` ``` > In contrast to basic template strings, passed-in objects are added using Node's `util.format()` instead of `toString()`. Method `cds.error.expected` allows you to conveniently construct error messages as above: ```js let e = foo || cds.error.expected `${{foo}} to be truthy` ``` Optional argument `caller` can be a calling function to truncate the error stack. Default is `cds.error` itself, so it will never show up in the stacks. ### cds. exit() {.method} Provides a graceful shutdown for running servers, by first emitting `cds.emit('shutdown')` with handlers allowed to be `async` functions. If not running in a server, it calls `process.exit()`. ```js cds.on('shutdown', async()=> fs.promises.rm('some-file.json')) cds.on('shutdown', ()=> console.log('shutdown')) cds.exit() //> will run the above handlers before stopping the server ``` ## Lifecycle Events The `cds` facade object is an [EventEmitter](https://nodejs.org/api/events.html#asynchronous-vs-synchronous), to which the framework emits events during the server bootstrapping process, or when models are compiled. You can register event handlers using `cds.on()` like so: ```js twoslash // @noErrors const cds = require('@sap/cds') cds.on('bootstrap', ...) cds.on('served', ...) cds.on('listening', ...)
``` - [Learn more about Lifecycle Events emitted by `cds.compile`](cds-compile#lifecycle-events) {.learn-more} - [Learn more about Lifecycle Events emitted by `cds.server`](cds-server#lifecycle-events) {.learn-more} > [!warning] > As we're using Node's standard [EventEmitter](https://nodejs.org/api/events.html#asynchronous-vs-synchronous), > event handlers execute **synchronously** in the order they are registered, with `served` and `shutdown` > events as the only exceptions. # Remote Services Class `cds.RemoteService` is a service proxy class to consume remote services via different [protocols](/node.js/cds-serve#cds-protocols), like OData or plain REST. ## cds.**RemoteService** class { #cds-remote-service} ### class cds.**RemoteService** extends cds.Service ## cds.RemoteService — Configuration {#remoteservice-configuration } [remoteservice configuration]: #remoteservice-configuration The `cds.RemoteService` configuration allows you to define various options for connecting to remote services. ### CSRF-Token Handling If the remote system you want to consume requires it, you can enable the new CSRF-token handling of `@sap-cloud-sdk/core` via the configuration options `csrf` and `csrfInBatch`. These options allow you to configure CSRF-token handling for each remote service separately. #### Basic Configuration ```json "cds": { "requires": { "API_BUSINESS_PARTNER": { "kind": "odata", "model": "srv/external/API_BUSINESS_PARTNER", "csrf": true, "csrfInBatch": true } } } ``` In this example, CSRF handling is enabled for the `API_BUSINESS_PARTNER` service, for regular requests (`csrf: true`) and requests made within batch operations (`csrfInBatch: true`). #### Advanced Configuration In fact, `csrf: true` is a convenient preset. If needed, you can further customize the CSRF-token handling with additional parameters: ```json "cds": { "requires": { "API_BUSINESS_PARTNER": { ... "csrf": { // [!code focus] "method": "get", // [!code focus] "url": "..."
// [!code focus] } } } } ``` Here, the CSRF-token handling is customized at a more granular level: - `method`: The HTTP method for fetching the CSRF token. The default is `head`. - `url`: The URL for fetching the CSRF token. The default is the resource path without parameters. ### Timeout Handling The `requestTimeout` setting in the `cds.RemoteService` configuration specifies the maximum duration, in milliseconds (default: 60000), to wait for a response from the remote service before timing out. #### Configuration Option ```json { "API_BUSINESS_PARTNER": { "kind": "odata", "credentials": { ... "requestTimeout": 1000000 // [!code focus] } } } ``` ::: tip See [Using Destinations](../guides/using-services#using-destinations) for more details on destination configuration. ::: ## More to Come This documentation is not complete yet, or the APIs are not released for general availability. There's more to come in this place in upcoming releases. # Messaging {{$frontmatter?.synopsis}} ## cds.**MessagingService** class Class `cds.MessagingService` and subclasses thereof are technical services representing asynchronous messaging channels. They can be used directly at a low level, or behind the scenes for higher-level service-to-service eventing. ### class cds.**MessagingService** extends cds.Service ## Declaring Events In your CDS model, you can model events using the `event` keyword inside services. Once you've created the `messaging` section in `cds.requires`, all modeled events are automatically enabled for messaging. You can then use the services to emit events (for your own service) or receive events (for external services).
Example: In your _package.json_: ```json { "cds": { "requires": { "ExternalService": { "kind": "odata", "model": "srv/external/external.cds" }, "messaging": { "kind": "enterprise-messaging" } } } } ``` In _srv/external/external.cds_: ```cds service ExternalService { event ExternalEvent { ID: UUID; name: String; } } ``` In _srv/own.cds_: ```cds service OwnService { event OwnEvent { ID: UUID; name: String; } } ``` In _srv/own.js_: ```js module.exports = async srv => { const externalService = await cds.connect.to('ExternalService') externalService.on('ExternalEvent', async msg => { await srv.emit('OwnEvent', msg.data) }) } ``` #### Custom Topics with Declared Events You can specify topics to modeled events using the `@topic` annotation. ::: tip If no annotation is provided, the topic will be set to the fully qualified event name. ::: Example: ```cds service OwnService { @topic: 'my.custom.topic' event OwnEvent { ID: UUID; name: String; } } ``` ## Emitting Events To send a message to the message broker, you can use the `emit` method on a transaction for the connected service. Example: ```js const messaging = await cds.connect.to('messaging') this.after(['CREATE', 'UPDATE', 'DELETE'], 'Reviews', async (_, req) => { const { subject } = req.data const { rating } = await cds.run( SELECT.one(['round(avg(rating),2) as rating']) .from(Reviews) .where({ subject })) // send to a topic await messaging.emit('cap/msg/system/review/reviewed', { subject, rating }) // alternative if you want to send custom headers await messaging.emit({ event: 'cap/msg/system/review/reviewed', data: { subject, rating }, headers: { 'X-Correlation-ID': req.headers['X-Correlation-ID'] }}) }) ``` ::: tip The messages are sent once the transaction is successful. Per default, a persistent outbox is used. See [Messaging - Outbox](./outbox) for more information. ::: ## Receiving Events To listen to messages from a message broker, you can use the `on` method on the connected service. 
This also creates the necessary topic subscriptions. Example: ```js const messaging = await cds.connect.to('messaging') // listen to a topic messaging.on('cap/msg/system/review/reviewed', msg => { const { subject, rating } = msg.data return cds.run(UPDATE(Books, subject).with({ rating })) }) ``` Once all handlers are executed successfully, the message is acknowledged. If one handler throws an error, the message broker will be informed that the message couldn't be consumed properly and might send the message again. To avoid endless cycles, consider catching all errors. If you want to receive all messages without creating topic subscriptions, you can register on `'*'`. This is useful when consuming messages from a dead letter queue. ```js messaging.on('*', async msg => { /*...*/ }) ``` ::: tip In general, messages do not contain user information but operate with a technical user. As a consequence, the user of the message processing context (`cds.context.user`) is set to [`cds.User.privileged`](/node.js/authentication#privileged-user) and, hence, any necessary authorization checks must be done in custom handlers. ::: ## CloudEvents Protocol [CloudEvents](https://cloudevents.io/) is a commonly used specification for describing event data. An example event looks like this: ```js { "type": "sap.s4.beh.salesorder.v1.SalesOrder.Created.v1", "specversion": "1.0", "source": "/default/sap.s4.beh/ER9CLNT001", "id": "0894ef45-7741-1eea-b7be-ce30f48e9a1d", "time": "2020-08-14T06:21:52Z", "datacontenttype": "application/json", "data": { "SalesOrder":"3016329" } } ``` To help you adhere to this standard, CAP prefills these header fields automatically. To enable this, you need to set the option `format: 'cloudevents'` in your message broker. Example: ```js { cds: { requires: { messaging: { kind: 'enterprise-messaging-shared', format: 'cloudevents' } } } } ``` You can always overwrite the default values. 
### Topic Prefixes If you want the topics to start with a certain string, you can set a publish and/or a subscribe prefix in your message broker. Example: ```js { cds: { requires: { messaging: { kind: 'enterprise-messaging-shared', publishPrefix: 'default/sap.cap/books/', subscribePrefix: 'default/sap.cap/reviews/' } } } } ``` ### Topic Manipulations #### [SAP Event Mesh](../guides/messaging/#sap-event-mesh) If you specify your format to be `cloudevents`, the following default prefixes are set: ```js { publishPrefix: '$namespace/ce/', subscribePrefix: '+/+/+/ce/' } ``` In addition to that, slashes in the event name are replaced by dots and the `source` header field is derived based on `publishPrefix`. Examples: | publishPrefix | derived source | |--------------------------|---------------------| | `my/own/namespace/ce/` | `/my/own/namespace` | | `my/own.namespace/-/ce/` | `/my/own.namespace` | ## Message Brokers To safely send and receive messages between applications, you need a message broker in-between where you can create queues that listen to topics. All relevant incoming messages are first stored in those queues before they're consumed. This way messages aren't lost when the consuming application isn't available. In CDS, you can configure one of the available broker services in your [`requires` section](cds-connect#cds-env-requires). According to our [grow as you go principle](../about/#grow-as-you-go), it makes sense to first test your application logic without a message broker and enable it later. Therefore, we provide support for [local messaging](#local-messaging) (if everything is inside one Node.js process) as well as [file-based messaging](#file-based). ### Configuring Message Brokers You must provide all necessary credentials by [binding](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/296cd5945fd84d7d91061b2b2bcacb93.html) the message broker to your app. 
For local environments, use [`cds bind`](../advanced/hybrid-testing#cds-bind-usage) in a [hybrid setup](../guides/messaging/event-mesh#run-tests-in-hybrid-setup). ::: tip For local testing use [`kind`: `enterprise-messaging-shared`](#event-mesh-shared) to avoid the complexity of HTTP-based messaging. ::: ### SAP Event Mesh (Shared) { #event-mesh-shared} `kind`: `enterprise-messaging-shared` Use this if you want to communicate using [SAP Event Mesh](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/df532e8735eb4322b00bfc7e42f84e8d.html) in a shared way. If you register at least one handler, a queue will automatically be created if it doesn't exist yet. Keep in mind that unused queues aren't automatically deleted; this has to be done manually. You have the following configuration options: - `queue`: An object containing the `name` property as the name of your queue; additional properties are described [in the SAP Business Accelerator Hub](https://hub.sap.com/api/SAPEventMeshDefaultManagementAPIs/path/putQueue). - `amqp`: AMQP client options as described in the [`@sap/xb-msg-amqp-v100` documentation](https://www.npmjs.com/package/@sap/xb-msg-amqp-v100?activeTab=readme). If the queue name isn't specified, it's derived from `application_name` and the first four characters of `application_id` of your `VCAP_APPLICATION` environment variable, as well as the `namespace` property of your SAP Event Mesh binding in `VCAP_SERVICES`: `{namespace}/{application_name}/{truncated_application_id}`. This makes sure that every application has its own queue. Example: ```json { "requires": { "messaging": { "kind": "enterprise-messaging-shared", "queue": { "name": "my/enterprise/messaging/queue", "accessType": "EXCLUSIVE", "maxMessageSizeInBytes": 19000000 }, "amqp": { "incomingSessionWindow": 100 } } } } ``` ::: warning _❗ Warning_ When using `enterprise-messaging-shared` in a multitenant scenario, only the provider account will have an event bus.
There is no tenant isolation. ::: ::: tip You need to install the latest version of the npm package `@sap/xb-msg-amqp-v100`. ::: ::: tip For optimal performance, you should set the correct access type. To make sure your server is not flooded with messages, you should set the incoming session window. ::: ### SAP Event Mesh `kind`: `enterprise-messaging` This is the same as `enterprise-messaging-shared` except that messages are transferred through HTTP. For incoming messages, a webhook is used. Compared to `enterprise-messaging-shared` you have the additional configuration option: - `webhook`: An object containing the `waitingPeriod` property as the time in milliseconds until a webhook is created after the application is listening to incoming HTTP requests (default: 5000). Additional properties are described in the `Subscription` object in [SAP Event Mesh - REST APIs Messaging](https://help.sap.com/doc/3dfdf81b17b744ea921ce7ad464d1bd7/Cloud/en-US/messagingrest-api-spec.html). Example: ```json { "requires": { "messaging": { "kind": "enterprise-messaging", "queue": { "name": "my/enterprise/messaging/queue", "accessType": "EXCLUSIVE", "maxMessageSizeInBytes": 19000000 }, "webhook": { "waitingPeriod": 7000 } } } } ``` If your server is authenticated using [XSUAA](authentication#jwt), you need to grant the scope `$XSAPPNAME.emcallback` to SAP Event Mesh for it to be able to trigger the handshake and send messages. ::: code-group ```js [xs-security.json] { ..., "scopes": [ ..., { "name": "$XSAPPNAME.emcallback", "description": "Event Mesh Callback Access", "grant-as-authority-to-apps": [ "$XSSERVICENAME()" ] } ] } ``` ::: Make sure to add this to the service descriptor of your SAP Event Mesh instance: ```js { ..., "authorities": [ "$ACCEPT_GRANTED_AUTHORITIES" ] } ``` ::: warning This will not work in the `dev` plan of SAP Event Mesh. 
::: ::: warning If you enable the [cors middleware](https://www.npmjs.com/package/cors), [handshake requests](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/6a0e4c77e3014acb8738af039bd9df71.html?q=handshake) from SAP Event Mesh might be intercepted. ::: ### SAP Cloud Application Event Hub { #event-broker } `kind`: `event-broker` Use this if you want to communicate using [SAP Cloud Application Event Hub](https://help.sap.com/docs/event-broker). The integration with SAP Cloud Application Event Hub is provided using the plugin [`@cap-js/event-broker`](https://github.com/cap-js/event-broker). Hence, you first need to install the plugin: ```bash npm add @cap-js/event-broker ``` Then, set the `kind` of your messaging service to `event-broker`: ```jsonc "cds": { "requires": { "messaging": { "kind": "event-broker" } } } ``` The [CloudEvents](https://cloudevents.io/) format is enforced since it's required by SAP Cloud Application Event Hub. Authentication in the SAP Cloud Application Event Hub integration is based on the [Identity Authentication service (IAS)](https://help.sap.com/docs/cloud-identity-services/cloud-identity-services/getting-started-with-identity-service-of-sap-btp) of [SAP Cloud Identity Services](https://help.sap.com/docs/cloud-identity-services). If you are not using [IAS-based Authentication](./authentication#ias), you will need to trigger the loading of the IAS credentials into your app's `cds.env` via an additional `requires` entry: ```jsonc "cds": { "requires": { "ias": { // any name "vcap": { "label": "identity" } } } } ``` #### Deployment Your SAP Cloud Application Event Hub configuration must include your system namespace as well as the webhook URL. The binding parameters must set `"authentication-type": "X509_GENERATED"` to allow IAS-based authentication. 
Your IAS instance must be configured to include your SAP Cloud Application Event Hub instance under `consumed-services` in order for your application to accept requests from SAP Cloud Application Event Hub. Here's an example configuration based on the _mta.yaml_ file of the [@capire/incidents](https://github.com/cap-js/incidents-app/tree/event-broker) application, bringing it all together: ::: code-group ```yaml [mta.yaml] ID: cap.incidents modules: - name: incidents-srv provides: - name: incidents-srv-api properties: url: ${default-url} #> needed in webhookUrl and home-url below requires: - name: incidents-event-broker parameters: config: authentication-type: X509_IAS - name: incidents-ias parameters: config: credential-type: X509_GENERATED app-identifier: cap.incidents #> any value, e.g., reuse MTA ID resources: - name: incidents-event-broker type: org.cloudfoundry.managed-service parameters: service: event-broker service-plan: event-connectivity config: # unique identifier for this event broker instance # should start with own namespace (i.e., "foo.bar") and may not be longer than 15 characters systemNamespace: cap.incidents webhookUrl: ~{incidents-srv-api/url}/-/cds/event-broker/webhook requires: - name: incidents-srv-api - name: incidents-ias type: org.cloudfoundry.managed-service requires: - name: incidents-srv-api processed-after: # for consumed-services (cf. below), incidents-event-broker must already exist # -> ensure incidents-ias is created after incidents-event-broker - incidents-event-broker parameters: service: identity service-plan: application config: consumed-services: - service-instance-name: incidents-event-broker xsuaa-cross-consumption: true #> if token exchange from IAS token to XSUAA token is needed display-name: cap.incidents #> any value, e.g., reuse MTA ID home-url: ~{incidents-srv-api/url} ``` :::
### Redis PubSub ::: warning This is a beta feature. Beta features aren't part of the officially delivered scope that SAP guarantees for future releases. ::: `kind`: `redis-messaging` Use [Redis PubSub](https://redis.io/) as a message broker. There are no queues: - Messages are lost when consumers are not available. - All instances receive the messages independently. ::: warning No tenant isolation in multitenant scenario When using `redis-messaging` in a multitenant scenario, only the provider account will have an event bus. There is no tenant isolation. ::: ::: tip You need to install the latest version of the npm package `redis`. ::: ### File Based `kind`: `file-based-messaging` Don't use this in production; use it only to test your application _locally_. It creates a file and uses it as a simple message broker. > You can have at most one consuming app per emitted event. You have the following configuration options: * `file`: You can set the file path (default is _~/.cds-msg-box_). Example: ```json { "requires": { "messaging": { "kind": "file-based-messaging", "file": "../msg-box" } } } ``` ::: warning No tenant isolation in multitenant scenario When using `file-based-messaging` in a multitenant scenario, only the provider account will have an event bus. There is no tenant isolation. ::: ### Local Messaging `kind`: `local-messaging` You can use local messaging to communicate inside one Node.js process. It's especially useful in your automated tests. ### Composite-Messaging `kind`: `composite-messaging` If you have several messaging services and don't want to mention them explicitly in your code, you can create a `composite-messaging` service where you can define routes for incoming and outgoing messages. In those routes, you can use glob patterns to match topics (`**` for any number of any character, `*` for any number of any character except `/` and `.`, `?` for a single character).
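These glob semantics can be illustrated with a small matcher in plain JavaScript — a simplified sketch for understanding the patterns, not the framework's actual routing code:

```javascript
// Illustrative matcher for the composite-messaging glob semantics:
//   **  any number of any characters
//   *   any number of characters except '/' and '.'
//   ?   exactly one character
function topicMatches (pattern, topic) {
  const re = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&')  // escape regex specials, keeping * and ?
    .replace(/\*\*/g, '\u0000')            // temporary placeholder for **
    .replace(/\*/g, '[^/.]*')
    .replace(/\u0000/g, '.*')
    .replace(/\?/g, '.')
  return new RegExp(`^${re}$`).test(topic)
}
```

With routes like those in the following example, `topicMatches('**/book/*', 'book/repository/book/modified')` is true, so such a message would be routed accordingly.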
Example: ```json { "requires": { "messaging": { "kind": "composite-messaging", "routes": { "myEnterpriseMessagingReview": ["cap/msg/system/review/*"], "myEnterpriseMessagingBook": ["**/book/*"] } }, "myEnterpriseMessagingReview": { "kind": "enterprise-messaging", "queue": { "name": "cap/msg/system/review" } }, "myEnterpriseMessagingBook": { "kind": "enterprise-messaging", "queue": { "name": "cap/msg/system/book" } } } } ``` ```js module.exports = async srv => { const messaging = await cds.connect.to('messaging') messaging.on('book/repository/book/modified', msg => { // comes from myEnterpriseMessagingBook }) messaging.on('cap/msg/system/review/reviewed', msg => { // comes from myEnterpriseMessagingReview }) } ``` # Database Services
## cds.**DatabaseService** class { #cds-db-service} ### class cds.**DatabaseService** extends cds.Service ### srv.begin () → this {#db-begin } In the case of database services, this actually starts the transaction by acquiring a physical connection from the connection pool, and optionally sends a command to the database like `BEGIN TRANSACTION`. This method is called automatically by the framework on the first query, so **you never have to call it** in application coding. There are only very rare cases where you'd want to do so, for example to reuse a `tx` object to start subsequent physical transactions after a former `commit` or `rollback`. But this is not considered good practice. ## cds.DatabaseService — Consumption {#databaseservice-consumption } [databaseservice consumption]: #databaseservice-consumption ### `InsertResult` (Beta) - On INSERT, DatabaseServices return an instance of `InsertResult` defined as follows: - An iterator that returns the keys of the created entries, for example: `[...result]` -> `[{ ID: 1 }, { ID: 2 }, ...]` - In case of `INSERT...as(SELECT...)`, the iterator returns `{}` for each row - `affectedRows`: the number of inserted (root) entries, or the number of affected rows in case of INSERT into SELECT - `valueOf()`: returns `affectedRows` such that comparisons like `result > 0` can be used ::: tip `===` can't be used as it also compares the type ::: ## cds.DatabaseService — Configuration {#databaseservice-configuration } [databaseservice configuration]: #databaseservice-configuration ### Pool Instead of opening and closing a database connection for every request, we use a pool to reuse connections.
By default, the following [pool configuration](https://www.npmjs.com/package/generic-pool) is used: ```json { "acquireTimeoutMillis": <...>, "evictionRunIntervalMillis": <2 * (idleTimeoutMillis || softIdleTimeoutMillis || 30000)>, "min": 0, "max": 100, "numTestsPerEvictionRun": <(max - min) / 3>, "softIdleTimeoutMillis": 30000, "idleTimeoutMillis": 30000, "testOnBorrow": true, "fifo": false } ``` ::: warning This default pool configuration does not apply to `@cap-js` database implementations. ::: The _generic-pool_ has a built-in pool evictor, which inspects idle database connections in the pool and destroys them if they are too old. The following parameters are provided in the pool configuration: - _acquireTimeoutMillis_: Specifies how long to wait until an existing connection is fetched from the pool or a new connection is established. - _evictionRunIntervalMillis_: Specifies how often to run eviction checks. If set to 0, the check is not run. - _min_: Minimum number of database connections to keep in the pool at any given time. ::: warning This should be kept at the default 0. Otherwise every eviction run destroys all unused connections older than `idleTimeoutMillis` and afterwards creates new connections until `min` is reached. ::: - _max_: Maximum number of database connections to keep in the pool at any given time. - _numTestsPerEvictionRun_: Number of database connections to be checked with one eviction run. - _softIdleTimeoutMillis_: Amount of time a database connection may sit idle in the pool before it is eligible for eviction, provided that at least `min` connections stay in the pool. If set to -1, no connection can get evicted. - _idleTimeoutMillis_: The minimum amount of time that a database connection may stay idle in the pool before it is eligible for eviction due to idle time. This parameter supersedes `softIdleTimeoutMillis`. - _testOnBorrow_: Should the pool validate the database connections before giving them to the clients?
- _fifo_: If false, the most recently released resources will be the first to be allocated (stack). If true, the oldest resources will be first to be allocated (queue). Default value: false. Pool configuration can be adjusted by setting the `pool` option as shown in the following example: ```json { "cds": { "requires": { "db": { "kind": "hana", "pool": { "acquireTimeoutMillis": 5000, "min": 0, "max": 100, "fifo": true } } } } } ``` ::: warning _❗ Warning_ The parameters are very specific to the current technical setup, such as the application environment and database location. Even though we provide a default pool configuration, we expect that each application provides its own configuration based on its specific needs. :::
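The derived defaults in the table above — the `<...>` expressions — can be made concrete in plain JavaScript; this is just a sketch of the documented formulas, not framework code:

```javascript
// Evaluates the derived pool defaults documented above.
// The min/max/idleTimeoutMillis parameter defaults mirror the default configuration.
function derivedPoolDefaults ({ min = 0, max = 100,
                                idleTimeoutMillis = 30000,
                                softIdleTimeoutMillis = 30000 } = {}) {
  return {
    evictionRunIntervalMillis: 2 * (idleTimeoutMillis || softIdleTimeoutMillis || 30000),
    numTestsPerEvictionRun: (max - min) / 3
  }
}
```

With the defaults, eviction checks run every 60 seconds, and roughly a third of the pooled connections are inspected per run.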
## cds.DatabaseService — UPSERT {#databaseservice-upsert } [databaseservice upsert]: #databaseservice-upsert The main use case of upsert is data replication. [Upsert](../cds/cqn.md#upsert) updates existing entity records from the given data or inserts new ones if they don't exist in the database. ::: warning Even if an entity doesn't exist in the database:
→ Upsert is **not** equivalent to Insert. ::: `UPSERT` statements can be created with the [UPSERT](cds-ql#upsert) query API: ```js UPSERT.into('db.Books') .entries({ ID: 4711, title: 'Wuthering Heights', stock: 100 }) ``` `UPSERT` queries are translated into DB-native upsert statements; more specifically, they unfold to an [UPSERT SQL statement](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/ea8b6773be584203bcd99da76844c5ed.html) on SAP HANA and to an [INSERT ON CONFLICT SQL statement](https://www.sqlite.org/lang_upsert.html) on SQLite. - The rows to be upserted need to have the same structure, that is, all rows need to specify the same named values. - The upsert data must contain all key elements of the entity. - If upsert data is incomplete, only the given values are updated or inserted, which means the `UPSERT` statement has "PATCH semantics". - `UPSERT` statements don't have a where clause. The key values of the entity that is upserted are extracted from the data. The following actions are *not* performed on upsert: * UUID key values are _not generated_. * Generic CAP handlers, such as audit logging, are not invoked. ::: warning In contrast to the Java runtime, deep upserts and delta payloads are not yet supported. ::: ## More to Come This documentation is not complete yet, or the APIs are not released for general availability. Stay tuned to upcoming releases for further updates. # Events and Requests ## cds. context {.property} This property provides seemingly static access to the current [`cds.EventContext`], that is, the current `tenant`, `user`, `locale`, and so on, from wherever you are in your code. For example: ```js let { tenant, user } = cds.context ``` Usually that context is set by inbound middleware.
The property is realized as a so-called continuation-local variable, implemented using [Node.js' async local storage](https://nodejs.org/api/async_context.html) technique, and a getter/setter pair: The getter is a shortcut for [`getStore()`](https://nodejs.org/api/async_context.html#asynclocalstoragegetstore). The setter coerces values into valid instances of [`cds.EventContext`]. For example: ```js [dev] cds repl > cds.context = { tenant:'t1', user:'u2' } > let ctx = cds.context > ctx instanceof cds.EventContext //> true > ctx.user instanceof cds.User //> true > ctx.tenant === 't1' //> true > ctx.user.id === 'u2' //> true ``` If a transaction object is assigned, its `tx.context` is used, hence `cds.context = tx` acts as a convenience shortcut for `cds.context = tx.context`: ```js let tx = cds.context = cds.tx({ ... }) cds.context === tx.context //> true ``` ::: tip Prefer local `req` objects in your handlers for accessing event context properties, as each access to `cds.context` happens through [`AsyncLocalStorage.getStore()`](https://nodejs.org/api/async_context.html#asynclocalstoragegetstore), which induces some minor overhead. ::: ## `cds.EventContext` { .class #cds-event-context } [`cds.EventContext`]: #cds-event-context "Class cds.EventContext" Instances of this class represent the invocation context of incoming requests and event messages, such as `tenant`, `user`, and `locale`. Classes [`cds.Event`] and [`cds.Request`] inherit from it and hence provide access to the event context properties: ```js this.on ('*', req => { let { tenant, user } = req ... }) ``` In addition, you can access the current event context from wherever you are in your code via the continuation-local variable [`cds.context`](#cds-context): ```js let { tenant, user } = cds.context ``` ### .
http {.property} If the inbound request came from an HTTP channel, you can access express's common [`req`](https://expressjs.com/en/4x/api.html#req) and [`res`](https://expressjs.com/en/4x/api.html#res) objects through this property. It is propagated from `cds.context` to all child requests, so `Request.http` is accessible in all handlers, including your database service ones, like so: ```js this.on ('*', req => { let { res } = req.http res.set('Content-Type', 'text/plain') res.send('Hello!') }) ``` Keep in mind that multiple requests (that is, instances of `cds.Request`) may share the same incoming HTTP request and outgoing HTTP response (for example, in case of an OData batch request). ### . id {.property} A unique string used for request correlation. For inbound HTTP requests, the implementation fills it from these sources in order of precedence: - `x-correlation-id` header - `x-correlationid` header - `x-request-id` header - `x-vcap-request-id` header - a newly created UUID On outgoing HTTP messages, it's propagated as the `x-correlation-id` header. ### . locale {.property} The current user's preferred locale, taken from the HTTP Accept-Language header of incoming requests and resolved to a [_normalized_ locale](../guides/i18n#normalized-locales). ### . tenant {.property} A unique string identifying the current tenant, or `undefined` if not in multitenancy mode. In the case of multitenant operation, this string is used for tenant isolation, for example as keys in the database connection pools. ### . timestamp {.property} A constant timestamp for the current request being processed, as an instance of [`Date`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date). The CAP framework uses that to fill in values for the CDS pseudo variable `$now`, with the guaranteed same value. [Learn more in the **Managed Data** guide.](../guides/domain-modeling#managed-data){.learn-more} ### .
user {.property} The current user, an instance of `cds.User` as identified and verified by the authentication strategy. If no user is authenticated, `cds.User.anonymous` is returned. [See reference docs for `cds.User`.](authentication#cds-user){.learn-more .indent} ::: tip Please note the difference between `req` in a service handler (instance of `cds.EventContext`) and `req` in an express middleware (instance of `http.IncomingMessage`). Case in point, `req.user` in a service handler is an official API and, if not explicitly set, points to `cds.context.user`. On the other hand, setting `req.user` in a custom authentication middleware is deprecated. ::: ## `cds.Event` { .class #cds-event} [`cds.Event`]: #cds-event "Class cds.Event" Class [`cds.Event`] represents event messages in [asynchronous messaging](messaging), providing access to the [event](#event) name, payload [data](#data), and optional [headers](#headers). It also serves as **the base class for [`cds.Request`](#cds-request)** and hence for all synchronous interactions. ### . event {.property} The name of the incoming event, which can be one of: * The name of an incoming CRUD request like `CREATE`, `READ`, `UPDATE`, `DELETE` * The name of a custom action or function like `submitOrder` * The name of a custom event like `OrderedBook` ### . data {.property} Contains the event data. For example, the HTTP body for `CREATE` or `UPDATE` requests, or the payload of an asynchronous event message. Use `req.data` for modifications as shown in the following: ```js this.before ('UPDATE', Books, req => { req.data.author = 'Schmidt' // [!code ++] req.query.UPDATE.data.author = 'Schmidt' // [!code --] }) ``` ### . headers {.property} Provides access to headers of the event message or request. In the case of asynchronous event messages, it's the headers information sent by the event source. For HTTP requests it's the [standard Node.js request headers](https://nodejs.org/api/http.html#http_message_headers). ### eve.
before 'commit' {.event alt="The following documentation on done also applies to commit. "} ### eve. on 'succeeded' {.event alt="The following documentation on done also applies to succeeded. "} ### eve. on 'failed' {.event alt="The following documentation on done also applies to failed. "} ### eve. on 'done' {.event} Use these methods to register handlers on a per-event/request basis; they are executed when the whole top-level request handling is finished. ```js req.before('commit', () => {...}) // immediately before calling commit req.on('succeeded', () => {...}) // request succeeded, after commit req.on('failed', () => {...}) // request failed, after rollback req.on('done', () => {...}) // request succeeded/failed, after all ``` ::: danger The events `succeeded`, `failed`, and `done` are emitted *after* the current transaction has ended. Hence, they **run outside framework-managed transactions**, and handlers can't veto the commit anymore. ::: To veto requests, either use the `req.before('commit')` hook, or service-level `before` `COMMIT` handlers. To do something that requires database access in `succeeded`/`failed` handlers, use `cds.spawn()`, or one of the other options of [manual transactions](./cds-tx#manual-transactions). Preferably use a variant with automatic commit/rollback. Example: ```js req.on('done', async () => { await cds.tx(async () => { await UPDATE `Stats` .set `views = views + 1` .where `book_ID = ${book.ID}` }) }) ``` Additional note about OData: For requests that are part of a changeset, the events are emitted once the entire changeset is completed. If at least one of the requests in the changeset fails, following the atomicity property ("all or nothing"), all requests fail. 
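The per-request event order described above can be sketched with a minimal stand-in emitter. This is purely illustrative code with hypothetical names — not the framework implementation — showing that `before('commit')` hooks run first, then `succeeded`, then `done`:

```js
// Hypothetical stand-in for per-request event registration — not framework code.
const order = []
const handlers = { commit: [], succeeded: [], failed: [], done: [] }
const req = {
  before: (ev, fn) => handlers[ev].push(fn), // e.g. req.before('commit', ...)
  on: (ev, fn) => handlers[ev].push(fn),     // e.g. req.on('succeeded', ...)
}
const emit = ev => handlers[ev].forEach(fn => fn())

req.before('commit', () => order.push('before commit'))
req.on('succeeded', () => order.push('succeeded'))
req.on('done', () => order.push('done'))

// a successful request fires: before-commit hooks, then succeeded, then done
emit('commit'); emit('succeeded'); emit('done')
console.log(order.join(' → ')) //> before commit → succeeded → done
```

Only the `before('commit')` hooks run inside the managed transaction; anything pushed by `succeeded`/`failed`/`done` happens after it has ended.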
## `cds.Request` { .class #cds-request } [`cds.Request`]: #cds-request "Class cds.Request" Class `cds.Request` extends [`cds.Event`] with additional features to represent and deal with synchronous requests to services in [event handlers](./core-services#srv-handle-event), such as the [query](#query), additional [request parameters](#params), the [authenticated user](#user), and [methods to send responses](#req-reply). [Router]: https://expressjs.com/en/4x/api.html#router [routing]: https://expressjs.com/en/guide/routing.html [middleware]: https://expressjs.com/en/guide/using-middleware.html ### . method {.property} The HTTP method of the incoming request: | `msg.event` | → | `msg.method` | |-------------|--------|--------------| | CREATE | → | POST | | READ | → | GET | | UPDATE | → | PATCH | | DELETE | → | DELETE | {} ### . target {.property} Refers to the current request's target entity definition, if any; `undefined` for unbound actions/functions and events. The returned definition is a [linked](cds-reflect#linked-csn) definition as reflected from the [CSN](../cds/csn) model. For OData navigation requests along associations, `msg.target` refers to the last target. For example: | OData Request | `req.target` | |-------------------|----------------------| | Books | AdminService.Books | | Books/201/author | AdminService.Authors | | Books(201)/author | AdminService.Authors | {} [See also `req.path` to learn how to access full navigation paths.](#path){.learn-more} [See _Entity Definitions_ in the CSN reference.](../cds/csn#entity-definitions){.learn-more} [Learn more about linked models and definitions.](cds-reflect){.learn-more} ### . path {.property} Captures the full canonicalized path information of incoming requests with navigation. For requests without navigation, `req.path` is identical to [`req.target.name`](#target) (or [`req.entity`](#entity), which is a shortcut for that). 
Examples based on [cap/samples/bookshop AdminService](https://github.com/sap-samples/cloud-cap-samples/tree/master/bookshop/srv/admin-service.cds): | OData Request | `req.path` | `req.target.name` | |-------------------|---------------------------|----------------------| | Books | AdminService.Books | AdminService.Books | | Books/201/author | AdminService.Books/author | AdminService.Authors | | Books(201)/author | AdminService.Books/author | AdminService.Authors | {} [See also `req.target`](#target){.learn-more} ### . entity {.property} This is a convenience shortcut to [`msg.target.name`](#target). ### . params {.property} Provides access to parameters in URL paths as an [*iterable*](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#The_iterable_protocol) with the contents matching the positional occurrence of parameters in the URL path. In the case of compound parameters, the respective entry is the key-value pairs as given in the URL. For example, given an HTTP request like this: ```http GET /catalog/Authors(101)/books(title='Eleonora',edition=2) HTTP/1.1 ``` The provided parameters can be accessed as follows: ```js const [ author, book ] = req.params // > author === 101 // > book === { title: 'Eleonora', edition: 2 } ``` ### . query {.property} Captures the incoming request as a [CQN query](cds-ql#class-cds-ql-query). For example, an HTTP request like `GET http://.../Books` is captured as follows: ```js req.query = {SELECT:{from:{ref:['Books']}}} ``` For bound custom operations, `req.query` contains the query for the entity on which the bound custom operation is called. For unbound custom operations, `req.query` contains an empty object. ### . subject {.property} Acts as a pointer to one or more instances targeted by the request. 
It can be used as input for [cds.ql](cds-ql) as follows: ```js SELECT.one.from(req.subject) //> returns single object SELECT.from(req.subject) //> returns one or many in array UPDATE(req.subject) //> updates one or many DELETE(req.subject) //> deletes one or many ``` It's available for CRUD events and bound actions. ### req. reply() {.method} [`req.reply`]: #req-reply Stores the given `results` in `req.results`, which is then sent back to the client, rendered in a protocol-specific way. ### req. reject() {.method} [`req.reject`]: #req-reject Rejects the request with the given HTTP response code and single message. Additionally, `req.reject` throws an error based on the passed arguments. Hence, no additional code or handlers are executed once `req.reject` has been invoked. [Arguments are the same as for `req.error`](#req-error){.learn-more} ### req. error() {.method} ### req. warn() {.method} ### req. info() {.method} ### req. notify() {.method} [`req.info`]: #req-msg [`req.error`]: #req-msg Use these methods to collect messages or errors and return them in the request response to the caller. The method variants reflect different severity levels. Use them as follows: #### Variants | Method | Collected in | Typical UI | Severity | | -------------- | -------------- | ---------- | :------: | | `req.notify()` | `req.messages` | Toasters | 1 | | `req.info()` | `req.messages` | Dialog | 2 | | `req.warn()` | `req.messages` | Dialog | 3 | | `req.error()` | `req.errors` | Dialog | 4 | {} **Note:** Messages with a severity less than 4 are collected and accessible in property `req.messages`, while error messages are collected in property `req.errors`. The latter allows you to easily check whether errors occurred: ```js if (req.errors) //> get out somehow... ``` #### Arguments - `code` _Number (Optional)_ - Represents the error code associated with the message. 
If the number is in the range of HTTP status codes and the error has a severity of 4, this argument sets the HTTP response status code. - `message` _String \| Object \| Error_ - See below for details on the non-string version. - `target` _String (Optional)_ - The name of an input field/element a message is related to. - `args` _Array (Optional)_ - Array of placeholder values. See [Localized Messages](cds-i18n) for details. ::: tip `target` property for UI5 OData model The `target` property is evaluated by the UI5 OData model and needs to be set according to [Server Messages in the OData V4 Model](https://ui5.sap.com/#/topic/fbe1cb5613cf4a40a841750bf813238e). ::: #### Using an Object as Argument You can also pass an object as the sole argument, which then contains the properties `code`, `message`, `target`, and `args`. Additional properties are preserved until the error or message is sanitized for the client. In case of an error, the additional property `status` can be used to specify the HTTP status code of the response. ```js req.error ({ code: 'Some-Custom-Code', message: 'Some Custom Error Message', target: 'some_field', status: 418 }) ``` Additional properties can be added as well, for example to be used in [custom error handlers](core-services#srv-on-error). > In OData responses, notifications get collected and put into HTTP response header `sap-messages` as a stringified array, while the others are collected in the respective response body properties (→ see [OData Error Responses](https://docs.oasis-open.org/odata/odata-json-format/v4.0/os/odata-json-format-v4.0-os.html#_Toc372793091)). #### Error Sanitization In production, errors should never disclose any internal information that could be used by malicious actors. Hence, we sanitize all server-side errors thrown by the CAP framework. 
That is, all errors with a 5xx status code (the default status code is 500) are returned to the client with only the respective generic message (example: `500 Internal Server Error`). Errors defined by app developers aren't sanitized and returned to the client unchanged. Additionally, the OData protocol specifies which properties an error object may have. If a custom property shall reach the client, it must be prefixed with `@` to not be purged. ### req. diff() {.method} [`req.diff`]: #req-diff Use this asynchronous method to calculate the difference between the data on the database and the passed data (defaults to `req.data`, if not passed). Note that the usage of `req.diff` only makes sense in *before* handlers, as they run before the actual change is persisted to the database. > This triggers database requests. ```js const diff = await req.diff() ``` # Reflecting CDS Models {{$frontmatter?.synopsis}} [def]: ../cds/csn#definitions [defs]: ../cds/csn#definitions ## cds. linked ([csn](../cds/csn)) {#cds-linked .method} [`cds.linked`]: #cds-linked Method `cds.linked` (or `cds.reflect`, which is an alias for the same method) turns a given parsed model into an instance of [class `LinkedCSN`](#linked-csn), and all definitions within into instances of [class `LinkedDefinition`](#any), recursively. Declaration: ```tsx function cds.linked (csn: CSN | string) => LinkedCSN ``` A typical usage is like this: ```js let csn = await cds.load('some-model.cds') let linked = cds.linked(csn) // linked === csn ``` Instead of an already compiled CSN, you can also pass a string containing CDL source code: ```js let linked = cds.linked` entity Books { key ID: UUID; title: String; author: Association to Authors; } entity Authors { key ID: UUID; name: String; } ` ``` The passed-in model gets **modified**, and the returned linked model is actually the modified passed-in CSN. The operation is **idempotent**, that is, you can repeatedly invoke it on already linked models with zero overhead. 
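The two properties just described — in-place modification and idempotency — can be sketched with a stand-in `link` function. This is illustrative code under assumed semantics, not the real `cds.linked` implementation:

```js
// Stand-in sketch of in-place, idempotent linking — not the real cds.linked.
function link (csn) {
  if (csn.is_linked) return csn // idempotent: already linked, zero overhead
  Object.defineProperty(csn, 'is_linked', { value: true })
  return csn                    // same object, modified in place
}

const csn = { definitions: {} }
const linked = link(csn)
console.log(linked === csn)       //> true — same object, not a copy
console.log(link(linked) === csn) //> true — repeated calls are no-ops
```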
## LinkedCSN {#linked-csn .class} [reflected model]: #linked-csn [linked model]: #linked-csn [`LinkedCSN`]: #linked-csn Models passed through [`cds.linked`] become instances of this class. ### . is_linked {.property} A tag property which is `true` for linked models. {.indent} ### . definitions {.property} The [CSN definitions](../cds/csn#definitions) of the model, turned into an instance of [`LinkedDefinitions`]. {.indent} ### . services {.property alt="The following documentation on entities also applies to services. "} ### . entities {.property} These are convenient shortcuts to access all *[service](../cds/cdl#services)* or all *[entity](../cds/cdl#entities)* definitions in a model.
The value is an instance of [`LinkedDefinitions`]. For example: ```js let m = cds.linked` namespace my.bookshop; entity Books {...} entity Authors {...} service CatalogService { entity ListOfBooks as projection on Books {...} } ` // Object nature let { CatalogService, AdminService } = m.services let { Books, Authors } = m.entities // Array nature for (let each of m.entities) console.log(each.name) // Function nature let { ListOfBooks } = m.entities ('my.bookshop.CatalogService') ``` In addition to the object and array natures of [`LinkedDefinitions`], these properties can also be used as functions, which allows you to optionally specify a namespace to fetch all definitions prefixed with it. If no namespace is specified, the model's declared namespace is used, if any. ### each() {#each .method } ```tsx function* lm.each ( filter : string | def => true/false, defs? : linked_definitions ) ``` Fetches definitions matching the given filter, returning an iterator on them. ```js let m = cds.reflect (csn) for (let d of m.each('entity')) { console.log (d.kind, d.name) } ``` The first argument **_filter_** specifies a filter to match definitions, which can be one of: - a `string` referring to a _kind_ of definition - a `function` returning `true` or `false` Derived kinds are supported, for example, `m.each('struct')` matches structs as well as entities; kind `'any'` matches all. The second optional argument **_[defs]_** allows you to specify the definitions to fetch in; defaults to `this.definitions`. ### all() {#all .method } ```tsx function lm.all ( filter : string | def => true/false, defs? : linked_definitions ) ``` Convenience shortcut to [`[... model.each()]`](#each), for example, the following are equivalent: ```js m.all('entity') //> using shortcut [...m.each('entity')] //> using spread operator ``` ### find() {#find .method } ```tsx function lm.find ( filter : string | def => true/false, defs? 
: linked_definitions ) ``` Convenience shortcut to fetch definitions matching the given filter, returning the first match, if any. For example: ```js let service = m.find('service') ``` The implementation uses [`.each()`](#each) as follows: ```js for (let any of m.each('service')) return any ``` ### foreach() {#foreach .method } ```tsx function lm.foreach ( filter : def => true/false | string, visitor : def => {}, defs? : linked_definitions ) ``` Calls the visitor for each definition matching the given filter. `foreach` iterates through the passed-in defs only; `forall` in addition walks through all nested element definitions hierarchically. * `filter` / `kind` — the filter or kind used to match definitions [→ see _.each(x)_](#each) * `visitor` — the callback function * `defs` — the definitions to fetch in, default: `this.definitions` Examples: ```js // print the names of all services let m = cds.reflect(csn) m.foreach ('service', s => console.log(s.name)) ``` ```js // print the names of all Associations in Books element let { Books } = m.entities() m.foreach ('Association', a => console.log(a.name), Books.elements) ``` ## LinkedDefinitions {.class #iterable} [`LinkedDefinitions`]: #iterable All objects of a linked model containing CSN definitions are instances of this class. For example, that applies to: - *`cds.model` [.definitions](#definitions), [.services](#services), [.entities](#entities)* - *`cds.service` [.entities](#entities-1), [.events](#events), [.actions](#actions-1)* - *`cds.entity` [.keys](#keys), [.associations](#associations), [.compositions](#compositions), [.actions](#actions)* - *`cds.struct` [.elements](#elements)* (hence also *`cds.entity` .elements*) - *`cds.Association` [.foreignKeys](#foreignkeys)* Instances of `LinkedDefinitions` allow both object-style and array-like access. 
For example: ```js let linked = cds.linked (model) let { Books, Authors } = linked.entities // object-like let [ Books, Authors ] = linked.entities // array-like ``` > Note: The order of definitions could change, so you should always prefer object destructuring over array destructuring. The array-like nature also allows using these shortcuts in `for..of` loops, of course. Which means you can do this: ```js for (let each of linked.definitions) console.log (each.name) ``` ... instead of iterating definitions using `for..in` loops like this: ```js for (let each in linked.definitions) { let d = linked.definitions [each] console.log (d.name) } ``` Each entry in an instance of `LinkedDefinitions` is a [`LinkedDefinition`]. ## LinkedDefinition {.class #any} [`LinkedDefinition`]: #any All [`cds.linked`] definitions are instances of this class, or subclasses thereof. It is accessible through [`cds.linked.classes.any`](#cds-linked-classes). ### . is_linked {.property} A tag property which is `true` for all linked definitions. {.indent} ### . name {.property} The linked definition's fully qualified name as a non-enumerable property. {.indent} ### . kind {.property} The linked definition's resolved kind as a non-enumerable property. One of: - `'context'` - `'service'` - `'entity'` - `'type'` - `'aspect'` - `'event'` - `'element'` - `'annotation'` ... as documented in the [CSN specification](../cds/csn#definitions). #### *instanceof* You can use JavaScript's standard `instanceof` operator in combination with the built-in classes to check a linked definition's type: ```js let { Foo } = cds.linked(csn).entities if (Foo instanceof cds.entity) console.log ("it's an entity") ``` ## cds. service {.class} All *[service](../cds/cdl#services)* definitions in a linked model are instances of this class. ```tsx class cds.service extends cds.context {...} ``` ### . is_service {.property} A tag property which is `true` for linked service definitions. {.indent} ### . 
entities {.property alt="The following documentation on actions also applies to entities. "} ### . events {.property alt="The following documentation on actions also applies to events. "} ### . actions {.property} These properties are convenience shortcuts to access a service definition's exposed [*entity*](../cds/cdl#entities), [*type*](../cds/cdl#types), [*event*](../cds/cdl#events), [*action* or *function*](../cds/cdl#actions) definitions.
Their values are [`LinkedDefinitions`]. {.indent} ## cds. entity {.class } All entity definitions in a linked model are instances of this class. ```tsx class cds.entity extends cds.struct {...} ``` > As `cds.entity` is a subclass of [`cds.struct`](#cds-struct) it also inherits all methods from that. ### . is_entity {.property} A tag property which is `true` for linked entity definitions. {.indent} ### . keys {.property alt="The following documentation on actions also applies to keys. "} ### . associations {.property alt="The following documentation on actions also applies to associations. "} ### . compositions {.property alt="The following documentation on actions also applies to compositions. "} ### . actions {.property} These properties are convenient shortcuts to access an entity definition's declared [*keys*](../cds/cdl#entities), *[Association](../cds/cdl#associations)* or *[Composition](../cds/cdl#associations)* elements, as well as [*bound action* or *function*](../cds/cdl#bound-actions) definitions.
Their values are [`LinkedDefinitions`]. {.indent} ### . texts {.property} If the entity has *[localized](../guides/localized-data)* elements, this property is a reference to the respective `.texts` entity. If not, this property is `undefined`. {.indent} ### . drafts {.property} If draft is enabled, this property holds a definition to easily refer to *[draft](../advanced/fiori#draft-support)* data for the current entity. {.indent} ## cds. struct {.class } This is the base class of *[struct](../cds/cdl#structured-types)* elements and types, *[aspects](../cds/cdl#aspects)*, and *[entities](../cds/cdl#entities)*. ```tsx class cds.struct extends cds.type {...} ``` ### . is_struct {.property} A tag property which is `true` for linked struct definitions (types and elements).
It is also `true` for linked entity definitions, that is, instances of [`cds.entity`](#cds-entity). {.indent} ### . elements {.property} The entity's declared elements as [documented in the CSN Specification](../cds/csn#entity-definitions)
as an instance of [`LinkedDefinitions`]. { .indent} ## cds. Association {.class} All linked definitions of type `Association` or `Composition`, including elements, are instances of this class. Besides the properties specified for [Associations in CSN](../cds/csn#associations), linked associations provide the following reflection properties... ### . _target {.property} A reference to the association's resolved linked target definition. {.indent} ### . isAssociation {.property} A tag property which is `true` for all linked Association definitions, including Compositions. {.indent} ### . isComposition {.property} A tag property which is `true` for all linked Composition definitions. {.indent} ### . is2one / 2many {.property} Convenient shortcuts to check whether an association definition has to-one or to-many cardinality. { .indent} ### . keys {.property} The declared or derived foreign keys. As specified in the [CSN spec](../cds/csn#assoc-keys), this is a *projection* of the association target's elements. {.indent} ### . foreignKeys {.property} The effective foreign keys of a [*managed* association](../cds/cdl#managed-associations) as linked definitions.
The value is an instance of [`LinkedDefinitions`]. {.indent} ## cds. linked .classes {#cds-linked-classes .property} [`cds.linked.classes`]: #cds-linked-classes This property gives you access to the very roots of `cds`'s type system. When a model is passed through [`cds.linked`], all definitions effectively become instances of one of these classes. In essence, they are defined as follows: ```js class any {...} class context extends any {...} cds.service = class service extends context {...} cds.type = class type extends any {...} class scalar extends type {...} class boolean extends scalar {...} class number extends scalar {...} class date extends scalar {...} class string extends scalar {...} cds.array = class array extends type {...} cds.struct = class struct extends type {...} cds.entity = class entity extends struct {...} cds.event = class event extends struct {...} cds.Association = class Association extends type {...} cds.Composition = class Composition extends Association {...} ``` > A few prominent ones of the above classes are available through top-level shortcuts as indicated by the `cds. =` prefixes in the above pseudo code; find more details on these in the following sections. For example, you can use these classes as follows: ```js let m = cds.linked` entity Books { author: Association to Authors; } entity Authors { key ID: UUID; } ` let { Books, Authors } = m.entities let isEntity = Books instanceof cds.entity let keys = Books.keys let { author } = Books.elements if (author.is2many) ... ``` #### mixin() {.method} Provides a convenient way to enhance one or more of the builtin classes with additional methods. 
Use it like that: ```js const cds = require ('@sap/cds') // simplistic csn2cdl enablement cds.linked.classes .mixin ( class type { toCDL(){ return `${this.kind} ${this.name} : ${this.typeAsCDL()};\n` } typeAsCDL(){ return `${this.type.replace(/^cds\./,'')}` } }, class struct { typeAsCDL() { return `{\n${ Object.values(this.elements).map ( e => ` ${e.toCDL()}` ).join('')}}`} }, class entity extends cds.struct { typeAsCDL() { return ( this.includes ? this.includes+' ' : '' ) + super.typeAsCDL() } }, class Association { typeAsCDL(){ return `Association to ${this.target}` } }, ) // test drive let m = cds.linked` entity Books : cuid { title:String; author: Association to Authors } entity Authors : cuid { name:String; } aspect cuid : { key ID:UUID; } ` m.foreach (d => console.log(d.toCDL())) ``` ## cds. builtin. types {#cds-builtin-types .property} [`cds.builtin.types`]: #cds-builtin-types This property gives you access to all prototypes of the builtin classes as well as to all linked definitions of the [builtin pre-defined types](../cds/types). The resulting object is in turn like the `definitions` in a [`LinkedCSN`]. Actually, at runtime CDS is in fact bootstrapped out of this using core [CSN](../cds/csn) object structures and [`cds.linked`] techniques. 
Think of it to be constructed as follows: ```js cds.builtin.types = cds.linked` using from './roots'; context cds { type UUID : String(36); type Boolean : boolean; type Integer : number; type UInt8 : Integer; type Int16 : Integer; type Int32 : Integer; type Int64 : Integer; type Integer64 : Integer; type Decimal : number; type Double : number; type Date : date; type Time : date; type DateTime : date; type Timestamp : date; type String : string; type Binary : string; type LargeString : string; type LargeBinary : string; } `.definitions ``` With `./roots` being this in-memory CSN: ```js const { any, context, service, type, scalar, string, number, boolean, date, array, struct, entity, event, aspect, Association, Composition } = cds.linked.classes const roots = module.exports = {definitions:{ any: new any, context: new context ({type:'any'}), type: new type ({type:'any'}), scalar: new scalar ({type:'type'}), string: new string ({type:'scalar'}), number: new number ({type:'scalar'}), boolean: new boolean ({type:'scalar'}), date: new date ({type:'scalar'}), array: new array ({type:'type'}), struct: new struct ({type:'type'}), entity: new entity ({type:'struct'}), event: new event ({type:'struct'}), aspect: new aspect ({type:'struct'}), Association: new Association ({type:'type'}), Composition: new Composition ({type:'Association'}), service: new service ({type:'context'}), }} ``` > Indentation indicates inheritance. # Serving Provided Services ## cds. serve (...) {.method} Use `cds.serve()` to construct service providers from the service definitions in corresponding CDS models. Declaration: ```ts:no-line-numbers async function cds.serve ( service : 'all' | string | cds.Service | typeof cds.Service, options : { service = 'all', ... } ) .from ( model : string | CSN ) // default: cds.model .to ( protocol : string | 'rest' | 'odata' | 'odata-v2' | 'odata-v4' | ... 
) .at ( path : string ) .in ( app : express.Application ) // default: cds.app .with ( impl : string | function | cds.Service | typeof cds.Service ) ``` ##### Common Usages: ```js const { CatalogService } = await cds.serve ('my-services') ``` ```js const app = require('express')() cds.serve('all') .in (app) ``` ##### Arguments: * `name` specifies which service to construct a provider for; use `all` to construct providers for all definitions found in the models. ```js cds.serve('CatalogService') //> serve a single service cds.serve('all') //> serve all services found ``` You may alternatively specify a string starting with `'./'` or refer to a file name with a non-identifier character in it, like `'-'` below, as a convenient shortcut to serve all services from that model: ```js cds.serve('./reviews-service') //> is not an identifier through './' cds.serve('reviews-service') //> same as '-', hence both act as: cds.serve('all').from('./reviews-service') ``` The method returns a fluent API object, which is also a _Promise_ resolving to either an object with `'all'` constructed service providers, or to the single one created in case you specified a single service: ```js const { CatalogService, AdminService } = await cds.serve('all') const ReviewsService = await cds.serve('ReviewsService') ``` ##### Caching: The constructed service providers are cached in [`cds.services`](cds-facade#cds-services), which (a) makes them accessible to [`cds.connect`](cds-connect), as well as (b) allows us to extend already constructed services through subsequent invocation of [`cds.serve`](cds-serve). 
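The caching behavior just described can be sketched with a plain registry keyed by service name — stand-in code with hypothetical names, not the actual `cds.serve` implementation:

```js
// Illustrative sketch of provider caching: constructed services are kept in a
// registry (stand-in for cds.services), so later lookups reuse the same instance.
const services = {}
function serveOnce (name) {
  return services[name] ??= { name, handlers: [] } // construct once, then reuse
}

const a = serveOnce('CatalogService')
const b = serveOnce('CatalogService') // second call hits the cache
console.log(a === b) //> true
```

This is why a subsequent `cds.connect` to the same name can resolve to the very provider that `cds.serve` constructed.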
##### Common Usages and Defaults Most commonly, you'd use `cds.serve` in a custom file to add all the services to your [express.js](https://expressjs.com) app as follows: ```js const app = require('express')() cds.serve('all').in(app) app.listen() ``` This uses these defaults for all options: | Option | Description | Default | |----------------------|---------------------------------|-----------------------------| | cds.serve ... | which services to construct | `'all'` services | | .from | models to load definitions from | `'./srv'` folder | | .in | express app to mount to | — none — | | .to | client protocol to serve to | `'fiori'` | | .at | endpoint path to serve at | [`@path`](#path) or `.name` | | .with | implementation function | `@impl` or `._source`.js | Alternatively, you can construct services individually, also from other models, and also mount them yourself, as documented in the subsequent sections on the individual fluent API options. If you just want to add some additional middleware, it's recommended to bootstrap from a [custom `server.js`](#cds-server). ### .from (model) {#from .method} Allows to determine the CDS models to fetch service definitions from, which can be specified as one of: - A filename of a single model, which gets loaded and parsed with [`cds.load`] - A name of a folder containing several models, also loaded with [`cds.load`] - The string `'all'` as a shortcut for all models in the `'./srv'` folder - An already parsed model in [CSN](../cds/csn) format The latter allows you to [`cds.load`] or dynamically construct models yourself and pass in the [CSN](../cds/csn) models, as in this example: ```js const csn = await cds.load('my-services.cds') cds.serve('all').from(csn)... ``` **If omitted**, `'./srv'` is used as default. ### .to (protocol) {#to .method} Allows to specify the protocol through which to expose the service. 
Currently supported values are: * `'rest'` plain HTTP rest protocol without any OData-specific extensions * `'odata'` standard OData rest protocol without any Fiori-specific extensions * `'fiori'` OData protocol with all Fiori-specific extensions like Draft enabled **If omitted**, `'fiori'` is used as default. ### .at (path) {#at .method} Allows to programmatically specify the mount point for the service. **Note** that this is only possible when constructing single services: ```js cds.serve('CatalogService').at('/cat') cds.serve('all').at('/cat') //> error ``` **If omitted**, the mount point is determined from annotation [`@path`](#path), if present, or from the service's lowercase name, excluding trailing _Service_. ```cds service MyService @(path:'/cat'){...} //> served at: /cat service CatalogService {...} //> served at: /catalog ``` ### .in ([express app](https://expressjs.com/api.html#app)) {#in .method} Adds all service providers as routers to the given [express app](https://expressjs.com/api.html#app). ```js const app = require('express')() cds.serve('all').in(app) app.listen() ``` ### .with (impl) {#with .method} Allows to specify a function that adds [event handlers] to the service provider, either as a function or as a string referring to a separate node module containing the function. 
```js cds.serve('./srv/cat-service.cds') .with ('./srv/cat-service.js') ``` ```js cds.serve('./srv/cat-service') .with (srv => { srv.on ('READ','Books', (req) => req.reply([...])) }) ``` [Learn more about using impl annotations.](core-services#implementing-services){.learn-more} [Learn more about adding event handlers.](core-services#srv-on-before-after){.learn-more} **Note** that this is only possible when constructing single services: ```js cds.serve('CatalogService') .with (srv=>{...}) cds.serve('all') .with (srv=>{...}) //> error ``` **If omitted**, an implementation is resolved from annotation `@impl`, if present, or from a `.js` file with the same basename as the CDS model, for example: ```cds service MyService @(impl:'cat-service.js'){...} ``` ```sh srv/cat-service.cds #> CDS model with service definition srv/cat-service.js #> service implementation used by default ``` ## cds. middlewares For each service served at a certain protocol, the framework registers a configurable set of express middlewares by default, like so: ```js app.use (cds.middlewares.before, protocol_adapter) ``` The standard set of middlewares uses the following order: ```js cds.middlewares.before = [ context(), // provides cds.context trace(), // provides detailed trace logs when DEBUG=trace auth(), // provides cds.context.user & .tenant ctx_model(), // fills in cds.context.model, in case of extensibility ] ``` ::: warning _Be aware of the interdependencies of middlewares_ _ctx_model_ requires that the _cds.context_ middleware has run before. _ctx_auth_ requires that _authentication_ has run before. ::: ### . context() {.method} This middleware initializes [cds.context](events#cds-context) and starts the continuation. It's required for every application. ### . trace() {.method} The tracing middleware allows you to do a first-level performance analysis. It logs how much time is spent on which layer of the framework when serving a request. 
To enable this middleware, set the [environment variable](cds-log#debug-env-variable) `DEBUG=trace`, for example.

### . auth() {.method}

[By configuring an authentication strategy](./authentication#strategies), a middleware is mounted that fulfills the configured strategy and subsequently adds the user and tenant identified by that strategy to [cds.context](events#cds-context).

### . ctx_model() {.method}

It adds the currently active model to the continuation. It's required for all applications using extensibility or feature toggles.

### .add(mw, pos?) {.method}

Registers additional middlewares at the specified position. `mw` must be a function that returns an express middleware. `pos` specifies the index or a relative position within the middleware chain. If not specified, the middleware is added to the end.

```js
cds.middlewares.add (mw, {at:0})          // to the front
cds.middlewares.add (mw, {at:2})
cds.middlewares.add (mw, {before:'auth'})
cds.middlewares.add (mw, {after:'auth'})
cds.middlewares.add (mw)                  // to the end
```
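For illustration, here's a minimal sketch of such an additional middleware. The `timing` name, its behavior, and the registration position are assumptions for this example, not part of the framework:

```javascript
// Hypothetical example: a middleware factory that measures request duration.
// Like the built-in middlewares, it's a function returning an express middleware.
function timing () {
  return function timing_mw (req, res, next) {
    const start = Date.now()
    // express emits 'finish' on the response once it has been sent
    res.on?.('finish', () => {
      console.log(`${req.method} ${req.url} took ${Date.now() - start} ms`)
    })
    next()
  }
}

// In a custom server.js, it could be registered right after context(), e.g.:
// cds.middlewares.add (timing, { after: 'context' })
module.exports = timing
```

As with the built-in middlewares, the factory is registered, not the middleware itself, so the framework can instantiate it per mount.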
### Custom Middlewares

The configuration of middlewares must be done programmatically before bootstrapping the CDS services, for example, in a [custom server.js](cds-serve#custom-server-js).

The framework exports the default middlewares themselves, as well as the list of middlewares which run before the protocol adapter starts processing the request:

```js
cds.middlewares = {
  auth,
  context,
  ctx_model,
  errors,
  trace,
  before: [
    context(),
    trace(),
    auth(),
    ctx_model()
  ]
}
```

In order to plug in custom middlewares, you can override the complete list of middlewares or extend the list programmatically.

::: warning
Be aware that overriding requires constant maintenance, as middlewares newly added by the framework are not automatically taken over.
:::

[Learn more about the middlewares default order.](#cds-middlewares){.learn-more}

#### Customization of `cds.context.user`

You can register middlewares to customize `cds.context.user`. This must be done after authentication. If `cds.context.tenant` is manipulated as well, it must also be done before `cds.context.model` is set for the current request.

```js
cds.middlewares.before = [
  cds.middlewares.context(),
  cds.middlewares.trace(),
  cds.middlewares.auth(),
  function ctx_user (_,__,next) {
    const ctx = cds.context
    ctx.user.id = '' + ctx.user.id
    next()
  },
  cds.middlewares.ctx_model()
]
```

#### Enabling Feature Flags

You can register middlewares to customize `req.features`. This must be done before `cds.context.model` is set for the current request.

```js
cds.middlewares.before = [
  cds.middlewares.context(),
  cds.middlewares.trace(),
  cds.middlewares.auth(),
  function req_features (req,_,next) {
    req.features = ['', '']
    next()
  },
  cds.middlewares.ctx_model()
]
```

[Learn more about Feature Vector Providers.](../guides/extensibility/feature-toggles#feature-vector-providers){.learn-more}

### Current Limitations

- Configuration of middlewares must be done programmatically.

## cds. protocols

The framework provides adapters for OData V4 and REST out of the box.
In addition, GraphQL can be served by using our open source package [`@cap-js/graphql`](https://github.com/cap-js/graphql).

By default, the protocols are served at the following paths:

|protocol|path|
|---|---|
|OData V4|/odata/v4|
|REST|/rest|
|GraphQL|/graphql|

### @protocol

Configures at which protocol(s) a service is served.

```cds
@odata
service CatalogService {}
//> serves CatalogService at: /odata/v4/catalog

@protocol: 'odata'
service CatalogService {}
//> serves CatalogService at: /odata/v4/catalog

@protocol: ['odata', 'rest', 'graphql']
service CatalogService {}
//> serves CatalogService at: /odata/v4/catalog, /rest/catalog and /graphql

@protocol: [{ kind: 'odata', path: 'some/path' }]
service CatalogService {}
//> serves CatalogService at: /odata/v4/some/path
```

Note that:

- The shortcuts `@rest`, `@odata`, `@graphql` are only supported for services served at only one protocol.
- `@protocol` has precedence over the shortcuts.
- `@protocol.path` has precedence over `@path`.
- The default protocol is OData V4.
- `odata` is a shortcut for `odata-v4`.
- `@protocol: 'none'` treats the service as _internal_.

### @path

Configures the path at which a service is served.

```cds
@path: 'browse'
service CatalogService {}
//> serves CatalogService at: /odata/v4/browse

@path: '/browse'
service CatalogService {}
//> serves CatalogService at: /browse
```

Be aware that using an absolute path will disallow serving the service at multiple protocols.

### Custom Protocol Adapter

Similar to the configuration of the GraphQL adapter, you can plug in your own protocol. The `impl` property must point to the implementation of your protocol adapter. Additional options for the protocol adapter are provided on the same level.

```js
cds.env.protocols = {
  'custom-protocol': { path: '/custom', impl: '', ...options }
}
```

### Current Limitations

- Configuration of protocols must be done programmatically.
- Additional protocols do not respect the `@protocol` annotation yet.
- The configured protocols do not show up in the `index.html` yet.

# Connecting to Required Services

Services frequently consume other services, which could be **local** services served by the same process, or **external** services, for example consumed through OData. The latter include **database** services. In all cases, use `cds.connect` to connect to such services.

## Configuring Required Services {#cds-env-requires }

To configure required remote services in Node.js, simply add respective entries to the `cds.requires` sections in your _package.json_ or in _.cdsrc.json_ (omitting the `cds.` prefix). These configurations are constructed as follows:

```json
"cds": {
  "requires": {
    "ReviewsService": { "kind": "odata", "model": "@capire/reviews" },
    "OrdersService": { "kind": "odata", "model": "@capire/orders" }
  }
}
```

Entries in this section tell the service loader to not serve that service as part of your application, but to expect a service binding at runtime in order to connect to the external service provider. The options are as follows:

### cds.requires.\<srv\>.impl

Service implementations are ultimately configured in `cds.requires` like that:

```json
"cds": {
  "requires": {
    "some-service": { "impl": "some/node/module/path" },
    "another-service": { "impl": "./local/module/path" }
  }
}
```

Given that configuration, `cds.connect.to('some-service')` would load the specific service implementation from `some/node/module/path`. Prefix the module path in `impl` with `./` to refer to a file relative to your project root.
### cds.requires.\<srv\>.kind

As service configurations inherit from each other along `kind` chains, we can refer to default configurations shipped with `@sap/cds`, as you commonly see that in our [_cap/samples_](https://github.com/sap-samples/cloud-cap-samples), like so:

```json
"cds": {
  "requires": {
    "db": { "kind": "sqlite" },
    "remote-service": { "kind": "odata" }
  }
}
```

This is backed by these default configurations:

```json
"cds": {
  "requires": {
    "sqlite": { "impl": "[...]/sqlite/service" },
    "odata": { "impl": "[...]/odata/service" }
  }
}
```

> Run `cds env get requires` to see all default configurations.
> Run `cds env get requires.db.impl` to see the impl used for your database.

Given that configuration, `cds.connect.to('db')` would load the generic service implementation.

[Learn more about `cds.env`.](cds-env){.learn-more}

### cds.requires.\<srv\>.model

Specify (imported) models for remote services in this property. This allows the service runtime to reflect on the external API and add generic features. The value can be either a single string referring to a CDS model source, resolved as an absolute node module or relative to the project root, or an array of such strings.

```json
"cds": {
  "requires": {
    "remote-service": { "kind": "odata", "model": "some/imported/model" }
  }
}
```

Upon [bootstrapping](./cds-serve), all these required models will be loaded and compiled into the effective [`cds.model`](cds-facade#cds-model) as well.

### cds.requires.\<srv\>.service

If you specify a model, then a service definition for your required service must be included in that model. By default, the name of the service that is checked for is the name of the required service. This can be overwritten by setting `service` inside the required service configuration.
```json
"cds": {
  "requires": {
    "remote-service": {
      "kind": "odata",
      "model": "some/imported/model",
      "service": "BusinessPartnerService"
    }
  }
}
```

The example specifies `service: 'BusinessPartnerService'`, which results in a check for a service called `BusinessPartnerService` instead of `remote-service` in the model loaded from `some/imported/model`.

### cds.requires.\<srv\>.credentials

Specify the credentials to connect to the service. Credentials need to be kept secure and should not be part of a configuration file.

## Connecting to Required Services { #cds-connect-to }

### cds. connect.to () {.method}

Declaration:

```ts:no-line-numbers
async function cds.connect.to (
  name    : string, // reference to an entry in `cds.requires` config
  options : {
    kind : string // reference to a preset in `cds.requires.kinds` config
    impl : string // module name of the implementation
  }
)
```

Use `cds.connect.to()` to connect to services configured in a project's `cds.requires` configuration. Usually such services are remote services, which in turn can be mocked locally. Here's an example:

::: code-group
```json [package.json]
{"cds":{
  "requires": {
    "db": { "kind": "sqlite", "credentials": { "url":"db.sqlite" }},
    "ReviewsService": { "kind": "odata-v4" }
  }
}}
```
:::

```js
const ReviewsService = await cds.connect.to('ReviewsService')
const db = await cds.connect.to('db')
```

Argument `options` allows you to pass options programmatically, and thus create services without configurations, for example:

```js
const db2 = await cds.connect.to ({ kind: 'sqlite', credentials: { url: 'db2.sqlite' } })
```

In essence, `cds.connect.to()` works like this:

```js
let o = { ...cds.requires[name], ...options }
let csn = o.model ? await cds.load(o.model) : cds.model
let Service = require (o.impl) //> a subclass of cds.Service
let srv = new Service (name, csn, o)
return srv.init() ?? srv
```

### cds.connect.to (name, options?) → service

Connects to a required service and returns a _Promise_ resolving to a corresponding _[Service](../cds/cdl#services)_ instance. Subsequent invocations with the same service name all return the same instance.

```js
const srv = await cds.connect.to ('some-service')
const { Books } = srv.entities
await srv.run (SELECT.from(Books))
```

_**Arguments:**_

* `name` is used to look up connect options from [configured services](#cds-env-requires).
* `options` allows you to provide _ad-hoc_ options, overriding [configured ones](#cds-env-requires).

_**Caching:**_ Service instances are cached in [`cds.services`](cds-facade#cds-services), thus subsequent connects with the same service name return the initially connected one. As services constructed by [`cds.serve`](cds-serve#cds-serve) are registered with [`cds.services`](cds-facade#cds-services) as well, a connect finds and returns them as local service connections. If _ad-hoc_ options are provided, the instance is not cached.

### cds.connect.to (options) → service

Ad-hoc connection (→ only for tests):

```js
cds.connect.to ({ kind:'sqlite', credentials:{database:'my.db'} })
```

### cds.connect.to ('\<kind\>:\<url\>') → service

This is a shortcut for ad-hoc connections. For example:

```js
cds.connect.to ('sqlite:my.db')
```

is equivalent to:

```js
cds.connect.to ({kind: 'sqlite', credentials:{database:'my.db'}})
```

## Service Bindings {#service-bindings}

A service binding connects an application with a cloud service. For that, the cloud service's credentials need to be injected into the CDS configuration:

```jsonc
{
  "requires": {
    "db": {
      "kind": "hana",
      "credentials": { /* from service binding */ }
    }
  }
}
```

You specify the credentials to be used for a service by using one of the following:

- Environment variables
- File system
- Auto binding

What to use depends on your environment.
### In Cloud Foundry {#bindings-in-cloud-platforms}

Find general information about how to configure service bindings in Cloud Foundry:

- [Deploying Services using MTA Deployment Descriptor](https://help.sap.com/docs/SAP_HANA_PLATFORM/4505d0bdaf4948449b7f7379d24d0f0d/33548a721e6548688605049792d55295.html)
- [Binding Service Instances to Cloud Foundry Applications](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/0e6850de6e7146c3a17b86736e80ee2e.html)
- [Binding Service Instances to Applications using the Cloud Foundry CLI](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/296cd5945fd84d7d91061b2b2bcacb93.html)

Cloud Foundry uses auto configuration of service credentials through the `VCAP_SERVICES` environment variable.

[Learn more about environment variables on Cloud Foundry and `cf env`.](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html){.learn-more}

#### Through `VCAP_SERVICES` env var {#vcap_services}

When deploying to Cloud Foundry, service bindings are provided in the `VCAP_SERVICES` process environment variable, a JSON-stringified object containing credentials for multiple services. The entries are matched to the entries in `cds.requires` as follows, in order of precedence:

1. The service's `name` is matched against the `name` property of `VCAP_SERVICES` entries
2. The service's `name` is matched against the `binding_name` property
3. The service's `name` is matched against entries in the `tags` array
4. The service's `kind` is matched against entries in the `tags` array
5. The service's `kind` is matched against the `label` property, for example, 'hana'
6. The service's `kind` is matched against the `type` property (the `type` property is only relevant for [servicebinding.io](https://servicebinding.io) bindings)
7. The service's `vcap.name` is matched against the `name` property

All the config properties found in the first matched entry will be copied into the `cds.requires.<srv>.credentials` property.

Here are a few examples:
| CAP config | VCAP_SERVICES |
|---|---|
| `{ "cds": { "requires": { "db": { ... } } } }` | `{ "VCAP_SERVICES": { "hana": [{ "name": "db", ... }] } }` |
| `{ "cds": { "requires": { "db": { "kind": "hana" } } } }` | `{ "VCAP_SERVICES": { "hana": [{ "label": "hana", ... }] } }` |
| `{ "cds": { "requires": { "db": { "vcap": { "name": "myDb" } } } } }` | `{ "VCAP_SERVICES": { "hana": [{ "name": "myDb", ... }] } }` |
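The precedence rules above can be sketched in plain JavaScript. This is an illustrative model of the lookup, not the actual `@sap/cds` implementation:

```javascript
// Illustrative sketch of the matching precedence, not the actual implementation.
// Given the name and config of a required service plus the parsed VCAP_SERVICES
// object, return the first binding entry according to the rules above.
function findBinding (name, { kind, vcap = {} }, vcapServices) {
  const entries = Object.values(vcapServices).flat()
  return entries.find (e => e.name === name)          // 1. name
      ?? entries.find (e => e.binding_name === name)  // 2. binding_name
      ?? entries.find (e => e.tags?.includes(name))   // 3. name in tags
      ?? entries.find (e => e.tags?.includes(kind))   // 4. kind in tags
      ?? entries.find (e => e.label === kind)         // 5. label
      ?? entries.find (e => e.type === kind)          // 6. type
      ?? entries.find (e => e.name === vcap.name)     // 7. vcap.name
}

const VCAP_SERVICES = { hana: [{ label: 'hana', name: 'myDb', credentials: {} }] }
const binding = findBinding ('db', { kind: 'hana', vcap: { name: 'myDb' } }, VCAP_SERVICES)
//> matches via rule 5 (label === 'hana'), before rule 7 gets a chance
```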
### In Kubernetes / Kyma { #in-kubernetes-kyma}

CAP supports [servicebinding.io](https://servicebinding.io/) service bindings and SAP BTP service bindings created by the [SAP BTP Service Operator](https://github.com/SAP/sap-btp-service-operator).

1. Specify a root directory for all service bindings using the `SERVICE_BINDING_ROOT` environment variable:

```yaml
spec:
  containers:
  - name: bookshop-srv
    env:
    # ...
    - name: SERVICE_BINDING_ROOT
      value: /bindings
```

2. Create service bindings

Use the `ServiceBinding` custom resource of the [SAP BTP Service Operator](https://github.com/SAP/sap-btp-service-operator) to create bindings to SAP BTP services:

```yaml
apiVersion: services.cloud.sap.com/v1alpha1
kind: ServiceBinding
metadata:
  name: bookshop-xsuaa-binding
spec:
  serviceInstanceName: bookshop-xsuaa-binding
  externalName: bookshop-xsuaa-binding
  secretName: bookshop-xsuaa-secret
```

Bindings to other services need to follow the [servicebinding.io workload projection specification](https://servicebinding.io/spec/core/1.0.0-rc3/#workload-projection).

3. Mount the secrets of the service bindings underneath the root directory:

```yaml
spec:
  containers:
  - name: bookshop-srv
    # ...
    volumeMounts:
    - name: bookshop-auth
      mountPath: "/bindings/auth"
      readOnly: true
  volumes:
  - name: bookshop-auth
    secret:
      secretName: bookshop-xsuaa-secret
```

The `secretName` property refers to an existing Kubernetes secret, either created manually or by the `ServiceBinding` resource. The name of the subdirectory (`auth` in the example) is recognized as the binding name. CAP services receive their credentials from these bindings [as if they were provided using VCAP_SERVICES](#vcap_services).

#### Through environment variables {#env-service-bindings}

All values of a secret can be added as environment variables to a pod. A prefix can be prepended to each of the environment variables.
To inject the values from the secret in the right place of your CDS configuration, you use the configuration path to the `credentials` object of the service as the prefix:

`cds_requires_<service-name>_credentials_`

Please pay attention to the underscore ("`_`") character at the end of the prefix.

*Example:*

```yaml
spec:
  containers:
  - name: app-srv
    # ...
    envFrom:
    - prefix: cds_requires_db_credentials_
      secretRef:
        name: app-db
```

::: warning
For the _configuration path_, you **must** use the underscore ("`_`") character as delimiter. CAP supports dot ("`.`") as well, but Kubernetes won't recognize variables using dots. Your _service name_ **mustn't** contain underscores.
:::

#### Through the file system {#file-system-service-bindings}

CAP can read configuration from a file system by specifying the root path of the configuration in the `CDS_CONFIG` environment variable.

Set `CDS_CONFIG` to the path that should serve as your configuration root, for example: `/etc/secrets/cds`.

Put the service credentials into a path that is constructed like this:

`/requires/<service-name>/credentials`

Each file will be added to the configuration with its name as the property name and the content as the value. If you have a deep credential structure, you can add further subdirectories or put the content in a file as a JSON array or object.

For Kubernetes, you can create a volume with the content of a secret and mount it on your container.

*Example:*

```yaml
spec:
  volumes:
  - name: app-db-secret-vol
    secret:
      secretName: app-db
  containers:
  - name: app-srv
    # ...
    env:
    - name: CDS_CONFIG
      value: /etc/secrets/cds
    volumeMounts:
    - name: app-db-secret-vol
      mountPath: /etc/secrets/cds/requires/db/credentials
      readOnly: true
```

#### Provide Service Bindings (`VCAP_SERVICES`) {#provide-service-bindings}

If your application runs in a different environment than Cloud Foundry, the `VCAP_SERVICES` env variable is not available. But it may be needed by some libraries, for example the SAP Cloud SDK.
By enabling the CDS feature `features.emulate_vcap_services`, the `VCAP_SERVICES` env variable will be populated from your configured services. For example, you can enable it in the _package.json_ file for your production profile:

```json
{
  "cds": {
    "features": {
      "[production]": { "emulate_vcap_services": true }
    }
  }
}
```

::: warning This is a backward compatibility feature.
It might be removed in a future [major CAP version](../releases/schedule#yearly-major-releases).
:::

Each service that has credentials and a `vcap.label` property is put into the `VCAP_SERVICES` env variable. All properties from the service's `vcap` object will be copied into the service binding.

The `vcap.label` property is preconfigured for some services used by CAP. For example, for the XSUAA service you only need to provide credentials and the service kind:

```json
{
  "requires": {
    "auth": {
      "kind": "xsuaa",
      "credentials": {
        "clientid": "cpapp",
        "clientsecret": "dlfed4XYZ"
      }
    }
  }
}
```

The `VCAP_SERVICES` variable is generated like this:

```json
{
  "xsuaa": [
    {
      "label": "xsuaa",
      "tags": [ "auth" ],
      "credentials": {
        "clientid": "cpapp",
        "clientsecret": "dlfed4XYZ"
      }
    }
  ]
}
```

The generated value can be displayed using the command:

```sh
cds env get VCAP_SERVICES --process-env
```

A list of all services with a preconfigured `vcap.label` property can be displayed with this command:

```sh
cds env | grep vcap.label
```

You can include your own services by configuring `vcap.label` properties in your CAP configuration. For example, in the _package.json_ file:

```json
{
  "cds": {
    "requires": {
      "myservice": {
        "vcap": { "label": "myservice-label" }
      }
    }
  }
}
```

The credentials can be provided in any supported way. For example, as env variables:

```sh
cds_requires_myservice_credentials_user=test-user
cds_requires_myservice_credentials_password=test-password
```

The resulting `VCAP_SERVICES` env variable looks like this:

```json
{
  "myservice-label": [
    {
      "label": "myservice-label",
      "credentials": {
        "user": "test-user",
        "password": "test-password"
      }
    }
  ]
}
```

## Hybrid Testing

In addition to the [static configuration of required services](#service-bindings), additional information, such as URLs, secrets, or passwords, is required to actually send requests to remote endpoints.
These are dynamically filled into the `credentials` property from process environments, as explained in the following.

### cds.requires.\<srv\>.credentials

All service binding information goes into this property. It's filled from the process environment when starting server processes, managed by deployment environments. Service bindings provide the details about how to reach a required service at runtime, that is, providing requisite credentials, most prominently the target service's `url`.

For development purposes, you can pass them on the command line or add them to a _.env_ or _default-env.json_ file as follows:

```properties
# .env file
cds.requires.remote-service.credentials = { "url":"http://...", ... }
```

::: warning ❗ Never add secrets or passwords to _package.json_ or _.cdsrc.json_!
General rule of thumb: `.credentials` are always filled (and overridden) from the process environment on process start.
:::

One prominent exception to this rule, which you would frequently add to your _package.json_, is the definition of a database file for a persistent SQLite database during development:

```json
"cds": {
  "requires": {
    "db": {
      "kind": "sql",
      "[development]": {
        "kind": "sqlite",
        "credentials": { "url": "db/bookshop.sqlite" }
      }
    }
  }
}
```

### Basic Mechanism {#bindings-via-cds-env}

The CAP Node.js runtime expects to find the service bindings in `cds.env.requires`.

1. Configured required services constitute endpoints for service bindings.

   ```json
   "cds": {
     "requires": {
       "ReviewsService": {...}
     }
   }
   ```

2. These are made available to the runtime via `cds.env.requires`.

   ```js
   const { ReviewsService } = cds.env.requires
   ```

3. Service bindings essentially fill in `credentials` to these entries.

   ```js
   const { ReviewsService } = cds.env.requires
   ReviewsService.credentials = { url: "http://localhost:4005/reviews" }
   ```

The latter is appropriate in test suites. In productive code, you never provide credentials in a hard-coded way.
Instead, use one of the options presented in the following sections.

### Through _.cdsrc-private.json_ File for Local Testing

[Learn more about hybrid testing using _.cdsrc-private.json_.](../advanced/hybrid-testing#bind-to-cloud-services)

```json
{
  "requires": {
    "ReviewsService": {
      "credentials": { "url": "http://localhost:4005/reviews" }
    },
    "db": {
      "credentials": { "url": "db.sqlite" }
    }
  }
}
```

::: warning
Make sure that the _.cdsrc-private.json_ file is not checked into your project.
:::

### Through `process.env` Variables {#bindings-via-process-env}

You could pass credentials as process environment variables, for example in ad-hoc tests from the command line:

```sh
export cds_requires_ReviewsService_credentials_url=http://localhost:4005/reviews
export cds_requires_db_credentials_database=sqlite.db
cds watch fiori
```

#### In _.env_ Files for Local Testing

Add environment variables to a local _.env_ file for repeated local tests:

```properties
cds.requires.ReviewsService.credentials = { "url": "http://localhost:4005/reviews" }
cds.requires.db.credentials.database = sqlite.db
```

> Never check in or deploy such _.env_ files!
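To illustrate why such underscore-delimited variable names work, here's a sketch of how they map onto nested configuration. This is illustrative only; the actual merging, including JSON value parsing, is done by `cds.env`:

```javascript
// Illustrative sketch: build a nested config object from cds_* env variables.
// This also mirrors why service names must not contain underscores: "_" is
// the path delimiter.
function envToConfig (env) {
  const config = {}
  for (const [key, value] of Object.entries(env)) {
    if (!key.startsWith('cds_')) continue
    const path = key.slice('cds_'.length).split('_')
    let node = config
    for (const part of path.slice(0, -1)) node = node[part] ??= {}
    node[path[path.length - 1]] = value
  }
  return config
}

const config = envToConfig ({
  cds_requires_db_credentials_database: 'sqlite.db'
})
//> { requires: { db: { credentials: { database: 'sqlite.db' } } } }
```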
## Importing Service APIs

## Mocking Required Services

# Transaction Management

Transaction management in CAP deals with (ACID) database transactions, principal / context propagation on service-to-service calls, and tenant isolation.

::: tip
**In Essence...** As an application developer, **you don't have to care** about transactions, principal propagation, or tenant isolation at all. The CAP runtime manages that for you automatically. Only in rare cases do you need to go beyond that level and use one or more of the options documented hereinafter.
:::
## Automatic Transactions

Whenever an instance of `cds.Service` processes requests, the core framework automatically cares for starting and committing or rolling back database transactions, connection pooling, principal propagation, and tenant isolation. For example, a call like this:

```js
await db.read('Books')
```

... will cause this to take place on SQL level:

```sql
-- ACQUIRE connection from pool
CONNECT; -- if no pooled one
BEGIN;
SELECT * from Books;
COMMIT;
-- RELEASE connection to pool
```
::: tip
**Service-managed Transactions** — whenever a service operation, like `db.read()` above, is executed, the core framework ensures it will either join an existing transaction or create a new root transaction. Within event handlers, your service is always in a transaction.
:::

## Nested Transactions

Services commonly process requests in event handlers, which in turn send requests to other services, like in this simplistic implementation of a bank transfer operation:

```js
const log = await cds.connect.to('log')
const db = await cds.connect.to('db')

BankingService.on ('transfer', async req => {
  let { from, to, amount } = req.data
  await db.update('BankAccount',from).set('balance -=', amount)
  await db.update('BankAccount',to).set('balance +=', amount)
  await log.insert ({ kind:'Transfer', from, to, amount })
})
```

Again, all transaction handling is done by the CAP core framework, in this case by orchestrating three transactions:

1. A *root* transaction for `BankingService.transfer`
2. A *nested* transaction for the calls to the `db` service
3. A *nested* transaction for the calls to the `log` service

Nested transactions are automatically committed when their root transaction is committed upon successful processing of the request; or rolled back if not.
::: warning
**No Distributed Transactions** — note that in the previous example, the two nested transactions are *synchronized* with respect to a final commit / rollback, but *not as a distributed atomic transaction*. This means it can still happen that the commit of one nested transaction succeeds while the other fails.
:::

## Manual Transactions

Use `cds.tx()` to start and commit transactions manually, if you need to ensure two or more queries run in a single transaction. The easiest way to achieve this is shown below:

```js
cds.tx (async ()=>{
  const [ Emily ] = await db.insert (Authors, {name:'Emily Brontë'})
  await db.insert (Books, { title: 'Wuthering Heights', author: Emily })
})
```

[Learn more about `cds.tx()`](#srv-tx){.learn-more}

This usage variant, which accepts a function with nested operations ...

1. creates a new root transaction
2. executes all nested operations in this transaction
3. automatically finalizes the transaction with commit or rollback
::: tip
**Only in non-managed environments** — as said above: you don't need to care for that if you are in a managed environment, that is, when implementing an event handler. In that case, the core service runtime has already created a transaction for you.
:::

::: warning _❗ Warning_
When using SQLite as your database, parallel transactions can lead to deadlocks when two transactions wait for each other: parallel transactions aren't allowed, and a new transaction isn't started before the previous one has finished.
:::

## Background Jobs

Background jobs are tasks to be executed *outside of the current transaction*, possibly also with other users, and maybe repeatedly. Use `cds.spawn()` to do so:

```js
// run in current tenant context but with privileged user
// and with a new database transaction each...
cds.spawn ({ user: cds.User.privileged, every: 1000 /* ms */ }, async ()=>{
  const mails = await SELECT.from('Outbox')
  await MailServer.send(mails)
  await DELETE.from('Outbox').where (`ID in ${mails.map(m => m.ID)}`)
})
```

[Learn more about `cds.spawn()`](#cds-spawn){.learn-more}

## cds. context {#event-contexts .property}

Automatic transaction management, as offered by CAP, needs access to properties of the invocation context — most prominently, the current **user** and **tenant**, or the inbound HTTP request object.

### Accessing Context

Access that information anywhere in your code through `cds.context` like this:

```js
// Accessing current user
const { user } = cds.context
if (user.is('admin')) ...
```

```js
// Accessing HTTP req, res objects
const { req, res } = cds.context.http
if (!req.is('application/json')) res.send(415)
```

[Learn more about available `cds.context` properties](events#cds-context){.learn-more}

### Setting Contexts

Setting `cds.context` usually happens in inbound authentication middlewares or in inbound protocol adapters.
You can also set it in your code, for example, you might implement a simplistic custom authentication middleware like so:

```js
app.use ((req, res, next) => {
  const { 'x-tenant':tenant, 'x-user-id':user } = req.headers
  cds.context = { tenant, user } // Setting cds.context
  next()
})
```

### Continuation-local Variable

`cds.context` is implemented as a so-called *continuation-local* variable. As JavaScript is single-threaded, we cannot capture request-level invocation contexts (such as the current user, tenant, or locale) in what other languages like Java call thread-local variables. Luckily, starting with Node v12, means for so-called *"Continuation-Local Storage (CLS)"* were introduced: basically, the equivalent of thread-local variables in the asynchronous, continuation-based execution model of Node.js.

### Context Propagation

When creating new root transactions in calls to `cds.tx()`, all properties not specified in the `context` argument are inherited from `cds.context`, if set in the current continuation. In effect, this means the new transaction demarcates a new ACID boundary, while it inherits the event context properties unless overridden in the `context` argument to `cds.tx()`. The following applies:

```js
cds.context = { tenant:'t1', user:'u1' }
cds.context.user.id === 'u1'         //> true
let tx = cds.tx({ user:'u2' })
tx.context !== cds.context           //> true
tx.context.tenant === 't1'           //> true
tx.context.user.id === 'u2'          //> true
tx.context.user !== cds.context.user //> true
cds.context.user.id === 'u1'         //> true
```

## cds/srv. tx() {#srv-tx .method}

```tsx
function srv.tx ( ctx?, fn? : tx => {...} ) => Promise
function srv.tx ( ctx? ) => tx
var ctx : { tenant, user, locale }
```

Use this method to run the given function `fn` and all nested operations in a new *root* transaction.
For example:

```js
await srv.tx (async tx => {
  let exists = await tx.run ( SELECT(1).from(Books,201).forUpdate() )
  if (exists) await tx.update (Books,201).with(data)
  else await tx.create (Books,{ ID:201,...data })
})
```

::: details Transaction objects `tx`
The `tx` object created by `srv.tx()` and passed to the function `fn` is a derivate of the service instance, constructed like this:

```js
tx = {
  __proto__: srv,
  context: { tenant, user, locale }, // defaults from cds.context
  model: cds.model, // could be a tenant-extended variant instead
  commit(){...},
  rollback(){...},
}
```
:::

The new root transaction is also active for all nested operations run from `fn`, including other services, most importantly database services. In particular, the following would work as well as expected (this time using `cds.tx` as a shortcut for `cds.db.tx`):

```js
await cds.tx (async () => {
  let exists = await SELECT(1).from(Books,201).forUpdate()
  if (exists) await UPDATE (Books,201).with(data)
  else await INSERT.into (Books,{ ID:201,...data })
})
```

**Optional argument `ctx`** allows you to override values for nested contexts, which are otherwise inherited from `cds.context`, for example:

```js
await cds.tx ({ tenant:t0, user: privileged }, async ()=>{
  // following + nested will now run with specified tenant and user...
  let exists = await SELECT(1).from(Books,201).forUpdate()
  ...
}) ``` **If argument `fn` is omitted**, the constructed `tx` would be returned and can be used to manage the transaction in a fully manual fashion: ```js const tx = srv.tx() // [!code focus] try { // [!code focus] let exists = await tx.run ( SELECT(1).from(Books,201).forUpdate() ) if (exists) await tx.update (Books,201).with(data) else await tx.create (Books,{ ID:201,...data }) await tx.commit() // [!code focus] } catch(e) { await tx.rollback(e) // will rethrow e // [!code focus] } // [!code focus] ``` ::: warning Note though, that with this usage we've **not** started a new async context, and all nested calls to other services, like db, will **not** happen within the confines of the constructed `tx`. ::: ### srv.tx (context?, fn?) → tx\ Use `srv.tx()` to start new app-controlled transactions manually, most commonly for [database services](databases) as in this example: ```js let db = await cds.connect.to('db') let tx = db.tx() try { await tx.run (SELECT.from(Foo)) await tx.create (Foo, {...}) await tx.read (Foo) await tx.commit() } catch(e) { await tx.rollback(e) } ``` **Arguments:** * `context` – an optional context object → [see below](#srv-tx-ctx) * `fn` – an optional function to run → [see below](#srv-tx-fn) **Returns:** a transaction object, which is constructed as a derivate of `srv` like that: ```js tx = Object.create (srv, Object.getOwnPropertyDescriptors({ commit(){...}, rollback(){...}, })) ``` In effect, `tx` objects ... * are concrete context-specific — that is tenant-specific — incarnations of `srv`es * support all the [Service API](core-services) methods like `run`, `create` and `read` * support methods `tx.commit` and `tx.rollback` as documented below. **Important:** The caller of `srv.tx()` is responsible to `commit` or `rollback` the transaction, otherwise the transaction would never be finalized and respective physical driver connections never be released / returned to pools. ### srv.tx ({ tenant?, user?, ... 
}) → tx\ {#srv-tx-ctx} Optionally specify an object with [event context](events#cds-event-context) properties as the *first* argument to execute subsequent operations with different tenant or user context: ```js let tx = db.tx ({ tenant:'t1' user:'u2' }) ``` The argument is an object with these properties: * `user` — a unique user ID string or an [instance of `cds.User`](authentication#cds-user) * `tenant` — a unique string identifying the tenant * `locale` — a locale string in format `_` The implementation constructs a new instance of [cds.EventContext](events#cds-event-context) from the given properties, which is assigned to [tx.context](#tx-context) of the new transaction. [Learn more in section **Continuations & Contexts**.](#event-contexts){.learn-more} ### srv.tx ((tx)=>{...}) → tx\ {#srv-tx-fn} Optionally specify a function as the *last* argument to have `commit` and `rollback` called automatically. For example, the following snippets are equivalent: ```js await db.tx (async tx => { await tx.run (SELECT.from(Foo)) await tx.create (Foo, {...}) await tx.read (Foo) }) ``` ```js let tx = db.tx() try { await tx.run (SELECT.from(Foo)) await tx.create (Foo, {...}) await tx.read (Foo) await tx.commit() } catch(e) { await tx.rollback(e) } ``` In addition to creating a new tx for the current service, ### srv.tx (ctx) → tx\ {#srv-tx-context} If the argument is an instance of [cds.EventContext](events#cds-event-context) the constructed transaction will use this context as it's `tx.context`. If the specified context was constructed for a transaction started with `cds.tx()`, the new transaction will be constructed as a nested transaction. If not, the new transaction will be constructed as a root transaction. 
```js
cds.context = { tenant:'t1', user:'u2' }
const tx = cds.tx (cds.context) //> tx is a new root transaction
```

```js
const tx = cds.context = cds.tx ({ tenant:'t1', user:'u2' })
const tx1 = cds.tx (cds.context) //> tx1 is a new nested transaction to tx
```

### _↳_ tx.context → [cds.EventContext](events#cds-event-context) {#tx-context}

Each new transaction created by [cds.tx()](#srv-tx) gets a new instance of [cds.EventContext](events#cds-event-context) constructed and assigned to this property. If there is a `cds.context` set in the current continuation, the newly constructed context object inherits properties from that.

[Learn more in section **Continuations & Contexts**.](#event-contexts){.learn-more}

### _↳_ tx.commit (res?) ⇢ res {#commit}

In case of database services, this sends a `COMMIT` command to the database and releases the physical connection, that is, returns it to the connection pool. In addition, the commit is propagated to all nested transactions. The methods are [bound](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) to the `tx` instance, and the passed-in argument is returned, or rethrown in case of `rollback`, which allows them to be used as follows:

```js
let tx = cds.tx()
tx.run(...) .then (tx.commit, tx.rollback)
```

### _↳_ tx.rollback (err?) ⇢ err {#rollback}

In case of database services, this sends a `ROLLBACK` command to the database and releases the physical connection. In addition, the rollback is propagated to all nested transactions, and if an `err` object is passed, it is rethrown.

[See documentation for `commit` for common details.](#commit){.learn-more}
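The pass-through semantics of `commit` and `rollback` can be modeled in plain JavaScript. The following is a minimal sketch only; `makeTx` and the stubbed bodies are illustrative, not the actual implementation:

```js
// Minimal sketch of commit/rollback pass-through semantics.
// makeTx and the stubbed bodies are illustrative only.
function makeTx () {
  const tx = {
    commit (res) {
      // ...send COMMIT, release connection, propagate to nested txs...
      return res          // the passed-in argument is returned
    },
    rollback (err) {
      // ...send ROLLBACK, release connection, propagate to nested txs...
      if (err) throw err  // a passed-in error is rethrown
    },
  }
  tx.commit = tx.commit.bind(tx)     // bound, so they can be used as
  tx.rollback = tx.rollback.bind(tx) // stand-alone .then() callbacks
  return tx
}
```

This is why `tx.run(...).then(tx.commit, tx.rollback)` works: a resolved result flows through `commit` unchanged, while a rejection is rethrown by `rollback`.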
::: warning
**Note:** `commit` and `rollback` both release the physical connection. This means subsequent attempts to send queries via this `tx` will fail.
:::

## cds.spawn() {#cds-spawn .method}

Runs the given function as a detached continuation in a specified event context (not inheriting from the current one). Options `every` or `after` allow running the function repeatedly or deferred. For example:

```js
cds.spawn ({ tenant:'t0', every: 1000 /* ms */ }, async (tx) => {
  const mails = await SELECT.from('Outbox')
  await MailServer.send(mails)
  await DELETE.from('Outbox').where (`ID in ${mails.map(m => m.ID)}`)
})
```

::: tip
Even though the callback function is executed as a background job, all asynchronous operations inside the callback function must be awaited. Otherwise, transaction handling does not work properly.
:::

**Arguments:**

* `options` is the same as the `ctx` argument for `cds.tx()`, plus:
  * `every: ` number of milliseconds to use in `setInterval(fn,n)`
  * `after: ` number of milliseconds to use in `setTimeout(fn,n)`
  * if neither is given, `setImmediate(fn)` is used to run the job
* `fn` is a function representing the background task

**Returns:**

- An event emitter that lets you register handlers on `succeeded`, `failed`, and `done` events.

```js
let job = cds.spawn(...)
job.on('succeeded', ()=>console.log('succeeded'))
```

- In addition, property `job.timer` returns the response of `setTimeout` in case option `after` was used, or of `setInterval` in case of option `every`. For example, this allows stopping a regularly running job like that:

```js
let job = cds.spawn({ every:111 }, ...)
await sleep (11111)
clearInterval (job.timer) // stops the background job loop
```

The implementation guarantees decoupled execution from request-handling threads/continuations, by...

- constructing a new root transaction `tx` per run using `cds.tx()`
- setting that as the background run's continuation's `cds.context`
- invoking `fn`, passing `tx` as argument to it.
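The choice between `every`, `after`, and immediate execution can be sketched as follows. Here, `schedule` is a hypothetical helper, not the actual implementation; transaction and context handling are omitted:

```js
// Hypothetical sketch of cds.spawn's scheduling choice only.
// Transaction and context handling are omitted.
function schedule (options, fn) {
  const job = {}                     // stands in for the returned event emitter
  if (options.every) job.timer = setInterval(fn, options.every)
  else if (options.after) job.timer = setTimeout(fn, options.after)
  else job.timer = setImmediate(fn)  // run as soon as possible, but detached
  return job
}
```

A job started with `every` can then be stopped via `clearInterval(job.timer)`, matching the documented behavior of `job.timer`.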
Think of it as if each run happens in its own thread with its own context, with automatic transaction management.

By default, the nested context inherits all values except `timestamp` from `cds.context`, especially user and tenant. Use the argument `options` if you want to override values, for example to run the background thread with a different user or tenant than the one you called `cds.spawn()` from.

## DEPRECATED APIs

#### srv.tx (req) → tx {#srv-tx-req}

Prior to release 5, you always had to write application code like that to ensure context propagation and correctly managed transactions:

```js
this.on('READ','Books', req => {
  const tx = cds.tx(req)
  return tx.read ('Books')
})
```

This still works but is not required **nor recommended** anymore.

# Minimalistic Logging Facade

## cds.log (id?, options?) { #cds-log}

Returns a logger identified by the given id.

```js
const LOG = cds.log('sql')
LOG.info ('whatever', you, 'like...')
```

#### *Arguments*

- `id?` — the id for which a logger is requested — default: `'cds'`
- `options?` — an options object with:
  - `level?` — the [log level](#log-levels) specified as string or number — default: `'info'`
  - `label?` — the [log label](#logger-label) to add to each log output — default: `id`
- alternatively, specify a string for `options` as a shorthand for `{level}`

```js
// all following are equivalent...
const LOG = cds.log('foo', 'warn')
//> shorthand for:
const LOG = cds.log('foo', { level: 'warn' })
// including case-insensitivity...
const LOG = cds.log('foo', 'WARN')
//> shorthand for:
const LOG = cds.log('foo', { level: 'WARN' })
```

### *Logger `id` — cached & shared loggers* {#logger-id}

The loggers constructed by `cds.log()` are cached internally, and the same instances are returned on subsequent invocations of `cds.log()` with the same `id`. This allows using and sharing the same logger in different modules.
```js
const LOG1 = cds.log('foo')
const LOG2 = cds.log('foo')
console.log (LOG1 === LOG2) //> true
```

### *Logger `label` — used to prefix log output* {#logger-label}

By default, each log output is prefixed with the logger's label in brackets, for example, as in `[cds] - server listening `. Sometimes you may want to use different ids and labels. Use option `label` to do so, as in this example:

```js
const LOG = cds.log('foo',{label:'bar'})
LOG.info("it's a foo") //> [bar] - it's a foo
```

### _Logger usage → much like `console`_ { #logger-api }

Loggers returned by `cds.log()` look and behave very much like [JavaScript's standard `console` object](https://nodejs.org/api/console.html), providing a log method for each [log level](#log-levels):

```js
cds.log() → {
  trace(...), _trace,
  debug(...), _debug,
  info(...),  _info,
  log(...),   // alias for info()
  warn(...),  _warn,
  error(...), _error,
}
```

In addition, you can check which levels are active through the corresponding underscored property, for example, `LOG._debug` is true if debug is enabled.

### *Recommendations*

1. **Leave formatting to the log functions** — for example, don't expensively construct debug messages, which aren't logged at all if debug is not switched on:

   ```js
   // DON'T:
   const { format } = require('util')
   LOG.debug (`Expected ${arg} to be a string, but got: ${format(value)}`)
   // DO:
   LOG.debug ('Expected', arg, 'to be a string, but got', value)
   ```

2. **Check levels explicitly** — to further minimize overhead, you can check whether a log level is switched on using the boolean `Logger._` properties like so:

   ```js
   const LOG = cds.log('sql')
   LOG._info && LOG.info ('whatever', you, 'like...')
   ```

## cds.log.format { #cds-log-format}

### _Setting Formats for New Loggers_

You can provide a custom log formatter function by setting `cds.log.format` programmatically as shown below, for example in your custom `server.js`.
```js
// the current default:
cds.log.format = (id, level, ...args) => [ `[${id}]`, '-', ...args ]
```

```js
// a verbose format:
const _levels = [ 'SILENT', 'ERROR', 'WARN', 'INFO', 'DEBUG', 'TRACE' ]
cds.log.format = (id, level, ...args) => [
  '[', (new Date).toISOString(),
  '|', _levels[level].padEnd(5),
  '|', cds.context?.tenant || '-',
  '|', cds.context?.id || '-',
  '|', id, '] -', ...args
]
```

Formatter functions are expected to return an array of arguments, which are passed to the logger functions — same as the arguments for `console.log()`.

### _Setting Formats for Existing Loggers_

You can also change the format used by newly or formerly constructed loggers using the `.setFormat()` function:

```js
const _levels = [ 'SILENT', 'ERROR', 'WARN', 'INFO', 'DEBUG', 'TRACE' ]
const LOG = cds.log('foo') .setFormat ((id, level, ...args) => [
  '[', (new Date).toISOString(),
  '|', _levels[level].padEnd(5),
  '|', cds.context?.tenant || '-',
  '|', cds.context?.id || '-',
  '|', id, '] -', ...args
])
```

## cds.log.levels { #log-levels }

Constants of supported log levels:

```js
cds.log.levels = {
  SILENT: 0, // all log output switched off
  ERROR: 1,  // logs errors only
  WARN: 2,   // logs errors and warnings only
  INFO: 3,   // logs errors, warnings and general infos
  DEBUG: 4,  // logs errors, warnings, info, and debug
             // (and trace when using default logger implementation)
  TRACE: 5,  // most detailed log level
  SILLY: 5,  // alias for TRACE
  VERBOSE: 5 // alias for TRACE
}
```

You can use these constants when constructing loggers, for example:

```js
const LOG = cds.log('foo', cds.log.levels.WARN)
```

### *Configuring Log Levels*

Configure initial log levels per module through `cds.log.levels`, for example like that in your `package.json`:

```json
{ "cds": {
  "log": { "levels": { "sql": "debug", "cds": "info" } }
}}
```

[Learn more about `cds.env`.](cds-env){.learn-more}
[See pre-defined module names below.](#cds-log-modules){.learn-more}

### *Programmatically Set Log Levels*

You can specify a default log level to use when constructing a logger as shown above. When called subsequently with a *different* log level, the cached and shared logger's log level is changed dynamically. For example:

```js
// some-module.js
const LOG = cds.log('foo') // using default log level 'info'
```

```js
// some-other-module.js
const LOG = cds.log('foo') // shares the same logger as above
```

```js
// some-controller-module.js
cds.log('foo','debug') // switches the 'foo' logger to 'debug' level
```

### *Log Levels as Used by the CAP Node.js Runtime*

The CAP Node.js runtime uses the following guidelines with regard to which log level to use in which situation:

- `error`: Something went horribly wrong and it's unclear what to do (that is, an unexpected error).
- `warn`: Something off the happy trail happened, but it can be handled (that is, an expected error).
- `info`: Brief information about what is currently happening.
- `debug`: Detailed information about what is currently happening.
- `trace`/`silly`/`verbose` (not used by the CAP Node.js runtime): Exhaustive information about what is currently happening.

## cds.log.Logger

Constructs a new logger with the method signature of `{ trace, debug, log, info, warn, error }` (cf. [`console`](https://nodejs.org/api/console.html)). The default implementation maps each method to the equivalent methods of `console`. You can assign different implementations by exchanging the factory with your own, for example, in order to integrate advanced logging frameworks such as [winston](#winston).
#### *Arguments*

- `label` — the log label to use with each log output, if applicable
- `level` — the log level to enable → *0=off, 1=error, 2=warn, 3=info, 4=debug, 5=trace*

### *Using `winston` Loggers* {#winston}

**Prerequisites:** You need to add [winston](https://www.npmjs.com/package/winston) to your project:

```sh
npm add winston
```

Being designed as a simple log facade, `cds.log` can be easily integrated with advanced logging frameworks such as [`winston`](https://www.npmjs.com/package/winston). For example, use the built-in convenience method `cds.log.winstonLogger()` in your project's `server.js` like that:

```js
cds.log.Logger = cds.log.winstonLogger()
```

You can specify winston custom options to that method [as documented for `winston.createLogger()`](https://github.com/winstonjs/winston#creating-your-own-logger), for example like that:

```js
cds.log.Logger = cds.log.winstonLogger({
  format: winston.format.simple(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'errors.log', level: 'error' })
  ],
})
```

### _Custom Loggers_

Custom loggers basically have to return an object fulfilling the `console`-like [`cds.log` loggers API](#logger-api) as in this example:

```js
const winston = require("winston")
const util = require('util')
const cds = require('@sap/cds')

cds.log.Logger = (label, level) => {
  // construct winston logger
  const logger = winston.createLogger({
    levels: cds.log.levels, // use cds.log's levels
    level: Object.keys(cds.log.levels)[level],
    transports: [new winston.transports.Console()],
  })
  // winston's log methods expect single message strings
  const _fmt = (args) => util.formatWithOptions(
    {colors:false}, `[${label}] -`, ...args
  )
  // map to cds.log's API
  return Object.assign (logger, {
    trace: (...args) => logger.TRACE (_fmt(args)),
    debug: (...args) => logger.DEBUG (_fmt(args)),
    log:   (...args) => logger.INFO (_fmt(args)),
    info:  (...args) => logger.INFO (_fmt(args)),
    warn:  (...args) => logger.WARN (_fmt(args)),
    error: (...args) => logger.ERROR (_fmt(args)),
  })
}
```

Actually, the above is essentially the implementation of `cds.log.winstonLogger()`.

## `DEBUG` env variable

Use env variable `DEBUG` to quickly switch on debug output from the command line like that:

```sh
DEBUG=app,sql cds watch
DEBUG=all cds watch
```

Values can be:

- a comma-separated list of [logger ids](#logger-id), or
- the value `all` to switch on all debug output.

### *Matching multiple values of `DEBUG`*

When obtaining loggers with `cds.log()`, you can specify alternate ids that will all be matched against the entries of the `DEBUG` env variable; for example:

```js
const LOG = cds.log('db|sql')
```

This logger will be debug-enabled by both `DEBUG=db` as well as `DEBUG=sql ...`.

**Note:** The alternative ids specified after `|` have no impact on the unique logger ids. That is, the logger above will have the id `'db'`, while `'sql'` is only used for matching against the `DEBUG` env variable.

## Configuration

Configuration for `cds.log()` can be specified through `cds.env.log`, for example like that in your `package.json`:

```json
{ "cds": {
  "log": { "levels": { "sql": "debug", "cds": "info" } }
}}
```

[Learn more about `cds.env`.](cds-env){.learn-more}

The following configuration options can be applied:

- `levels` — configures log levels for logged modules. The keys refer to the [loggers' `id`](#logger-id), the values are lower-case names of [log levels](#log-levels).
- `user` — Specify `true` to log the user's ID (`req.user.id`) as `remote_user` (Kibana formatter only). Consider the data privacy implications! Default: `false`.
- `sanitize_values` — Specify `false` to deactivate the default behavior of sanitizing payload data in debug logs in production. Default: `true`.

## Common IDs { #cds-log-modules }

The runtime uses the same logger facade, that is `cds.log()`. For each component, it requires a separate logger, so projects can set different log levels for different components/layers.
The following table lists the ids used to set the log levels:

| Component                             | Logger ID(s) |
|---------------------------------------|--------------|
| Server and common output              | `cds`        |
| CLI output                            | `cli`        |
| CDS build output                      | `build`      |
| [Application Service](./app-services) | `app`        |
| [Databases](databases)                | `db\|sql`    |
| [Messaging Service](messaging)        | `messaging`  |
| [Remote Service](remote-services)     | `remote`     |
| AuditLog Service                      | `audit-log`  |
| OData Protocol Adapter                | `odata`      |
| REST Protocol Adapter                 | `rest`       |
| GraphQL Protocol Adapter              | `graphql`    |
| [Authentication](./authentication)    | `auth`       |
| Database Deployment                   | `deploy`     |
| Multitenancy and Extensibility        | `mtx`        |

## Logging in Development

During development, we want concise, human-readable output in the console, with clickable stack traces in case of errors. You should not be overloaded with information that is additionally obfuscated by bad rendering. Hence, [console.log()](https://nodejs.org/api/console.html#console_console_log_data_args), which makes use of [util.format()](https://nodejs.org/api/util.html#util_util_format_format_args) on the raw arguments out of the box, is a good choice.

The *plain log formatter*, which is the default in non-production environments, prepends the list of arguments with the label prefix. The following screenshot shows the log output for the previous warning and rejection with the plain log formatter.

![The screenshot is explained in the accompanying text.](./assets/plain-formatter-output.png)

## Logging in Production

SAP BTP offers two services, [SAP Cloud Logging](https://help.sap.com/docs/cloud-logging) and [SAP Application Logging Service](https://help.sap.com/docs/application-logging-service), to which bound Cloud Foundry applications can stream logs.
In both services, operators can access and analyze observability data, as described in [Access and Analyze Observability Data](https://help.sap.com/docs/cloud-logging/cloud-logging/access-and-analyze-observability-data) for SAP Cloud Logging and [Access and Analyze Application Logs, Container Metrics and Custom Metrics](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/access-and-analyze-application-logs-container-metrics-and-custom-metrics) for SAP Application Logging Service.

To get connected with either of those services, the application needs to be bound to the respective service instance(s), as described for [SAP Cloud Logging](https://help.sap.com/docs/cloud-logging/cloud-logging/ingest-via-cloud-foundry-runtime?version=Cloud) and [SAP Application Logging Service](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/produce-logs-container-metrics-and-custom-metrics). Additionally, the log output needs to be formatted in a way that enables the respective dashboard technology to optimally support the user, for example, filtering for logs of specific levels, modules, status, etc.

The *JSON log formatter* constructs a loggable object from the passed arguments as well as [cds.context](events#cds-event-context) and the headers of the incoming request (if available). The JSON log formatter is the default formatter in production.

::: tip
Since `@sap/cds 7.5`, running `cds add kibana-logging` or setting `cds.features.kibana_formatter: true` are no longer needed. If you want to opt out of the JSON formatter in production, set `cds.log.format: plain`.
:::

Further, there are two formatting aspects that are activated automatically, if appropriate, and add the following information to the loggable object:

1. Running on Cloud Foundry: `tenant_subdomain`, `CF_INSTANCE_IP` and information from `VCAP_APPLICATION`
1. Bound to an instance of the [SAP Application Logging Service](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/sap-application-logging-service-for-cloud-foundry-environment) or [SAP Cloud Logging](https://help.sap.com/docs/cloud-logging/sap-cloud-logging/what-is-sap-cloud-logging): `categories` and *custom fields* as described in [Custom Fields](#custom-fields)

The following screenshot shows the log output for the rejection in the previous example with the JSON log formatter, including the two aspects.

![The screenshot is explained in the accompanying text.](assets/json-formatter-output.png)

::: warning
The SAP Application Logging Service offers [different plans with different quotas](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/service-plans-and-quotas). Please make sure the plan you use is sufficient, that is, no logs are being dropped so that the information is available in Kibana. As soon as logs are dropped, you cannot reliably assess what is going on in your app.
:::

### Header Masking

Some header values should not appear in logs, for example when pertaining to authorization. Configuration option `cds.log.mask_headers` allows specifying a list of matchers for which the header value shall be masked. Masked values are printed as `***`. The default value is `["/authorization/i", "/cookie/i", "/cert/i", "/ssl/i"]`.

::: warning
In case your application shares any sensitive data (for example, secrets) via headers, please ensure that you adjust the configuration as necessary.
:::

### Custom Fields { #custom-fields }

Information that is not included in the [list of supported fields](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/supported-fields) of the SAP Application Logging Service can be shown as additional information. This information needs to be provided as custom fields.
By default, the JSON formatter uses the following custom fields configuration for SAP Application Logging Service:

```jsonc
{ "log": {
  "als_custom_fields": {
    // field name : index
    "query": 0,   //> sql
    "target": 1,
    "details": 2, //> generic validations
    "reason": 3   //> errors
  }
}}
```

Up to 20 such custom fields can be provided using this mechanism. The advantage of this approach is that the additional information can be indexed. Besides being a manual task, it has the drawback that the indexes should be kept stable.

::: details Background

The SAP Application Logging Service requires the following formatting of custom field content inside the JSON object that is logged:

```js
{ ..., '#cf': { strings: [ { k: '<key>', v: '<value>', i: <index> }, ... ] } }
```

That is, a generic collection of key-value pairs that are treated as opaque strings. The information is then rendered as follows:

```txt
custom.string.key0: <key>
custom.string.value0: <value>
```

Hence, in order to analyze, for example, the SQL statements leading to errors, you'd need to look at field `custom.string.value0` (given the default of `cds.env.log.als_custom_fields`). In a more practical example, the log would look something like this:

```log
msg: SQL Error: Unknown column "IDONTEXIST" in table "DUMMY"
...
custom.string.key0: query
custom.string.value0: SELECT IDONTEXIST FROM DUMMY
```

Without the additional custom field `query` and its respective value, it would first be necessary to reproduce the issue locally to know what the faulty statement is.

:::

::: tip
Before `@sap/cds^7.5`, the configuration property was called `kibana_custom_fields`. As Kibana is the dashboard technology and the custom fields are actually a feature of the SAP Application Logging Service, we changed the name to `als_custom_fields`. `kibana_custom_fields` is supported until `@sap/cds^8`.
:::

For SAP Cloud Logging, the JSON formatter uses the following default configuration:

```jsonc
{ "log": {
  "cls_custom_fields": [
    "query",   //> sql
    "target",
    "details", //> generic validations
    "reason"   //> errors
  ]
}}
```

In order for the JSON formatter to detect the binding to SAP Cloud Logging via a user-provided service, the user-provided service must have a tag `cloud-logging`. (For existing user-provided services, tags can be added via [`cf update-user-provided-service`](https://cli.cloudfoundry.org/en-US/v7/update-user-provided-service.html).)

The key-value pairs can either be part of the first argument or an exclusive object thereafter:

```js
LOG.info({ message: 'foo', reason: 'bar' })
LOG.info('foo', { reason: 'bar' })
```

As always, both defaults are overridable via [cds.env](cds-env#cds-env).

## Request Correlation { #node-observability-correlation }

Unfortunately, there is no standard correlation ID header. `x-correlation-id` and `x-request-id` are the most commonly used, but SAP products often use `x-correlationid` (that is, without the second hyphen), and SAP BTP uses `x-vcap-request-id` when logging incoming requests. As CAP aims to be platform independent, we check an array of headers (or generate a new ID if none hits) and make the value available at `cds.context.id` as well as `req.headers['x-correlation-id']`:

```js
const { headers: h } = req
const id = h['x-correlation-id'] || h['x-correlationid']
  || h['x-request-id'] || h['x-vcap-request-id']
  || uuid()
if (!cds.context) cds.context = { id }
req.headers['x-correlation-id'] = cds.context.id
```

Subsequently, the JSON log formatter (see [Logging in Production](#logging-in-production)) sets the following fields:

- `cds.context.id` → `correlation_id`
- Request header `x_vcap_request_id` → `request_id`
- Request header `traceparent` (cf. [W3C Trace Context](https://www.w3.org/TR/trace-context/)) → `w3c_traceparent`

Specifically, field `w3c_traceparent` is then used by both SAP Application Logging Service and SAP Cloud Logging to determine field `trace_id`, in order to correlate requests, logs, and traces across multiple applications.

The following screenshot shows an example of log correlation based on field `correlation_id` in a log analytics dashboard of the [SAP Application Logging Service for SAP BTP](https://help.sap.com/docs/application-logging-service).

![Default Formatter Output](assets/correlation.png)

# Project-Specific Configurations

Learn here about using `cds.env` to specify and access configuration options for the Node.js runtimes as well as the `@sap/cds-dk` CLI commands.

## CLI `cds env` Command {#cli}

Run the `cds env` command in the root folder of your project to see the effective configuration. The listed settings include [global defaults](#defaults) as well as [project-specific settings](#project-settings) and [process environment settings](#process-env). Here's a brief intro on how to use it:

```sh
cds env              #> shortcut to `cds env ls`
cds env ls           #> lists all settings in properties format
cds env ls folders   #> lists the `folders` settings
cds env get          #> prints all settings in JSON-like format
cds env get folders  #> prints the `folders` settings
cds env get defaults #> prints defaults only
cds env ?            #> get help
```

For example:
```sh
> cds env ls requires.db

requires.db.credentials.url = ':memory:'
requires.db.impl = '@cap-js/sqlite'
requires.db.kind = 'sqlite'

> cds env requires.db

{
  impl: '@cap-js/sqlite',
  credentials: { url: ':memory:' },
  kind: 'sqlite'
}
```
Alternatively, you can also use the `cds eval` or `cds repl` CLI commands to access the `cds.env` property, which provides programmatic access to the effective settings:
```sh
> cds -e .env.requires.db

{
  impl: '@cap-js/sqlite',
  credentials: { url: ':memory:' },
  kind: 'sqlite'
}
```

```sh
$ cds -r
Welcome to cds repl ...
> cds.env.requires.db
{
  impl: '@cap-js/sqlite',
  credentials: { url: ':memory:' },
  kind: 'sqlite'
}
```
## The `cds.env` Module {#cds-env}

The `cds env` CLI command and all configuration-related tasks and features in Node.js-based tools and runtimes are backed by the `cds.env` module, which can be accessed through the central `cds` facade. For example, you can use it as follows:

```js
const cds = require('@sap/cds')
console.log (cds.env.requires.sql)
```

> This would print the same output as the one above for `cds env get requires.sql`.

As depicted in the figure below, `cds.env` provides one-stop, convenient, and transparent access to the effective configuration, read from various sources, including global defaults, static project-specific configuration, as well as dynamic settings from the process environment and service bindings. Different environments, for example dev vs prod, can be identified and selected by [profiles](#profiles).

!['cds env' in the middle, targeted by arrows coming from project content, service bindings and environment.](./assets/cds.env.drawio.svg)

## Sources for `cds.env`

`cds.env` is actually a getter property, which on first usage loads settings from the following sources:

| order | source | |
|-------|--------|-|
| 1 | [`@sap/cds`](#defaults) | built-in defaults |
| 2 | [_~/.cdsrc.json_](#defaults) | user-specific defaults |
| 3 | [_./.cdsrc.json_](#project-settings) | static project settings |
| 4 | [_./package.json_](#project-settings) | static project settings → `{"cds":{ ... }}` |
| 5 | [_./.cdsrc-private.json_](#private-project-settings) | user-specific project config |
| 6 | [_./default-env.json_](#process-env) | *deprecated, see cds bind* |
| 7 | [_./.env_](#process-env) | user-specific project env (lines of `name=value`) |
| 8 | [`process.env.CDS_CONFIG`](#env-cds-config) | runtime settings from shell or cloud |
| 9 | [`process.env`](#process-env) | runtime env vars from shell or cloud |
| 10 | [`process.env.VCAP_SERVICES`](#services) | service bindings |
| 11 | [_~/.cds-services.json_](#services) | service bindings for [_development_ profile](#profiles) |

> - `./` represents a project's root directory.
> - `~/` represents a user's home directory.

::: warning
Private files are for you only and should not be checked into your source code management.
:::

The settings are merged into `cds.env` starting from lower to higher order, meaning that properties specified in a source of higher order overwrite the value from a lower order. For example, given the following sources:

::: code-group
```jsonc [.cdsrc.json]
{ "requires": {
  "db": { "kind": "sql", "model": "./db", "credentials": { "url": ":memory:" } }
}}
```
:::

::: code-group
```jsonc [package.json]
{ "cds": { "requires": {
  "db": { "kind": "sqlite" }
}}}
```
:::

::: code-group
```properties [env.properties]
cds.requires.db.credentials.database = my.sqlite
```
:::

This would result in the following effective configuration:

```js
cds.env = { ...,
  requires: {
    db: { kind: "sqlite", model: "./db", credentials: { database:"my.sqlite" } }
  }
}
```

### Programmatic Settings

Node.js programs can also add and change settings by simply assigning values like so:

```js
const cds = require('@sap/cds')
cds.env.requires.sql.kind = 'sqlite'
cds.env.requires.sql.credentials = { database:'my.sqlite' }
```

> This would change the respective settings in the running program only, without writing back to the sources listed above.
## Global Defaults {#defaults}

### Built-In to `@sap/cds`

The lowest level of settings is read from built-in defaults, which comprise settings for these top-level properties:

| Settings | Description |
|------------|----------------------------------------------|
| `build` | for build-related settings |
| `features` | to switch on/off cds features |
| `folders` | locations for `app`, `srv`, and `db` folders |
| `i18n` | for i18n-related settings |
| `odata` | for OData protocol-related settings |
| `requires` | to configure required services |

> As these properties are provided in the defaults, apps can safely access them, for example, through `cds.env.requires.sql`, without always checking for null values on the top-level entries.

### User-Specific Defaults in _~/.cdsrc.json_

You can also create a _.cdsrc.json_ file in your home folder to specify settings to be used commonly across several projects.

## Project Configuration {#project-settings}

Settings that are essential to your project topology go into static project settings. Examples are the `folders` layout of your project, specific `build` tasks, or the list of required services in `requires` — most frequently your primary database configured under `requires.db`.

::: tip
The settings described here are part of your project's static content and delivery. They're checked into your git repos and also used in productive deployments. **Don't** add environment-specific options as static settings but use one of the [dynamic process environment options](#process-env) for that.
:::

### In _./package.json_

You can provide static settings in a `"cds"` section of your project's _package.json_ as in the following example:

```json
"cds": {
  "requires": {
    "db": "sql"
  }
}
```

### In _./.cdsrc.json_

Alternatively, you can put static settings in a _.cdsrc.json_ file in your project root:

```json
"requires": {
  "db": "sql"
}
```

## Private Project Settings {#private-project-settings}

### In _./.cdsrc-private.json_

A _.cdsrc.json_ equivalent for your private settings used in local testing. The file should not be committed to your version control system.

## Process Environment {#process-env}

### On the Command Line

On UNIX-based systems (Mac, Linux) you can specify individual process env variables as prefixes to the command to start your server. For example:

```sh
CDS_REQUIRES_DB_KIND=sql cds run
```

### In _./default-env.json_

The use of _default-env.json_ is deprecated. Please use [`cds bind`](../advanced/hybrid-testing#run-with-service-bindings).

### In `./.env`

Example for `.env`:

```properties
cds_requires_db_kind = sql
```

or

```properties
cds.requires.db.kind = sql
```

or

```properties
cds.requires.db = { "kind": "sql" }
```

::: warning
The dot (`.`) notation can only be used in `.env` files, because a dot is not a valid character in environment variable names. Use the dot notation if your configuration path itself contains underscore (`_`) characters.
:::

### `CDS_CONFIG` env variable {#env-cds-config}

You can use the `CDS_CONFIG` env variable in three different ways to add settings to the CDS environment:

1. Using a JSON string

   ```sh
   CDS_CONFIG='{"requires":{"db":{"kind":"sqlite"}}}' cds serve
   ```

2. Using a JSON file

   ```sh
   CDS_CONFIG=./my-cdsrc.json cds serve
   ```

3. Using a directory

   ```sh
   CDS_CONFIG=/etc/secrets/cds cds serve
   ```

For each file and folder, a new property is added to the configuration with its name. For a file, the property value is the string content of the file.
If a file contains a parsable JSON string starting with a `[` or `{` character, it's parsed and added as a substructure. For a directory, an object is added and the algorithm continues there.

```yaml
/etc/secrets/cds/requires/auth/kind: xsuaa
/etc/secrets/cds/requires/auth/credentials/clientid: capapp
/etc/secrets/cds/requires/auth/credentials/clientsecret: dlfed4XYZ
/etc/secrets/cds/requires/db: { kind: "hana", "credentials": { "user": "hana-user" } }
```

Results in:

```json
{
  "requires": {
    "auth": {
      "kind": "xsuaa",
      "credentials": {
        "clientid": "capapp",
        "clientsecret": "dlfed4XYZ"
      }
    },
    "db": {
      "kind": "hana",
      "credentials": { "user": "hana-user" }
    }
  }
}
```

## Required Services {#services}

If your app requires external services (databases, message brokers, ...), you must add them to the `cds.requires` section.

### In `cds.requires.<service>` Settings

Here, you can configure the services. Find details about the individual options in the documentation of [`cds.connect`](cds-connect#cds-env-requires).

### Prototype-Chained Along `.kind` References

You can use the `kind` property to reference other services for prototype chaining.

> CDS provides default service configurations for all supported services (`hana`, `enterprise-messaging`, ...).

Example:

::: code-group
```json [package.json]
{
  "cds": {
    "requires": {
      "serviceA": {
        "kind": "serviceB",
        "myProperty": "my overwritten property"
      },
      "serviceB": {
        "kind": "hana",
        "myProperty": "my property",
        "myOtherProperty": "my other property"
      }
    }
  }
}
```
:::

`serviceA` will have the following properties:

```json
{
  "kind": "serviceB",
  "myProperty": "my overwritten property",
  "myOtherProperty": "my other property", // from serviceB
  "impl": "[...]/hana/Service.js", // from hana
  "use": "hana" // where impl is defined
}
```

## Configuration Profiles {#profiles}

Wrap entries into `[<profile-name>]: { ... }` blocks to provide settings for different environments.
For example:

::: code-group
```json [package.json]
{
  "cds": {
    "requires": {
      "db": {
        "[development]": { "kind": "sqlite" },
        "[production]": { "kind": "hana" }
      }
    }
  }
}
```
:::

The profile is determined at bootstrap time as follows:

1. from the `--production` command line argument, if specified
2. from the `--profile` command line argument, if specified
3. from the `NODE_ENV` environment variable, if specified
4. from `CDS_ENV`, if specified

If the profile is not set to `production`, the `development` profile is automatically enabled.

You can also introduce your own custom profile names and use them as follows:

```sh
cds run --profile my-custom-profile
```

or

::: code-group
```sh [Mac/Linux]
CDS_ENV=my-custom-profile cds run
```
```cmd [Windows]
set CDS_ENV=my-custom-profile
cds run
```
```powershell [Powershell]
$Env:CDS_ENV="my-custom-profile"
cds run
```
:::

## App-Specific Settings

You can use the same machinery as documented above for app-specific configuration options:

::: code-group
```json [package.json]
"cds": { ... },
"my-app": { "myoption": "value" }
```
:::

And access them from your app as follows:

```js
const { myoption } = cds.env.for('my-app')
```

# Authentication

{{$frontmatter?.synopsis}} This is done by [authentication middlewares](#strategies) setting the [`cds.context.user` property](#cds-user) which is then used in [authorization enforcement](#enforcement) decisions.

## cds. User { #cds-user .class }

[user]: #cds-user
[`cds.context.user`]: #cds-user

Represents the currently logged-in user as filled into [`cds.context.user`](events#user) by authentication middlewares. Simply create instances of `cds.User` or of subclasses thereof in custom middlewares. For example:

```js
const cds = require('@sap/cds')
class DummyUser extends cds.User { is() { return true } }
module.exports = (req, res, next) => {
  cds.context.user = new DummyUser('dummy')
  next()
}
```

Or you can call the constructor of `cds.User` with specific arguments to create a user instance.
For example:

```js
const cds = require('@sap/cds')
// with user ID as string
const user = new cds.User('userId')
// a user instance
const anotherUser = new cds.User(user)
// a user instance like object
const yetAnotherUser = new cds.User({ id: user.id, roles: user.roles, attr: user.attr })
```

### .is (\<role\>) {#user-is .method}

Checks if the user has the given role assigned. Example usage:

```js
if (req.user.is('admin')) ...
```

The role names correspond to the values of [`@requires` and the `@restrict.grants.to` annotations](../guides/security/authorization) in your CDS models.

### . id {#user-id .property}

A user's unique ID. It corresponds to `$user` in [`@restrict` annotations](../guides/security/authorization) of your CDS models. (In JavaScript, `user` can also act as a shortcut for `user.id` in comparisons.) {.indent}

### . attr {#user-attr .property}

User-related attributes, for example, from JWT tokens. These correspond to `$user.<attr>` in [`@restrict` annotations](../guides/security/authorization) of your CDS models. {.indent}

### . tokenInfo {#user-token-info .property}

Parsed JWT token info provided by `@sap/xssec`.
> **Note:** This API is only available for authentication kinds based on `@sap/xssec`.

## cds.**User.Privileged** { #privileged-user .class }

In some cases, you might need to bypass authorization checks while [consuming a local service](./core-services). For this, you can create a transaction with a privileged user as follows:

```js
this.before('*', function (req) {
  const user = new cds.User.Privileged
  return this.tx({ user }, tx => tx.run(
    INSERT.into('RequestLog').entries({
      url: req._.req.url,
      user: req.user.id
    })
  ))
})
```

Alternatively, you can also use the ready-to-use instance `cds.User.privileged` directly, that is, `const user = cds.User.privileged`.

## cds.**User.Anonymous** { #anonymous-user .class }

Class `cds.User.Anonymous` allows you to instantiate an anonymous user (`const user = new cds.User.Anonymous`), for example in a [custom authentication](#custom) implementation. Alternatively, you can also use the ready-to-use instance `cds.User.anonymous` directly, that is, `const user = cds.User.anonymous`.

## cds.**User.default** { #default-user .property }

If a request couldn't be authenticated, for example due to a missing authorization header, the framework will use `cds.User.default` as fallback. By default, `cds.User.default` points to `cds.User.Anonymous`. However, you can override this, for example to be `cds.User.Privileged` in tests, or to be any other class that returns an instance of `cds.User`.

## Authorization Enforcement {#enforcement}

Applications can use the `cds.context.user` APIs to do programmatic enforcement.
For example, the authorization of the following CDS service: ```cds service CustomerService @(requires: 'authenticated-user'){ entity Orders @(restrict: [ { grant: ['READ','WRITE'], to: 'admin' }, ]){/*...*/} entity Approval @(restrict: [ { grant: 'WRITE', where: '$user.level > 2' } ]){/*...*/} } ``` can be programmatically enforced by means of the API as follows: ```js const cds = require('@sap/cds') cds.serve ('CustomerService') .with (function(){ this.before ('*', req => req.user.is('authenticated') || req.reject(403) ) this.before (['READ', 'CREATE'], 'Orders', req => req.user.is('admin') || req.reject(403) ) this.before ('*', 'Approval', req => req.user.attr.level > 2 || req.reject(403) ) }) ``` ## Authentication Strategies {#strategies} CAP ships with a few prebuilt authentication strategies, used by default: [`mocked`](#mocked) during development and [`jwt`](#jwt) in production. You can override these defaults and configure the authentication strategy to be used through the `cds.requires.auth` [config option in `cds.env`](./cds-env), for example: ::: code-group ```json [package.json] "cds": { "requires": { "auth": "jwt" } } ``` ::: ::: tip Inspect effective configuration Run `cds env get requires.auth` in your project root to find out the effective config for your current environment. ::: ### Dummy Authentication {#dummy } This strategy creates a user that passes all authorization checks. It's meant for temporarily disabling the `@requires` and `@restrict` annotations at development time. **Configuration:** Choose this strategy as follows: ::: code-group ```json [package.json] "cds": { "requires": { "auth": "dummy" } } ``` ::: ### Mocked Authentication {#mocked } This authentication strategy uses basic authentication with pre-defined mock users during development. > **Note:** When testing different users in the browser, it's best to use an incognito window, because logon information might otherwise be reused. 
**Configuration:** Choose this strategy as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": "mocked"
  }
}
```
:::

You can optionally configure users as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": {
      "kind": "mocked",
      "users": {
        "<your-user>": {
          "password": "<your-password>",
          "roles": [ "<role-name>", ... ],
          "attr": { ... }
        }
      }
    }
  }
}
```
:::

#### Pre-defined Mock Users {#mock-users}

The default configuration shipped with `@sap/cds` specifies these users:

```jsonc
"users": {
  "alice": { "tenant": "t1", "roles": [ "admin" ] },
  "bob": { "tenant": "t1", "roles": [ "cds.ExtensionDeveloper" ] },
  "carol": { "tenant": "t1", "roles": [ "admin", "cds.ExtensionDeveloper", "cds.UIFlexDeveloper" ] },
  "dave": { "tenant": "t1", "roles": [ "admin" ], "features": [] },
  "erin": { "tenant": "t2", "roles": [ "admin", "cds.ExtensionDeveloper", "cds.UIFlexDeveloper" ] },
  "fred": { "tenant": "t2", "features": [ "isbn" ] },
  "me": { "tenant": "t1", "features": [ "*" ] },
  "yves": { "roles": [ "internal-user" ] },
  "*": true //> all other logins are allowed as well
}
```

This default configuration is merged with your custom configuration such that, by default, logins by alice, bob, ... and others (`*`) are allowed. If you want to restrict these additional logins, you need to overwrite the defaults:

```jsonc
"users": {
  "alice": { "roles": [] },
  "bob": { "roles": [] },
  "*": false //> do not allow other users than the ones specified
}
```

### Basic Authentication {#basic }

This authentication strategy uses basic authentication with mock users during development.

> **Note:** When testing different users in the browser, it's best to use an incognito window, because logon information might otherwise be reused.
**Configuration:** Choose this strategy as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": "basic"
  }
}
```
:::

You can optionally configure users as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": {
      "kind": "basic",
      "users": {
        "<your-user>": {
          "password": "<your-password>",
          "roles": [ "<role-name>", ... ],
          "attr": { ... }
        }
      }
    }
  }
}
```
:::

In contrast to [mocked authentication](#mocked), no default users are automatically added to the configuration.

### JWT-based Authentication { #jwt }

This is the default strategy used in production. User identity, as well as assigned roles and user attributes, are provided at runtime by a bound instance of the ['User Account and Authentication'](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/419ae2ef1ddd49dca9eb65af2d67c6ec.html) service (UAA). This is done in the form of a JWT token in the `Authorization` header of incoming HTTP requests.

This authentication strategy also adds [`cds.context.user.tokenInfo`](#user-token-info).

**Prerequisites:** You need to add [@sap/xssec](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/54513272339246049bf438a03a8095e4.html#loio54513272339246049bf438a03a8095e4__section_atx_2vt_vt) to your project:

```sh
npm add @sap/xssec
```

**Configuration:** Choose this strategy as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": "jwt"
  }
}
```
:::

[Learn more about testing JWT-based authentication in **XSUAA in Hybrid Setup**.](#xsuaa-setup){.learn-more}

### XSUAA-based Authentication { #xsuaa }

Authentication kind `xsuaa` is a logical extension of kind [`jwt`](#jwt) that additionally offers access to SAML attributes through `cds.context.user.attr` (for example, `cds.context.user.attr.familyName`).
**Prerequisites:** You need to add [@sap/xssec](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/54513272339246049bf438a03a8095e4.html#loio54513272339246049bf438a03a8095e4__section_atx_2vt_vt) to your project:

```sh
npm add @sap/xssec
```

**Configuration:** Choose this strategy as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": "xsuaa"
  }
}
```
:::

[See **XSUAA in Hybrid Setup** below for additional information on how to test this.](#xsuaa-setup){.learn-more}

### IAS-based Authentication { #ias }

This is an additional authentication strategy using the [Identity Authentication Service](https://help.sap.com/docs/IDENTITY_AUTHENTICATION) (IAS) that can be used in production. User identity and user attributes are provided at runtime by a bound instance of the IAS service. This is done in the form of a JWT token in the `Authorization` header of incoming HTTP requests.

This authentication strategy also adds [`cds.context.user.tokenInfo`](#user-token-info).

To allow forwarding to remote services, JWT tokens issued by the IAS service don't contain authorization information. In particular, no scopes are included. Closing this gap is up to you as the application developer.
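One way to close that gap is to derive application roles from token claims yourself. The following is a purely hypothetical sketch: the claim name `groups`, the mapping table, and the function are invented for illustration and are not part of CAP or `@sap/xssec`:

```js
// Hypothetical, application-specific mapping from IAS token group claims
// to application role names.
const GROUP_TO_ROLES = {
  'bookshop-admins': ['admin'],
  'bookshop-support': ['support']
}

// Derive role names from an (assumed) `groups` claim of a decoded token.
// Unknown groups simply map to no roles.
function rolesFromClaims (claims) {
  const groups = claims.groups || []
  return groups.flatMap(group => GROUP_TO_ROLES[group] || [])
}

console.log(rolesFromClaims({ groups: ['bookshop-admins', 'unknown-group'] }))
// → [ 'admin' ]
```

In a custom setup, roles derived this way could then be passed to `new cds.User({ id, roles })` in a middleware before calling `next()`.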
**Prerequisites:** You need to add [@sap/xssec](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/54513272339246049bf438a03a8095e4.html#loio54513272339246049bf438a03a8095e4__section_atx_2vt_vt) to your project:

```sh
npm add @sap/xssec
```

**Configuration:** Choose this strategy as follows:

::: code-group
```json [package.json]
"cds": {
  "requires": {
    "auth": "ias"
  }
}
```
:::

### Custom Authentication { #custom }

You can configure your own implementation by specifying a custom `impl` as follows:

```json
"requires": {
  "auth": {
    "impl": "srv/custom-auth.js" // > relative path from project root
  }
}
```

Essentially, custom authentication middlewares must do two things: First, they _must_ [fulfill the `cds.context.user` contract](#cds-user) by assigning an instance of `cds.User` or a look-alike to the continuation of the incoming request at `cds.context.user`. Second, if running in a multitenant environment, `cds.context.tenant` must be set to a string identifying the tenant that is addressed by the incoming request.

```js
module.exports = function custom_auth (req, res, next) {
  // do your custom authentication
  cds.context.user = new cds.User({
    id: '<user-id>',
    roles: ['<role-a>', '<role-b>'],
    attr: { '<attribute>': '<value>' }
  })
  cds.context.tenant = '<tenant>'
  next()
}
```

The TypeScript equivalent has to use the default export.

```ts
import cds from "@sap/cds";
import { Request, Response, NextFunction } from "express";
type Req = Request & { user: cds.User, tenant: string };

export default function custom_auth (req: Req, res: Response, next: NextFunction) {
  // do your custom authentication
  ...
}
```

[If you want to customize the user ID, please also have a look at this example.](/node.js/cds-serve#customization-of-req-user){.learn-more}

## Authentication Enforced in Production

In a productive scenario with an authentication strategy configured, for example the default `jwt`, all CAP service endpoints are authenticated by default, regardless of the authorization model.
That is, all services without `@restrict` or `@requires` implicitly get `@requires: 'authenticated-user'`. This can be disabled via feature flag `cds.requires.auth.restrict_all_services: false`, or by using [mocked authentication](#mocked) explicitly in production.

## XSUAA in Hybrid Setup {#xsuaa-setup}

### Prepare Local Environment

The following steps assume you've set up the [**Cloud Foundry Command Line Interface**](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/856119883b8c4c97b6a766cc6a09b48c.html).

1. Log in to Cloud Foundry:

   ```sh
   cf l -a <api-endpoint>
   ```

   If you don't know the API endpoint, have a look at section [Regions and API Endpoints Available for the Cloud Foundry Environment](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/350356d1dc314d3199dca15bd2ab9b0e.html#loiof344a57233d34199b2123b9620d0bb41).

2. Go to the project you have created in [Getting started in a Nutshell](../get-started/in-a-nutshell).

3. Configure your app for XSUAA-based authentication if not done yet:

   ```sh
   cds add xsuaa --for hybrid
   ```

   This command creates the XSUAA configuration file `xs-security.json` and adds the service and required dependencies to your `package.json` file.

4. Make sure `xsappname` is configured and `tenant-mode` is set to `dedicated` in the `xs-security.json` file:

   ```json
   {
     "xsappname": "bookshop-hybrid",
     "tenant-mode": "dedicated",
     ...
   }
   ```

5. Configure the redirect URI: Add the following OAuth configuration to the `xs-security.json` file:

   ```json
   "oauth2-configuration": {
     "redirect-uris": [
       "http://localhost:5000/"
     ]
   }
   ```

6. Create an XSUAA service instance with this configuration:

   ```sh
   cf create-service xsuaa application bookshop-uaa -c xs-security.json
   ```

   > Later on, if you've changed the scopes, you can use `cf update-service bookshop-uaa -c xs-security.json` to update the configuration.

   ::: tip
   This step is necessary for locally running apps and for apps deployed on Cloud Foundry.
   :::

### Configure the Application

1. 
Create a service key:

   ```sh
   cf create-service-key bookshop-uaa bookshop-uaa-key
   ```

   This lets you gain access to the XSUAA credentials from your local application.

2. Bind to the new service key:

   ```sh
   cds bind -2 bookshop-uaa
   ```

   This adds an `auth` section containing the binding and the kind `xsuaa` to the _.cdsrc-private.json_ file. This file is created if it doesn't exist and keeps the local and private settings of your app:

   ```json
   {
     "requires": {
       "[hybrid]": {
         "auth": {
           "kind": "xsuaa",
           "binding": { ... }
         }
       }
     }
   }
   ```

   > If you're running in BAS, you can alternatively [create a new run configuration](https://help.sap.com/products/SAP%20Business%20Application%20Studio/9c36fdb911ae4cadab467a314d9e331f/cdbc00244452483e9582a4f486b42d64.html), connecting the `auth` to your XSUAA service instance.
   > In that case you need to add the environment variable `cds_requires_auth_kind=xsuaa` to the run configuration.

3. Check the authentication configuration:

   ```sh
   cds env list requires.auth --resolve-bindings --profile hybrid
   ```

   This prints the full `auth` configuration including the credentials.

### Set Up the Roles for the Application { #auth-in-cockpit}

By creating a service instance of the `xsuaa` service, all the roles from the _xs-security.json_ file are added to your subaccount. Next, you create a role collection that assigns these roles to your users.

1. Open the SAP BTP Cockpit.

   > For your trial account, this is: [https://cockpit.hanatrial.ondemand.com](https://cockpit.hanatrial.ondemand.com)

2. Navigate to your subaccount and then choose *Security* > *Role Collections*.

3. Choose *Create New Role Collection*:

   ![Create role collections in SAP BTP cockpit](./assets/create-role-collection.png)

4. Enter a *Name* for the role collection, for example `BookshopAdmin`, and choose *Create*.

5. Choose your new role collection to open it and switch to *Edit* mode.

6. Add the `admin` role for your bookshop application (application id `bookshop!a`) to the *Roles* list.

7. 
Add the email addresses for your users to the *Users* list.

8. Choose *Save*.

### Running App Router

The App Router component implements the necessary authentication flow with XSUAA to let the user log in interactively. The resulting JWT token is sent to the application where it's used to enforce authorization and check the user's roles.

1. Add the App Router to the `app` folder of your project:

   ```sh
   cds add approuter
   ```

2. Install `npm` packages for the App Router:

   ```sh
   npm install --prefix app/router
   ```

3. In your project folder run:

   ::: code-group
   ```sh [Mac/Linux]
   cds bind --exec -- npm start --prefix app/router
   ```
   ```cmd [Windows]
   cds bind --exec -- npm start --prefix app/router
   ```
   ```powershell [Powershell]
   cds bind --exec '--' npm start --prefix app/router
   ```
   :::

   [Learn more about `cds bind --exec`.](../advanced/hybrid-testing#cds-bind-exec){.learn-more}

   This starts an [App Router](https://help.sap.com/docs/HANA_CLOUD_DATABASE/b9902c314aef4afb8f7a29bf8c5b37b3/0117b71251314272bfe904a2600e89c0.html) instance on [http://localhost:5000](http://localhost:5000) with the credentials for the XSUAA service that you have bound using `cds bind`.

   > Usually, the App Router is started using `npm start` in the `app` folder. But you need to provide the `VCAP_SERVICES` variable with the XSUAA credentials. With the `cds bind --exec` command, you can launch an arbitrary command with the `VCAP_SERVICES` variable filled with your `cds bind` service bindings.

   Since it only serves static files or delegates to the backend service, you can keep the server running. It doesn't need to be restarted after you have changed files.

4. Make sure that your CAP application is running as well with the `hybrid` profile:

   ```sh
   cds watch --profile hybrid
   ```

   > If you are using BAS Run Configurations, you need to configure `cds watch` with profile `hybrid`:
   > 1. Right click on your run configuration
   > 2. Choose *Show in File*
   > 3. 
Change the command `args`: > ```json > "args": [ > "cds", > "watch", > "--profile", > "hybrid" > ], > ``` 5. After the App Router and CAP application are started, log in at [http://localhost:5000](http://localhost:5000) and verify that the routes are protected as expected. In our example, if you assigned the `admin` scope to your user in SAP BTP cockpit, you can now access the admin service at [http://localhost:5000/admin](http://localhost:5000/admin).
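To see which scopes the App Router actually forwards, you can decode the payload of the JWT locally. This is a generic sketch for inspection only (no signature verification); the token assembled below is fabricated for illustration — real XSUAA tokens contain many more claims:

```js
// Decode the payload segment of a JWT — for local inspection only,
// this performs no signature verification.
function decodeJwtPayload (token) {
  const payload = token.split('.')[1]
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'))
}

// Fabricated example token with a scope matching the bookshop!a app id:
const payload = Buffer.from(
  JSON.stringify({ scope: ['bookshop!a.admin'] })
).toString('base64url')
const token = `header.${payload}.signature`

console.log(decodeJwtPayload(token).scope)
// → [ 'bookshop!a.admin' ]
```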
> To test UIs without a running UAA service, just add this to _app/router/xs-app.json_: ```"authenticationMethod": "none"```

**SAP Business Application Studio:** The login fails, pointing to the correct OAuth configuration URL that is expected.

1. Replace the URL `http://localhost:5000/` in your `xs-security.json` file with the full URL from the error message:

   ```json
   "oauth2-configuration": {
     "redirect-uris": [
       "<url-from-error-message>"
     ]
   }
   ```

   ::: warning
   This is a specific configuration for your dev space and should not be submitted or shared.
   :::

2. Update the XSUAA service:

   ```sh
   cf update-service bookshop-uaa -c xs-security.json
   ```

3. Retry.

# Testing with `cds.test`

## Overview

The `cds.test` library provides best practice utils for writing tests for CAP Node.js applications.

::: tip
Find examples in [*cap/samples*](https://github.com/sap-samples/cloud-cap-samples/tree/main/test) and in the [*SFlight sample*](https://github.com/SAP-samples/cap-sflight/tree/main/test).
:::

### Running a CAP Server

Use function [`cds.test()`](#cds-test) to easily launch and test a CAP server. For example, given your CAP application has a `./test` subfolder containing tests as follows:

```zsh
project/        # your project's root folder
├─ srv/
├─ db/
├─ test/        # your .test.js files go in here
└─ package.json
```

Start your app's server in your `.test.js` files like this:

```js{3}
const cds = require('@sap/cds')
describe('my tests', ()=>{
  const test = cds.test(__dirname+'/..')
})
```

This launches a server from the specified target folder in a `beforeAll()` hook, with controlled shutdown when all tests have finished in an `afterAll()` hook.

::: warning Don't use `process.chdir()`!
Doing so in Jest tests may leave test containers in failed state, leading to failing subsequent tests. Use [`cds.test.in()`](#test-in-folder) instead.
:::

::: danger Don't load [`cds.env`](cds-env) before [`cds.test()`](#cds-test)!
To ensure `cds.env`, and hence all plugins, are loaded from the test's target folder, make the call to `cds.test()` the first thing you do in your tests. Any references to [`cds`](cds-facade) submodules, or imports thereof, have to go after it. → [Learn more in `CDS_TEST_ENV_CHECK`.](#cds-test-env-check)
:::

### Testing Service APIs

As `cds.test()` launches the server in the current process, you can access all services programmatically using the respective [Node.js Service APIs](core-services). Here's an example for that taken from [*cap/samples*](https://github.com/SAP-samples/cloud-cap-samples/blob/a8345122ea5e32f4316fe8faef9448b53bd097d4/test/consuming-services.test.js#L2):

```js
it('Allows testing programmatic APIs', async () => {
  const AdminService = await cds.connect.to('AdminService')
  const { Authors } = AdminService.entities
  expect (await SELECT.from(Authors))
  .to.eql(await AdminService.read(Authors))
  .to.eql(await AdminService.run(SELECT.from(Authors)))
})
```

### Testing HTTP APIs

To test HTTP APIs, we can use bound functions like so:

```js
const { GET, POST } = cds.test(...)
const { data } = await GET ('/browse/Books')
await POST (`/browse/submitOrder`, { book: 201, quantity: 5 })
```

[Learn more in GET/PUT/POST.](#http-bound) {.learn-more}

#### Authenticated Endpoints

`cds.test()` uses the standard authentication strategy in development mode, which is the [mocked authentication](../node.js/authentication#mocked).
This also includes the usage of [pre-defined mock users](../node.js/authentication#mock-users). You can set the user for an authenticated request like this:

```js
await GET('/admin/Books', { auth: { username: 'alice', password: '' } })
```

This is the same as setting the HTTP `Authorization` header with values for basic authentication:

::: code-group
```http [test.http]
GET http://localhost:4004/admin/Books
Authorization: Basic alice:
```
:::

[Learn how to explicitly configure mock users in your _package.json_ file.](../node.js/authentication#mocked){.learn-more}

### Using Jest or Mocha

[*Mocha*](https://mochajs.org) and [*Jest*](https://jestjs.io) are the most used test runners at the moment, each with its own user base. The `cds.test` library is designed to allow you to write tests that can run with both. Here's an example:

```js
describe('my test suite', ()=>{
  const { GET, expect } = cds.test(...)
  it ('should test', async ()=>{ // Jest & Mocha
    const { data } = await GET ('/browse/Books')
    expect(data.value).to.eql([ // chai style expect
      { ID: 201, title: 'Wuthering Heights', author: 'Emily Brontë' },
      { ID: 252, title: 'Eleonora', author: 'Edgar Allen Poe' },
      //...
    ])
  })
})
```

> To ensure that your tests run with both `jest` and `mocha`, start a test server with `cds.test(...)` inside a `describe` block of the test.

You can use Mocha-style `before/after` or Jest-style `beforeAll/afterAll` in your tests, as well as the common `describe, test, it` methods. In addition, to be portable, you should use the [Chai Assertion Library's](#chai) variant of `expect`.

::: tip
[All tests in *cap/samples*](https://github.com/sap-samples/cloud-cap-samples/blob/master/test) are written in that portable way.
Run them with `npm run jest` or with `npm run mocha`.
:::

::: warning Helpers can cause conflicts
_jest_ helpers might cause conflicts with the generic implementation of `@sap/cds`. To avoid such conflicts, do not use the following helpers:

- _jest.resetModules_ as it leaves the server in an inconsistent state.
- _jest.useFakeTimers_ as it intercepts the server shutdown, causing test timeouts.
:::

### Using Test Watchers

You can also start the tests in watch mode, for example:

```sh
jest --watchAll
```

This should give you green tests when running in the *cap/samples* root:
```
 PASS  test/cds.ql.test.js
 PASS  test/hierarchical-data.test.js
 PASS  test/hello-world.test.js
 PASS  test/messaging.test.js
 PASS  test/consuming-services.test.js
 PASS  test/custom-handlers.test.js
 PASS  test/odata.test.js
 PASS  test/localized-data.test.js

Test Suites: 8 passed, 8 total
Tests:       65 passed, 65 total
Snapshots:   0 total
Time:        3.611 s, estimated 4 s
Ran all test suites.
```
Similarly, you can use other test watchers like `mocha -w`.

## Class `cds.test.Test`

Instances of this class are returned by [`cds.test()`](#cds-test), for example:

```js
const test = cds.test(__dirname)
```

You can also use this class and create instances yourself, for example, like this:

```js
const { Test } = cds.test
let test = new Test
test.run().in(__dirname)
```

### cds.test() {.method}

This method is the most convenient way to start a test server. It's actually just a convenient shortcut to construct a new instance of class `Test` and call [`test.run()`](#test-run), defined as follows:

```js
const { Test } = cds.test
cds.test = (...args) => (new Test).run(...args)
```

:::warning Run `cds.test` once per test file
`@sap/cds` relies on server state like `cds.model`. Running `cds.test` multiple times within the same test file can lead to a conflicting state and erratic behavior.
:::

### .chai, ... {.property}

To write tests that run in [*Mocha*](https://mochajs.org) as well as in [*Jest*](https://jestjs.io), you should use the [*Chai Assertion Library*](https://www.chaijs.com/) through the following convenient methods.
:::warning Using `chai` requires these dependencies added to your project: ```sh npm add -D chai@4 chai-as-promised@7 chai-subset jest ``` ::: #### .expect {.property} Shortcut to the [`chai.expect()`](https://www.chaijs.com/guide/styles/#expect) function, used like that: ```js const { expect } = cds.test(), foobar = {foo:'bar'} it('should support chai.expect style', ()=>{ expect(foobar).to.have.property('foo') expect(foobar.foo).to.equal('bar') }) ``` If you prefer Jest's `expect()` functions, you can just use the respective global: ```js cds.test() it('should use jest.expect', ()=>{ expect({foo:'bar'}).toHaveProperty('foo') }) ``` #### .assert {.property} Shortcut to the [`chai.assert()`](https://www.chaijs.com/guide/styles/#assert) function, used like that: ```js const { assert } = cds.test(), foobar = {foo:'bar'} it('should use chai.assert style', ()=>{ assert.property(foobar,'foo') assert.equal(foobar.foo,'bar') }) ``` #### .should {.property} Shortcut to the [`chai.should()`](https://www.chaijs.com/guide/styles/#should) function, used like that: ```js const { should } = cds.test(), foobar = {foo:'bar'} it('should support chai.should style', ()=>{ foobar.should.have.property('foo') foobar.foo.should.equal('bar') should.equal(foobar.foo,'bar') }) ``` #### .chai {.property} This getter provides access to the [*chai*](https://www.chaijs.com) library, preconfigured with the [chai-subset](https://www.chaijs.com/plugins/chai-subset/) and [chai-as-promised](https://www.chaijs.com/plugins/chai-as-promised/) plugins. These plugins contribute the `containSubset` and `eventually` APIs, respectively. The getter is implemented like this: ```js get chai() { return require('chai') .use (require('chai-subset')) .use (require('chai-as-promised')) } ``` ### .axios {.property} Provides access to the [Axios](https://github.com/axios/axios) instance used as HTTP client. It comes preconfigured with the base URL of the running server, that is, `http://localhost:<port>`.
This way, you only need to specify host-relative URLs in tests, like `/catalog/Books`. {.indent} :::warning Using `axios` requires adding this dependency: ```sh npm add -D axios ``` ::: ### GET / PUT / POST ... {#http-bound .method} These are bound variants of the [`test.get/put/post/...` methods](#http-methods), allowing you to write HTTP requests like that: ```js const { GET, POST } = cds.test() const { data } = await GET('/browse/Books') await POST('/browse/submitOrder', { book:201, quantity:1 }, { auth: { username: 'alice' }} ) ``` [Learn more about Axios.](https://axios-http.com) {.learn-more} For single URL arguments, the functions can be used in tagged template string style, which allows omitting the parentheses from function calls: ```js let { data } = await GET('/browse/Books') let { data } = await GET `/browse/Books` ``` ### test. get/put/post/...() {#http-methods .method} These are mirrored versions of the corresponding [methods from `axios`](https://github.com/axios/axios#instance-methods), which prefix each request with the started server's URL and port, simplifying your test code: ```js const test = cds.test() //> served at localhost with an arbitrary port const { data } = await test.get('/browse/Books') await test.post('/browse/submitOrder', { book:201, quantity:1 }, { auth: { username: 'alice' }} ) ``` [Learn more about Axios.](https://axios-http.com) {.learn-more} ### test .data .reset() {.method} This is a bound method, which can be used in a `beforeEach` handler to automatically reset and redeploy the database for each test like so: ```js const { test } = cds.test() beforeEach (test.data.reset) ``` Instead of using the bound variant, you can also call this method the standard way: ```js beforeEach (async()=>{ await test.data.reset() // [!code focus] //... }) ``` ### test. log() {.method} Allows you to capture console output in the current test scope.
The method returns an object to control the captured logs: ```tsx function cds.test.log() => { output : string clear() release() } ``` Usage examples: ```js describe('cds.test.log()', ()=>{ let log = cds.test.log() it ('should capture log output', ()=>{ expect (log.output.length).to.equal(0) console.log('foo',{bar:2}) expect (log.output.length).to.be.greaterThan(0) expect (log.output).to.contain('foo') }) it('should support log.clear()', ()=> { log.clear() expect (log.output).to.equal('') }) it('should support log.release()', ()=> { log.release() // releases captured log console.log('foobar') // not captured expect (log.output).to.equal('') }) }) ``` The implementation redirects any console operations in a `beforeAll()` hook, clears `log.output` before each test, and releases the captured console in an `afterAll()` hook. ### test. run (...) {.method} This is the method behind [`cds.test()`](#cds-test) to start a CDS server, that is, the following are equivalent: ```js cds.test(...) ``` ```js (new cds.test.Test).run(...) ``` It asynchronously launches a CDS server in a `beforeAll()` hook with an arbitrary port, with controlled shutdown when all tests have finished in an `afterAll()` hook. The arguments are the same as supported by the `cds serve` CLI command. Specify the command `'serve'` as the first argument to serve specific CDS files or services: ```js cds.test('serve','srv/cat-service.cds') cds.test('serve','CatalogService') ``` You can optionally add [`test.in(folder)`](#test-in-folder) in fluent style to run the test in a specific folder: ```js cds.test('serve','srv/cat-service.cds').in('/cap/samples/bookshop') ``` If the first argument is **not** `'serve'`, it's interpreted as a target folder: ```js cds.test('/cap/samples/bookshop') ``` This variant is a convenient shortcut for: ```js cds.test('serve','all','--in-memory?').in('/cap/samples/bookshop') cds.test().in('/cap/samples/bookshop') //> equivalent ``` ### test. in (folder, ...)
{.method} Safely switches [`cds.root`](cds-facade#cds-root) to the specified target folder. Most frequently you'd use it in combination with starting a server with [`cds.test()`](#cds-test) in fluent style like that: ```js let test = cds.test(...).in(__dirname) ``` It can also be used as a static method to only change `cds.root` without starting a server: ```js cds.test.in(__dirname) ``` ### `CDS_TEST_ENV_CHECK` It's important to ensure [`cds.env`](cds-env), and hence all plugins, are loaded from the test's target folder. To ensure this, any references to or imports of [`cds`](cds-facade) sub modules have to go after all plugins are loaded. For example, if you had a test like that: ```js cds.env.fiori.lean_draft = true //> cds.env loaded from ./ // [!code --] cds.test(__dirname) //> target folder: __dirname ``` This would result in the test server being started from `__dirname`, but erroneously using `cds.env` loaded from `./`. As these mistakes end up in hard-to-resolve follow-up errors, [`test.in()`](#test-in-folder) can detect this if the environment variable `CDS_TEST_ENV_CHECK` is set. The previous code will then result in an error like that: ```sh CDS_TEST_ENV_CHECK=y jest cds.test.test.js ``` ```zsh Detected cds.env loaded before running cds.test in different folder: 1. cds.env loaded from: ./ 2.
cds.test running in: cds/tests/bookshop at Test.in (node_modules/@sap/cds/lib/utils/cds-test.js:65:17) at test/cds.test.test.js:9:41 at Object.describe (test/cds.test.test.js:5:1) 5 | describe('cds.test', ()=>{ > 6 | cds.env.fiori.lean_draft = true | ^ 7 | cds.test(__dirname) at env (test/cds.test.test.js:7:7) at Object.describe (test/cds.test.test.js:5:1) ``` A similar error would occur if one of the `cds` sub modules were accessed, which frequently load `cds.env` in their global scope, like `cds.Service` in the following snippet: ```js class MyService extends cds.Service {} //> cds.env loaded from ./ // [!code --] cds.test(__dirname) //> target folder: __dirname ``` To fix this, always ensure your calls to `cds.test.in(folder)` or `cds.test(folder)` go first, before anything else loading `cds.env`: ```js cds.test(__dirname) //> always should go first // anything else goes after that: cds.env.fiori.lean_draft = true // [!code ++] class MyService extends cds.Service {} // [!code ++] ``` :::warning Do switch on `CDS_TEST_ENV_CHECK` ! We recommend switching on `CDS_TEST_ENV_CHECK` in all your tests to detect such errors. It's likely to become the default in upcoming releases. ::: ## Best Practices ### Check Status Codes Last Avoid checking for single status codes. Instead, simply check the response data: ```js const { data, status } = await GET `/catalog/Books` expect(status).to.equal(200) //> DON'T do that upfront // [!code --] expect(data).to.equal(...) //> do this to see what's wrong expect(status).to.equal(200) //> Do it at the end, if at all // [!code ++] ``` This makes a difference if there are errors: with the status code check, your test aborts with a useless _Expected: 200, received: xxx_ error, while without it, it fails with a richer error that includes a status text.
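For orientation, Axios's default acceptance rule is equivalent to the following predicate, which you can override per request via the standard `validateStatus` request option (the helper name `isSuccess` below is just for illustration):

```javascript
// Axios's default: only 2xx responses resolve, everything else rejects.
// This predicate is equivalent to axios's built-in default validateStatus.
const isSuccess = status => status >= 200 && status < 300

console.log(isSuccess(200)) // true  — resolves normally
console.log(isSuccess(404)) // false — would make the promise reject
```

Passing a laxer predicate per request, such as `{ validateStatus: s => s < 500 }`, lets a test inspect the body of an expected 4xx response instead of catching a rejection.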
### Minimal Assumptions When checking expected error messages, only check for significant keywords. Don't hardwire the exact error text, as this might change over time, breaking your test unnecessarily. **DON'T**{.bad} hardwire overly specific error messages: ```js await expect(POST(`/catalog/Books`,...)).to.be.rejectedWith( 'Entity "CatalogService.Books" is readonly' ) ``` **DO**{.good} check for the essential information only: ```js await expect(POST(`/catalog/Books`,...)).to.be.rejectedWith( /readonly/i ) ``` ## Using `cds.test` in REPL You can use `cds.test` in REPL, for example, by running this from your command line in [*cap/samples*](https://github.com/sap-samples/cloud-cap-samples): ```sh [cap/samples] cds repl Welcome to cds repl v7.1 ``` ```js > var test = await cds.test('bookshop') ``` ```log [cds] - model loaded from 6 file(s): ./bookshop/db/schema.cds ./bookshop/srv/admin-service.cds ./bookshop/srv/cat-service.cds ./bookshop/app/services.cds ./../../cds/common.cds ./common/index.cds [cds] - connect to db > sqlite { database: ':memory:' } > filling sap.capire.bookshop.Authors from ./bookshop/db/data/sap.capire.bookshop-Authors.csv > filling sap.capire.bookshop.Books from ./bookshop/db/data/sap.capire.bookshop-Books.csv > filling sap.capire.bookshop.Books.texts from ./bookshop/db/data/sap.capire.bookshop-Books_texts.csv > filling sap.capire.bookshop.Genres from ./bookshop/db/data/sap.capire.bookshop-Genres.csv > filling sap.common.Currencies from ./common/data/sap.common-Currencies.csv > filling sap.common.Currencies.texts from ./common/data/sap.common-Currencies_texts.csv /> successfully deployed to sqlite in-memory db [cds] - serving AdminService { at: '/admin', impl: './bookshop/srv/admin-service.js' } [cds] - serving CatalogService { at: '/browse', impl: './bookshop/srv/cat-service.js' } [cds] - server listening on { url: 'http://localhost:64914' } [cds] - launched at 9/8/2021, 5:36:20 PM, in: 767.042ms [ terminate with ^C ] ``` ```js > await
SELECT `title` .from `Books` .where `exists author[name like '%Poe%']` [ { title: 'The Raven' }, { title: 'Eleonora' } ] ``` ```js > var { CatalogService } = cds.services > await CatalogService.read `title, author` .from `ListOfBooks` [ { title: 'Wuthering Heights', author: 'Emily Brontë' }, { title: 'Jane Eyre', author: 'Charlotte Brontë' }, { title: 'The Raven', author: 'Edgar Allen Poe' }, { title: 'Eleonora', author: 'Edgar Allen Poe' }, { title: 'Catweazle', author: 'Richard Carpenter' } ] ``` # Fiori Support See [Cookbook > Serving UIs > Draft Support](../advanced/fiori#draft-support) for an overview of SAP Fiori Draft support in CAP. ## Lean Draft Lean draft is a new approach that makes it easier to differentiate between drafts and active instances in your code. This new architecture drastically reduces the complexity. ### Handlers Registration {#draft-support} Class `ApplicationService` provides built-in support for Fiori Draft. All CRUD events are supported for both active and draft entities. Please note that draft-enabled entities must follow a specific draft choreography. The examples are provided for `.on` handlers, but the same is true for `.before` and `.after` handlers. ```js // only active entities srv.on(['CREATE', 'READ', 'UPDATE', 'DELETE'], 'MyEntity', /*...*/) // only draft entities srv.on(['CREATE', 'READ', 'UPDATE', 'DELETE'], 'MyEntity.drafts', /*...*/) // bound action/function on active entity srv.on('boundActionOrFunction', 'MyEntity', /*...*/) // bound action/function on draft entity srv.on('boundActionOrFunction', 'MyEntity.drafts', /*...*/) ``` It's also possible to use the array variant to register a handler for both entities, for example: `srv.on('boundActionOrFunction', ['MyEntity', 'MyEntity.drafts'], /*...*/)`.
:::warning Bound actions/functions modifying active entity instances If a bound action/function modifies an active entity instance, custom handlers need to ensure that no draft of that entity exists; otherwise, all changes are overridden when the draft is saved. ::: Additionally, you can add your logic to the draft-specific events as follows: ```js // When a new draft is created srv.on('NEW', 'MyEntity.drafts', /*...*/) // When a draft is discarded srv.on('CANCEL', 'MyEntity.drafts', /*...*/) // When a new draft is created from an active instance srv.on('EDIT', 'MyEntity', /*...*/) // When the active entity is changed srv.on('SAVE', 'MyEntity', /*...*/) ``` - The `CANCEL` event is triggered when you cancel the draft. In this case, the draft entity is deleted and the active entity isn't changed. - The `EDIT` event is triggered when you start editing an active entity. As a result, `MyEntity.drafts` is created. - The `SAVE` event is just a shortcut for `['UPDATE', 'CREATE']` on an active entity. This event is also triggered when you press the `SAVE` button in the UI after finishing editing your draft. Note that composition children of the active entity will also be updated or created. ::: info Compatibility flag For compatibility with previous variants, set `cds.fiori.draft_compat` to `true`. ::: ### Draft Locks To prevent inconsistency, entities with a draft are locked against modifications by other users. The lock is released when the draft is saved, canceled, or a timeout is hit. The default timeout is 15 minutes. You can configure this timeout via the following application configuration property: ```properties cds.fiori.draft_lock_timeout=30min ``` You can set the property to one of the following: - number of hours like `'1h'` - number of minutes like `'10min'` - number of milliseconds like `1000` ### Bypassing the SAP Fiori Draft Flow Creating or modifying active instances directly is possible without creating drafts.
This comes in handy when technical services without a UI interact with each other. To enable this feature, set this feature flag in your configuration: ```json { "cds": { "fiori": { "bypass_draft": true } } } ``` You can then create active instances directly: ```http POST /Books { "ID": 123, "IsActiveEntity": true } ``` You can modify them directly: ```http PATCH /Books(ID=123,IsActiveEntity=true) { "title": "How to be more active" } ``` This feature is required to enable [SAP Fiori Elements Mass Edit](https://sapui5.hana.ondemand.com/sdk/#/topic/965ef5b2895641bc9b6cd44f1bd0eb4d.html), allowing users to change multiple objects with the same editable properties without creating drafts for each row. :::warning Additional entry point Note that this feature creates additional entry points to your application. Custom handlers are triggered with delta payloads rather than the complete business object. ::: ### Garbage Collection of Stale Drafts Inactive drafts are deleted automatically after the default timeout of 30 days. You can configure or deactivate this timeout by the following configuration: ```json { "cds": { "fiori": { "draft_deletion_timeout": "28d" } } } ``` You can set the property to one of the following: - `false` in order to deactivate the timeout - number of days like `'30d'` - number of hours like `'72h'` - number of milliseconds like `1000` ### Differences to Previous Version - Draft-enabled entities have corresponding CSN entities for drafts: ```js const { MyEntity } = srv.entities MyEntity.drafts // points to model.definitions['MyEntity.drafts'] ``` - Queries are now cleansed from draft-related properties (like `IsActiveEntity`) - `PATCH` event isn't supported anymore. 
- The target is resolved before the handler execution and points to either the active or draft entity: ```js srv.on('READ', 'MyEntity.drafts', (req, next) => { assert.equal(req.target.name, 'MyEntity.drafts') return next() }) ``` ::: info Special case: "Editing Status: All" In the special case of the Fiori Elements filter "Editing Status: All", two separate `READ` events are triggered for either the active or draft entity. The individual results are then combined behind the scenes. ::: - Draft-related properties (with the exception of `IsActiveEntity`) are only computed for the target entity, not for expanded sub entities, since this is not required by Fiori Elements. - Manual filtering on draft-related properties is not allowed; only certain draft scenarios are supported. ### Programmatic Invocation of Draft Actions You can programmatically invoke draft actions with the following APIs: ```js await srv.new(MyEntity.drafts, data) // create new draft await srv.discard(MyEntity.drafts, keys) // discard draft await srv.edit(MyEntity, keys) // create draft from active instance await srv.new(MyEntity.drafts).for(keys) // same as above await srv.save(MyEntity.drafts, keys) // activate draft ``` # Best Practices From generic Node.js best practices like dependency management and error handling to CAP-specific topics like transaction handling and testing, this [video](https://www.youtube.com/watch?v=WTOOse-Flj8&t=87s) provides some tips and tricks to improve the developer experience and avoid common pitfalls, based on common customer issues. In the following sections, we explain these best practices. ## Managing Dependencies {#dependencies} Projects using CAP need to manage dependencies to the respective tools and libraries in their _package.json_ and/or _pom.xml_ respectively. Follow these guidelines to make sure that you consume the latest fixes and avoid vulnerabilities and version incompatibilities.
These guidelines apply to you as a _consumer_ of reuse packages as well as a _provider_ of such reuse packages. ### Always Use the _Latest Minor_ Releases → for Example, `^7.2.0` {#use-caret } This applies to both *@sap* packages and open source ones. It ensures your projects receive the latest features and important fixes during development. It also leverages [NPM's dedupe](https://docs.npmjs.com/cli/dedupe.html) to make sure bundles have a minimal footprint. Example: ```json "dependencies": { "@sap/cds": "^5.5.0", "@sap/some-reuse-package": "^1.1.0", "express": "^4.17.0" } ``` ::: tip We **recommend** using the caret form such as `^1.0.2` The caret form is the default for `npm install`, as that format clearly captures the minimum patch version. ::: ### Keep Open Ranges When *Publishing* for Reuse {#publish } Let's explain this by looking at two examples. #### Bad {.bad} Assume that you've developed a reusable package, and consume a reuse package yourself. You decided to violate the previous rules and use exact dependencies in your _package.json_: ```json "name": "@sap/your-reuse-package", "version": "1.1.2", "dependencies": { "@sap/cds": "3.0.3", "@sap/foundation": "2.0.1", "express": "4.16.3" } ``` The effect would be as follows: 1. Consuming projects get duplicate versions of each package they also use directly, for example, `@sap/cds`, `@sap/foundation`, and `express`. 2. Consuming projects don't receive important fixes for the packages used in your _package.json_ unless you also provide an update. 3. It wouldn't be possible to reuse CDS models from common reuse packages (for example, this would already fail for `@sap/cds/common`). #### Good {.good} Therefore, the rules when publishing packages for reuse are: * **Keep** the open ranges in your _package.json_ (just don't touch them). * **Do** an *npm update* before publishing and test thoroughly (→ ideally automated in your CI/CD pipeline).
* **Do** the vulnerability checks for your software and all open-source software used by you **or by packages you used** (→ [Minimize Usage of Open Source Packages](#oss)). * **Don't** do `npm shrinkwrap` → see also [npm's docs](https://docs.npmjs.com/cli/v10/configuring-npm/npm-shrinkwrap-json): *"It's discouraged for library authors to publish this file, ..."* ::: tip If both your package and a consuming package reuse the same CDS models, loading those models would fail: it's impossible to automatically merge two versions of a model, nor is it possible to load two independent versions. The reason is that reused models share the **same** single definitions. ::: ### Lock Dependencies Before *Deploying* {#deploy } When releasing a service or an application to end consumers, use `npm install` or `npm update` to produce a [_package-lock.json_](https://docs.npmjs.com/files/package-lock.json) file that freezes dependencies. This guarantees that it works correctly as it did the last time you tested it and checked it for vulnerabilities. Overall, the process for your release should include these steps: ```sh npm config set package-lock true # enables package-lock.json npm update # update it with latest versions git add package-lock.json # add it to version control # conduct all tests and vulnerability checks ``` The _package-lock.json_ file in your project root freezes all dependencies and is deployed with your application. Subsequent npm installs, such as by cloud deployers or build packs, always get the same versions, which you checked upon your release. This ensures that the deployed tool/service/app doesn't receive new vulnerabilities, for example, through updated open source packages, without you being able to apply the necessary tests as prescribed by our security standards.
::: tip Run `npm update` frequently to receive latest fixes regularly Tools like [renovate](https://github.com/renovatebot/renovate) or [GitHub's dependabot](https://docs.github.com/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically) can help you automate this process.
::: ### Minimize Usage of Open Source Packages {#oss} This rule of keeping open ranges for dependencies during development, as well as when publishing for reuse, also applies to open source packages. Because open source packages are less reliable with respect to vulnerability checks, end-of-chain projects have to ensure respective checks for all the open source packages they use directly, as well as those they 'inherit' transitively from reuse packages. So, always take into account these rules: * When releasing to end consumers, you always have to conduct vulnerability checks for all open source packages that you used directly or transitively. * As a provider of reuse packages, you should minimize the usage of open source packages to a reasonable minimum. **Q:** Why not freeze open source dependencies when releasing for reuse? **A:** Because that would only affect directly consumed packages, while packages from transitive dependencies would still reach your consumers. A good approach is to also provide certain features in combination with third-party packages, but to keep them, and hence the dependencies, optional; for example, express.js does this. ### Upgrade to _Latest Majors_ as Soon as Possible {#upgrade} As providers of evolving SDKs, we provide major feature updates, enhancements, and improvements in 6-12 month release cycles. These updates come with an increment of major release numbers. At the same time, we can't maintain and support unlimited numbers of branches with fixes. The following rules apply: * Fixes and nonbreaking enhancements are made available frequently in upstream release branches (current _major_). * Critical fixes also reach recent majors within a 2-month grace period. To receive ongoing fixes, make sure to adopt the latest major releases in a timely fashion in your actively maintained projects, that is, following the 6-12 month cycle.
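To make the caret-range rule from above concrete, here is a small sketch of what a range like `^1.1.0` accepts — a simplified model for non-zero majors only, not the full semver algorithm (prerelease tags and `0.x` ranges behave differently):

```javascript
// Simplified caret-range check for versions with a non-zero major:
// ^1.1.0 means >=1.1.0 and <2.0.0 — latest minors/patches, same major.
function satisfiesCaret (version, range) {
  const parse = v => v.split('.').map(Number)
  const [major, minor, patch] = parse(version)
  const [rMajor, rMinor, rPatch] = parse(range.replace(/^\^/, ''))
  if (major !== rMajor) return false           // never cross a major boundary
  if (minor !== rMinor) return minor > rMinor  // later minors are accepted
  return patch >= rPatch                       // within the minor: later patches
}

console.log(satisfiesCaret('1.4.2', '^1.1.0')) // true  — newer minor, same major
console.log(satisfiesCaret('2.0.0', '^1.1.0')) // false — next major is excluded
```

In a real project you'd rely on npm (or the `semver` package) for this; the sketch only illustrates why consumers on `^1.1.0` automatically receive `1.4.2` during development, but never `2.0.0`.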
### Additional Advice **Using _npm-shrinkwrap.json_** — only if you want to publish CLI tools or other 'sealed' production packages to npm. Unlike _package-lock.json_, it _does_ get packaged and published to npm registries. See the [npm documentation](https://docs.npmjs.com/cli/v8/configuring-npm/package-lock-json#package-lockjson-vs-npm-shrinkwrapjson) for more.
## Securing Your Application To keep builds as small as possible, the Node.js runtime doesn't bring any potentially unnecessary dependencies and, hence, doesn't automatically mount any express middlewares, such as the popular [`helmet`](https://www.npmjs.com/package/helmet). However, application developers can easily mount custom or best-practice express middlewares using the [bootstrapping mechanism](./cds-server#cds-server). Example: ```js // local ./server.js const cds = require('@sap/cds') const helmet = require('helmet') cds.on('bootstrap', app => { app.use(helmet()) }) module.exports = cds.server // > delegate to default server.js ``` Consult sources such as [Express' **Production Best Practices: Security** documentation](https://expressjs.com/en/advanced/best-practice-security.html) for state-of-the-art application security. ### Content Security Policy (CSP) Creating a [Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) is a major building block in securing your web application. [`helmet`](https://www.npmjs.com/package/helmet) provides a default policy out of the box that you can also customize as follows: ```js cds.on('bootstrap', app => { app.use( helmet({ contentSecurityPolicy: { directives: { ...helmet.contentSecurityPolicy.getDefaultDirectives() // custom settings } } }) ) }) ``` Find required directives in the [OpenUI5 Content Security Policy documentation](https://openui5.hana.ondemand.com/topic/fe1a6dba940e479fb7c3bc753f92b28c) {.learn-more} ### Cross-Site Request Forgery (CSRF) Token Protect against cross-site request forgery (CSRF) attacks by enabling CSRF token handling through the _App Router_. ::: tip For a SAPUI5 (SAP Fiori/SAP Fiori Elements) developer, CSRF token handling is transparent There's no need to program or to configure anything in addition. In case the server rejects the request with _403_ and _“X-CSRF-Token: required”_, the UI sends a _HEAD_ request to the service document to fetch a new token.
::: [Learn more about CSRF tokens and SAPUI5 in the **Cross-Site Scripting** documentation.](https://sapui5.hana.ondemand.com/#/topic/91f0bd316f4d1014b6dd926db0e91070){.learn-more} Alternatively, you can add a CSRF token handler manually. ::: warning This request must never be cacheable If a CSRF token is cached, it can potentially be reused in multiple requests, defeating its purpose of securing each individual request. Always set appropriate cache-control headers to `no-store, no-cache, must-revalidate, proxy-revalidate` to prevent caching of the CSRF token. ::: #### Using App Router The _App Router_ is configured to require a _CSRF_ token by default for all protected routes and all HTTP request methods except _HEAD_ and _GET_. Thus, by adding the _App Router_ as described in the [Deployment Guide: Using App Router as Gateway](../guides/deployment/to-cf#add-app-router), endpoints are CSRF protected. [Learn more about CSRF protection with the **App Router**](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/c19f165084d742e096c5d1625cecd2d4.html?q=csrf#loioc19f165084d742e096c5d1625cecd2d4__section_xj4_pcg_2z){.learn-more} #### Manual Implementation On the backend side, in addition to handling the _HEAD_ request mentioned previously, handlers for each _CSRF_-protected method and path should be added. In the following example, the _POST_ method is protected. ::: tip If you use SAP Fiori Elements, requests to the backend are sent as batch requests using the _POST_ method. In this case, an arbitrary _POST_ request should be protected. ::: As already mentioned, if the server rejects a request because of a bad CSRF token, a response with status _403_ and the header _“X-CSRF-Token: required”_ should be returned to the UI.
For this purpose, the error handling in the following example is extended: ```js const csrfProtection = csrf({ cookie: true }) const parseForm = express.urlencoded({ extended: false }) cds.on('bootstrap', app => { app.use(cookieParser()) // Must: Provide actual <service> paths of served services. // Optional: Adapt for non-Fiori Elements UIs. .head('/<service>', csrfProtection, (req, res) => { res.set({ 'X-CSRF-Token': req.csrfToken(), 'Cache-Control': 'no-store, no-cache, must-revalidate, proxy-revalidate' }).send() }) // Must: Provide actual <service> paths of served services. // Optional: Adapt for non-Fiori Elements UIs. .post('/<service>/$batch', parseForm, csrfProtection, (req, res, next) => next()) .use((err, req, res, next) => { if (err.code !== 'EBADCSRFTOKEN') return next(err) res.status(403).set('X-CSRF-Token', 'required').send() }) }) ``` [Learn more about backend coding in the **csurf** documentation.](https://www.npmjs.com/package/csurf){.learn-more} ::: tip Use _App Router_ CSRF handling when scaling Node.js VMs horizontally Handling CSRF at the _App Router_ level ensures consistency across instances. This avoids potential token mismatches that could occur if each VM handled CSRF independently. ::: ### Cross-Origin Resource Sharing (CORS) With _Cross-Origin Resource Sharing_ (CORS), a server can tell the browser which foreign origins it trusts to access its resources. In addition, so-called "preflight" requests tell the browser if the cross-origin server will process a request with a specific method and a specific origin. If not running in production, CAP's [built-in server.js](cds-server#built-in-server-js) allows all origins.
#### Custom CORS Implementation For production, you can add CORS to your CAP server as follows: ```js const ORIGINS = { 'https://example.com': 1 } cds.on('bootstrap', app => app.use ((req, res, next) => { if (req.headers.origin in ORIGINS) { res.set('access-control-allow-origin', req.headers.origin) if (req.method === 'OPTIONS') // preflight request return res.set('access-control-allow-methods', 'GET,HEAD,PUT,PATCH,POST,DELETE').end() } next() })) ``` [Learn more about CORS in CAP in **this article by DJ Adams**](https://qmacro.org/blog/posts/2024/03/30/cap-cors-and-custom-headers/){.learn-more} [Learn more about CORS in general in the **MDN Web Docs**.](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS){.learn-more} #### Configuring CORS in App Router The _App Router_ has full support for CORS. Thus, by adding the _App Router_ as described in the [Deployment Guide: Using App Router as Gateway](../guides/deployment/to-cf#add-app-router), CORS can be configured in the _App Router_ configuration. [Learn more about CORS handling with the **App Router**](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/ba527058dc4d423a9e0a69ecc67f4593.html?q=allowedOrigin#loioba527058dc4d423a9e0a69ecc67f4593__section_nt3_t4k_sz){.learn-more} ::: warning Avoid configuring CORS in both _App Router_ and CAP server Configuring CORS in multiple places can lead to confusing debugging scenarios. Centralizing CORS settings in one location decreases complexity, and thus, improves security. ::: ## Availability Checks To proactively identify problems, projects should set up availability monitoring for all the components involved in their solution. ### Anonymous Ping An *anonymous ping* service should be implemented with the least overhead possible. Hence, it should not use any authentication or authorization mechanism, but simply respond to whoever is asking.
From `@sap/cds^7.8` onwards, the Node.js runtime provides such an endpoint for availability monitoring out of the box at `/health` that returns `{ status: 'UP' }` (with status code 200). You can override the default implementation and register a custom express middleware during bootstrapping as follows: ```js cds.on('bootstrap', app => app.get('/health', (_, res) => { res.status(200).send(`I'm fine, thanks.`) })) ``` More sophisticated health checks, like database availability for example, should use authentication to prevent Denial of Service attacks! ## Error Handling Good error handling is important to ensure the correctness and performance of the running app and developer productivity. We will give you a brief overview of common best practices. ### Error Types We need to distinguish between two types of errors: - Programming errors: These occur because of some programming mistakes (for example, `cannot read 'foo' of undefined`). They need to be fixed. - Operational errors: These occur during the operation (for example, when a request is sent to an erroneous remote system). They need to be handled. ### Guidelines #### Let It Crash 'Let it crash' is a philosophy coming from the [Erlang programming language](https://www.erlang.org/) (Joe Armstrong) which can also be (partially) applied to Node.js. The most important aspects for programming errors are: - Fail loudly: Do not hide errors and silently continue. Make sure that unexpected errors are correctly logged. Do not catch errors you can't handle. - Don't program in a defensive way: Concentrate on your business logic and only handle errors if you know that they occur. Only use `try`/`catch` blocks when necessary. Never attempt to catch and handle unexpected errors, promise rejections, etc. If it's unexpected, you can't handle it correctly. If you could, it would be expected (and should already be handled). 
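To illustrate these guidelines, here is a minimal sketch; the names (`fetchRates`, `GatewayTimeoutError`) are made up for the example. The expected operational error is handled, everything else propagates loudly:

```javascript
// Hypothetical example: a custom error class marks the operational
// errors we know how to handle.
class GatewayTimeoutError extends Error {}

async function fetchRates(callRemote) {
  try {
    return await callRemote()
  } catch (err) {
    // Expected, operational: the remote system timed out -> handle it.
    if (err instanceof GatewayTimeoutError) return { rates: [], stale: true }
    // Unexpected, programming error: don't swallow it -> let it crash.
    throw err
  }
}
```

An unexpected `TypeError`, for example, is re-thrown and surfaces in the logs instead of being silently replaced by a default value.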
Even though your apps should be stateless, you can never be 100% certain that a shared resource wasn't affected by the unexpected error. Hence, you should never keep an app running after such an event, especially in multi-tenant apps that bear the risk of information disclosure. Following these principles will make your code shorter, clearer, and simpler.

#### Don't Hide Origins of Errors

If an error occurs, it should be possible to know its origin. If you catch errors and re-throw them without the original information, it becomes hard to find and fix the root cause.

Example:

```js
try {
  // something
} catch (e) {
  // augment instead of replace details
  e.message = 'Oh no! ' + e.message
  e.additionalInfo = 'This is just an example.'
  // re-throw same object
  throw e
}
```

In rare cases, throwing a new error is necessary, for example, if the original error contains sensitive details that must not be propagated any further. This should be kept to an absolute minimum.

### Further Readings

The following articles might be of interest:

- [Error Handling in Node.js](https://web.archive.org/web/20220417042018/https://www.joyent.com/node-js/production/design/errors)
- [Let It Crash](https://wiki.c2.com/?LetItCrash)
- [Don't Catch Exceptions](https://wiki.c2.com/?DontCatchExceptions)
- [Report And Die](https://wiki.c2.com/?ReportAndDie)

## Timestamps

When using [timestamps](events#timestamp) (for example, for managed dates), the Node.js runtime offers an easy way to deal with them without having to know the format of the time string. The `req` object contains a property `timestamp` that holds the current time (specifically `new Date()`, which is comparable to `CURRENT_TIMESTAMP` in SQL). It stays the same until the request has finished, so if it is used in multiple places in the same transaction or request, it will always be the same.
Example:

```js
srv.before("UPDATE", "EntityName", (req) => {
  const now = req.timestamp;
  req.data.createdAt = now;
});
```

Internally, the [timestamp](events#timestamp) is a JavaScript `Date` object that is converted to the right format when sent to the database. So if a date string is needed, the best solution is to initialize a `Date` object, which is then translated to the correct UTC string for the database.

## Custom Streaming { #custom-streaming-beta }

When using [Media Data](../guides/providing-services#serving-media-data), the Node.js runtime offers the possibility to return a custom stream object as the response to `READ` requests like `GET /Books/coverImage`.

Example:

```js
const { Readable } = require('stream') // import needed to construct the custom stream

srv.on('READ', 'Books', (req, next) => {
  if (coverImageIsRequested) {
    const readable = new Readable()
    return {
      value: readable,
      $mediaContentType: 'image/jpeg',
      $mediaContentDispositionFilename: 'cover.jpg', // > optional
      $mediaContentDispositionType: 'inline' // > optional
    }
  }
  return next()
})
```

In the returned object, `value` is an instance of [stream.Readable](https://nodejs.org/api/stream.html#class-streamreadable) and the properties `$mediaContentType`, `$mediaContentDispositionFilename`, and `$mediaContentDispositionType` are used to set the respective headers.

## Custom $count { #custom-count }

When you write custom `READ` on-handlers, you should also support requests that contain `$count`, such as `GET /Books/$count` or `GET /Books?$count=true`. For more details, consider the following example:

```js
srv.on('READ', 'Books', function (req) {
  // simple '/$count' request
  if (req.query.SELECT.columns?.length === 1 && req.query.SELECT.columns[0].as === '$count')
    return [{ $count: 100 }]

  // support other '/$count' requests ...
  const resultSet = [ ... ]

  // request contains $count=true
  if (req.query.SELECT.count === true) resultSet.$count = 100

  return resultSet
})
```

# CAP Service SDK for Java Reference Documentation { .subtitle}
# Getting Started

How to start a new CAP Java project and how to run it locally.

## Introduction

The CAP Java SDK enables developing CAP applications in Java. While the [SAP Business Application Studio](https://help.sap.com/products/SAP%20Business%20Application%20Studio/9d1db9835307451daa8c930fbd9ab264/84be8d91b3804ab5b0581551d99ed24c.html) provides excellent support to develop CAP Java applications, you can also develop locally with Visual Studio Code.

The CAP Java SDK supports lean application design through its modular architecture: you pick the required features and add them to your application dependencies on demand. It enables local development by supporting in-memory or file-based SQLite databases. At the same time, the CAP Java SDK enables switching to a productive environment, using, for example, SAP HANA as the database, simply by changing the application deployment configuration.

If you use Spring Boot, you'll feel right at home when using the CAP Java SDK, as the framework integrates with Spring Boot features like transaction handling, auto-wiring, and test support. While the CAP Java SDK is framework agnostic, it's also possible to develop plain Java applications or even integrate with other frameworks.

The CAP Java SDK comes with an OData V4 protocol adapter, but its design is open: more protocol adapters can be added in the future, and applications can even provide custom protocol adapters. It supports SAP BTP features like authentication and authorization based on XSUAA tokens. But you aren't locked into SAP BTP when using a CAP Java application.

Excited? The following sections describe how to set up a development environment to get you started.

## Setting Up Local Development { #local}

This section describes the prerequisites and tools to build a CAP application locally.

1. Install the CDS tools (`cds-dk`) by following the steps in the central *[Getting Started](../get-started/#setup)* guide.

2. Install a Java VM.
Java 17 or higher is required. For example, [download](https://github.com/SAP/SapMachine/releases/latest) and [install](https://github.com/SAP/SapMachine/wiki/Installation) SapMachine 17.

3. [Install Apache Maven](https://maven.apache.org/download.cgi) (version 3.6.3 or higher is required).

4. Execute the following commands on the command line to check whether the installed tools are set up correctly:

   ```sh
   cds --version
   java --version
   mvn --version
   ```

::: tip
For a preconfigured environment, use [SAP Business Application Studio](../tools/cds-editors#bas), which comes with all the required tools preinstalled. In older workspaces it might be necessary to explicitly set the JDK to version 17 with the command `Java: Set Default JDK`.
:::

## Starting a New Project { #new-project}

Take the following steps to set up a new CAP Java application based on Spring Boot from scratch. As a prerequisite, you've set up your [development environment](#local).

### Run the Maven Archetype { #run-the-cap-java-maven-archetype }

Use the [CAP Java Maven archetype](./developing-applications/building#the-maven-archetype) to bootstrap a new CAP Java project:

```sh
mvn archetype:generate -DarchetypeArtifactId="cds-services-archetype" \
  -DarchetypeGroupId="com.sap.cds" \
  -DarchetypeVersion="RELEASE" \
  -DinteractiveMode=true
```
When prompted, specify the group ID and artifact ID of your application. The artifact ID also specifies the name of your project's root folder that is generated in your current working directory. For all other prompted values, it's enough to simply confirm the defaults.

Alternatively, you can use the CDS tools to bootstrap a Java project:

```sh
cds init --java
```

Afterwards, switch to the new project by calling `cd <PROJECT-ROOT>`. All following steps need to be executed from this directory!

::: tip
You can call `cds help init` for more information on the available options.
:::

### Add a Sample CDS Model

You can use the [CDS Maven plugin](developing-applications/building#cds-maven-plugin) to add a sample CDS model after creating your project. Navigate to the root folder of your CAP Java project and execute the following Maven command:

```sh
mvn com.sap.cds:cds-maven-plugin:add -Dfeature=TINY_SAMPLE
```

### Add Cloud Foundry Target Platform

Following the "[Grow As You Go](../about/#grow-as-you-go)" principle, the generated CAP Java project doesn't contain support for Cloud Foundry as the target platform. To enhance your project with dependencies required for Cloud Foundry, execute the goal `add` of the [CDS Maven plugin](./assets/cds-maven-plugin-site/add-mojo.html){target="_blank"} using the following command:

```sh
mvn com.sap.cds:cds-maven-plugin:add -Dfeature=CF
```

This command adds the following dependency to the pom.xml:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-cloudfoundry</artifactId>
</dependency>
```

::: tip
CAP Java also provides a starter bundle for the SAP BTP Kyma environment. See [CAP Starter Bundles](./developing-applications/building#starter-bundles) for more details.
:::

### Project Layout

The generated project has the following folder structure:

```txt
<PROJECT-ROOT>/
├─ db/
└─ srv/
   ├─ src/main/java/
   ├─ src/gen/java/
   └─ node_modules/
```

The generated folders have the following content:

| Folder | Description |
| --- | --- |
| *db* | Contains content related to your database. A simple CDS domain model is included. |
| *srv* | Contains the CDS service definitions and Java back-end code and the sample service model. |
| *srv/src/main/java* | Contains the Java source code of the `srv/` Maven project. |
| *srv/src/gen/java* | Contains the compiled CDS model and generated [accessor interfaces for typed access](./cds-data#typed-access) after building the project with `mvn compile` once. |
| *node_modules* | Generated when starting the build, containing the dependencies for the CDS tools (unless you specify `-Dcdsdk-global` [when starting the build](#build-and-run)). |

### Add an Integration Test Module (Optional)

Optionally, you can use the [CDS Maven plugin](./developing-applications/building#cds-maven-plugin) to enhance your CAP Java application with an additional Maven module to perform integration tests. To add such a module, go into the root folder of your CAP Java project and execute the following Maven command:

```sh
mvn com.sap.cds:cds-maven-plugin:add -Dfeature=INTEGRATION_TEST
```

This command also creates a new folder *integration-tests/src/test/java*, which contains integration test classes.

| Folder | Description |
| -- | -- |
| *integration-tests/src/test/java* | Contains integration test classes. |

### Build and Run

To build and run the generated project from the command line, execute:

```sh
mvn spring-boot:run
```

::: tip
To test whether the started application is up and running, open [http://localhost:8080](http://localhost:8080) in your browser. Use user [`authenticated`](./security#mock-users) if a username is requested. You don't need to enter a password.
:::

### Supported IDEs

CAP Java projects are best edited in a Java IDE. Leaving CDS support aside, you could use any Java IDE that supports importing Maven projects.
But as CDS modeling and editing is a core part of CAP application development, we strongly recommend using an IDE with CDS support:

* [SAP Business Application Studio](/tools/cds-editors#bas) is a cloud-based IDE with minimal local requirements and footprint. It comes prepackaged with all tools, libraries, and extensions that are needed to develop CAP applications.

* [Visual Studio Code](/tools/cds-editors#vscode) is a free and very widespread code editor and IDE which can be extended with Java and CDS support. It offers first-class CDS language support and solid Java support for many development scenarios.

* [IntelliJ IDEA Ultimate](/tools/cds-editors#intellij) is one of the leading Java IDEs with very powerful debugging, refactoring, and profiling support. Together with the CDS plugin it offers the most powerful support for CAP Java application development.

#### Source Path Configuration and CDS Build

Your IDE might show inline errors indicating missing classes. This happens because the generated Java files are missing. To resolve this, open your terminal and execute `mvn compile` in your project root directory. This performs a full build of your project. It's necessary because, although the IDE can construct the correct class path based on the project's dependencies, it doesn't initiate the CDS build or the subsequent code generation. This is covered as part of the `mvn compile` call.

If you're using JetBrains' IntelliJ IDEA, you need to tell it to use the generated folder `srv/src/gen/java`. Do so by marking the directory as `Generated Sources Root`. You can find this option in IntelliJ's project settings or by right-clicking on the folder and choosing `Mark Directory as`. By doing this, you ensure that the IntelliJ build includes the generated sources in the Java class path.
#### Run and Test the Application Once you've configured your application as described in the previous section, you can run your application in your IDE by starting the `main` method of your project's `Application.java`. Then open the application in your browser at [http://localhost:8080/](http://localhost:8080). ## Sample Application { #sample} Find [here](https://github.com/SAP-samples/cloud-cap-samples-java) the bookshop sample application based on CAP Java. # Versions & Dependencies Learn in this chapter about CAP Java versions and their dependencies. ## Versions { #versions } CAP Java is pretty much aligned with the [Semantic Versioning Specification](https://semver.org). Hence, the version identifier follows the pattern `MAJOR.MINOR.PATCH`: - **Major versions** are delivered every year or even several years and might introduce [incompatible changes](../releases/schedule#cap-java) (for example, `2.0.0`). Upcoming major versions are announced early. - **Minor versions** are delivered on a [monthly basis](/releases/schedule#minor) (for example, `2.7.0` replacing `2.6.4`). New features are announced in the [CAP Release notes](/releases/). - **Patch versions** containing critical bugfixes are delivered [on demand](../releases/schedule#patch) (for example, `2.7.1` replacing `2.7.0`). Patches do not contain new features. Find detailed information about versions and release in the [CAP release schedule](../releases/schedule#cap-java). ::: warning Consume latest versions We strongly recommend to consume the latest minor version on a monthly basis to keep future migration efforts as small as possible. Likewise, we strongly recommend to consume the latest patch version as soon as possible to receive critical bug fixes. ::: ### Active Version { #active-version } New features are developed and delivered in the [active codeline](../releases/schedule#active) of CAP Java only. That means the currently active codeline receives minor version updates as well as patches. 
A new major version opens a new active codeline and the previous one is put into maintenance mode. ### Maintenance Version { #maintenance-version } In the [maintenance codeline](../releases/schedule#maintenance-status) of CAP Java, only patch versions are delivered. This version provides applications with a longer time horizon for migrating to a new major version.
## Maintain Dependencies { #dependencies }

### Minimum Versions

CAP Java uses various dependencies that are also used by the applications themselves. If the applications decide to manage the versions of these dependencies, it's helpful to know the minimum versions of these dependencies that CAP Java requires. The following table lists these minimum versions for various common dependencies, based on the latest release:

#### Active Version 3.x { #dependencies-version-3 }

| Dependency | Minimum Version | Recommended Version |
| --- | --- | --- |
| JDK | 17 | 21 |
| Maven | 3.6.3 | 3.9.8 |
| @sap/cds-dk | 7 | latest |
| @sap/cds-compiler | 4 | latest |
| Spring Boot | 3.0 | latest |
| XSUAA | 3.0 | latest |
| SAP Cloud SDK | 5.9 | latest |
| Java Logging | 3.7 | latest |

#### Maintenance Version 2.10.x { #dependencies-version-2 }

| Dependency | Minimum Version | Recommended Version |
| --- | --- | --- |
| JDK | 17 | 21 |
| Maven | 3.5.0 | 3.9.8 |
| @sap/cds-dk | 6 | 7 |
| @sap/cds-compiler | 3 | 4 |
| Spring Boot | 3.0 | latest |
| XSUAA | 3.0 | latest |
| SAP Cloud SDK | 4.24 | latest |
| Java Logging | 3.7 | latest |

::: warning
The Cloud SDK BOM `sdk-bom` manages XSUAA until version 2.x, which isn't compatible with CAP Java 2.x. You have two options:

* Replace `sdk-bom` with `sdk-modules-bom`, which [manages all Cloud SDK dependencies but not the transitive dependencies.](https://sap.github.io/cloud-sdk/docs/java/guides/manage-dependencies#the-sap-cloud-sdk-bill-of-material)
* Or, add [dependency management for XSUAA](https://github.com/SAP/cloud-security-services-integration-library#installation) before Cloud SDK's `sdk-bom`.
:::

### Consistent Versions

Some SDKs such as CAP Java or Cloud SDK provide a bunch of artifacts with a common version. Mixing different versions of SDK artifacts often results in compiler errors or unpredictable runtime issues.
To help keep the client configuration consistent, SDKs usually provide bill of material (BOM) poms as an optional Maven dependency. We strongly recommend importing available BOM poms. The following example shows how BOM poms of `com.sap.cds`, `com.sap.cloud.sdk`, and `com.sap.cloud.security` can be added to the project's parent `pom.xml`:

::: code-group
```xml [pom.xml]
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.sap.cds</groupId>
      <artifactId>cds-services-bom</artifactId>
      <version>${cds.services.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.sap.cloud.sdk</groupId>
      <artifactId>sdk-modules-bom</artifactId>
      <version>${cloud.sdk.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.sap.cloud.security</groupId>
      <artifactId>java-bom</artifactId>
      <version>${xsuaa.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```
:::

### Update Versions

Regular [updates and patches](#versions) of CAP Java keep your project in sync with the most recent Free and Open Source Software (FOSS) dependency versions. However, a security vulnerability could be published by one of your dependencies in between CAP Java releases and in turn prevent your application from being released due to failing security scans. In this case, applications have the following options:

- Wait for the next monthly CAP Java release with fixed dependencies.
- Specify a secure version of the vulnerable dependency explicitly. Do that at the beginning of the `dependencyManagement` section of the top-level *pom.xml* file of your application:

::: code-group
```xml [pom.xml]
[…]
```
:::

Make sure that the updated version is compatible. When consuming a new CAP Java version, this extra dependency can be removed again.
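For illustration, such an explicit override could look like the following sketch. The artifact (`jackson-databind`) and the version shown are hypothetical examples, not a recommendation:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Hypothetical example: pin a fixed version of a vulnerable transitive dependency -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.17.2</version>
    </dependency>
    <!-- BOM imports (for example, cds-services-bom) follow below -->
  </dependencies>
</dependencyManagement>
```

Maven resolves the first matching entry in `dependencyManagement`, which is why the override has to precede the BOM imports.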
# Working with CDS Models The Model Reflection API is a set of interfaces, which provide the ability to introspect a CDS model and retrieve details on the services, types, entities, and their elements that are defined by the model. ## The CDS Model The interface `CdsModel` represents the complete CDS model of the CAP application and is the starting point for the introspection. The `CdsModel` can be obtained from the `EventContext`: ```java import com.sap.cds.services.handler.annotations.On; import com.sap.cds.services.EventContext; import com.sap.cds.reflect.CdsModel; @On(event = "READ", entity = "CatalogService.Books") public void readBooksVerify(EventContext context) { CdsModel model = context.getModel(); ... } ``` or, in Spring, be injected: ```java @Autowired CdsModel model; ``` On a lower level, the `CdsModel` can be obtained from the `CdsDataStoreConnector`, or using the `read` method from a [CSN](../cds/csn) String or [InputStream](https://docs.oracle.com/javase/8/docs/api/java/io/InputStream.html): ```java InputStream csnJson = ...; CdsModel model = CdsModel.read(csnJson); ``` ::: tip Instead of bare string literals, you can also use auto-generated string constants and interfaces in event handlers. [Learn more about event handlers.](./event-handlers/){.learn-more} ::: ## Examples The following examples are using this CDS model: ```cds namespace my.bookshop; entity Books { title : localized String(111); author : Association to Authors; ... } entity Authors { key ID : Integer; ... } entity Orders { OrderNo : String @title:'Order Number'; ... 
}
```

### Get and Inspect an Element of an Entity

In this example, we introspect the details of the type of the element `title` of the entity `Books`:

```java
CdsEntity books = model.getEntity("my.bookshop.Books");
CdsElement title = books.getElement("title");
boolean key = title.isKey();             // false
boolean localized = title.isLocalized(); // true
CdsType type = title.getType();          // CdsSimpleType

if (type.isSimple()) {  // true
  CdsSimpleType simple = type.as(CdsSimpleType.class);

  String typeName = simple.getQualifiedName(); // "cds.String"
  CdsBaseType baseType = simple.getType();     // CdsBaseType.STRING
  Class<?> javaType = simple.getJavaType();    // String.class
  Integer length = simple.get("length");       // 111
}
```

### Get and Inspect All Elements of an Entity

```java
CdsEntity books = model.getEntity("my.bookshop.Books");
Stream<CdsElement> elements = books.elements();
```

The method `elements()` returns a stream of all elements of the given entity, structured type, or event. It's important to note that the Model Reflection API doesn't guarantee the element order to be exactly like in the source CSN document. However, the order is guaranteed to be stable during multiple consecutive model reads.

::: tip
In case the element names are known beforehand, it's recommended to access them by name through the `getElement(String name)` method.
:::

### Get and Inspect an Association Element of an Entity

We can also analyze the details of an association:

```java
CdsElement authorElement = book.getAssociation("author");
CdsAssociationType toAuthor = authorElement.getType();
CdsEntity author = toAuthor.getTarget();        // Entity: my.bookshop.Authors

boolean association = toAuthor.isAssociation(); // true
boolean composition = toAuthor.isComposition(); // false

Cardinality cardinality = toAuthor.getCardinality();
String sourceMax = cardinality.getSourceMax();  // "*"
String targetMin = cardinality.getTargetMin();  // "0"
String targetMax = cardinality.getTargetMax();  // "1"

Stream<CdsElement> keys = toAuthor.keys();      // Stream: [ ID ]
Optional<CqnPredicate> onCondition = toAuthor.onCondition(); // empty
```

### Find an Annotation by Name and Get Its Value

Here, we programmatically check if the element `OrderNo` carries the annotation `title` and set the value of `displayName` depending on the presence of the annotation:

```java
CdsEntity order = model.getEntity("my.bookshop.Orders");
CdsElement orderNo = order.getElement("OrderNo");

Optional<CdsAnnotation<String>> annotation = orderNo
    .findAnnotation("title");
String displayName = annotation.map(CdsAnnotation::getValue)
    .orElse(orderNo.getName()); // "Order Number"
```

### Filter a Stream of Entities by Namespace

The static method `com.sap.cds.reflect.CdsDefinition.byNamespace` allows you to create a predicate to filter a stream of definitions (for example, entities, elements, ...) for definitions contained in a given namespace:

```java
import static com.sap.cds.reflect.CdsDefinition.byNamespace;
...

Stream<CdsEntity> entities = model.entities()
    .filter(byNamespace("my.bookshop"));
```

### Get All Elements with Given Annotation

The static method `com.sap.cds.reflect.CdsAnnotatable.byAnnotation` allows you to create a [predicate](https://docs.oracle.com/javase/8/docs/api/java/util/function/Predicate.html) to filter a stream of annotatable model components (for example, entities, elements, ...)
for components that carry a given annotation:

```java
import static com.sap.cds.reflect.CdsAnnotatable.byAnnotation;
...

CdsEntity order = model.getEntity("my.bookshop.Orders");
Stream<CdsElement> elements = order.elements()
    .filter(byAnnotation("title"));
```

## Feature Toggles

### Feature Toggles and Active Feature Set

[Feature toggles](../guides/extensibility/feature-toggles) allow you to dynamically enable or disable parts of an application at runtime, or to alter its behavior depending on features. Feature toggles can be used for different purposes. They can be used as release toggles to selectively enable some features for some customers only, based on a deployment vector. Or they can be used as runtime toggles to dynamically enable or disable selected features for selected users.

CAP Java does not make any assumptions about _how_ the set of enabled features (_active feature set_) is determined. This could be based on user ID, user role, user tenant, or any other information, such as an HTTP header or an external feature toggle service.

### Features in CDS Models

Features are modeled in CDS by dividing up CDS code concerning separate features into separate subfolders of a common `fts` folder of your project, as shown by the following example:

```txt
├─ [db]
│   ├─ my-model.cds
│   └─ ...
├─ [srv]
│   ├─ my-service.cds
│   └─ ...
└─ [fts]
    ├─ [X]
    │   ├─ model.cds
    │   └─ ...
    ├─ [Y]
    │   ├─ feature-model.cds
    │   └─ ...
    └─ [Z]
        ├─ wrdlbrmpft.cds
        └─ ...
```

In this example, three _CDS features_ `X`, `Y` and `Z` are defined. Note that the name of a feature (by which it is referenced in a _feature toggle_) corresponds to the name of the feature's subfolder. A CDS feature can contain arbitrary CDS code. It can either define new entities or extensions of existing entities. The database schema resulting from the CDS build at design time contains *all* features. This is required to serve the base model and all combinations of features at runtime.
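As an illustrative sketch (the entity, element, and file names are hypothetical), a feature like `X` could extend a base entity from the `db` folder:

```cds
// fts/X/model.cds — hypothetical feature content
using { my.bookshop.Books } from '../../db/my-model';

extend Books with {
  isbn : String(13); // element only visible when feature X is active
}
```

Since the build compiles all features into the database schema, the `isbn` column always exists; whether it is part of the served model depends on the active feature set at runtime.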
### The Model Provider Service

![This graphic is explained in the accompanying text.](../assets/feature-toggles.drawio.svg)

At runtime, each request operates on an effective CDS model that reflects the active feature set. To obtain the effective model, the runtime delegates to the *Model Provider Service*, which uses the feature set to resolve the CDS model code of the active features located in the `fts` folder, and compiles effective CSN and EDMX models for the current request to operate on.

::: warning
The active feature set can't be changed within an active transaction.
:::

### Toggling SAP Fiori UI Elements

In an [SAP Fiori elements](https://experience.sap.com/fiori-design-web/smart-templates/) application, the UI is captured with annotations in the CDS model. Hence, toggling of [SAP Fiori elements annotations](../advanced/fiori#what-are-sap-fiori-annotations) is already covered by the above concept: To enable toggling of such annotations (and thus UI elements), it's required that the EDMX returned by the `$metadata` request respects the feature vector. This is automatically achieved by maintaining different model variants according to activated features, as described in the previous section.

### Features on the Database

As CDS features are reflected in database artifacts, the database needs to be upgraded when new features are _introduced_ in the CDS model. If a feature is _enabled_, the corresponding database artifacts are already present and no further database change is required. Only when a particular feature is turned on, the application is allowed to access the corresponding part of the database schema. The CAP framework ensures this by exposing only the CDS model that corresponds to a certain feature vector. The CAP framework accesses database entities based on the currently active CDS model only.
This applies in particular to `SELECT *` requests, for which the CAP framework returns all columns defined in the current view on the model, and *not* all columns persisted on the database.

### Feature Toggles Info Provider

In CAP Java, the [active feature set](#feature-toggles-and-active-feature-set) in a particular request is represented by the [`FeatureTogglesInfo`](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/FeatureTogglesInfo.html). On each request, the runtime uses the [`FeatureTogglesInfoProvider`](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/FeatureTogglesInfoProvider.html) to create the request-dependent `FeatureTogglesInfo` object, which is exposed in the current `RequestContext` by [`getFeatureTogglesInfo()`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/RequestContext.html#getFeatureTogglesInfo--). By default, all features are deactivated (`FeatureTogglesInfo` represents an empty set).

#### From Mock User Configuration

If mock users are used, a default [`FeatureTogglesInfoProvider`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/FeatureTogglesInfoProvider.html) is registered, which assigns feature toggles to users based on the [mock user configuration](./security#mock-users). Feature toggles can be configured per user or [per tenant](./security#mock-tenants).
The following configuration enables the feature `wobble` for the user `Bob`, while for `Alice` the features `cruise` and `parking` are enabled:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  security:
    mock:
      users:
        - name: Bob
          tenant: CrazyCars
          features:
            - wobble
        - name: Alice
          tenant: SmartCars
          features:
            - cruise
            - parking
```
:::

#### Custom Implementation

Applications can implement a custom [`FeatureTogglesInfoProvider`](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/FeatureTogglesInfoProvider.html) that computes a `FeatureTogglesInfo` based on the request's [`UserInfo`](https://www.javadoc.io/static/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/UserInfo.html) and [`ParameterInfo`](https://www.javadoc.io/static/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/ParameterInfo.html). The following example demonstrates a feature toggles info provider that enables the feature `isbn` if the user has the `expert` role:

```java
@Component
public class DemoFTProvider implements FeatureTogglesInfoProvider {

  @Override
  public FeatureTogglesInfo get(UserInfo userInfo, ParameterInfo paramInfo) {
    Map<String, Boolean> featureToggles = new HashMap<>();
    if (userInfo.hasRole("expert")) {
      featureToggles.put("isbn", true);
    }
    return FeatureTogglesInfo.create(featureToggles);
  }

}
```

This feature toggles provider is automatically registered and used as a Spring bean by means of the annotation `@Component`. At each request, the CAP Java runtime calls the method `get()`, which determines the active features based on the logged-in user's roles.

#### Defining Feature Toggles for Internal Service Calls

It is not possible to redefine the feature set within an active request context, as this would result in a model change. However, if there is no active request context, such as in a new thread, you can specify the feature set while [Defining Request Contexts](./event-handlers/request-contexts#defining-requestcontext).
In the following example, a `Callable` is executed in a new thread, resulting in an initial request context. The feature toggles defined in the request context definition are then used for the statement execution:

```java
@Autowired
CdsRuntime runtime;
@Autowired
PersistenceService db;

FeatureTogglesInfo isbn = FeatureTogglesInfo.create(Collections.singletonMap("isbn", true));
...
Future<Result> result = Executors.newSingleThreadExecutor().submit(() -> {
  return runtime.requestContext().featureToggles(isbn).run(rc -> {
    return db.run(Select.from(Books_.CDS_NAME));
  });
});
```
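Conceptually, a feature toggles provider is just a pure function from user attributes to a set of enabled features. The following dependency-free sketch (plain Java, no CAP APIs; the `computeToggles` helper and its `roles` parameter are illustrative assumptions) mirrors the role check of the `DemoFTProvider` above:

```java
import java.util.Map;
import java.util.Set;

public class ToggleSketch {

    // Mirrors the provider above: enable "isbn" only for users with the "expert" role.
    static Map<String, Boolean> computeToggles(Set<String> roles) {
        return Map.of("isbn", roles.contains("expert"));
    }

    public static void main(String[] args) {
        System.out.println(computeToggles(Set.of("expert"))); // {isbn=true}
        System.out.println(computeToggles(Set.of("viewer"))); // {isbn=false}
    }
}
```

Because the toggle computation is a pure function of the user, it is easy to unit-test independently of the CAP runtime.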
### Using Feature Toggles in Custom Code

Custom code that depends on a feature toggle can evaluate the [`FeatureTogglesInfo`](https://www.javadoc.io/static/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/FeatureTogglesInfo.html) to determine if the feature is enabled. The `FeatureTogglesInfo` can be obtained from the [RequestContext](./event-handlers/request-contexts) or `EventContext` by the `getFeatureTogglesInfo()` method or by [dependency injection](./spring-boot-integration#exposed-beans). This is shown in the following example, where custom code depends on the feature `discount`:

```java
@After
protected void subtractDiscount(CdsReadEventContext context) {
  if (context.getFeatureTogglesInfo().isEnabled("discount")) {
    // Custom coding executed when feature "discount" is active
    // ...
  }
}
```

# Working with CDS Data

This section describes how CDS data is represented and used in CAP Java.

## Predefined Types

The [predefined CDS types](../cds/types) are mapped to Java types as follows:

| CDS Type           | Java Type              | Remark                                                                   |
|--------------------|------------------------|--------------------------------------------------------------------------|
| `cds.UUID`         | `java.lang.String`     |                                                                          |
| `cds.Boolean`      | `java.lang.Boolean`    |                                                                          |
| `cds.UInt8`        | `java.lang.Short`      |                                                                          |
| `cds.Int16`        | `java.lang.Short`      |                                                                          |
| `cds.Int32`        | `java.lang.Integer`    |                                                                          |
| `cds.Integer`      | `java.lang.Integer`    |                                                                          |
| `cds.Int64`        | `java.lang.Long`       |                                                                          |
| `cds.Integer64`    | `java.lang.Long`       |                                                                          |
| `cds.Decimal`      | `java.math.BigDecimal` |                                                                          |
| `cds.DecimalFloat` | `java.math.BigDecimal` | deprecated                                                               |
| `cds.Double`       | `java.lang.Double`     |                                                                          |
| `cds.Date`         | `java.time.LocalDate`  | date without a time-zone (year-month-day)                                |
| `cds.Time`         | `java.time.LocalTime`  | time without a time-zone (hour-minute-second)                            |
| `cds.DateTime`     | `java.time.Instant`    | instant on the time-line with _sec_ precision                            |
| `cds.Timestamp`    | `java.time.Instant`    | instant on the time-line with _µs_ precision                             |
| `cds.String`       | `java.lang.String`     |                                                                          |
| `cds.LargeString`  | `java.lang.String`     | `java.io.Reader` (1) if annotated with `@Core.MediaType`                 |
| `cds.Binary`       | `byte[]`               |                                                                          |
| `cds.LargeBinary`  | `byte[]`               | `java.io.InputStream` (1) if annotated with `@Core.MediaType`            |
| `cds.Vector`       | `com.sap.cds.CdsVector`| for [vector embeddings](#vector-embeddings)                              |

### SAP HANA-Specific Data Types

To facilitate using legacy CDS models, the following [SAP HANA-specific data types](../advanced/hana#hana-types) are supported:

| CDS Type            | Java Type              | Remark                                                   |
| ------------------- | ---------------------- | -------------------------------------------------------- |
| `hana.TINYINT`      | `java.lang.Short`      |                                                          |
| `hana.SMALLINT`     | `java.lang.Short`      |                                                          |
| `hana.SMALLDECIMAL` | `java.math.BigDecimal` |                                                          |
| `hana.REAL`         | `java.lang.Float`      |                                                          |
| `hana.CHAR`         | `java.lang.String`     |                                                          |
| `hana.NCHAR`        | `java.lang.String`     |                                                          |
| `hana.VARCHAR`      | `java.lang.String`     |                                                          |
| `hana.CLOB`         | `java.lang.String`     | `java.io.Reader` (1) if annotated with `@Core.MediaType` |
| `hana.BINARY`       | `byte[]`               |                                                          |

> (1) Although the API to handle large objects is the same for every database, the streaming feature is supported (and tested) only on **SAP HANA**, **PostgreSQL**, and **H2**. See section [Database Support in Java](./cqn-services/persistence-services#database-support) for more details on database support and limitations.

::: warning
The framework isn't responsible for closing the stream when writing to the database. You decide when the stream is to be closed. If you forget to close the stream, the open stream can lead to a memory leak.
:::

These types are used for the values of CDS elements with primitive type. In the [Model Reflection API](./reflection-api), they're represented by the enum [CdsBaseType](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsBaseType.html).

## Structured Data

In CDS, structured data is used as payload of *Insert*, *Update*, and *Upsert* statements. The query result of *Select* may also be structured.
CAP Java represents data of entities and structured types as `Map<String, Object>` and provides the `CdsData` interface as an extension of `Map<String, Object>` with additional convenience methods. In the following, we use this CDS model:

```cds
entity Books {
  key ID : Integer;
  title  : String;
  author : Association to one Authors;
}

entity Authors {
  key ID : Integer;
  name   : String;
  books  : Association to many Books on books.author = $self;
}

entity Orders {
  key ID : Integer;
  header : Composition of one OrderHeaders;
  items  : Composition of many OrderItems;
}

entity OrderHeaders {
  key ID : Integer;
  status : String;
}

aspect OrderItems {
  key ID : Integer;
  book   : Association to one Books;
}
```

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples-java/blob/5396b0eb043f9145b369371cfdfda7827fedd039/db/schema.cds#L5-L22){ .learn-more}

In this model, there is a bidirectional many-to-one association between `Books` and `Authors`, which is managed by the `Books.author` association. The `Orders` entity owns the composition `header`, which relates it to the `OrderHeaders` entity, and the composition `items`, which relates the order to the `OrderItems`. The items are modeled using a managed composition of aspects.

::: tip
Use [Managed Compositions of Aspects](../guides/domain-modeling#composition-of-aspects) to model unidirectional one-to-many compositions.
:::

### Relationships to other entities

Relationships to other entities are modeled as associations or compositions. While _associations_ capture relationships between entities, _compositions_ constitute document structures through 'contained-in' relationships.

### Entities and Structured Types

Entities and structured types are represented in Java as a `Map<String, Object>` that maps the element names to the element values.
The following example shows JSON data and how it can be constructed in Java:

```json
{
  "ID" : 97,
  "title" : "Dracula"
}
```

```java
Map<String, Object> book = new HashMap<>();
book.put("ID", 97);
book.put("title", "Dracula");
```

> Data of structured types and entities can be sparsely populated.

### Nested Structures and Associations

Nested structures and single-valued associations are represented by elements where the value is structured. In Java, the value type for such a representation is a map. The following example shows JSON data and how it can be constructed in Java:

```json
{
  "ID" : 97,
  "author" : { "ID" : 23, "name" : "Bram Stoker" }
}
```

Using plain maps:

```java
Map<String, Object> author = new HashMap<>();
author.put("ID", 23);
author.put("name", "Bram Stoker");

Map<String, Object> book = new HashMap<>();
book.put("ID", 97);
book.put("author", author);
```

Using the `putPath` method of `CdsData`:

```java
CdsData book = Struct.create(CdsData.class);
book.put("ID", 97);
book.putPath("author.ID", 23);
book.putPath("author.name", "Bram Stoker");
```

Using the generated [accessor interfaces](#generated-accessor-interfaces):

```java
Authors author = Authors.create();
author.setId(23);
author.setName("Bram Stoker");

Books book = Books.create();
book.setId(97);
book.setAuthor(author);
```

A [to-many association](../cds/cdl#to-many-associations) is represented by a `List<Map<String, Object>>`. The following example shows JSON data and how it can be constructed in Java:

```json
{
  "ID" : 23,
  "name" : "Bram Stoker",
  "books" : [
    { "ID" : 97, "title" : "Dracula" },
    { "ID" : 98, "title" : "Miss Betty" }
  ]
}
```

```java
Map<String, Object> book1 = new HashMap<>();
book1.put("ID", 97);
book1.put("title", "Dracula");

Map<String, Object> book2 = new HashMap<>();
book2.put("ID", 98);
book2.put("title", "Miss Betty");

Map<String, Object> author = new HashMap<>();
author.put("ID", 23);
author.put("name", "Bram Stoker");
author.put("books", Arrays.asList(book1, book2));
```

## CDS Data

In CAP Java, data is represented as maps.
To simplify data access in custom code, CAP Java additionally provides generated [accessor interfaces](#typed-access), which extend [CdsData](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/CdsData.html), enhancing the `Map` interface with path access to nested data and built-in serialization to JSON.

![This graphic is explained in the accompanying text.](./assets/accessor.drawio.svg)

The `Row`s of a [query result](./working-with-cql/query-execution#result) as well as the [generated accessor interfaces](#generated-accessor-interfaces) already extend `CdsData`. Using the helper class [Struct](#struct), you can extend any `Map<String, Object>` with the `CdsData` interface:

```java
Map<String, Object> map = new HashMap<>();
CdsData data = Struct.access(map).as(CdsData.class);
```

Or create an empty `CdsData` map using `Struct.create`:

```java
CdsData data = Struct.create(CdsData.class);
```

### Path Access

Manipulate deeply nested data using `CdsData.putPath`:

```java
data.putPath("author.name", "Bram Stoker");
```

This results in a nested data structure: `{ "author" : { "name" : "Bram Stoker" } }`. The path access in `putPath` is null-safe; nested maps are created on the fly if required.

Read nested data using `CdsData.getPath`:

```java
String authorName = data.getPath("author.name");
```

To check if the data contains a value in a nested map with a specific path, use `containsPath`:

```java
boolean b = data.containsPath("author.name");
```

To do a deep remove, use `removePath`:

```java
String authorName = data.removePath("author.name");
```

Empty nested maps are automatically removed by `removePath`.

::: tip
Use path access methods of `CdsData` to conveniently manipulate nested data structures.
:::

### Serialization

CDS data has built-in serialization to JSON, which is helpful for debugging:

```java
CdsData person = Struct.create(CdsData.class);
person.put("salutation", "Mr.");
person.putPath("name.first", "Frank"); // path access

person.toJson(); // { "salutation" : "Mr.", "name" : { "first" : "Frank" } }
```

::: warning
Avoid cyclic relationships between CdsData objects when using toJson.
:::

## Vector Embeddings { #vector-embeddings }

In CDS, [vector embeddings](../guides/databases-hana#vector-embeddings) are stored in elements of type `cds.Vector`:

```cds
entity Books : cuid { // [!code focus]
  title       : String(111);
  description : LargeString;  // [!code focus]
  embedding   : Vector(1536); // vector space w/ 1536 dimensions // [!code focus]
} // [!code focus]
```

In CAP Java, vector embeddings are represented by the `CdsVector` type, which allows a unified handling of different vector representations, such as `float[]` and `String`:

```java
// Vector embedding of text, for example, from SAP GenAI Hub or via LangChain4j
float[] embedding = embeddingModel.embed(bookDescription).content().vector();

CdsVector v1 = CdsVector.of(embedding); // float[] format
CdsVector v2 = CdsVector.of("[0.42, 0.73, 0.28, ...]"); // String format
```

You can use the functions `CQL.cosineSimilarity` or `CQL.l2Distance` (Euclidean distance) in queries to compute the similarity or distance of embeddings in the vector space.
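For intuition, cosine similarity measures the angle between two vectors: 1 means the same direction, 0 means orthogonal. The following dependency-free sketch (plain Java; in CAP the database computes this, so this is only an illustration of the formula `CQL.cosineSimilarity` evaluates, not CAP's implementation) shows the computation:

```java
public class CosineSimilarity {

    // cos(a, b) = (a · b) / (|a| * |b|)
    static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        float[] v1 = { 1f, 0f };
        float[] v2 = { 0f, 1f };
        System.out.println(cosineSimilarity(v1, v1)); // 1.0 (identical direction)
        System.out.println(cosineSimilarity(v1, v2)); // 0.0 (orthogonal)
    }
}
```

A threshold such as `gt(0.9)` in the queries below therefore selects embeddings pointing in nearly the same direction as the search vector.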
To use vector embeddings in functions, wrap them using `CQL.vector`:

```java
CqnVector v = CQL.vector(embedding);

Result similarBooks = service.run(Select.from(BOOKS).where(b ->
  CQL.cosineSimilarity(b.embedding(), v).gt(0.9))
);
```

You can also use parameters for vectors in queries:

```java
var similarity = CQL.cosineSimilarity(CQL.get(Books.EMBEDDING), CQL.param(0).type(VECTOR));

CqnSelect query = Select.from(BOOKS)
  .columns(b -> b.title(), b -> similarity.as("similarity"))
  .where(b -> b.ID().ne(bookId).and(similarity.gt(0.9)))
  .orderBy(b -> b.get("similarity").desc());

Result similarBooks = db.run(query, CdsVector.of(embedding));
```

In CQL queries, elements of type `cds.Vector` are not included in select _all_ queries. They must be explicitly added to the select list:

```java
CdsVector embedding = service.run(Select.from(BOOKS).byId(101)
  .columns(b -> b.embedding())).single(Books.class).getEmbedding();
```

## Data in CDS Query Language (CQL)

This section shows examples using structured data in [CQL](../cds/cql) statements.

### Deep Inserts through Compositions and Cascading Associations

*Deep Inserts* create new target entities along compositions and associations that [cascade](./working-with-cql/query-execution#cascading-over-associations) the insert operation. In this example, an order with a header in status 'open' is created via a deep insert along the `header` composition:

```java
OrderHeaders header = OrderHeaders.create();
header.setId(11);
header.setStatus("open");

Orders order = Orders.create();
order.setId(1);
order.setHeader(header);

Insert insert = Insert.into(ORDERS).entry(order);
```

### Setting Managed Associations to Existing Target Entities

If you're using associations that don't cascade the insert and update operations, those associations can only be set to existing target entities.
The data is structured in the same way as in deep inserts, but the insert operation is *flat*: only the target values that are required to set the association are considered; all other target values are ignored:

```java
Authors author = Authors.create();
author.setId(100);

Books book = Books.create();
book.setId(101);
book.setAuthor(author);

Insert insert = Insert.into(BOOKS).entry(book);
```

::: tip
Set managed associations using the _association element_ and avoid using generated foreign key elements.
:::

### Inserts through Compositions via Paths

To insert via compositions, use paths in `into`. In the following example, we add an order item to the set of items of the order 100:

```java
OrderItems orderItem = OrderItems.create();
orderItem.setId(1);
orderItem.putPath("book.ID", 201); // set association to book 201

Insert.into(ORDERS, o -> o.filter(o.ID().eq(100)).items())
  .entry(orderItem);
```

::: tip
Access child entities of a composition using a path expression from the parent entity instead of accessing the child entities directly.
:::

### Select Managed Associations

To select the mapping elements of a managed association, simply add the [association](./working-with-cql/query-api#managed-associations-on-the-select-list) to the select list:

```java
CqnSelect select = Select.from(BOOKS).byId(123)
  .columns(b -> b.author());

Row row = persistence.run(select).single();
Integer authorId = row.getPath("author.ID");
```

::: tip
Don't select from and rely on compiler generated foreign key elements of managed associations.
:::

### Select with Paths in Matching

Paths are also supported in [matching](./working-with-cql/query-api#using-matching), for example, to select all *orders* that are in status *canceled*:

```java
Map<String, Object> order = new HashMap<>();
order.put("header.status", "canceled");

CqnSelect select = Select.from("bookshop.Orders").matching(order);
Result canceledOrders = persistence.run(select);
```

## Typed Access

Representing data as `Map<String, Object>` is flexible and interoperable with other frameworks. But it also has some disadvantages:

* Names of elements are checked only at runtime
* No code completion in the IDE
* No type safety

To simplify the handling of data, CAP Java additionally provides _typed_ access to data through _accessor interfaces_. Let's assume the following data for a book:

```java
Map<String, Object> book = new HashMap<>();
book.put("ID", 97);
book.put("title", "Dracula");
```

You can now either define an accessor interface or use a [generated accessor interface](#generated-accessor-interfaces). If you define an interface yourself, it could look like the following example:

```java
interface Books extends Map<String, Object> {

  @CdsName("ID") // name of the CDS element
  Integer getID();

  String getTitle();
  void setTitle(String title);

}
```

### Struct

At runtime, the `Struct.access` method is used to create a [proxy](#cds-data) that gives typed access to the data through the accessor interface:

```java
import static com.sap.cds.Struct.access;
...

Books book = access(data).as(Books.class);

String title = book.getTitle(); // read the value of the element 'title' from the underlying map
book.setTitle("Miss Betty");    // update the element 'title' in the underlying map

title = (String) data.get("title"); // direct access to the underlying map
title = (String) book.get("title"); // hybrid access to the underlying map through the accessor interface
```

To support _hybrid_ access, like simultaneous typed _and_ generic access, the accessor interface just needs to extend `Map<String, Object>`.
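Under the hood, typed access over a plain map can be implemented with a dynamic proxy that translates getter and setter calls into `Map` operations. The following is a minimal, dependency-free sketch of that idea (illustrative only, not CAP's actual `Struct` implementation; the `access` helper and `Book` interface are assumptions for this example):

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class TypedAccessSketch {

    interface Book {
        String getTitle();
        void setTitle(String title);
    }

    // Create a proxy that maps getX()/setX(v) to map.get("x")/map.put("x", v)
    @SuppressWarnings("unchecked")
    static <T> T access(Map<String, Object> data, Class<T> type) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] { type },
            (proxy, method, args) -> {
                String name = method.getName();
                // derive the element name: strip get/set, lowercase the first character
                String element = Character.toLowerCase(name.charAt(3)) + name.substring(4);
                if (name.startsWith("get")) {
                    return data.get(element);
                } else if (name.startsWith("set")) {
                    data.put(element, args[0]);
                    return null;
                }
                throw new UnsupportedOperationException(name);
            });
    }

    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        Book book = access(data, Book.class);
        book.setTitle("Dracula");
        System.out.println(book.getTitle()); // Dracula
        System.out.println(data);            // {title=Dracula}
    }
}
```

Because the proxy writes through to the underlying map, typed and generic access always see the same data, which is exactly the hybrid-access property described above.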
::: tip
The name of the CDS element referred to by a getter or setter is defined through the `@CdsName` annotation. If the annotation is missing, it's determined by removing the get/set from the method name and lowercasing the first character.
:::

### Generated Accessor Interfaces {#generated-accessor-interfaces}

For all structured types of the CDS model, accessor interfaces can be generated using the [CDS Maven Plugin](./cqn-services/persistence-services#staticmodel). The generated accessor interfaces allow for hybrid access and easy serialization to JSON. By default, the accessor interfaces provide setter and getter methods inspired by the JavaBeans specification.

The following example uses accessor interfaces that have been generated with the default (JavaBeans) style:

```java
Authors author = Authors.create();
author.setName("Emily Brontë");

Books book = Books.create();
book.setAuthor(author);
book.setTitle("Wuthering Heights");
```

Alternatively, you can generate accessor interfaces in _fluent style_. In this mode, the getter methods are named after the property names. To enable fluent chaining, the setter methods return the accessor interface itself. The following is an example of the fluent style:

```java
Authors author = Authors.create().name("Emily Brontë");
Books.create().author(author).title("Wuthering Heights");
```

The generation mode is configured by the property [`<methodStyle>`](./assets/cds-maven-plugin-site/generate-mojo.html#methodstyle) of the goal `cds:generate` provided by the CDS Maven Plugin. The selected `<methodStyle>` affects all entities and event contexts in your services. The default value is `BEAN`, which represents JavaBeans-style interfaces.

When starting a project, decide once on the style of the interfaces that is best for your team and project. We recommend the default JavaBeans style. The way the interfaces are generated determines only how data is accessed by custom code.
It does not affect how the data is represented in memory and handled by the CAP Java runtime. Moreover, it doesn't change how event contexts and entities delivered by CAP look. Such interfaces from CAP are always modelled in the default JavaBeans style.

#### Renaming Elements in Java

Element names used in the CDS model might conflict with reserved [Java keywords](https://docs.oracle.com/javase/specs/jls/se13/html/jls-3.html#jls-3.9) (`class`, `private`, `transient`, etc.). In this case, the `@cds.java.name` annotation must be used to specify an alternative property name that is used for the generation of accessor interfaces and [static model](./cqn-services/persistence-services#staticmodel) interfaces. The element name used as key in the underlying map for [dynamic access](#entities-and-structured-types) isn't affected by this annotation. See the following example:

```cds
entity Equity {
  @cds.java.name : 'clazz'
  class : String;
}
```

```java
interface Equity {

  @CdsName("class")
  String getClazz();

  @CdsName("class")
  void setClazz(String clazz);

}
```

#### Renaming Types in Java

For entities and types, it is recommended to use `@cds.java.this.name` to specify an alternative name for the accessor interfaces and [static model](./cqn-services/persistence-services#staticmodel) interfaces. The annotation `@cds.java.this.name`, in contrast to `@cds.java.name`, is not propagated along projections, includes, or from types to elements.

::: warning Unexpected effects of `@cds.java.name` on entities and types
The annotation propagation behaviour applied to `@cds.java.name` can have unexpected side effects when used to rename entities or types, as it is propagated along projections, includes, or from structured types to (flattened) elements. Nevertheless, it might be useful in simple 1:1-projection scenarios, where the base entity and the projected entity should be renamed in the same way.
:::

See the following example, renaming an entity:

```cds
@cds.java.this.name: 'Book'
entity Books {
  // ...
}
```

```java
@CdsName("Books")
public interface Book extends CdsData {
  // ...
}
```

Here is another example, renaming a type:

```cds
@cds.java.this.name: 'MyName'
type Name {
  firstName: String;
  lastName: String;
}

entity Person {
  publicName: Name;
  secretName: Name;
}
```

```java
@CdsName("Name")
public interface MyName extends CdsData {
  // ...
}

@CdsName("Person")
public interface Person extends CdsData {
  String PUBLIC_NAME = "publicName";
  String SECRET_NAME = "secretName";

  MyName getPublicName();
  void setPublicName(MyName publicName);

  MyName getSecretName();
  void setSecretName(MyName secretName);
}
```

::: details See how the previous example would turn out with `@cds.java.name`

```cds
@cds.java.name: 'MyName'
type Name {
  firstName: String;
  lastName: String;
}

entity Person {
  publicName: Name;
  secretName: Name;
}
```

```java
@CdsName("Name")
public interface MyName extends CdsData {
  // ...
}

@CdsName("Person")
public interface Person extends CdsData {
  String MY_NAME = "publicName";
  String MY_NAME = "secretName";

  MyName getMyName();
  void setMyName(MyName myName);

  MyName getMyName();
  void setMyName(MyName myName);
}
```

Note that the propagated annotation `@cds.java.name` creates attribute and method conflicts in `Person`.
:::

::: warning
This feature requires version 8.2.0 of the [CDS Command Line Interface](/tools/cds-cli).
:::

#### Entity Inheritance in Java

In CDS models, it is allowed to extend a definition (for example, of an entity) with one or more named [aspects](../cds/cdl#aspects). The aspect allows you to define elements or annotations that are common to all extending definitions in one place. This concept is similar to a template or include mechanism, as the extending definitions can redefine the included elements, for example, to change their types or annotations.
Therefore, Java inheritance cannot be used in all cases to mimic the [include mechanism](../cds/cdl#includes). Instead, to establish Java inheritance between the interfaces generated for an aspect and the interfaces generated for an extending definition, the `@cds.java.extends` annotation must be used. This feature comes with many limitations and is not supported in all scenarios.

The `@cds.java.extends` annotation can contain an array of string values, each of which denotes the fully qualified name of a CDS definition (typically an aspect) that is extended. In the following example, the Java accessor interface generated for the `AuthorManager` entity shall extend the accessor interface of the aspect `temporal`, for which the Java accessor interface `cds.gen.Temporal` is generated:

```cds
using { temporal } from '@sap/cds/common';

@cds.java.extends: ['temporal']
entity AuthorManager : temporal {
  key ID : Integer;
  name   : String(30);
}
```

The accessor interface generated for the `AuthorManager` entity is shown in the following sample:

```java
import cds.gen.Temporal;
import com.sap.cds.CdsData;
import com.sap.cds.Struct;
import com.sap.cds.ql.CdsName;
import java.lang.Integer;
import java.lang.String;

@CdsName("AuthorManager")
public interface AuthorManager extends CdsData, Temporal {

  String ID = "ID";
  String NAME = "name";

  @CdsName(ID)
  Integer getId();

  @CdsName(ID)
  void setId(Integer id);

  String getName();
  void setName(String name);

  static AuthorManager create() {
    return Struct.create(AuthorManager.class);
  }

}
```

In CDS, annotations on an entity are propagated to views on that entity. If a view projects different elements, the inheritance relationship defined on the underlying entity via `@cds.java.extends` does not hold for the view. Therefore, the `@cds.java.extends` annotation needs to be overwritten in the view definition.
In the following example, a view with projection is defined on the `AuthorManager` entity, and the inherited annotation is overwritten via `@cds.java.extends : null` to prevent the accessor interface of `AuthorManagerService` from extending the interface generated for `temporal`:

```cds
service Catalogue {
  @cds.java.extends : null
  entity AuthorManagerService as projection on AuthorManager {
    ID,
    name,
    validFrom,
  };
}
```

::: warning
The `@cds.java.extends` annotation does not support extending another entity.
:::

### Creating a Data Container for an Interface

To create an empty data container for an interface, use the `Struct.create` method:

```java
import static com.sap.cds.Struct.create;
...

Book book = create(Book.class);
book.setTitle("Dracula");

String title = book.getTitle(); // title: "Dracula"
```

Generated accessor interfaces contain a static `create` method that further facilitates the usage:

```java
Books book = Books.create();
book.setTitle("Dracula");

String title = book.getTitle(); // title: "Dracula"
```

If the entity has a single key, the generated interface has an additional static `create` method that takes the key as its argument. For example, given that the `Books` entity has the key `ID` of type `String`, you can create the entity and set the key like this:

```java
Books book = Books.create("9780141439846");
String id = book.getId(); // id: "9780141439846"
```

For entities that have more than one key, for example, for draft-enabled entities, the additional `create` method isn't generated and only the default one is available.

### Read-Only Access

Create a typed read-only view using `access`. Calling a setter on the view throws an exception.

```java
import static com.sap.cds.Struct.access;
...
Book book = access(data).asReadOnly(Book.class);
String title = book.getTitle();

book.setTitle("CDS4j"); // throws Exception
```

### Typed Streaming of Data

Data given as `Iterable<Map<String, Object>>` can also be [streamed](https://docs.oracle.com/javase/8/docs/api/?java/util/stream/Stream.html):

```java
import static com.sap.cds.Struct.stream;
...

Stream<Book> books = stream(data).as(Book.class);
List<Book> bookList = books.collect(Collectors.toList());
```

### Typed Access to Query Results

Typed access through custom or generated accessor interfaces eases the [processing of query results](working-with-cql/query-execution#typed-result-processing).

## Data Processor {#cds-data-processor}

The `CdsDataProcessor` allows processing deeply nested maps of CDS data by executing a sequence of registered actions (_validators_, _converters_, and _generators_). Using the `create` method, a new instance of the `CdsDataProcessor` can be created:

```java
CdsDataProcessor processor = CdsDataProcessor.create();
```

_Validators_, _converters_, and _generators_ can be added using the respective `add` method, which takes a filter and an action as arguments; the action is executed when the `filter` matches:

```java
processor.addValidator(filter, action);
```

When calling the `process` method of the `CdsDataProcessor`, the actions are executed sequentially in order of their registration:

```java
List<Map<String, Object>> data; // data to be processed
CdsStructuredType rowType;      // row type of the data

processor.process(data, rowType);
```

The process method can also be used on results of CDS QL statements that have a row type:

```java
CqnSelect query; // some query

Result result = service.run(query);
processor.process(result);
```

### Element Filters

Filters can be defined as lambda expressions on `path`, `element`, and `type`, for instance:

```java
(path, element, type) -> element.isKey() && type.isSimpleType(CdsBaseType.STRING);
```

which matches key elements of type String.
- `path` describes the path from the structured root type of the data to the parent type of `element` and provides access to the data values of each path segment
- `element` is the CDS element
- `type`
  - for primitive elements the element's CDS type
  - for associations the association's target type
  - for arrayed elements the array's item type

### Data Validators

_Validators_ validate the values of CDS elements matching the filter. New _validators_ can be added using the `addValidator` method. The following example adds a _validator_ that logs a warning if the CDS element `quantity` has a negative value. The warning message contains the `path` to the `element`.

```java
processor.addValidator(
  (path, element, type) -> element.getName().equals("quantity"), // filter
  (path, element, value) -> { // validator
    if ((int) value < 0) {
      log.warn("Negative quantity: " + path.toRef());
    }
  });
```

By default, validators are called if the data map _contains_ a value for an element. This can be changed via the _processing mode_, which can be set to:

- `CONTAINS` (default): The validator is called for declared elements for which the data map contains any value, including `null`.
- `NOT_NULL`: The validator is called for declared elements for which the data map contains a non-null value.
- `NULL`: The validator is called for declared elements for which the data map contains `null` or no value mapping, using `ABSENT` as a placeholder value.
- `DECLARED`: The validator is called for all declared elements, using `ABSENT` as a placeholder value for elements with no value mapping.

```java
processor.addValidator(
  (p, e, t) -> e.isNotNull(), // filter
  (p, e, v) -> { // validator
    throw new RuntimeException(e.getName() + " must not be null or absent");
  }, Mode.NULL);
```

### Data Converters

_Converters_ convert or remove values of CDS elements matching the filter and are only called if the data map contains a value for the element matching the filter.
New _converters_ can be added using the `addConverter` method. The following example adds a _converter_ that formats elements with the name `price`:

```java
processor.addConverter(
  (path, element, type) -> element.getName().equals("price"), // filter
  (path, element, value) -> formatter.format(value));         // converter
```

To remove a value from the data, return `Converter.REMOVE`. The following example adds a _converter_ that removes the values of associations and compositions:

```java
processor.addConverter(
  (path, element, type) -> element.getType().isAssociation(), // filter
  (path, element, value) -> Converter.REMOVE);                // remover
```

### Data Generators

_Generators_ generate the values for CDS elements that match the filter and are missing in the data or mapped to `null`. New _generators_ can be added using the `addGenerator` method. The following example adds a UUID generator for elements of type UUID that are missing in the data:

```java
processor.addGenerator(
  (path, element, type) -> type.isSimpleType(UUID),         // filter
  (path, element, isNull) -> isNull ? null : randomUUID()); // generator
```

## Diff Processor

To react to changes in entity data, you need to compare the image of an entity after a certain operation with the image before the operation. To facilitate this task, use the [`CdsDiffProcessor`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/CdsDiffProcessor.html), which is similar to the [Data Processor](/java/cds-data#cds-data-processor). The Diff Processor traverses two images (entity data maps) and allows registering handlers that react to changed values.

Create an instance of the `CdsDiffProcessor` using the `create()` method:

```java
CdsDiffProcessor diff = CdsDiffProcessor.create();
```

You can compare data represented as [structured data](/java/cds-data#structured-data), which is a result of CQN statements or arguments of event handlers.
For a comparison with the `CdsDiffProcessor`, the data maps that are compared need to adhere to the following requirements:

- The data map must include values for all key elements.
- The names in the data map must match the elements of the entity.
- Associations must be represented as [nested structures and associations](/java/cds-data#nested-structures-and-associations) according to the associations' cardinalities. The [delta representation](/java/working-with-cql/query-api#deep-update-delta) of collections is also supported.

Results of CQN statements fulfill these conditions if the type [that comes with the result](/java/working-with-cql/query-execution#introspecting-the-row-type) is used, not the entity type.

To run the comparison, call the `process()` method and provide the new and old image of the data as a `Map<String, Object>` (or a collection of them) and the type of the compared entity:

```java
List<Map<String, Object>> newImage;
List<Map<String, Object>> oldImage;
CdsStructuredType type;

diff.process(newImage, oldImage, type);
```

```java
Result newImage = service.run(Select.from(...));
Result oldImage = service.run(Select.from(...));

diff.process(newImage, oldImage, newImage.rowType());
```

:::tip Comparing draft-enabled entities
If you compare the active image of a draft-enabled entity with the inactive one, make sure that the `IsActiveEntity` values are either absent or the same in both images.
:::

If one of the images is empty, the `CdsDiffProcessor` traverses the existing image, treating it as an addition or removal accordingly.

Changes detected by `CdsDiffProcessor` are reported to one or more visitors implementing the interface [`CdsDiffProcessor.DiffVisitor`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/CdsDiffProcessor.DiffVisitor.html). The visitor is added to the `CdsDiffProcessor` with the `add()` method before starting the processing.
```java
diff.add(new DiffVisitor() {
    @Override
    public void changed(Path newPath, Path oldPath, CdsElement element,
            Object newValue, Object oldValue) {
        // changes
    }

    @Override
    public void added(Path newPath, Path oldPath, CdsElement association,
            Map<String, Object> newValue) {
        // additions
    }

    @Override
    public void removed(Path newPath, Path oldPath, CdsElement association,
            Map<String, Object> oldValue) {
        // removals
    }
});
```

The visitor can be added together with an [element filter](/java/cds-data#element-filters) that limits the subset of changes reported to the visitor.

```java
diff.add(
    new Filter() {
        @Override
        public boolean test(Path path, CdsElement element, CdsType type) {
            return true;
        }
    },
    new DiffVisitor() {
        ...
    }
);
```

You may add as many visitors as you need by chaining the `add()` calls. Each instance of the `CdsDiffProcessor` can have its own set of visitors. If your visitors need to be stateful, prefer one-time disposable objects, as `CdsDiffProcessor` does not manage their state.

All values are compared using the standard Java `equals()` method, including elements with a structured or arrayed type.

### Implementing a DiffVisitor

Additions and removals in the entity image are reported as calls to the methods `added()` or `removed()`. The called method always receives the complete added or removed content for the entity or an association.

The methods `added()` and `removed()` have the following arguments:

- `newPath` and `oldPath` as instances of [`Path`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Path.html) reflecting the new and old image of the entity.
- `association` as an instance of [`CdsElement`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsElement.html), given that an association is present.
- The changed data as a `Map<String, Object>`, as either the `newValue` or `oldValue`.
The instances of `Path` represent the placement of the changed item within the whole entity, as a prefix to the data that is either added or removed. While these paths always have the same structure, `oldPath` and `newPath` can have empty values, which represent the absence of data. The `association` value for `added()` and `removed()` is only provided if data is compared along associations or compositions; a null value indicates that a complete entity is added or removed.

Let's break it down with some examples. Given that we have a collection of books, each with a composition of many editions:

+ When a new book is added to the collection, the method `added()` is called once with a `Path` instance with one segment representing the book as the `newPath`; `association` is null and `newValue` is the content of the book.

The old image (primary keys are omitted for brevity) of the book collection is:

```json
[
  { "title": "Wuthering Heights", "editions": [] }
]
```

The new image of the book collection is:

```json
[
  { "title": "Wuthering Heights", "editions": [] },
  { "title": "Catweazle", "editions": [] }
]
```

The content of the entity that the visitor observes in the `added()` method as `newValue`:

```json
{ "title": "Catweazle", "editions": [] }
```

`association` is null in this exact case.

+ When new editions are added to two of the books in the collection, one per book, the method `added()` is called twice with a `Path` instance with two segments representing the book and the association to the edition. The association element is the value of the argument `association`, and the data of the edition is the `newValue`. In this case, each added edition is accompanied by the content of the respective book.
The old image of the book collection is:

```json
[
  { "title": "Wuthering Heights", "editions": [] },
  { "title": "Catweazle", "editions": [] }
]
```

The new image of the book collection is:

```json
[
  {
    "title": "Wuthering Heights",
    "editions": [ { "title": "Wuthering Heights: 100th Anniversary Edition" } ]
  },
  {
    "title": "Catweazle",
    "editions": [ { "title": "Catweazle: Director's Cut" } ]
  }
]
```

In the first `added()` call, the first added edition is available and the paths have the first book as their root:

```json
{ "title": "Wuthering Heights: 100th Anniversary Edition" }
```

In the second call, the second added edition is available, with the second book as the root of the path:

```json
{ "title": "Catweazle: Director's Cut" }
```

+ Extending the previous example, two new editions are added to one of the books: the `added()` method is called once per added edition. `Path` instances with the same book (same primary key) tell you which edition belongs to which book.

The old image is the same as before; the new image of the book collection is:

```json
[
  {
    "title": "Wuthering Heights",
    "editions": [ { "title": "Wuthering Heights: 100th Anniversary Edition" } ]
  },
  {
    "title": "Catweazle",
    "editions": [
      { "title": "Catweazle: Director's Cut" },
      { "title": "Catweazle: Complete with Extras" }
    ]
  }
]
```

The first `added()` call observes the new edition of the first book:

```json
{ "title": "Wuthering Heights: 100th Anniversary Edition" }
```

The following two calls observe each added edition of the second book:

```json
{ "title": "Catweazle: Director's Cut" }
```

```json
{ "title": "Catweazle: Complete with Extras" }
```

The method `changed()` is called for each change in the element values and has the following arguments:

- A pair of `Path` instances (`newPath` and `oldPath`) reflecting the new and old data of the entity.
- The changed element as an instance of [`CdsElement`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsElement.html).
- The new and old value as `Object` instances.

Both paths have the same target, that is, the entity containing the changed element. Their values, however, represent the old and new image of the entity as a whole, including unchanged elements. You may expect that each change is visited at most once.

Let's break it down with some examples, given the collection of books with editions as before:

```json
[
  {
    "title": "Wuthering Heights",
    "editions": [ { "title": "Wuthering Heights: 100th Anniversary Edition" } ]
  },
  {
    "title": "Catweazle",
    "editions": [ { "title": "Catweazle: Director's Cut" } ]
  }
]
```

+ When a book title is changed from one value to another, the method `changed()` is called once with both `Path` instances representing the book images; the element `title` is available as an instance of `CdsElement`, and the new and old value of the title are available as `newValue` and `oldValue`.

The new image:

```json
[
  {
    "title": "Wuthering Heights",
    "editions": [ { "title": "Wuthering Heights: 100th Anniversary Edition" } ]
  },
  {
    "title": "Catweazle, the series",
    "editions": [ { "title": "Catweazle: Director's Cut" } ]
  }
]
```

The Diff Visitor observes `Catweazle, the series` and `Catweazle` as the new and the old value.

+ When the title of an edition is changed for one of the books, the `changed()` method is called once, and the paths include the book and the edition. The element reference and the values are set accordingly.

The new image:

```json
[
  {
    "title": "Wuthering Heights",
    "editions": [ { "title": "Wuthering Heights: 100th Anniversary Edition" } ]
  },
  {
    "title": "Catweazle",
    "editions": [ { "title": "Catweazle: Unabridged" } ]
  }
]
```

The visitor observes `Catweazle: Unabridged` and `Catweazle: Director's Cut` as the new and the old value.

For changes in associations, when association data is present in both images, even if key values differ, the `changed()` method is always called for the content of the association, traversing it value by value.
If data is absent in one of the images, `added()` or `removed()` is called instead.

Several visitors added to the `CdsDiffProcessor` are called one by one, but don't expect a guaranteed order of the calls; consider the visitors independent of each other.

:::danger Immutable data
Do not modify the state of the images inside the visitors. Consider the data presented to them immutable.
:::

### Filtering for DiffVisitor

Element filters are useful if you want to extract a common condition out of your visitor implementation so that you don't have to branch in every method of your visitor. As a general rule, you may assume that the element filter is called at least once for each changed value in your image, and that the visitor supplied next to the filter is called for the elements where the filter condition evaluates to `true`.

In the implementation of the filter, you can use the definition of the [`CdsElement`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsElement.html), its type, or a [`Path`](https://www.javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Path.html) to decide if you want your visitor to be notified about the detected change. In simple cases, you may use the element and its type to limit the visitor so that it observes only elements having a certain annotation or a certain common type, for example, only numbers.

If you compare a collection of books to find out if there are differences in it, but you are only interested in authors, you can write a filter using the entity type that is either the target of some association or the parent of the current element.
```java
diff.add(new Filter() {
    @Override
    public boolean test(Path path, CdsElement element, CdsType type) {
        return element.getType().isAssociation()
            && element.getType().as(CdsAssociationType.class).getTarget().getQualifiedName().equals(Authors_.CDS_NAME)
            || path.target().type().getQualifiedName().equals(Authors_.CDS_NAME);
    }
}, ...);
```

Filters cannot limit the nature of the changes your visitor observes, and they are always positive.

### Deep Traversal {#cds-diff-processor-deep-traversal}

For documents that have a lot of associations or compositions and are changed in a deep way, you might want to see additions for each level separately. To enable this, create an instance of `CdsDiffProcessor` like this:

```java
CdsDiffProcessor diff = CdsDiffProcessor.create().forDeepTraversal();
```

In this mode, the methods `added()` and `removed()` are called not only for the root of the added or removed data, but also while traversing the added or removed data, entity by entity. This is useful when you want to track the additions and removals of certain entities on the leaf levels, or as part of visitors tailored for generic use cases.

## Media Type Processing { #mediatypeprocessing}

The data for [media type entity properties](../guides/providing-services#serving-media-data) (annotated with `@Core.MediaType`) - as with any other CDS property with primitive type - can be retrieved by their CDS name from the [entity data argument](./event-handlers/#pojoarguments). See also [Structured Data](#structured-data) and [Typed Access](#typed-access) for more details.

The Java data type for such byte-based properties is `InputStream`, and for character-based properties it is `Reader` (see also [Predefined Types](#predefined-types)). Processing such elements within a custom event handler requires some care, though, as such an `InputStream` or `Reader` is *non-resettable*: the data can only be read once. This has some implications you must be aware of, depending on what you want to do.
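The one-time nature of such streams can be illustrated with plain JDK classes. The following is a minimal sketch, independent of any CAP APIs; the class name and the sample data are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamOnceDemo {

    // Reads the stream twice and returns the number of bytes each attempt yields.
    static int[] readTwice(InputStream is) throws IOException {
        int first = is.readAllBytes().length;  // consumes the stream completely
        int second = is.readAllBytes().length; // stream is exhausted: 0 bytes
        return new int[] { first, second };
    }

    public static void main(String[] args) throws IOException {
        int[] counts = readTwice(new ByteArrayInputStream("cover".getBytes()));
        System.out.println(counts[0] + " then " + counts[1]); // prints "5 then 0"
    }
}
```

The second read attempt yields no data, which is exactly what happens when one handler fully consumes a media stream and a later event phase tries to read it again.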
Let's assume we have the following CDS model:

```cds
entity Books : cuid, managed {
  title      : String(111);
  descr      : String(1111);
  coverImage : LargeBinary @Core.MediaType: 'image/png';
}
```

When working with media types, we can differentiate between upload and download scenarios. Both have their own specifics in how we can deal with the stream.

### No Custom Processing

#### Media Upload

If you just want to pass the uploaded stream to the persistence layer of the CAP architecture to have the data written into the database, you don't have to implement any custom handler. This is the simplest scenario, and the default `On` handler already takes care of it for you.

#### Media Download

For the download scenario as well, you don't need to implement any custom handler logic. The default `On` handler reads from the database and passes the stream to the client that requested the media type element.

### Custom Processing

#### Media Upload

If you want to override the default logic to process the uploaded stream with custom logic (for example, to parse a stream of CSV data), the best place to do that is a custom `On` handler, as the following example shows:

```java
@On(event = CdsService.EVENT_UPDATE)
public void processCoverImage(CdsUpdateEventContext context, List<Books> books) {
  books.forEach(book -> {
    InputStream is = book.getCoverImage();
    // ... your custom code fully consuming the input stream
  });
  context.setResult(books);
}
```

::: warning
After you have fully consumed the stream in your handler logic, passing the same `InputStream` or `Reader` instance for further consumption would result in no bytes returned, because a *non-resettable* stream can only be consumed once. In particular, make sure that the default `On` handler is not called after your custom processing.
:::

Using a custom `On` handler and setting `context.setResult(books)` prevents the execution of the default `On` handler.

#### Media Download

The previously described approach is only useful when uploading data.
If you need custom processing for media downloads, have a look at the approach using a stream proxy described below.

### Pre- or Post-Processing Using a Stream Proxy

The following sections describe how to pre-process an uploaded stream of data before it gets persisted, or how to post-process a downloaded stream of data before it's handed over to the client. For example, this is useful if you want to send uploaded data to a virus scanner before persisting it on the database. This requires that the stream is consumed by several parties (for example, the virus scanner and the persistence layer). To achieve this, implement a proxy that wraps the original `InputStream` or `Reader` instance and executes the processing logic within the `read()` methods, directly on the data being read. Such a proxy can be implemented by extending a [FilterInputStream](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/FilterInputStream.html), a [ProxyInputStream](https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/input/ProxyInputStream.html), a [FilterReader](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/FilterReader.html), or a [ProxyReader](https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/input/ProxyReader.html). The following example uses a [FilterInputStream](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/FilterInputStream.html):

```java
public class CoverImagePreProcessor extends FilterInputStream {

  public CoverImagePreProcessor(InputStream wrapped) {
    super(wrapped);
  }

  @Override
  public int read() throws IOException {
    int nextByte = super.read();
    // ... your custom processing code on nextByte
    return nextByte;
  }

  @Override
  public int read(byte[] bts, int off, int len) throws IOException {
    int bytesRead = super.read(bts, off, len);
    // ... your custom processing code on bts array
    return bytesRead;
  }
}
```

This proxy is then used to wrap the original `InputStream`.
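To illustrate how such a proxy behaves, here is a self-contained sketch that counts the bytes flowing through it instead of scanning them; the class `CountingInputStream` and the counting logic are illustrative and not part of the CAP APIs:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative proxy: runs its processing logic (here: counting bytes)
// while the final consumer reads the stream, without a second pass.
public class CountingInputStream extends FilterInputStream {

    private long count;

    public CountingInputStream(InputStream wrapped) {
        super(wrapped);
    }

    @Override
    public int read() throws IOException {
        int nextByte = super.read();
        if (nextByte >= 0) count++; // processing happens on the fly
        return nextByte;
    }

    @Override
    public int read(byte[] bts, int off, int len) throws IOException {
        int bytesRead = super.read(bts, off, len);
        if (bytesRead > 0) count += bytesRead;
        return bytesRead;
    }

    public long getCount() {
        return count;
    }

    public static void main(String[] args) throws IOException {
        CountingInputStream proxy =
            new CountingInputStream(new ByteArrayInputStream(new byte[42]));
        proxy.readAllBytes(); // the final consumer drains the stream once
        System.out.println(proxy.getCount()); // prints 42
    }
}
```

Because the logic sits inside the `read()` methods, the stream is still consumed only once, which is what makes this pattern compatible with non-resettable streams.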
This works for both upload and download scenarios.

#### Media Upload

For uploads, you can use either a custom `Before` or `On` handler to wrap the proxy implementation around the original stream before passing it on to its final destination.

Using a custom `Before` handler makes sense if the stream's final destination is the persistence layer of the CAP Java SDK, which writes the content to the database. Note that the pre-processing logic in this example is implemented in the `read()` methods of the `FilterInputStream` and is only called when the data is streamed, during the `On` phase of the request:

```java
@Before(event = CdsService.EVENT_UPDATE)
public void preProcessCoverImage(CdsUpdateEventContext context, List<Books> books) {
  books.forEach(book -> {
    book.setCoverImage(new CoverImagePreProcessor(book.getCoverImage()));
  });
}
```

The original `InputStream` is replaced by the proxy implementation in the `coverImage` element of the `book` entity and passed along. All further code trying to access the `coverImage` element uses the proxy implementation instead.

Using a custom `On` handler makes sense if you want to prevent the default `On` handler from being executed and control the final destination of the stream yourself.
You then have the option to pass the streamed data on to some other service for persistence:

```java
@On(event = CdsService.EVENT_UPDATE)
public Result processCoverImage(CdsUpdateEventContext context, List<Books> books) {
  books.forEach(book -> {
    book.setCoverImage(new CoverImagePreProcessor(book.getCoverImage()));
  });

  // example for invoking some CQN-based service
  return service.run(Update.entity(Books_.CDS_NAME).entries(books));
}
```

#### Media Download

For download scenarios, the stream to wrap is only available in `After` handlers, as shown in this example:

```java
@After(event = CdsService.EVENT_READ)
public void postProcessCoverImage(CdsReadEventContext context, List<Books> books) {
  books.forEach(book -> {
    book.setCoverImage(new CoverImagePreProcessor(book.getCoverImage()));
  });
}
```

### Reminder

::: tip
_Be aware_ in which event phase you do the actual consumption of the `InputStream` or `Reader` instance that is passed around. Once fully consumed, it can no longer be read from in remaining event phases.
:::

# Working with CDS CQL

Learn here about working with CDS CQL.

# Building CQL Statements

API to fluently build [CQL](../../cds/cql) statements in Java.

## Introduction

The [CDS Query Language (CQL)](../../cds/cql) statement builders allow you to fluently construct [CQL](../../cds/cql) statements, which can be [executed](query-execution) by [CDS Services](../cqn-services/#cdsservices).

## Concepts

### The CQL Statement Builders

Use the builder classes `Select`, `Insert`, `Upsert`, `Update`, and `Delete` to construct [CQL](../../cds/cql) statements.
The following example shows a [CQL](../../cds/cql) query and how it's constructed with the `Select` builder:

```sql
-- CQL
SELECT from bookshop.Books { title } where ID = 101
```

```java
// Java CQL (dynamic)
Select.from("bookshop.Books").columns("title").byId(101);
```

Instead of using strings to refer to CDS entities and elements, you can also build statements using constants and interfaces [generated](../cqn-services/persistence-services#staticmodel) from the CDS model:

```java
import static bookshop.Bookshop_.BOOKS;

// Java CQL (static)
Select.from(BOOKS).columns(b -> b.title()).byId(101);
```

Using the static model has several advantages:

* The names of entities and elements are checked at design time.
* Use code completion in the IDE.
* Predicates and expressions can be composed in a type-safe way.
* More compact code.

::: tip
In general, it's recommended to use the static style when implementing business logic that requires accessing particular elements of entities. Using the dynamic style is appropriate for generic code.
:::

### Lambda Expressions

To construct complex statements, the [CQL](../../cds/cql) builders leverage [lambda expressions](https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html) to fluently compose [expressions](#expressions) and [path expressions](#path-expressions) that are used in the statements' clauses.

```sql
-- CQL
SELECT from bookshop.Books { title } where year < 2000
```

```java
// Java CQL
Select.from(BOOKS)
  .columns(b -> b.title().as("Book"))
  .where(b -> b.year().lt(2000));
```

Here, the lambda expression `b -> b.title().as("Book")` references the element `title` of the entity Book `b` under the alias 'Book'. This aliased reference is put on the query's [select list](#projections) using the `columns` method.
The lambda expression `b -> b.year().lt(2000)` defines a predicate that compares the book's element `year` with the value 2000, which is then used to define the [where clause](#where-clause) of the select statement.

### Path Expressions

Use path expressions to access elements of [related](../../cds/cdl#associations) entities. The following example selects books whose author's name starts with 'A'.

```java
// Java CQL (static)
Select.from(BOOKS)
  .columns(b -> b.title(),
           b -> b.author().name().as("author"))
  .where(b -> b.author().name().startsWith("A"));

// Java CQL (dynamic)
Select.from("bookshop.Books")
  .columns(b -> b.get("title"),
           b -> b.get("author.name").as("author"))
  .where(b -> b.to("author").get("name").startsWith("A"));
```

The CQL query accesses the `name` element of the `Authors` entity, which is reached from `Books` via the `author` [association](../../cds/cdl#associations). In the dynamic CQL builders, you can follow associations and compositions using the `to` method, or use `get` with a path using a dot to separate the segments.

### Target Entity Sets {#target-entity-sets}

All [CDS Query Language (CQL)](/cds/cql) statements operate on a _target entity set_, which is specified via the `from`, `into`, and `entity` methods of `Select`/`Delete`, `Insert`/`Upsert`, and `Update` statements. In the simplest case, the target entity set identifies a complete CDS entity set:

```java
import static bookshop.Bookshop_.BOOKS;

// static
Select.from(BOOKS);

// dynamic
Insert.into("bookshop.Books").entry(book);
Update.entity("bookshop.Authors").data(author);
```

The _target entity set_ can also be defined by an [entity reference](#entity-refs), which allows using paths over associations and _infix filters_. Entity references can be defined inline using lambda expressions.
```sql
-- CQL
SELECT from Orders[3]:items { quantity, book.title as book }
```

```java
// Java CQL
Select.from(ORDERS, o -> o.filter(o.id().eq(3)).items())
  .columns(i -> i.quantity(),
           i -> i.book().title().as("book"));
```

The _target entity set_ in the query is defined by the entity reference in the from clause. The reference targets the `items` of the `Order` with ID 3 via an _infix filter_. From this target entity set (of type `OrderItems`), the query selects the `quantity` and the `title` of the `book`.

Infix filters can be defined on any path segment using the `filter` method, which overwrites any existing filter on that path segment. Defining an infix filter on the last path segment is equivalent to adding the filter via the statement's `where` method. However, inside infix filters, path expressions are not supported.

In the [CDS Query Language (CQL)](/cds/cql) builder, the lambda expression `o -> o.filter(o.id().eq(3)).items()` is evaluated relative to the root entity `Orders` (o). All lambda expressions that occur in the other clauses of the query are relative to the target entity set `OrderItems`; for example, `i -> i.quantity()` accesses the element `quantity` of `OrderItems`.

::: tip
To target components of a structured document, we recommend using path expressions with infix filters.
:::

### Filters {#target-entity-filters}

Besides using infix filters in path expressions, the `Select`, `Update`, and `Delete` builders support filtering the [target entity set](#target-entity-sets) via the `where` method. Using `where` is equivalent to defining an infix filter on the last segment of a path expression in the statement's `from` / `entity` clause. For statements that have both an infix filter on the last path segment and a `where` filter, the resulting target filter is the conjunction (`and`) of the infix filter and the `where` filter. For simple filters, you can use `byId`, `matching`, or `byParams` as an alternative to `where`.
All of these filter methods overwrite existing filters, except for infix filters.

#### Using `where` {#concepts-where-clause}

Using the `where` method, you can define complex predicate [expressions](#expressions) to compose the filter:

```java
Select.from(BOOKS)
  .where(b -> b.author().name().eq("Twain")
    .and(b.title().startsWith("A").or(b.title().endsWith("Z"))));
```

#### Using `byId`

To find an entity with a single key element via its key value, you can use the `byId` method. The following example retrieves the `Author` entity with key 101.

```java
Select.from("bookshop.Authors").byId(101);
```

::: tip
The `byId` method isn't supported for entities with compound keys.
:::

#### Using `matching`

`matching` is a query-by-example style alternative to define the `where` clause. This method adds a predicate to the query that retains only the entities whose element values are equal to the values given by a key-value filter map. The filter map can contain path keys, referring to elements of an associated entity. In the following example, `bookshop.Books` has a to-one association to the `Author` entity and the path `author.name` refers to the name element within the `Author` entity.

```java
Map<String, Object> filter = new HashMap<>();
filter.put("author.name", "Edgar Allen Poe");
filter.put("stock", 0);

Select.from("bookshop.Books").matching(filter);
```

#### Using `byParams`

`byParams` simplifies filtering by parameters as an alternative to `where` and `CQL.param`:

```java
import static bookshop.Bookshop_.BOOKS;

// using where
Select.from(BOOKS)
  .where(b -> b.title().eq(param("title"))
    .and(b.author().name().eq(param("author.name"))));

// using byParams
Select.from(BOOKS).byParams("title", "author.name");
```

### Parameters

The [CQL](../../cds/cql) builders support [parameters](#expr-param) in the `where` clause and in infix filters for [parameterized execution](query-execution#parameterized-execution). The following example selects the books of the `Author` with name 'Jules Verne'.
```java
import static com.sap.cds.ql.CQL.param;

CqnSelect q = Select.from(BOOKS)
  .where(b -> b.author().name().eq(param(0)));

dataStore.execute(q, "Jules Verne");
```

As an alternative, the where clause can be constructed using the `byParams` method.

```java
CqnSelect q = Select.from(BOOKS).byParams("author.name");

dataStore.execute(q, singletonMap("author.name", "Jules Verne"));
```

Parameterized infix filters can be constructed using the `filterByParams` method. Path expressions are not supported. The following example selects the books of the `Author` with ID 101.

```java
CqnSelect q = Select.from(AUTHORS, o -> o.filterByParams("ID").books());

dataStore.execute(q, singletonMap("ID", 101));
```

### Constant and Non-Constant Literal Values

In addition to parameters, the [CQL](../../cds/cql) builders also support literal values, which are already known at design time. These can be constructed using `CQL.constant()` for constant literals and `CQL.val()` for non-constant literals:

```java
import static com.sap.cds.ql.CQL.val;

Select.from(BOOKS)
  .columns(b -> b.title(), val("available").as("status"))
  .where(b -> b.stock().gt(0));
```

If your application runs against a SQL datastore, for example SAP HANA, the CDS runtime takes literal values constructed with `CQL.val(value)` as a hint to bind the value to a parameter marker. The binding is handled implicitly, not explicitly as with `CQL.param()`.

The `CQL.constant(value)` method gives the hint that the literal value should be handled as a constant. For SQL datastores this means that the value is rendered directly into the SQL statement.

```java
import static com.sap.cds.ql.CQL.constant;

Select.from(BOOKS)
  .columns(b -> b.title())
  .where(b -> b.cover().eq(constant("paperback")));
```

Which of the two methods is preferable strongly depends on your application's domain model and business logic. As a rule of thumb:

* Use `val()` for values that change at runtime or depend on external input.
* Only use `constant()` for values that don't change at runtime and _don't depend on external input_.

With constant literals directly rendered into the statement, a SQL datastore has better options for optimizing the statement. On the other hand, using constant literals limits the datastore's options to cache statements.

::: warning
Constant literals are directly rendered into SQL and therefore **must not** contain external input!
:::

## Select

### Source

The source of the select statement determines the data set to which the query is applied. It's specified by the `from` method.

#### `FROM` Entity Set {#from-entity-set}

Typically, a select statement selects from an [entity set](#target-entity-sets):

```sql
--CQL query
SELECT from bookshop.Books { title, author.name }
```

```java
// Query Builder API (dynamic usage)
CqnSelect query = Select.from("bookshop.Books")
  .columns("title", "author.name");
```

#### `FROM` Reference {#from-reference}

The source can also be defined by a [path expression](#path-expressions) referencing an entity set. This query selects from the items of the order 23.

```sql
--CQL query
SELECT from Orders[ID = 23]:items
```

```java
// Query Builder API (static usage)
import static bookshop.Bookshop_.ORDERS;

Select.from(ORDERS, o -> o.filter(o.ID().eq(23)).items());
```

#### `FROM` Subquery {#from-select}

It's also possible to execute a nested select where an _outer_ query operates on the result of a _subquery_.

```sql
--CQL query
SELECT from (SELECT from Authors order by age asc limit 10) as youngestAuthors order by name
```

```java
// Query Builder API
CqnSelect youngestAuthors = Select.from(AUTHORS).orderBy(a -> a.age()).limit(10);
Select.from(youngestAuthors).orderBy("name");
```

This subquery selects the youngest authors, which the outer query [sorts](#ordering-and-pagination) by name.

Limitations:

* The subquery must not expand [to-many associations](../../cds/cdl#to-many-associations).
* Associations aren't propagated to the outer query and hence can't be used there in path expressions.
* The outer query can only be defined with the dynamic builder style.

### Projections {#projections}

By default, `Select` statements return all elements of the target entity. You can change this by defining a projection via the `columns` method of the `Select` builder. Elements can be addressed via their name, including path expressions such as _author.name_:

```java
CqnSelect query = Select.from("bookshop.Books")
  .columns("title", "author.name");
```

To define more complex projections and benefit from code completion, use lambda expressions:

```java
// dynamic
Select.from("bookshop.Books")
  .columns(b -> b.get("title"),
           b -> b.get("author.name").as("authorName"));
```

```java
// static
import static bookshop.Bookshop_.BOOKS;

Select.from(BOOKS)
  .columns(b -> b.title(),
           b -> b.author().name().as("authorName"));
```

The path expression `b.author().name()` is automatically evaluated at runtime. For an SQL data store, it's converted to a LEFT OUTER join.

#### Deep Read with `expand` {#expand}

Use `expand` to read deeply structured documents and entity graphs into a structured result.

```java
// Java example
// using expand
import static bookshop.Bookshop_.AUTHORS;

Select.from(AUTHORS)
  .columns(a -> a.name().as("author"),
           a -> a.books().expand(
              b -> b.title().as("book"),
              b -> b.year()));
```

It expands the elements `title` and `year` of the `Books` entity into a substructure with the name of the association `books`:

```json
[
  {
    "author" : "Bram Stoker",
    "books" : [
      { "title" : "Dracula", "year" : 1897 },
      { "title" : "Miss Betty", "year" : 1898 }
    ]
  },
  ...
]
```

To only expand entities that fulfill a certain condition, use [infix filters](#target-entity-sets) on the association:

```java
Select.from(AUTHORS)
  .columns(a -> a.name(),
           a -> a.books()
                 .filter(b -> b.year().eq(1897))
                 .expand(b -> b.title()))
  .where(a -> a.name().in("Bram Stoker", "Edgar Allen Poe"));
```

This query expands only books that were written in 1897:

```json
[
  {
    "name" : "Bram Stoker",
    "books" : [ { "title" : "Dracula" } ]
  },
  {
    "name" : "Edgar Allen Poe",
    "books" : [ ]
  }
]
```

Expands can be nested and have an alias, for example, to further expand the publisher names of the author's books:

```java
Select.from(AUTHORS)
  .columns(a -> a.name(),
           a -> a.books().as("novels").expand(
              b -> b.title(),
              b -> b.publisher().expand(p -> p.name())));
```

Which returns a deeply structured result:

```json
[
  {
    "name" : "Bram Stoker",
    "novels" : [
      {
        "title" : "Dracula",
        "publisher" : { "name": "Constable" }
      },
      ...
    ]
  },
  ...
]
```

To expand all non-association elements of an associated entity, use the `expand()` method without parameters after the association you want to expand. For example, the following query expands _all_ elements of the book's author:

```java
Select.from(BOOKS)
  .columns(b -> b.title(),
           b -> b.author().expand());
```

To expand all first-level associations of an entity, use `expand()` on the entity level:

```java
Select.from(BOOKS).columns(b -> b.expand());
```

::: warning Don't use distinct together with expand
The `distinct` clause removes duplicate rows from the root entity and effectively aggregates rows. Expanding child entities from aggregated rows is not well-defined and can lead to issues that can be resolved by removing distinct.
:::

::: tip Resolving duplicates in to-many expands
Duplicates in to-many expands can occur on associations that are mapped as many-to-many without using a [link entity](../../guides/domain-modeling#many-to-many-associations) and don't correctly define the source cardinality.
This can be resolved by adding the cardinality in the CDS model: `Association [*,*] to Entity`.
:::

##### Optimized Expand Execution {#expand-optimization}

For *to-one expands*:

- The expand item list mustn't contain any literal value.
- The expand item list mustn't contain expressions.

For *to-many expands*:

- The `on` condition of the association must only use equality predicates and conjunction (`AND`).
- The `from` clause isn't a [subquery](#from-select).
- The `where` clause doesn't contain [path expressions](#path-expressions).
- The query doesn't use [groupBy](#group-by) or `distinct`.
- The `columns`/`items` clause must contain at least one [element reference](#element-references).

In case the default query optimization leads to issues, annotate the association with `@cds.java.expand: {using: 'parent-keys'}` to fall back to the unoptimized expand execution, and make sure the parent entity has all key elements exposed.

#### Flattened Results with `inline` {#inline}

To flatten deeply structured documents or include elements of associated entities into a flat result, you can use `inline` as a short notation for using multiple paths.

```java
import static bookshop.Bookshop_.AUTHORS;

// using multiple path expressions
Select.from(AUTHORS)
    .columns(a -> a.name(),
             a -> a.books().title().as("book"),
             a -> a.books().year());

// using inline
Select.from(AUTHORS)
    .columns(a -> a.name(),
             a -> a.books().inline(
                  b -> b.title().as("book"),
                  b -> b.year()));
```

Both queries are equivalent and produce the same _flat_ result:

```json
[
  { "name" : "Bram Stoker", "book" : "Dracula", "year" : 1897 },
  { "name" : "Bram Stoker", "book" : "Miss Betty", "year" : 1898 }
]
```

#### Managed Associations on the Select List

To select the key elements of a [managed to-one association](../../cds/cdl#managed-associations)'s target entity, simply put the association on the select list.
This returns the target key elements as a structured result:

```java
// dynamic
Select.from("bookshop.Books")
    .columns(b -> b.get("author"));

// static
import static bookshop.Bookshop_.BOOKS;

CqnSelect q = Select.from(BOOKS)
    .columns(b -> b.author());

Row book = dataStore.execute(q).single();
Object authorId = book.get("author.ID"); // path access
```

::: tip
Only to-one associations that are mapped via the primary key elements of the target entity are supported on the select list. The execution is optimized and gives no guarantee that the target entity exists. If this is required, use `expand` or enable [integrity constraints](../../guides/databases#database-constraints) on the database.
:::

### Filtering and Searching { #filtering}

The `Select` builder supports [filtering](#target-entity-filters) the target entity set via `where`, `byId`, `matching`, and `byParams`. In contrast to infix filters, `where` filters of `Select` statements support path expressions. Additionally, `Select` supports `search` clauses. The `search` method adds a predicate to the query that filters out all entities where any searchable element contains a given [search term](#search-term) or matches a [search expression](#search-expression).

1. Define searchable elements {#searchable-elements}

By default, all elements of type `cds.String` of an entity are searchable. However, the set of elements to be searched can be defined using the `@cds.search` annotation. You can also extend the search to associated entities. For more information on `@cds.search`, refer to [Search Capabilities](../../guides/providing-services#searching-data).

Consider the following CDS entity. There are two elements, `title` and `name`, of type String, making them both searchable by default.

```cds
entity Book {
  key ID : Integer;
  name   : String;
  title  : String;
}
```

In the following example, element `title` is included in `@cds.search`. Then, only this element is searchable.
```cds
@cds.search: {title}
entity Book {
  key ID : Integer;
  name   : String;
  title  : String;
}
```
2. Construct queries with `search`

Let's consider the following Book entity once again:

```cds
entity Book {
  key ID : Integer;
  name   : String;
  title  : String;
}
```

* Use search terms {#search-term}

The following `Select` statement shows how to search for an entity containing the single _search term_ "Allen":

```java
// Book record - (ID, title, name) VALUES (1, "The greatest works of James Allen", "Unwin")

Select.from("bookshop.Books")
    .columns("ID", "name")
    .search("Allen");
```

> The element `title` is [searchable](#searchable-elements), even though `title` isn't selected.

* Use search expressions {#search-expression}

It's also possible to create a more complex _search expression_ using `AND`, `OR`, and `NOT` operators. The following example shows how to search for entities containing either the term "Allen" or the term "Heights":

```java
// Book records -
// (ID, title, name) VALUES (1, "The greatest works of James Allen", "Unwin")
// (ID, title, name) VALUES (2, "The greatest works of Emily Bronte", "Wuthering Heights")

Select.from("bookshop.Books")
    .columns("ID", "name")
    .search(term -> term.has("Allen").or(term.has("Heights")));
```

#### Using `where` Clause {#where-clause}

In a `where` clause, leverage the full power of [CDS Query Language (CQL)](/cds/cql) [expressions](#expressions) to compose the query's filter:

```java
Select.from("bookshop.Books")
    .where(b -> b.get("ID").eq(251).or(
                b.get("title").startsWith("Wuth")));
```

### Grouping

The Query Builder API offers a way to group results into summarized rows (in most cases using aggregate functions) and to apply criteria to them. Let's assume the following dataset for our examples:

|ID  |NAME  |
|----|------|
|100 |Smith |
|101 |Miller|
|102 |Smith |
|103 |Hugo  |
|104 |Smith |

#### Group By

The `groupBy` clause groups by one or more elements and usually involves aggregate [functions](query-api#scalar-functions), such as `count`, `countDistinct`, `sum`, `max`, `avg`, and so on. It returns one row for each group.
In the following example, we select the authors' names and, using the aggregate function `count`, determine how many authors with the same name exist in `bookshop.Authors`:

```java
import com.sap.cds.ql.CQL;

Select.from("bookshop.Authors")
    .columns(c -> c.get("name"),
             c -> CQL.count(c.get("name")).as("count"))
    .groupBy(g -> g.get("name"));
```

If we execute the query on our dataset, we get the following result:

|name  |count|
|------|-----|
|Smith |3    |
|Miller|1    |
|Hugo  |1    |

#### Having

To filter the [grouped](#group-by) result, `having` is used. While `where` filters rows before the grouping is applied, `having` filters the grouped result; both can be used in the same query. The following example selects authors where count is higher than 2:

```java
import static com.sap.cds.ql.CQL.func;

Select.from("bookshop.Authors")
    .columns(c -> c.get("name"),
             c -> func("count", c.get("name")).as("count"))
    .groupBy(c -> c.get("name"))
    .having(c -> func("count", c.get("name")).gt(2));
```

If we execute the query on our dataset, we get the following result:

|name  |count|
|------|-----|
|Smith |3    |

### Ordering and Pagination

The Query Builder API allows you to specify the sort order of query results. The _sort specification_ governs by which elements the result is sorted and whether ascending or descending sort order is applied. By default, `Select` returns the rows in no particular order.

#### Order By

To ensure a specific order in a query use [`orderBy`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Select.html#orderBy-java.util.function.Function...-), which allows sorting by one or more columns in ascending or descending order:

```java
Select.from("bookshop.Books")
    .columns(c -> c.get("ID"), c -> c.get("title"))
    .orderBy(c -> c.get("ID").desc(), c -> c.get("title").asc());
```

You can order by the alias of a column of the select list or a column that is defined as a result of a function call.
```java
Select.from("bookshop.Person")
    .columns(p -> p.get("name").toUpper().as("aliasForName"))
    .orderBy(p -> p.get("aliasForName").asc());
```

Aliases of columns have precedence over the element names when `orderBy` is evaluated.

::: warning Aliases may shadow element names
To avoid shadowing, don't use element names as aliases.
:::

On SAP HANA, the user's locale is passed to the database, resulting in locale-specific sorting of string-based columns.

By default, `null` values come before non-`null` values when sorting in ascending order and after non-`null` values when sorting in descending order. Use the `ascNullsLast` and `descNullsFirst` methods if you need to change this behavior. The following query would sort `null` values for the element `nickname` last:

```java
Select.from("bookshop.Person")
    .orderBy(p -> p.get("name").asc(),
             p -> p.get("nickname").ascNullsLast());
```

If we execute the query on our dataset, we get the following result:

| name    | nickname |
| --------|----------|
| William | Bill     |
| William | null     |

#### Pagination

Pagination (dividing the result set into discrete subsets of a certain size) can be achieved by using [limit](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Select.html#limit-int-int-), which has the following optional parameters:

* `rows`: The number of rows to be returned. It's useful when dealing with large amounts of data, as returning all records in one shot can impact performance.
* `offset`: The number of rows to be skipped.

The following example selects all books, skips the first 20 rows, and returns only the 10 subsequent books:

```java
Select.from("bookshop.Books").limit(10, 20);
```

In this example, it's assumed that the total number of books is greater than or equal to 20; otherwise, the result set is empty.

::: tip The pagination isn't stateful.
If rows are inserted or removed before a subsequent page is requested, the next page could contain rows that were already contained in a previous page, or rows could be skipped.
:::

### Pessimistic Locking { #write-lock}

Use the `lock()` method to enforce [Pessimistic Locking](../../guides/providing-services#select-for-update). The following example shows how to build a select query with an _exclusive_ (write) lock. The query tries to acquire a lock for a maximum of 5 seconds, as specified by the optional `timeout` parameter:

```java
Select.from("bookshop.Books").byId(1).lock(5);
...
Update.entity("bookshop.Books").data("price", 18).byId(1);
```

To set a _shared_ (read) lock, specify the lock mode `SHARED` in the lock method:

```java
import static com.sap.cds.ql.cqn.CqnLock.Mode.SHARED;

Select.from("bookshop.Books").byId(1).lock(SHARED);
```

Not every entity exposed via a CDS model can be locked with the `lock()` clause. To use the `lock()` clause, databases require that the target of such statements is represented by one of the following:

- a single table
- a simple view, so that the database can unambiguously identify which rows to lock

Views that use joins, aggregate data, or include calculated or coalesced fields can't be locked. Some databases might have additional restrictions or limitations specific to them. A few notable examples of such restrictions:

* You can't use `lock()` together with `distinct()` or `groupBy()`.
* You can't use `lock()` in a statement with a subquery as a source.
* Localized entities can be locked only if your query is run without a locale, as described in the chapter [Modifying Request Context](../event-handlers/request-contexts#modifying-requestcontext). Alternatively, they can be locked by removing the localized element from the select list (columns).
* Entities that contain "on-read" calculated elements can't be locked when the statement references them in the select list or a filter.
As a general rule, prefer statements that select primary keys with a simple condition, such as `byId` or `matching`, to select the target entity set that is locked.

## Insert

The [Insert](../../cds/cqn#insert) statement inserts new data into a target entity set. An `Insert` statement is created by the [Insert](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Insert.html) builder class. The target of the insert is specified by the `into` method. As in the following example, the target of the insert can be specified by a fully qualified entity name or by a [CdsEntity](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsEntity.html) you obtain from the [Reflection API](../../node.js/cds-reflect):

```java
Map<String, Object> book = new HashMap<>();
book.put("ID", 101);
book.put("title", "Capire");

CqnInsert insert = Insert.into("bookshop.Books").entry(book);
```

or it can be a [path expression](#path-expressions), for example to add an item for Order 1001:

```java
import static bookshop.Bookshop_.ORDERS;

Insert.into(ORDERS, o -> o.matching(Map.of("ID", 1001)).items())
    .entry(Map.of("book", Map.of("ID", 251), "amount", 1));
```

### Single Insert

To insert a single entry, provide the data as a map to the [entry](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Insert.html#entry-java.util.Map-) method:

```java
Map<String, Object> book = new HashMap<>();
book.put("ID", 101);
book.put("title", "Capire 2");

CqnInsert insert = Insert.into("bookshop.Books").entry(book);
```

### Bulk Insert

`Insert` also supports a bulk operation. Here the data is passed as an `Iterable` of maps to the [entries](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Insert.html#entries-java.lang.Iterable-) method:

```java
import static bookshop.Bookshop_.BOOKS;

var data = List.of(
    Map.of("ID", 101, "title", "Capire"),
    Map.of("ID", 103, "title", "CAP Java"));

CqnInsert insert = Insert.into(BOOKS).entries(data);
```

::: tip A bulk insert can also perform deep inserts.
:::

### Deep Insert

To build a deep insert, the input data maps can contain maps or lists of maps as values, such as items of an order. By default, the insert operation cascades over compositions only. To cascade it also over selected associations, use the [@cascade](query-execution#cascading-over-associations) annotation.

CDS Model:

```cds
entity Orders {
  key OrderNo : String;
  Items : Composition of many OrderItems on Items.parent = $self;
  ...
}
entity OrderItems {
  key ID   : Integer;
  book     : Association to Books;
  quantity : Integer;
  ...
}
```

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples-java/blob/5396b0eb043f9145b369371cfdfda7827fedd039/db/schema.cds#L24-L36){.learn-more}

Java:

```java
import static bookshop.Bookshop_.ORDERS;

var items = List.of(Map.of("ID", 1, "book_ID", 101, "quantity", 1));
var order = Map.of("OrderNo", "1000", "Items", items);

CqnInsert insert = Insert.into(ORDERS).entry(order);
```

::: tip On SQL data stores, the execution order of the generated insert statements is parent first.
:::

## Upsert { #upsert}

[Upsert](../../cds/cqn#upsert) updates existing entities or inserts new ones if they don't exist in the database. `Upsert` statements are created with the [Upsert](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Upsert.html) builder and are translated into DB-native upsert statements by the CAP runtime whenever possible.

The main use case of upsert is data replication. If upsert data is incomplete, only the given values are updated or inserted, which means the `Upsert` statement has "PATCH semantics".

::: warning Upsert is **not** equivalent to Insert, even if an entity doesn't exist in the database.
:::

The following actions are *not* performed on Upsert:

* UUID key values are _not generated_.
* The `@cds.on.insert` annotation is _not handled_.
* Elements are _not initialized_ with default values if the element's value is not given.
* Generic CAP handlers, such as audit logging, are not invoked.

`Upsert` statements don't have a where clause. Just as with bulk [Updates](#bulk-update) and [Inserts](#single-insert), the key values of the entity that is upserted are extracted from the data.

::: tip The upsert data must contain values for all mandatory and key elements.
:::

### Single Upsert

To upsert a single entry, provide the data as a map to the [entry](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Upsert.html#entry-java.util.Map-) method:

```java
import static bookshop.Bookshop_.BOOKS;
import bookshop.Books;

Books book = Books.create();
book.setId(101);
book.setTitle("CAP for Beginners");

CqnUpsert upsert = Upsert.into(BOOKS).entry(book);
```

### Bulk Upsert

The `Upsert` also supports bulk operations. Here an `Iterable` of data maps is passed to the [entries](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Upsert.html#entries-java.lang.Iterable-) method:

```java
import static bookshop.Bookshop_.BOOKS;
import bookshop.Books;

Books b1 = Books.create(101);
b1.setTitle("Odyssey");
Books b2 = Books.create(103);
b2.put("title", "Ulysses");

List<Books> data = Arrays.asList(b1, b2);
CqnUpsert upsert = Upsert.into(BOOKS).entries(data);
```

::: tip Bulk upserts with entries updating/inserting the same set of elements can be executed more efficiently than individual upsert operations and bulk upserts with heterogeneous data.
:::

### Deep Upsert { #deep-upsert}

Upsert can operate on deep [document structures](../cds-data#nested-structures-and-associations) modeled via [compositions](../../guides/domain-modeling#compositions), such as an `Order` with many `OrderItems`. Such a _Deep Upsert_ is similar to [Deep Update](#deep-update), but it creates the root entity if it doesn't exist and comes with some [limitations](#upsert), as already mentioned.
The [full set](#deep-update-full-set) and [delta](#deep-update-delta) representations for to-many compositions are supported as well.

::: warning Upsert doesn't allow changing the key of a child of a composition `of one`.
:::

## Update

Use the [Update](../../cds/cqn#update) statement to update existing entities with new data. The update data can be partial (patch semantics): elements without update values keep their old value, except for elements annotated with `@cds.on.update`, which are updated with the annotation value. Depending on the filter condition, the `Update` can target [individual](#update-individual-entities) or [multiple](#searched-update) entity records.

::: tip Check the [row count](query-execution#batch-execution) of the update result to get the number of updated records. It is 0 if no entity matched the filter condition.
:::

Use the [Update](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html) builder to create an update statement.

### Updating Individual Entities {#update-individual-entities}

The target entity set of the update is specified by the [entity](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#entity-java.lang.String-) method. In the following example, the update target is an entity of the [static model](../cqn-services/persistence-services#staticmodel). The update data is provided as a map to the [data](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#data-java.util.Map-) method, using [accessor interfaces](../cds-data#typed-access) to construct the data in a typed way.
The filter condition of the update is constructed from the key values in the update data:

```java
import static bookshop.Bookshop_.BOOKS;
import bookshop.Books;

Books book = Books.create();
book.setId(100); // key value filter in data
book.setTitle("CAP Matters");

CqnUpdate update = Update.entity(BOOKS).data(book);
```

As an alternative to adding the key values to the data, you can use the [byId](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#byId-java.lang.Object-) filter for entities with a single key element or [matching](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#matching-java.util.Map-) for entities with a compound key:

```java
Update.entity(BOOKS)
    .data("title", "CAP Matters").byId(100);
```

Furthermore, you can use filters in [path expressions](#path-expressions) to specify the update target:

```java
Update.entity(BOOKS, b -> b.matching(Books.create(100)))
    .data("title", "CAP Matters");
```

::: danger If key values are not contained in the data and no filter (`where`, `byId`, `matching`) is specified, a [searched update](#searched-update) is performed, which updates _all_ entities with the given data.
:::

### Update with Expressions {#update-expressions}

The [data](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#data(java.util.Map)), [entry](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#entry(java.util.Map)), and [entries](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#entries(java.lang.Iterable)) methods allow you to specify the new values as plain Java values. Additionally, or as an alternative, you can use the `set` method to specify the new [value](#values) as a [CqnValue](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/CqnValue.html), which can even be an [arithmetic expression](#arithmetic-expressions).
This allows you, for example, to decrease the stock of Book 101 by 1:

```java
// dynamic
Update.entity(BOOKS).byId(101).set("stock", CQL.get("stock").minus(1));

// static
Update.entity(BOOKS).byId(101).set(b -> b.stock(), s -> s.minus(1));
```

You can also combine update data with expressions:

```java
Update.entity(BOOKS).where(b -> b.stock().eq(0))
    .data("available", true)
    .set(b -> b.stock(), s -> s.plus(CQL.param("addStock")));
```

### Deep Update { #deep-update}

Use deep updates to update _document structures_. A document structure comprises a single root entity and one or multiple related entities that are linked via compositions into a [contained-in-relationship](../../guides/domain-modeling#compositions). Linked entities can have compositions to other entities, which also become part of the document structure.

By default, only target entities of [compositions](../../guides/domain-modeling#compositions) are updated in deep updates. Nested data for managed to-one associations is used only to [set the reference](../cds-data#setting-managed-associations-to-existing-target-entities) to the given target entity. This can be changed via the [@cascade](query-execution#cascading-over-associations) annotation.

For to-many compositions there are two ways to represent changes in the nested entities of a structured document: *full set* and *delta*. In contrast to the *full set* representation, which describes the target state of the entities explicitly, a change request with a *delta* payload describes only the differences that need to be applied to the structured document to match the target state. For instance, in deltas, entities that are not included remain untouched, whereas in the full set representation they are deleted.

#### Full Set Representation { #deep-update-full-set}

In the update data, nested entity collections in **full set** representation have to be _complete_. All pre-existing entities that are not contained in the collection are deleted.
The full set representation requires the runtime to execute additional queries to determine which entities to delete and is therefore not as efficient to process as the [delta representation](#deep-update-delta).

Given the following *Order*:

```json
{
  "OrderNo": "1000",
  "status": "new",
  "createdAt": "2020-03-01T12:21:34.000Z",
  "items": [{"ID":1, "book":{"ID":100}, "quantity":1},
            {"ID":2, "book":{"ID":200}, "quantity":2},
            {"ID":3, "book":{"ID":200}, "quantity":3}]
}
```

Do a deep update `Update.entity(ORDERS).data(order)` with the following order data:

```json
{
  "OrderNo": "1000",
  "status": "in process",
  "items": [{"ID":1, "quantity":2},
            {"ID":4, "book":{"ID":400}, "quantity":4}]
}
```

> Constructed using `CdsData`, `CdsList`, and the generated [accessor interfaces](../cds-data#typed-access).

See the result of the updated *Order*:

```json
{
  "OrderNo": "1000",
  "status": "in process",
  "createdAt": "2020-03-01T12:21:34.000Z",
  "items": [{"ID":1, "book":{"ID":100}, "quantity":2},
            {"ID":4, "book":{"ID":400}, "quantity":4}]
}
```

- Order `status` changed to "in process"
- Item 1 `quantity` changed to 2
- Items 2 and 3 removed from `items` and deleted
- Item 4 created and added to `items`

#### Delta Representation { #deep-update-delta}

In **delta** representation, nested entity collections in the update data can be partial: the runtime only processes entities that are contained in the collection, but entities that aren't contained remain untouched. Entities that are to be removed must be included in the list and explicitly _marked for removal_.
Using the same sample _Order_ as in the previous full-set chapter, do a deep delta update with the following update data:

```java
import static com.sap.cds.CdsList.delta;

Order order = Order.create(1000);
order.setStatus("in process");

OrderItem item1 = OrderItem.create(1);
item1.setQuantity(2);

OrderItem item2 = OrderItem.create(2);

OrderItem item4 = OrderItem.create(4);
item4.setBook(Book.create(400));
item4.setQuantity(4);

// items delta with order item 2 marked for removal
order.setItems(delta(item1, item2.forRemoval(), item4));
Update.entity(ORDERS).data(order);
```

> Create delta collections via `CdsList` and `CdsData`.

The deep update with order items in delta representation has a similar effect as the update with items in full set representation. The only difference is that `OrderItem 3` is not deleted.

### Bulk Update: Update Multiple Entity Records with Individual Data {#bulk-update}

To update multiple entity records with individual update data, use the [entries](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#entries-java.lang.Iterable-) method and provide the key values of the entities in the data. The individual update entries can be [deep](#deep-update).

The following example illustrates this, using the generated accessor interfaces. The statement updates the status of orders 1 and 2 and the header comment of order 3:

```java
Orders o1 = Orders.create(1);
o1.setStatus("canceled");
Orders o2 = Orders.create(2);
o2.setStatus("in process");
Orders o3 = Orders.create(3);
o3.put("header.comment", "Deliver with Order 2");

List<Orders> orders = Arrays.asList(o1, o2, o3);
CqnUpdate update = Update.entity(ORDERS).entries(orders);
```

::: tip In general, a bulk update can be executed more efficiently than multiple individual updates, especially if all bulk update entries update the same set of elements.
:::

### Update Multiple Entity Records with the Same Data

To update multiple entity records with the same update data, use searched or batch updates.

#### Searched Update {#searched-update}

Use the [where](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#where-java.util.function.Function-) clause or [matching](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/Update.html#matching-java.util.Map-) to update _all_ entities that match the [filter](#expressions) with _the same_ update data. In the following example, the `stock` of all books with a title containing *CAP* is set to 100:

```java
Update.entity(BOOKS).data("stock", 100)
    .where(b -> b.title().contains("CAP"));
```

#### Parameterized Batch Update {#batch-update}

Use `CQL.param` in the `where` clause or `byParams` to create a parameterized update statement and execute it with one or multiple [parameter value sets](query-execution#batch-execution):

```java
// using where
CqnUpdate update = Update.entity(BOOKS).data("stock", 0)
    .where(b -> b.title().eq(CQL.param("title"))
        .and(b.author().name().eq(CQL.param("author.name"))));

// using byParams
update = Update.entity(BOOKS).data("stock", 0)
    .byParams("title", "author.name");

Map<String, Object> paramSet1 = new HashMap<>();
paramSet1.put("author.name", "Victor Hugo");
paramSet1.put("title", "Les Misérables");
Map<String, Object> paramSet2 = new HashMap<>();
paramSet2.put("author.name", "Emily Brontë");
paramSet2.put("title", "Wuthering Heights");

Result result = service.run(update, asList(paramSet1, paramSet2));
```

## Delete

The [Delete](../../cds/cqn#delete) operation can be constructed as follows:

```cds
// CDS model
entity Orders {
  key OrderNo : String;
  Items : Composition of many OrderItems on Items.parent = $self;
  ...
}
entity OrderItems {
  book : Association to Books;
  ...
}
```

```java
// dynamic
CqnDelete delete = Delete.from("my.bookshop.Orders")
    .where(b -> b.get("OrderNo").eq("1000"));
```

```java
// static
import static bookshop.Bookshop_.ORDERS;

CqnDelete delete = Delete.from(ORDERS)
    .where(b -> b.OrderNo().eq("1000"));
```

By default, delete operations are cascaded along compositions. In the example, the `delete` statement would delete the order with `OrderNo` 1000 including its items, but no books, since this relationship is modeled as an association. To enable cascading deletes over selected associations, use the [@cascade](query-execution#cascading-over-associations) annotation.

### Using `matching`

As an alternative to `where`, you can use `matching` to define the delete filter based on a map. In the following example, the entity `bookshop.Article` has a composite primary key made up of `ID` and `journalID`:

```java
import static com.sap.cds.ql.CQL.param;

Map<String, Object> params = new HashMap<>();
params.put("ID", param("ID"));
params.put("journalID", 101);

// using matching
CqnDelete delete = Delete.from("bookshop.Article").matching(params);

// using where
delete = Delete.from("bookshop.Article")
    .where(t -> t.get("ID").eq(param("ID"))
        .and(t.get("journalID").eq(101)));

// execution
Map<String, Object> row1 = singletonMap("ID", 1);
Map<String, Object> row2 = singletonMap("ID", 2);
dataStore.execute(delete, asList(row1, row2));
```

#### Using `byParams`

To delete multiple records of an entity, you can use `byParams` as an alternative to parameters in `matching`/`where`. The records are then identified by the parameter values, which are given on statement [execution](query-execution#batch-execution):

```java
import static bookshop.Bookshop_.BOOKS;

// using where
Delete.from(BOOKS)
    .where(b -> b.title().eq(param("title"))
        .and(b.author().name().eq(param("author.name"))));

// using byParams
Delete.from(BOOKS).byParams("title", "author.name");
```

## Expressions

The Query Builder API supports using expressions in many places.
Expressions consist of [values](#values), which can be used, for example, in [Select.columns](#projections) to specify the select list of a statement. Values can also be used in [predicates](#predicates) that allow, for example, specifying filter criteria for [Select](#select) or [Delete](#delete) statements.

### Entity References {#entity-refs}

Entity references specify entity sets. They can be used to define the target entity set of a [CQL](../../cds/cql) statement. They can either be defined inline using lambda expressions in the Query Builder (see [Target Entity Sets](#target-entity-sets)) or via the `CQL.entity` method, which is available in an _untyped_ version as well as in a _typed_ version that uses the generated [model interfaces](../cqn-services/persistence-services#model-interfaces). The following example shows an entity reference describing the set of *authors* that have published books in the year 2020:

```java
import com.sap.cds.ql.CQL;

// bookshop.Books[year = 2020].author // [!code focus]
Authors_ authors = CQL.entity(Books_.class).filter(b -> b.year().eq(2020)).author(); // [!code focus]

// or as untyped entity ref
StructuredType<?> untypedAuthors = CQL.entity("bookshop.Books").filter(b -> b.get("year").eq(2020)).to("author");

// SELECT from bookshop.Books[year = 2020]:author { name } // [!code focus]
Select.from(authors).columns("name"); // [!code focus]
```

You can also get [entity references](query-execution#entity-refs) from the result of a CDS QL statement to address an entity via its key values in other statements.

### Values

Use values in a query's [select list](#projections) as well as in order-by. In addition, values are useful to compose filter [expressions](#expressions).

#### Element References

Element references reference elements of entities. To compose an element reference, the Query Builder API uses lambda expressions. Here, the function `b -> b.title()` accesses the book's title.
The dynamic usage `b.to("author").get("name")` accesses the name of a book's author; as a shortcut, `b.get("author.name")` can be used.

```java
Select.from(BOOKS)
    .columns(b -> b.title(), b -> b.author().name());
```

---

#### Literal Values

Specify values that are already known when the query is built. The `val` method of `CQL` is used to create a literal value that can be used in the Query Builder API:

```java
import static com.sap.cds.ql.CQL.val;

Select.from(EMPLOYEE)
    .columns(e -> e.name())
    .where(e -> val(50).gt(e.age()));
```

Alternatively, the factory methods for comparison predicates directly accept Java values. The query could also be written as:

```java
Select.from(EMPLOYEE)
    .columns(e -> e.name())
    .where(e -> e.age().lt(50));
```

Use `CQL.constant` if the literal value shall be treated as [constant](#constant-and-non-constant-literal-values).

---

#### List Values

Combine multiple values with `CQL.list` to a list value (row value), which you can use in comparisons. For example, the following query returns all sales after Q2/2012:

```java
import static com.sap.cds.ql.CQL.*;

CqnListValue props = list(get("year"), get("quarter"));
CqnListValue vals = list(val(2012), val(2));

CqnSelect q = Select.from(SALES).where(comparison(props, GT, vals));
```

You can also compare multiple list values at once using an [`IN` predicate](#in-predicate).

#### Parameters {#expr-param}

The [`param`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/CQL.html#param--) method can be statically imported from the helper class [CQL](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/CQL.html). It provides an option to use a parameter marker in a query that is bound to an actual value only upon query execution. Using parameters, you can execute a query multiple times with different parameter values. Parameters are either _indexed_ or _named_.
Using _indexed_ parameters means the values are bound to the parameters according to their index. Using _named_ parameters means the values are given as a map:

```java
// indexed
import static com.sap.cds.ql.CQL.param;

Select.from("bookshop.Authors")
    .where(a -> a.firstName().eq(param(0)).and(
                a.lastName().eq(param(1))));

dataStore.execute(query, "Paul", "Mueller");
```

```java
// named
import static com.sap.cds.ql.CQL.param;

Select.from("bookshop.Authors")
    .where(a -> a.firstName().eq(param("first")).and(
                a.lastName().eq(param("last"))));

Map<String, Object> paramValues = new HashMap<>();
paramValues.put("first", "Paul");
paramValues.put("last", "Mueller");

dataStore.execute(query, paramValues);
```

::: tip
When using named parameters, `Update` and `Delete` statements can be executed as [batch](query-execution#batch-execution) with multiple parameter sets.
:::

#### Scalar Functions

Scalar functions are values that are calculated from other values. This calculation can be executing a function on the underlying data store or applying an operation, like an addition, to its parameters. The Query Builder API supports the generic `func` function, as well as a number of built-in functions.

* Generic Scalar Function

The generic function `func` creates a scalar function call that is executed by the underlying data store. The first argument is the function name in the native query language; the remaining arguments are passed on as arguments of the specified function. In the following example, the native query language `count` function is called on the `name` element. This function returns the number of elements with name `Monika`.

```java
import static com.sap.cds.ql.CQL.func;

Select.from(EMPLOYEE)
    .columns(e -> e.name(), e -> func("COUNT", e.name()).as("count"))
    .where(e -> e.name().eq("Monika"));
```

* To Lower

The `toLower` function is a built-in string function for converting a given string value to lower case using the rules of the underlying data store.
```java
import static com.sap.cds.ql.CQL.toLower;

Select.from(EMPLOYEE).columns(e -> e.name())
    .where(e -> e.name().endsWith(toLower("IKA")));
```

In the following example, the `toLower` function is applied on the `name` element before applying the equals predicate.

```java
Select.from(EMPLOYEE).columns(e -> e.name())
    .where(e -> e.name().toLower().eq("monika"));
```

* To Upper

The `toUpper` function is a built-in string function for converting a given string value to upper case using the rules of the underlying data store.

```java
import static com.sap.cds.ql.CQL.toUpper;

Select.from(EMPLOYEE).columns(e -> e.name())
    .where(e -> e.name().endsWith(toUpper("ika")));
```

In the following example, the `toUpper` function is applied on the `name` element before applying the equals predicate.

```java
Select.from(EMPLOYEE).columns(e -> e.name())
    .where(e -> e.name().toUpper().eq("MONIKA"));
```

* Substring

The `substring` method creates an expression for substring extraction from a string value. It extracts a substring from a specified start position, either with a given length or to the end of the string. The first position is zero.

```java
Select.from("bookshop.Authors")
    .columns(a -> a.get("name").substring(0, 2).as("shortname"));
```

In the following example, the `substring` function is applied as part of a predicate to test whether a subset of characters matches a given string.

```java
Select.from("bookshop.Authors")
    .where(e -> e.get("name").substring(2).eq("ter"));
```

#### Case-When-Then Expressions

Use a case expression to compute a value based on the evaluation of conditions.
The following query converts the stock of Books into a textual representation as `stockLevel`:

```java
Select.from(BOOKS).columns(
    b -> b.title(),
    b -> b.when(b.stock().lt(10)).then("low")
          .when(b.stock().gt(100)).then("high")
          .orElse("medium").as("stockLevel").type(CdsBaseType.STRING));
```

#### Arithmetic Expressions

Arithmetic expressions are captured by scalar functions as well:

* Plus

Function `plus` creates an arithmetic expression to add a specified value to this value.

```java
// SELECT from Author {id + 2 as x : Integer}
Select.from(AUTHOR)
    .columns(a -> a.id().plus(2).as("x"));
```

* Minus

Function `minus` creates an arithmetic expression to subtract a specified value from this value.

```java
Select.from("bookshop.Authors")
    .columns("name")
    .limit(a -> literal(3).minus(1));
```

* Times

Function `times` creates an arithmetic expression to multiply this value by a specified value. In the following example, `p` is an Integer parameter value passed when executing the query.

```java
Parameter<Integer> p = param("p");
Select.from(AUTHOR)
    .where(a -> a.id().between(10, p.times(30)));
```

* Divided By

Function `dividedBy` creates an arithmetic expression to divide this value by the specified value.

```java
Select.from(AUTHOR)
    .where(a -> a.id().between(10, literal(30).dividedBy(2)));
```

### Predicates

Predicates are expressions with a Boolean value, which are used in [filters](#where-clause) to restrict the result set or to specify a [target entity set](#target-entity-sets).

#### `Comparison Operators` {#comparison-operators}

These comparison operators are supported:
| Predicate | Description | Example |
| --- | --- | --- |
| EQ | Test if this value equals a given value. NULL values might be treated as unknown, resulting in a three-valued logic as in SQL. | `Select.from("bookshop.Books").where(b -> b.get("stock").eq(15));` |
| NE | Test if this value is NOT equal to a given value. NULL values might be treated as unknown, resulting in a three-valued logic as in SQL. | `Select.from("bookshop.Books").where(b -> b.get("stock").ne(25));` |
| IS | Test if this value equals a given value. NULL values are treated as any other value. | `Select.from("bookshop.Books").where(b -> b.get("stock").is(15));` |
| IS NOT | Test if this value is NOT equal to a given value. NULL values are treated as any other value. | `Select.from("bookshop.Books").where(b -> b.get("stock").isNot(25));` |
| GT | Test if this value is greater than a given value. | `Select.from("bookshop.Books").where(b -> b.get("stock").gt(5));` |
| LT | Test if this value is less than a given value. | `Select.from("bookshop.Books").where(b -> b.get("stock").lt(5));` |
| LE | Test if this value is less than or equal to a given value. | `Select.from("bookshop.Books").where(b -> b.get("stock").le(5));` |
| BETWEEN | Test if this value is between<sup>1</sup> a range of values. | `Select.from("bookshop.Books").where(b -> b.get("stock").between(5, 10));` |

<sup>1</sup> Upper and lower bound are included.

#### `IN` Predicate

The `IN` predicate tests if a value is equal to any value in a given list. The following example filters for books written by Poe or Hemingway:

```java
Select.from(BOOKS)
    .where(b -> b.author().name().in("Poe", "Hemingway"));
```

The values can also be given as a list:

```java
List<String> authorNames = List.of("Poe", "Hemingway");
Select.from(BOOKS)
    .where(b -> b.author().name().in(authorNames));
```

You can also use the `IN` predicate to compare multiple [list values](#list-values) at once, for example, to efficiently filter by multiple key value sets:

```java
import static com.sap.cds.ql.CQL.*;

CqnListValue elements = list(get("AirlineID"), get("ConnectionID"));
CqnListValue lh454 = list(val("LH"), val(454));
CqnListValue ba119 = list(val("BA"), val(119));

Select.from(FLIGHT_CONNECTION).where(in(elements, List.of(lh454, ba119)));
```

#### `IN` Subquery Predicate

Use the `in` subquery to test if an element (or tuple of elements) of an outer query is contained in the result of a subquery.
```java
// fluent style
Select.from(AUTHORS).where(author -> author.name().in(
    Select.from(JOURNALISTS).columns(journalist -> journalist.name())
));
```

In this example, we check whether the tuple (`firstName`, `lastName`) is contained in the result of the subquery:

```java
// generic tree style via CQL api
CqnListValue fullName = CQL.list(CQL.get("firstName"), CQL.get("lastName"));
CqnSelect subquery = Select.from("socialmedia.Journalists").columns("firstName", "lastName");

Select.from("bookshop.Authors").where(CQL.in(fullName, subquery));
```

#### `ETag Predicate` {#etag-predicate}

The [ETag predicate](query-execution#etag-predicate) specifies expected ETag values for [conflict detection](query-execution#optimistic) in an [update](#update) or [delete](#delete) statement:

```java
Instant expectedLastModification = ...;
Update.entity(ORDER)
    .entry(newData)
    .where(o -> o.id().eq(85).and(o.eTag(expectedLastModification)));
```

You can also use the `eTag` methods of the `CQL` interface to construct an ETag predicate in [tree style](#cql-helper-interface):

```java
import static com.sap.cds.ql.CQL.*;

Instant expectedLastModification = ...;
Update.entity(ORDER)
    .entry(newData)
    .where(and(get("id").eq(85), eTag(expectedLastModification)));
```

#### `Logical Operators` {#logical-operators}

Predicates can be combined using logical operators:
| Operator | Description | Example |
| --- | --- | --- |
| AND | Returns a predicate that represents a logical AND of this predicate and another. | `Select.from("bookshop.Authors").where(a -> a.get("name").eq("Peter").and(a.get("Id").eq(1)));` |
| OR | Returns a predicate that represents a logical OR of this predicate and another. | `Select.from("bookshop.Authors").where(a -> a.get("name").eq("Peter").or(a.get("Id").eq(1)));` |
| NOT | Returns a predicate that represents the logical negation of this predicate. | `Select.from("bookshop.Authors").where(a -> not(a.get("Id").eq(3)));` |
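One thing to keep in mind when combining these operators is that fluent chaining is left-associative: `p1.or(p2).and(p3)` builds `(p1 OR p2) AND p3`, not `p1 OR (p2 AND p3)`, because each call applies to the predicate built so far. The following plain-Java sketch with `java.util.function.Predicate` (an analogy for illustration only, not the CAP API; the predicates are made up) shows the same chaining behavior:

```java
import java.util.function.Predicate;

public class ChainingDemo {
    public static void main(String[] args) {
        Predicate<Integer> p1 = n -> n >= 10;     // stand-in for name = 'Peter'
        Predicate<Integer> p2 = n -> n < 0;       // stand-in for Id = 1
        Predicate<Integer> p3 = n -> n % 2 == 0;  // stand-in for a third condition

        // left-associative chaining: (p1 OR p2) AND p3
        Predicate<Integer> chained = p1.or(p2).and(p3);

        System.out.println(chained.test(12)); // (true OR false) AND true  -> true
        System.out.println(chained.test(11)); // (true OR false) AND false -> false
    }
}
```

If a different grouping is intended, build the inner predicate first and pass it as the argument, for example `p1.or(p2.and(p3))`.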
#### `Predicate Functions` {#predicate-functions}

These boolean-valued functions can be used in filters:
| Operator | Description | Example |
| --- | --- | --- |
| CONTAINS | Test if this string value contains a given substring. | `Select.from(EMPLOYEE).where(e -> e.name().contains("oni"));` |
| STARTS WITH | Test if this string value starts with a given prefix. | `Select.from("bookshop.Books").where(b -> b.get("title").startsWith("The"));` |
| ENDS WITH | Test if this string value ends with a given suffix. | `Select.from("bookshop.Books").where(b -> b.get("title").endsWith("Raven"));` |
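As a quick sanity check, these predicates behave like the corresponding `java.lang.String` methods applied as a row filter. The plain-Java sketch below (an illustration, not the CAP API; the titles are made up) mimics `where(b -> b.get("title").startsWith("The"))`:

```java
import java.util.List;

public class StringPredicateDemo {
    public static void main(String[] args) {
        // hypothetical book titles standing in for rows of bookshop.Books
        List<String> titles = List.of("The Raven", "Wuthering Heights", "The Hobbit");

        // row-filter equivalent of .where(b -> b.get("title").startsWith("The"))
        List<String> result = titles.stream()
            .filter(t -> t.startsWith("The"))
            .toList();

        System.out.println(result); // [The Raven, The Hobbit]
    }
}
```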
#### `matchesPattern` Predicate {#matches-pattern}

The `matchesPattern` predicate is applied to a String value and tests if it matches a given regular expression. The regular expressions are evaluated on the database. Therefore, the supported syntax of the regular expression and the options you can use depend on the database you are using.

For example, the following code matches books whose title contains the word "CAP":

```java
Select.from("bookshop.Books").where(t -> t.get("title").matchesPattern("CAP"));
```

::: tip
As a general rule, consider regular expressions as a last resort. They are powerful, but also complex and hard to read. For simple string operations, prefer simpler functions like `contains`.
:::

In the following example, the title of the book must start with the letter `C`, end with the letter `e`, and contain any number of word characters in between:

```java
Select.from("bookshop.Books").where(t -> t.get("title").matchesPattern("^C\\w*e$"));
```

The behavior of the regular expression can be customized with options that can be passed as a second argument of the predicate. The set of supported options and their semantics depends on the underlying database. For example, the following code matches titles that begin with the word "CAP", ignoring the case of the letters:

```java
Select.from("bookshop.Books").where(t -> t.get("title").matchesPattern(CQL.val("^CAP.+$"), CQL.val("i")));
```

#### `anyMatch/allMatch` Predicate {#any-match}

The `anyMatch` and `allMatch` predicates are applied to an association and test if _any_ instance/_all_ instances of the associated entity set match a given filter condition. They are supported in filter conditions of [Select](#select), [Update](#update) and [Delete](#delete) statements.
This query selects the authors that have written any book in the year 2000 that is published by a publisher whose name starts with 'X':

```java
import static bookshop.Bookshop_.AUTHORS;

Select.from(AUTHORS)
    .where(a -> a.books().anyMatch(b ->
        b.year().eq(2000).and(b.publisher().name().startsWith("X"))));
```

The next statement deletes all authors that have published all their books with publisher 'A':

```java
Delete.from(AUTHORS).where(a -> a.books().allMatch(b -> b.publisher().name().eq("A")));
```

The reference to which `anyMatch`/`allMatch` is applied may navigate multiple path segments. The following query selects all authors for which the publisher of all books is named "CAP Publications":

```java
Select.from(AUTHORS).where(a -> a.books().publisher().allMatch(p -> p.name().eq("CAP Publications")));
```

This is equivalent to:

```java
Select.from(AUTHORS).where(a -> a.books().allMatch(b -> b.publisher().name().eq("CAP Publications")));
```

As in the previous example, a reference used in a match predicate filter may navigate to-one associations. Nested match predicates are needed if you want to express a condition in a match predicate filter on a reference that navigates to-many associations. The following example selects authors that have written a book where the word "unicorn" occurs on all pages:

```java
Select.from(AUTHORS).where(a -> a.books().anyMatch(
    b -> b.pages().allMatch(p -> p.text().contains("unicorn"))));
```

#### `EXISTS` Subquery {#exists-subquery}

An `exists` subquery is used to test if a subquery returns any records. Typically, a subquery is correlated with the enclosing _outer_ query. You construct an `exists` subquery with the [`exists`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/StructuredType.html#exists-java.util.function.Function-) method, which takes a [function](#lambda-expressions) that creates the subquery from a reference to the _outer_ query.
To access elements of the outer query from within the subquery, this _outer_ reference must be used:

```java
import static bookshop.Bookshop_.AUTHORS;
import static socialmedia.Journalists_.JOURNALISTS;

// fluent style
Select.from(AUTHORS)
    .where(author -> author.exists($outer ->
        Select.from(JOURNALISTS).where(journalist -> journalist.name().eq($outer.name()))
    )
);
```

This query selects all authors with the name of a journalist.

::: tip
With an `exists` subquery, you can correlate entities that aren't linked with associations.
:::

When using the [tree-style API](#composing-predicates), the _outer_ query is addressed by the special reference name `"$outer"`:

```java
// tree style
CqnSelect subquery = Select.from("Journalists")
    .where(a -> a.get("name").eq(CQL.get("$outer.name")));

Select.from("Authors").where(CQL.exists(subquery));
```

> **Note:** Chaining `$outer` in nested subqueries is not supported.

## Parsing CQN

[CQL](../../cds/cql) queries can also be constructed from a [CQN](../../cds/cqn) string*:

```java
String cqnQuery = """
    {'SELECT': {'from': {'ref': ['my.bookshop.Books']},
     'where': [{'ref': ['title']}, '=', {'val': 'Capire'}]}}
    """;
CqnSelect query = Select.cqn(cqnQuery);
```

> \* For readability reasons, we used single quotes instead of double quotes as required by the JSON specification.

The constructed queries can then be modified using the query builder API:

```java
String cqnQuery = ...
CqnSelect query = Select.cqn(cqnQuery).columns("price");
```

For `Insert`, `Update`, and `Delete` this is supported as well.

## CQL Expression Trees { #cql-helper-interface}

As an alternative to the fluent API, a [CQL](../../cds/cql) statement can be built, copied, and modified using the [CQL interface](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/CQL.html), which allows building and reusing parts of a statement.

### Composing Predicates

As opposed to the fluent API, it's possible to build queries in a tree style.
Consider the following example:

```java
// CQL: SELECT from Books where year >= 2000 and year <= 2010
//
//              AND
//               |
//     +---------+---------+
//     |                   |
//     >=                  <=
//     |                   |
// +---+---+           +---+---+
// |       |           |       |
// year   2000        year   2010

import static com.sap.cds.ql.CQL.*;
import com.sap.cds.ql.cqn.CqnComparisonPredicate.Operator;

CqnValue year = get("year");
CqnPredicate filter = and(comparison(year, Operator.GE, val(2000)),
                          comparison(year, Operator.LE, val(2010)));
```

In the previous example, `CQL.and` was used to build a predicate limiting the `year` between 2000 and 2010.

The CQL interface comes in handy when parts of a statement have to be built on the fly based on some condition. The following example demonstrates this, showing the usage of a `CQL.in` expression:

```java
// CQL: SELECT from Books where year >= 2000 and year <= 2010
// OR
// CQL: SELECT from Books where year in (2000, 2001, ...)

List<Integer> years = ...;
List<Value<Integer>> yearValues = years.stream().map(y -> val(y)).collect(toList());

CqnElementRef year = CQL.get("year");
CqnPredicate filter;
if (years.isEmpty()) {
    filter = and(comparison(year, Operator.GE, val(2000)),
                 comparison(year, Operator.LE, val(2010)));
} else {
    filter = CQL.in(year, yearValues);
}
Select.from("bookshop.Books").where(filter);
```

#### Connecting Streams of Predicates

You can leverage the Java Stream API to connect a stream of predicates with `AND` or `OR` using the collectors `withAnd` or `withOr`. In this example, we build a predicate that tests if a person matches any first name/last name pair in a list:

```java
List<Name> names = ...
CqnPredicate filter = names.stream()
    .map(n -> CQL.and(
        CQL.get("firstName").eq(n.first()),
        CQL.get("lastName").eq(n.last())))
    .collect(CQL.withOr());
```

### Working with Select List Items

In addition to `CQL.get`, which creates a reference to a particular element, it's also possible to reference all elements using the `CQL.star` method, and to use expands as well.
The next example demonstrates how to select all elements of `Book` and expand the elements of the book's associated `Author` with `CQL.to(...).expand()`:

```java
// SELECT from Books {*, author {*}}
Expand<?> authorItems = CQL.to("author").expand();
Select.from("bookshop.Books").columns(CQL.star(), authorItems);
```

### Using Functions and Arithmetic Expressions

The CQL interface provides multiple well-known functions such as `min`, `max`, `average`, and so on. The following example shows how to use a function call to query the `min` and `max` stock of the `Books`:

```java
// CQL: SELECT from Books { MIN(stock) as minStock, MAX(stock) as maxStock }
CqnElementRef stock = CQL.get("stock");
Select.from("bookshop.Books").columns(
    CQL.min(stock).as("minStock"),
    CQL.max(stock).as("maxStock"));
```

In addition, it's also possible to build a custom function using `CQL.func`:

```java
// CQL: SELECT from Books { LENGTH(title) as titleLength }
CqnElementRef title = CQL.get("title");
Select.from("bookshop.Books").columns(func("LENGTH", title).as("titleLength"));
```

Unlike `CQL.func`, which returns a value, `CQL.booleanFunc` constructs a function that returns a predicate and can thus be used in the `where` clause of a query. In the following example, the SAP HANA function `CONTAINS` is used to execute a fuzzy search on a column of the entity:

```java
Select.from("bookshop.Books")
    .where(e -> booleanFunc("CONTAINS",
        Arrays.asList(CQL.get(Books.TITLE).asRef(), val("Wuthering"), plain("FUZZY(0.5)"))));
```

Assume the `Book` entity has an element `price : Decimal`. One can calculate a discount price by subtracting a fixed value.
This can be done using `CQL.expression`:

```java
// CQL: SELECT from Books { *, price - 5 as discountPrice }
CqnSelectListValue discountPrice = CQL.expression(
    CQL.get("price"), Operator.SUB, CQL.val(5)).as("discountPrice"); // price reduced by 5
Select.from("bookshop.Books").columns(CQL.star(), discountPrice);
```

When using custom functions or expressions, you sometimes want to ensure that the return value is typed with a specific CDS type. You can use a CDL cast for this, by leveraging the `type` method. By default, values returned by custom functions or expressions are not typed. If no explicit CDL cast is applied, the representation of the return value in Java depends on the database and its JDBC driver implementation.

In the following example, the result of the `ADD_SECONDS` function is ensured to be represented as the CDS type `Timestamp`. This ensures the return value is typed as an `Instant` in Java.

```java
// CQL: SELECT from Books { ADD_SECONDS(modifiedAt, 30) as addedSeconds : Timestamp }
CqnElementRef modified = CQL.get("modifiedAt");
Select.from("bookshop.Books").columns(
    CQL.func("ADD_SECONDS", modified, CQL.constant(30))
       .type(CdsBaseType.TIMESTAMP).as("addedSeconds"));
```

## Copying & Modifying CDS QL Statements {#copying-modifying-cql-statements}

Use [`CQL::copy`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/CQL.html#copy-S-com.sap.cds.ql.cqn.Modifier-) and a modifier to copy and modify CDS QL statements and their components, such as values and predicates:

```java
import com.sap.cds.ql.CQL;

// CQL: SELECT from Books where title = 'Capire'
CqnSelect query = Select.from(BOOKS).where(b -> b.title().eq("Capire"));

CqnSelect copy = CQL.copy(query, modifier); // implement Modifier
```

By overriding the default implementations of the `Modifier` interface, different parts of a statement or predicate can be replaced in the copy.
The following sections show some common examples of statement modifications. For a complete list of modifier methods, check the [Modifier](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Modifier.html) interface.

### Replacing Predicates {#modify-where}

The following modifier replaces the `where` clause of the copy with a new predicate that connects the query's `where` clause with `or` to `title = 'CAP Java'`:

```java
import com.sap.cds.ql.CQL;

// query: SELECT from Books where title = 'Capire'
// copy:  SELECT from Books where title = 'Capire' or title = 'CAP Java'
CqnSelect copy = CQL.copy(query, new Modifier() {
    @Override
    public Predicate where(Predicate where) {
        return CQL.or(where, CQL.get("title").eq("CAP Java"));
    }
});
```

To replace comparison predicates, override the [`Modifier::comparison`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Modifier.html#comparison-com.sap.cds.ql.Value-com.sap.cds.ql.cqn.CqnComparisonPredicate.Operator-com.sap.cds.ql.Value-) method. The following modifier replaces the value of the `title` comparison with `'CAP'`:

```java
// query: SELECT from Books where title = 'Capire'
// copy:  SELECT from Books where title = 'CAP'
CqnSelect copy = CQL.copy(query, new Modifier() {
    @Override
    public Predicate comparison(Value<?> lhs, Operator op, Value<?> rhs) {
        if (lhs.isRef() && lhs.asRef().lastSegment().equals("title")) {
            rhs = CQL.val("CAP");
        }
        return CQL.comparison(lhs, op, rhs);
    }
});
```

### Replacing References {#modify-ref}

References to elements and structured types are _immutable_. You can replace them by overriding the `Modifier::ref` methods. The following modifier replaces the ref to the `Books` entity (1) in the copy of the query with a new ref that has a filter `year > 2000`, and replaces the `title` ref (2) with a new ref with "book" as alias.
```java
// query: SELECT from Books { title }
// copy:  SELECT from Books[year > 2000] { title as book }
CqnSelect copy = CQL.copy(query, new Modifier() {
    @Override // (1)
    public CqnStructuredTypeRef ref(CqnStructuredTypeRef ref) {
        return CQL.to(ref.firstSegment())
                  .filter(CQL.get("year").gt(2000))
                  .asRef();
    }

    @Override // (2)
    public CqnValue ref(CqnElementRef ref) {
        return CQL.get(ref.segments()).as("book");
    }
});
```

### Modify the Select List {#modify-select}

The modifier can also be used to add or remove select list items via [`Modifier::items`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Modifier.html#items-java.util.List-):

```java
// query: SELECT from Books where title = 'Capire'
// copy:  SELECT from Books {title, author {name}} where title = 'Capire'
CqnSelect copy = CQL.copy(query, new Modifier() {
    @Override
    public List<CqnSelectListItem> items(List<CqnSelectListItem> items) {
        items.add(CQL.get("title"));                 // add title
        items.add(CQL.to("author").expand("name"));  // expand author name
        return items;
    }
});
```

### Modify the Order-By Clause {#modify-order-by}

To modify the `orderBy` clause of a query, override [`Modifier::orderBy`](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/Modifier.html#orderBy-java.util.List-):

```java
// query: SELECT from Books where title = 'Capire'
// copy:  SELECT from Books where title = 'Capire' ORDER BY title desc
CqnSelect copy = CQL.copy(query, new Modifier() {
    @Override
    public List<CqnSortSpecification> orderBy(List<CqnSortSpecification> order) {
        order.add(CQL.get("title").desc());
        return order;
    }
});
```

# Executing CQL Statements

API to execute CQL statements on services accepting CQN queries.

## Query Execution { #queries}

[CDS Query Language (CQL)](./query-api) statements can be executed using the `run` method of any [service that accepts CQN queries](../cqn-services/#cdsservices):

```java
CqnService service = ...
CqnSelect query = Select.from("bookshop.Books")
    .columns("title", "price");

Result result = service.run(query);
```

### Parameterized Execution

Queries, as well as update and delete statements, can be parameterized with _named_ or _indexed_ parameters. Update and delete statements with _named_ parameters can be executed in batch mode using multiple parameter sets.

#### Named Parameters

The following statement uses two parameters named *id1* and *id2*. The parameter values are given as a map:

```java
import static com.sap.cds.ql.CQL.param;

CqnDelete delete = Delete.from("bookshop.Books")
    .where(b -> b.get("ID").eq(param("id1"))
        .or(b.get("ID").eq(param("id2"))));

Map<String, Object> paramValues = new HashMap<>();
paramValues.put("id1", 101);
paramValues.put("id2", 102);

Result result = service.run(delete, paramValues);
```

::: warning
The parameter value map **must** be of type `Map<String, Object>`, otherwise the map is interpreted as a single positional/indexed parameter value, which results in an error.
:::

#### Indexed Parameters

The following statement uses two indexed parameters defined through `param(i)`:

```java
import static com.sap.cds.ql.CQL.param;

CqnDelete delete = Delete.from("bookshop.Books")
    .where(b -> b.get("ID").in(param(0), param(1)));

Result result = service.run(delete, 101, 102);
```

Before the execution of the statement, the values 101 and 102 are bound to the defined parameters.

#### Batch Execution

Update and delete statements with _named parameters_ can be executed as batch with multiple parameter sets.
The named parameters example from above can be expressed using a batch delete with a single parameter and two value sets:

```java
import static com.sap.cds.ql.CQL.param;

CqnDelete delete = Delete.from("bookshop.Books").byParams("ID");

Map<String, Object> paramSet1 = singletonMap("ID", 101);
Map<String, Object> paramSet2 = singletonMap("ID", 102);

Result result = service.run(delete, asList(paramSet1, paramSet2));
long deletedRows = result.rowCount();
```

From the result of a batch update/delete, the total number of updated/deleted rows can be determined by [rowCount()](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/Result.html#rowCount--), and [rowCount(batchIndex)](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/Result.html#rowCount-int-) returns the number of updated/deleted rows for a specific parameter set of the batch. The number of batches can be retrieved via the [batchCount()](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/Result.html#batchCount--) method. Batch updates also return the update data.

The maximum batch size for update and delete can be configured via `cds.sql.max-batch-size` and has a default of 1000.

#### Querying Parameterized Views on SAP HANA { #querying-views}

To query [views with parameters](../../advanced/hana#views-with-parameters) on SAP HANA, build a select statement and execute it with [named parameter](#named-parameters) values that correspond to the view's parameters.
Let's consider the following `Books` entity and a parameterized view `BooksView`, which returns the `ID` and `title` of `Books` with `stock` greater or equal to the value of the parameter `minStock`:

```cds
entity Books {
    key ID : UUID;
    title  : String;
    stock  : Integer;
}

entity BooksView(minStock : Integer) as
    SELECT from Books {ID, title} where stock >= :minStock;
```

To query `BooksView` in Java, run a select statement and provide values for all view parameters:

```java
CqnSelect query = Select.from("BooksView");
var params = Map.of("minStock", 100);

Result result = service.run(query, params);
```

#### Adding Query Hints for SAP HANA { #hana-hints}

To add a hint clause to a statement, use the `hints` method and prefix the [SAP HANA hints](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/4ba9edce1f2347a0b9fcda99879c17a1.html) with `hdb.`:

```java
CqnSelect query = Select.from(BOOKS).hints("hdb.USE_HEX_PLAN", "hdb.ESTIMATION_SAMPLES(0)");
```

::: warning
Hints prefixed with `hdb.` are directly rendered into SQL for SAP HANA and therefore **must not** contain external input!
:::

### Data Manipulation

The CQN API allows to manipulate data by executing insert, update, delete, or upsert statements.

#### Update

The [update](./query-api) operation can be executed as follows:

```java
Map<String, Object> book = Map.of("title", "CAP");

CqnUpdate update = Update.entity("bookshop.Books").data(book).byId(101);
Result updateResult = service.run(update);
```

The update `Result` contains the data that is written by the statement execution. In addition to the given data, it may contain values generated for [managed data](../../guides/domain-modeling#managed-data) and foreign key values. The [row count](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/Result.html#rowCount()) of the update `Result` indicates how many rows were updated during the statement execution:

```java
CqnUpdate update = ...
long rowCount = service.run(update).rowCount();
```

If no rows are touched, the execution is successful but the row count is 0.

::: warning
The setters of an [update with expressions](../working-with-cql/query-api#update-expressions) are evaluated on the database. The result of these expressions is not contained in the update result.
:::

### Working with Structured Documents

It's possible to work with structured data, as the insert, update, and delete operations cascade along *compositions*.

#### Cascading over Associations { #cascading-over-associations}

By default, *insert*, *update* and *delete* operations cascade over [compositions](../../guides/domain-modeling#compositions) only. For associations, this can be enabled using the `@cascade` annotation.

::: warning
Cascading operations over associations isn't considered good practice and should be avoided.
:::

Annotating an *association* with `@cascade: {insert, update, delete}` enables deep updates/upserts through this association. Given the following CDS model with two entities and an association between them, only *insert* and *update* operations are cascaded through `author`:

```cds
entity Book {
    key ID : Integer;
    title  : String;

    @cascade: {insert, update}
    author : Association to Author;
}

entity Author {
    key ID : Integer;
    name   : String;
}
```

::: warning _❗ Warning_
For inactive draft entities, `@cascade` annotations are ignored.
:::

::: warning _❗ Warning_
The `@cascade` annotation is not respected by foreign key constraints on the database. To avoid unexpected behavior, you might have to disable a foreign key constraint with [`@assert.integrity:false`](../../guides/databases#database-constraints).
:::

#### Deep Insert / Upsert { #deep-insert-upsert}

[Insert](./query-api#insert) and [upsert](./query-api#upsert) statements for an entity have to include the keys and (optionally) data for the entity's composition targets. The targets are inserted or upserted along with the root entity.
```java
Iterable<Map<String, Object>> books = ...

CqnInsert insert = Insert.into("bookshop.Books").entries(books);
Result insertResult = service.run(insert);

CqnUpsert upsert = Upsert.into("bookshop.Books").entries(books);
Result upsertResult = service.run(upsert);
```

#### Cascading Delete

The [delete](./query-api) operation is cascaded along the entity's compositions. All composition targets that are reachable from the (to be deleted) entity are deleted as well.

The following example deletes the order with ID *1000* including all its items:

```java
CqnDelete delete = Delete.from("bookshop.Orders").matching(singletonMap("OrderNo", 1000));
long deleteCount = service.run(delete).rowCount();
```

### Resolvable Views and Projections { #updatable-views}

The CAP Java SDK aims to resolve statements on non-complex views and projections to their underlying entity. When delegating queries between Application Services and Remote Services, statements are resolved to the entity definitions of the targeted service. Using the Persistence Service, only modifying statements are resolved before executing database queries. This allows executing [Insert](./query-api#insert), [Upsert](./query-api#upsert), [Update](./query-api#update), and [Delete](./query-api#delete) operations on database views. For [Select](./query-api#select) statements, database views are always leveraged, if available.

Views and projections can be resolved if the following conditions are met:

- The view definition does not use any other clause than `columns` and `excluding`.
- The projection includes all key elements, with the exception of insert operations with generated UUID keys.
- The projection includes all elements with a `not null` constraint, unless they have a default value.
- The projection must not include calculated fields when running queries against a remote OData service.
- The projection must not include [path expressions](../../cds/cql#path-expressions) using to-many associations.
For [Insert](./query-api#insert) or [Update](./query-api#update) operations, if the projection contains functions or expressions, these values are ignored. Path expressions navigating *to-one* associations can be used in projections, as shown by the `Header` view in the following example. The `Header` view includes the element `country` from the associated entity `Address`.

```cds
// Supported
entity Order as projection on bookshop.Order;
entity Order as projection on bookshop.Order { ID, status as state };
entity Order as projection on bookshop.Order excluding { status };
entity Header as projection on bookshop.OrderHeader { key ID, address.country as country };
```

If a view is too complex to be resolved by the CDS runtime, the statement remains unmodified. Views that use `join`, `union`, or a `where` clause cannot be resolved by the CDS runtime.

- For the Persistence Service, this means the runtime _attempts_ to execute the write operation on the database view. Whether this execution is possible is [database dependent](../cqn-services/persistence-services#database-support).
- For Application Services and Remote Services, the targeted service will reject the statement.

Example of a view that can't be resolved:

```cds
// Unsupported
entity DeliveredOrders as select from bookshop.Order where status = 'delivered';
entity Orders as select from bookshop.Order inner join bookshop.OrderHeader
  on Order.header.ID = OrderHeader.ID { Order.ID, Order.items, OrderHeader.status };
```

## Concurrency Control

Concurrency control allows protecting your data against unexpected concurrent changes.

### Optimistic Concurrency Control {#optimistic}

Use _optimistic_ concurrency control to detect concurrent modification of data _across requests_. The implementation relies on an _ETag_, which changes whenever an entity instance is updated. Typically, the ETag value is stored in an element of the entity.
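The core mechanics can be sketched in plain Java: an update only applies when the caller's expected ETag still matches the stored one, and success is signalled through the row count. The following is an illustrative in-memory sketch, not the CAP Java API; all class and method names are assumptions for this example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory sketch of ETag-based optimistic concurrency
// (plain Java, not the CAP Java API): an update is applied only if the
// caller's expected ETag still matches the stored one; otherwise the
// "row count" stays 0, analogous to an update with an ETag predicate.
class OrderStore {
  private final Map<Integer, String> status = new HashMap<>();
  private final Map<Integer, String> etags = new HashMap<>();

  void insert(int id, String state, String etag) {
    status.put(id, state);
    etags.put(id, etag);
  }

  // Returns the number of updated rows, analogous to Result.rowCount()
  int update(int id, String newState, String expectedEtag, String newEtag) {
    if (!expectedEtag.equals(etags.get(id))) {
      return 0; // concurrent modification detected, nothing is changed
    }
    status.put(id, newState);
    etags.put(id, newEtag); // the ETag changes with every successful update
    return 1;
  }

  String status(int id) {
    return status.get(id);
  }
}
```

A second writer still holding the old ETag gets a row count of 0 and must re-read the entity before retrying.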
#### Optimistic Concurrency Control in OData

In the [OData protocol](../../guides/providing-services#etag), the implementation relies on `ETag` and `If-Match` headers in the HTTP request. The `@odata.etag` annotation indicates to the OData protocol adapter that the value of an annotated element should be [used as the ETag for conflict detection](../../guides/providing-services#etag): {#on-update-example}

```cds
entity Order : cuid {
  @odata.etag
  @cds.on.update : $now
  @cds.on.insert : $now
  modifiedAt : Timestamp;
  product : Association to Product;
}
```

#### The ETag Predicate {#etag-predicate}

An ETag can also be used programmatically in custom code. Use the `CqnEtagPredicate` to specify the expected ETag values in an update or delete operation. ETag checks are not executed on upsert. You can create an ETag predicate using the `CQL.eTag` or the `StructuredType.eTag` methods.

```java
PersistenceService db = ...
Instant expectedLastModification = ...;
CqnUpdate update = Update.entity(ORDER).entry(newData)
    .where(o -> o.id().eq(85).and(
                o.eTag(expectedLastModification)));

Result rs = db.execute(update);

if (rs.rowCount() == 0) {
  // order 85 does not exist or was modified concurrently
}
```

In the previous example, an `Order` is updated. The update is protected with a specified ETag value (the expected last modification timestamp). The update is executed only if the expectation is met.

::: warning Application has to check the result
No exception is thrown if an ETag validation does not match. Instead, the execution of the update (or delete) succeeds but doesn't apply any changes. Ensure that the application checks the `rowCount` of the `Result` and implements appropriate error handling. A `rowCount` of 0 indicates that no row was updated (or deleted).
:::

#### Providing new ETag Values with Update Data

A convenient option to determine a new ETag value upon update is the [@cds.on.update](../../guides/domain-modeling#cds-on-update) annotation, as in the [example above](#on-update-example). The CAP Java runtime automatically handles the `@cds.on.update` annotation and sets a new value in the data before the update is executed. Such _managed data_ can be used with ETags of type `Timestamp` or `UUID` only.

We do not recommend providing a new ETag value by custom code in a `@Before`-update handler. If you do set a value explicitly in custom code and an ETag element is annotated with `@cds.on.update`, the runtime does not generate a new value upon update for this element. Instead, the value that comes from your custom code is used.

#### Runtime-Managed Versions

Alternatively, you can store ETag values in _version elements_. For version elements, the values are exclusively managed by the runtime, without the option to set them in custom code. Annotate an element with `@cds.java.version` to advise the runtime to manage its value.

```cds
entity Order : cuid {
  @odata.etag
  @cds.java.version
  version : Int32;
  product : Association to Product;
}
```

Compared to `@cds.on.update`, which allows for ETag elements with type `Timestamp` or `UUID` only, `@cds.java.version` additionally supports all integral types `Uint8`, ... `Int64`. For timestamps, the value is set to `$now` upon update; for elements of type UUID, a new UUID is generated; and for elements of integral type, the value is incremented.

Version elements can be used with an [ETag predicate](#etag-predicate) to programmatically check an expected ETag value. Moreover, if additionally annotated with `@odata.etag`, they can be used for [conflict detection](../../guides/providing-services#etag) in OData.

##### Expected Version from Data

If the update data contains a value for a version element, this value is used as the _expected_ value for the version.
This allows using version elements in a programmatic flow conveniently:

```java
PersistenceService db = ...

CqnSelect select = Select.from(ORDER).byId(85);
Order order = db.run(select).single(Order.class);

order.setAmount(5000);

CqnUpdate update = Update.entity(ORDER).entry(order);
Result rs = db.execute(update);
if (rs.rowCount() == 0) {
  // order 85 does not exist or was modified concurrently
}
```

During the execution of the update statement, it's asserted that the version element still has the value that was read previously, and hence no concurrent modification occurred.

The same convenience can be used in bulk operations. Here, the individual update counts need to be introspected.

```java
CqnSelect select = Select.from(ORDER).where(o -> o.amount().gt(1000));
List<Order> orders = db.run(select).listOf(Order.class);

orders.forEach(o -> o.setStatus("cancelled"));
Result rs = db.execute(Update.entity(ORDER).entries(orders));

for (int i = 0; i < orders.size(); i++) {
  if (rs.rowCount(i) == 0) {
    // order does not exist or was modified concurrently
  }
}
```

> If an [ETag predicate is explicitly specified](#providing-new-etag-values-with-update-data), it overrules a version value given in the data.

### Pessimistic Locking { #pessimistic-locking}

Use database locks to ensure that data returned by a query isn't modified in a concurrent transaction. _Exclusive_ locks block concurrent modification and the creation of any other lock. _Shared_ locks, however, only block concurrent modifications and exclusive locks, but allow the concurrent creation of other shared locks.

To lock data:

1. Start a transaction (either manually or let the framework take care of it).
2. Query the data and set a lock on it.
3. Perform the processing and, if an exclusive lock is used, modify the data inside the same transaction.
4. Commit (or roll back) the transaction, which releases the lock.
To query and lock the data until the transaction is completed, call the [`lock()`](./query-api#write-lock) method and set an optional parameter `timeout`.

In the following example, a book with `ID` 1 is selected and locked until the transaction is finished. Thus, one can avoid situations in which other threads or clients try to modify the same data in the meantime:

```java
// Start transaction

// Obtain and set a write lock on the book with id 1
service.run(Select.from("bookshop.Books").byId(1).lock());
...
// Update the book locked earlier
Map<String, Object> data = Collections.singletonMap("title", "new title");
service.run(Update.entity("bookshop.Books").data(data).byId(1));

// Finish transaction
```

The `lock()` method has an optional parameter `timeout` that indicates the maximum number of seconds to wait for the lock acquisition. If a lock can't be obtained within the `timeout`, a `CdsLockTimeoutException` is thrown. If `timeout` isn't specified, a database-specific default timeout is used.

The parameter `mode` allows specifying whether an `EXCLUSIVE` or a `SHARED` lock should be set.

## Runtime Views { #runtimeviews}

The CDS compiler generates [SQL DDL](../../guides/databases?impl-variant=java#generating-sql-ddl) statements based on your CDS model, which include SQL views for all CDS [views and projections](../../cds/cdl#views-projections). This means adding or changing CDS views requires a deployment of the database schema changes.

To avoid schema updates due to adding or updating CDS views, annotate them with [@cds.persistence.skip](../../guides/databases#cds-persistence-skip). In this case, the CDS compiler won't generate corresponding static database views. Instead, the CDS views are dynamically resolved by the CAP Java runtime.
```cds
entity Books {
  key id : UUID;
  title  : String;
  stock  : Integer;
  author : Association to one Authors;
}

@cds.persistence.skip // [!code focus]
entity BooksWithLowStock as projection on Books { // [!code focus]
  id, title, author.name as author // [!code focus]
} where stock < 10; // [!code focus]
```

At runtime, CAP Java resolves queries against runtime views until an entity is reached that isn't annotated with *@cds.persistence.skip*. For example, the CQL query

```sql
Select BooksWithLowStock where author = 'Kafka'
```

is executed against SQL databases as

```SQL
SELECT B.ID, B.TITLE, A.NAME as "author"
FROM BOOKS AS B
LEFT OUTER JOIN AUTHORS AS A ON B.AUTHOR_ID = A.ID
WHERE B.STOCK < 10 AND A.NAME = ?
```

::: warning Limitations
Runtime views are supported for simple [CDS projections](../../cds/cdl#as-projection-on). Constant values, expressions such as *case when*, and [association filters](../../cds/cdl#publish-associations-with-filter) are currently ignored. Complex views using aggregations or union/join/subqueries in `FROM` are not supported.
:::

### Using I/O Streams in Queries

As described in section [Predefined Types](../cds-data#predefined-types), it's possible to stream the data, if the element is annotated with `@Core.MediaType`. The following example demonstrates how to allocate the stream for element `coverImage`, pass it through the API to an underlying database, and close the stream.

Entity `Books` has an additional annotated element `coverImage : LargeBinary`:

```cds
entity Books {
  key ID : Integer;
  title  : String;
  ...
  @Core.MediaType
  coverImage : LargeBinary;
}
```

Java snippet for creating element `coverImage` from file `IMAGE.PNG` using `java.io.InputStream`:

```java
// Transaction started

Result result;
try (InputStream resource = getResource("IMAGE.PNG")) {
  Map<String, Object> book = new HashMap<>();
  book.put("title", "My Fancy Book");
  book.put("coverImage", resource);

  CqnInsert insert = Insert.into("bookshop.Books").entry(book);
  result = service.run(insert);
}

// Transaction finished
```

### Using Native SQL

CAP Java doesn't have a dedicated API to execute native SQL statements. However, when using Spring as application framework, you can leverage Spring's features to execute native SQL statements. See [Execute SQL statements with Spring's JdbcTemplate](../cqn-services/persistence-services#jdbctemplate) for more details.

## Query Result Processing { #result}

The result of a query is abstracted by the `Result` interface, which is an iterable of `Row`. A `Row` is a `Map<String, Object>` with additional convenience methods and extends [CdsData](../cds-data#cds-data).

You can iterate over a `Result`:

```java
Result result = ...

for (Row row : result) {
  System.out.println(row.get("title"));
}
```

Or process it with the [Stream API](https://docs.oracle.com/javase/8/docs/api/?java/util/stream/Stream.html):

```java
Result result = ...

result.forEach(r -> System.out.println(r.get("title")));

result.stream().map(r -> r.get("title")).forEach(System.out::println);
```

If your query is expected to return exactly one row, you can access it with the `single` method:

```java
Result result = ...

Row row = result.single();
```

If the result may be empty, as for a find-by-id query, you can obtain the optional first row using `first`:

```java
Result result = ...

Optional<Row> row = result.first();
row.ifPresent(r -> System.out.println(r.get("title")));
```

The `Row`'s `getPath` method supports paths to simplify extracting values from nested maps. This also simplifies extracting values from results with to-one expands using the generic accessor.
Paths with collection-valued segments and infix filters are not supported.

```java
CqnSelect select = Select.from(BOOKS).columns(
    b -> b.title(),
    b -> b.author().expand()).byId(101);
Row book = dataStore.execute(select).single();

String author = book.getPath("author.name");
```

### Null Values

A result row _may_ contain `null` values for an element of the result if no data is present for the element in the underlying data store.

Use the `get` methods to check if an element is present in the result row:

```java
if (row.get("name") == null) {
  // handle missing value for name
}
```

Avoid using `containsKey` to check for the presence of an element in the result row. Also, when iterating the elements of the row, keep in mind that the data _may_ contain `null` values:

```java
row.forEach((k, v) -> {
  if (v == null) {
    // handle missing value for element k
  }
});
```

### Typed Result Processing

The element names and their types are checked only at runtime. Alternatively, you can use interfaces to get [typed access](../cds-data#typed-access) to the result data:

```java
interface Book {
  String getTitle();
  Integer getStock();
}

Row row = ...
Book book = row.as(Book.class);

String title = book.getTitle();
Integer stock = book.getStock();
```

Interfaces can also be used to get a typed list or stream over the result:

```java
Result result = ...

List<Book> books = result.listOf(Book.class);

Map<String, Integer> titleToStock = result.streamOf(Book.class)
    .collect(Collectors.toMap(Book::getTitle, Book::getStock));
```

For the entities defined in the data model, the CAP Java SDK can generate interfaces for you through [a Maven plugin](../cqn-services/persistence-services#staticmodel).

### Using Entity References from Result Rows in CDS QL Statements {#entity-refs}

For result rows that contain all key values of an entity, you get an [entity reference](./query-api#entity-refs) via the `ref()` method. This reference addresses the entity via the key values from the result row.
```java
// SELECT from Author[101]
CqnSelect query = Select.from(AUTHOR).byId(101);
Author authorData = service.run(query).single(Author.class);

String authorName = authorData.getName(); // data access
Author_ author = authorData.ref();        // typed reference to Author[101]
```

Similarly for untyped results:

```java
Row authorData = service.run(query).single();
StructuredType<?> author = authorData.ref(); // untyped reference to Author[101]
```

This also works for `Insert` and `Update` results:

```java
CqnUpdate update = Update.entity(AUTHOR).data("name", "James Joyce").byId(101);
Author_ joyce = service.run(update).single(Author.class).ref();
```

Using entity references, you can easily write CDS QL statements targeting the source entity:

```java
// SELECT from Author[101].books { sum(stock) as stock }
CqnSelect q = Select.from(joyce.books())
    .columns(b -> func("sum", b.stock()).as("stock"));

CqnInsert i = Insert.into(joyce.books())
    .entry("title", "Ulysses");

CqnUpdate u = Update.entity(joyce.biography())
    .data("price", 29.95);

CqnDelete d = Delete.from(joyce.address())
    .where(b -> b.stock().lt(1));
```

### Introspecting the Row Type

The `rowType` method allows introspecting the element names and types of a query's `Result`. It returns a `CdsStructuredType` describing the result in terms of the [Reflection API](../reflection-api):

```java
CqnSelect query = Select.from(AUTHOR)
    .columns(a -> a.name().as("authorName"), a -> a.age());
Result result = service.run(query);

CdsStructuredType rowType = result.rowType();
rowType.elements();                                     // "authorName", "age"
rowType.getElement("age").getType().getQualifiedName(); // "cds.Integer"
rowType.findElement("ID");                              // Optional.empty()
```

# Introspecting CQL Statements

API to introspect CDS Query Language (CQL) statements in Java.

## Introduction

Handlers of [CQN-based services](../cqn-services/#cdsservices) often need to understand the incoming CQN statements. The statement analysis can be done in two different ways.
Depending on the complexity of the statement, it can be done using:

- CQN Analyzer: A specialized API to extract filter values from filter predicates of queries, and to analyze the structure and filters of references
- CQN Visitor: A general purpose API to traverse CQN token trees such as expressions, predicates, values, etc.

### CqnAnalyzer vs. CqnVisitor

The `CqnAnalyzer` allows for analysis and extraction of element values for most queries, but it comes with some limitations. The main rule here is:

::: tip
The value of an element reference in a `where` and `filter` predicate must be unambiguously identified.
:::

This implies the following:

- The operator of a comparison predicate must be either `eq` or `is`:

```java
Select.from("bookshop.Book").where(b -> b.get("ID").eq(42));
```

- Only the conjunction `and` is used to connect predicates:

```java
Select.from("bookshop.Book")
  .where(b -> b.get("ID").eq(42).and(b.get("title").is("Capire")));
```

This rule also applies to all segments of all references of the query, be it a simple query or one with a path expression:

```java
Select.from("bookshop.Book", b -> b.filter(b.get("ID").eq(41))
  .to("author").filter(a -> a.get("Id").eq(1)));
```

### When to Use What

Use `CqnAnalyzer` when element references of the query are:

- Unambiguously mapped to a value by: a comparison predicate using `eq` or `is`, used in `byId`, or a `matching` clause
- Used in conjunction (`and`) predicates

Use `CqnVisitor` when element references of the query are:

- Compared with the `lt`, `gt`, `le`, `ge`, `ne`, or `isNot` operator
- Used within `in`
- Negated with `not`
- Used in `search`
- Used in functions
- Used in subqueries
- Referencing elements of an associated entity

## CqnAnalyzer

The [CQL](../../cds/cql) introspection API allows analyzing [CQL](../../cds/cql) statements and extracting values and information on the CDS entities in references.
The [CqnAnalyzer](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ql/cqn/CqnAnalyzer.html) can be constructed from a [CDS model](../reflection-api#the-cds-model):

```java
import com.sap.cds.ql.cqn.CqnAnalyzer;

CdsModel cdsModel = context.getModel();
CqnAnalyzer cqnAnalyzer = CqnAnalyzer.create(cdsModel);
```

Furthermore, the static `isCountQuery(cqn)` method can be used to check if a [CQL](../../cds/cql) query only returns a single count:

```java
// cqn: Select.from("Books").columns(CQL.count().as("bookCount"));
boolean isCount = CqnAnalyzer.isCountQuery(cqn); // true
```

### Usage

Given the following CDS model and CQL query:

```cds
entity Orders {
  key OrderNo : String;
  Items : Composition of many OrderItems on Items.parent = $self;
  ...
}

entity OrderItems {
  key ID : Integer;
  book : Association to Books;
  ...
}
```

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples-java/blob/5396b0eb043f9145b369371cfdfda7827fedd039/db/schema.cds#L31-L36){.learn-more}

```sql
--CQL query
SELECT from Orders[OrderNo = '42']:items[ID = 1]
```

the corresponding CQN statement can be analyzed using the `analyze` method of the `CqnAnalyzer`:

```java
CqnStatement cqn = context.getCqn();
AnalysisResult result = cqnAnalyzer.analyze(cqn.ref());
```

### Resolving CDS Entities

Based on the `AnalysisResult`, information on the CDS entities can be accessed through the [Reflection API](../reflection-api):

```java
CdsEntity order = result.rootEntity();  // Orders
CdsEntity item = result.targetEntity(); // OrderItems
```

### Extracting Filter Values

A non-complex filter predicate might map (restrict) some element to a particular _filter value_. If some filter values can be _unambiguously_ determined, the `CqnAnalyzer` can extract these filter values and return them as a `Map`. A filtered data set will contain only data that matches the filter values.
Examples:

```sql
WHERE name = 'Sue'
WHERE name = 'Bob' AND age = 50
WHERE name = 'Alice' AND (age = 25 OR age = 35)
WHERE name = 'Alice' AND age = 25 OR name = 'Alice' AND age = 35
```

The first example above maps `name` to 'Sue'. The second example maps `name` to 'Bob' and `age` to 50. In the third example, only `name` is unambiguously mapped to 'Alice', but a value for `age` can't be extracted. The fourth example is equivalent to the third.

The key values of the entities can be extracted as a map using the `rootKeys` and `targetKeys` methods of the `AnalysisResult` object:

```java
Map<String, Object> rootKeys = result.rootKeys();
String orderNo = (String) rootKeys.get("OrderNo"); // 42

Map<String, Object> targetKeys = result.targetKeys();
Integer itemId = (Integer) targetKeys.get("ID");   // 1
```

To extract all filter values of the target entity, including non-key values, the `targetValues` method can be used:

```java
Map<String, Object> filterValues = result.targetValues();
```

For `CqnSelect`, `CqnUpdate`, and `CqnDelete`, values can also be extracted from the statement's `where` condition:

```sql
--CQL query
SELECT from Orders[OrderNo = '42'].items where ID = 3 and status = 'open'
```

```java
CqnSelect select = context.getCqn();
AnalysisResult result = cqnAnalyzer.analyze(select);

Map<String, Object> targetKeys = result.targetKeys();
Integer itemId = (Integer) targetKeys.get("ID");     // 3

Map<String, Object> filterValues = result.targetValues();
String status = (String) filterValues.get("status"); // 'open'
```

### Using the Iterator

The methods prefixed with `root` and `target` access the first and the last segment of the CQN statement's reference, respectively.
If the reference has more than two segments, such as:

```sql
--CQL query
SELECT from Orders[OrderNo = '42']:items[ID = 1].book
```

the segment `items` can be analyzed using an iterator:

```java
Iterator<ResolvedSegment> iterator = result.iterator();
CdsEntity order = iterator.next().entity();
CdsEntity item = iterator.next().entity();
CdsEntity book = iterator.next().entity();
```

or a reverse iterator starting from the last segment:

```java
Iterator<ResolvedSegment> iterator = result.reverse();
CdsEntity book = iterator.next().entity();
CdsEntity item = iterator.next().entity();
CdsEntity order = iterator.next().entity();
```

In the same way, the filter values for each segment can be extracted using the `values` and `keys` methods instead of the `entity` method.

## CqnVisitor

The `CqnVisitor` interface is part of the public API and allows traversing CQN token trees such as expressions, predicates, values, etc. It follows the Visitor design pattern.

When a visitor is passed to a token's `accept` method, it is guided through the token's expression tree. Generally, the `accept` methods of the token's children are called first (depth-first). Afterwards, the `visit` method that is most specific to the token is invoked. Classes implementing the `CqnVisitor` interface may override the default `visit` method to perform arbitrary operations.

### Fields of Application

The visitor is a powerful tool that comes in handy for introspecting complex queries and their compound parts. It can be used to analyze information about:

- Element references
- Expand associations
- Connective predicates (`and`, `or`)
- Comparison predicates with binary (`gt`, `lt`, `ne`, etc.) and unary (`not`) operators
- `search` and `in` predicates
- Functions and expressions
- Literals and parameters

### Usage

In the following example, the `CqnVisitor` is used to evaluate whether the data matches a given filter expression.
#### Data

```java
List<Map<String, Object>> books = new ArrayList<>();
books.add(ImmutableMap.of("title", "Catweazle", "stock", 3));
books.add(ImmutableMap.of("title", "The Raven", "stock", 42));
books.add(ImmutableMap.of("title", "Dracula", "stock", 66));
```

#### Filter

```java
Predicate titles = CQL.get("title").in("Catweazle", "The Raven");
Predicate stock = CQL.get("stock").gt(10);

// title IN ('Catweazle', 'The Raven') AND stock > 10
Predicate filter = CQL.and(titles, stock);
```

The `filter` consists of three predicates, forming the following tree:

```
                         AND
          ┌───────────────┴───────────────┐
          IN                              GT
   ┌──────┴──────┐                 ┌──────┴──────┐
title  ['Catweazle', 'The Raven']  stock        10
```

which corresponds to the following CQN token tree (numbers in brackets show the visit order):

```
                       CqnConnectivePredicate (8)
           ┌───────────────────┴───────────────────┐
   CqnInPredicate (4)                  CqnComparisonPredicate (7)
     ┌─────┴──────────┐                   ┌────────┴─────────┐
CqnElementRef (1)  CqnLiteral (2, 3)  CqnElementRef (5)  CqnLiteral (6)
```

#### Visitor

As already mentioned, the `CqnAnalyzer` is not suitable to analyze such a predicate, as neither the element `title` nor `stock` is uniquely restricted to a single value. To overcome this issue, a `CqnVisitor` is implemented to evaluate whether the `data` meets the filter expression. The visitor has access to the `data` that is checked. To respect the depth-first traversal order, it uses a `stack` to store intermediate results:

```java
class CheckDataVisitor implements CqnVisitor {
  private final Map<String, Object> data;
  private final Deque<Object> stack = new ArrayDeque<>();

  CheckDataVisitor(Map<String, Object> data) {
    this.data = data;
  }

  boolean matches() {
    return (Boolean) stack.pop();
  }

  ...
}
```

On the leaf level, the stack is used to store the concrete values from both the data payload and the filter expression:

```java
@Override
public void visit(CqnElementRef ref) {
  Object dataValue = data.get(ref.displayName());
  stack.push(dataValue);
}

@Override
public void visit(CqnLiteral literal) {
  stack.push(literal.value());
}
```

When visiting the predicates, the values are popped from the stack and evaluated based on the predicate type and comparison operator. The `Boolean` result of the evaluation is pushed to the stack:

```java
@Override
public void visit(CqnInPredicate in) {
  List<Object> values = in.values().stream()
      .map(v -> stack.pop()).collect(toList());
  Object value = stack.pop();
  stack.push(values.stream().anyMatch(value::equals));
}

@Override
public void visit(CqnComparisonPredicate comparison) {
  Comparable rhs = (Comparable) stack.pop();
  Comparable lhs = (Comparable) stack.pop();
  int cmp = lhs.compareTo(rhs);
  switch (comparison.operator()) {
    case EQ:
      stack.push(cmp == 0);
      break;
    case GT:
      stack.push(cmp > 0);
      break;
    // ...
  }
}
```

The `visit` method of the `CqnConnectivePredicate` pops the `Boolean` evaluation results from the stack, applies the corresponding logical operator, and pushes the result to the stack:

```java
@Override
public void visit(CqnConnectivePredicate connect) {
  Boolean rhs = (Boolean) stack.pop();
  Boolean lhs = (Boolean) stack.pop();
  switch (connect.operator()) {
    case AND:
      stack.push(lhs && rhs);
      break;
    case OR:
      stack.push(lhs || rhs);
      break;
  }
}
```

The whole process can be considered a reduce operation that traverses the tree from bottom to top.

To evaluate whether given `data` matches the filter expression, an instance `v` of the visitor is created.
Afterwards, the filter's `accept` method traverses its expression tree with the visitor, which evaluates the expression during the traversal:

```java
for (Map<String, Object> book : books) {
  CheckDataVisitor v = new CheckDataVisitor(book);
  filter.accept(v);
  System.out.println(book.get("title") + " " +
      (v.matches() ? "match" : "no match"));
}
```

The output will be:

```txt
Catweazle  no match
The Raven  match
Dracula    no match
```

# Services

[Services](../about/best-practices#services) are one of the core concepts of CAP. This section describes how services are represented in the CAP Java SDK and how their event-based APIs can be used. One of the key APIs provided by services is the uniform query API based on [CQN statements](working-with-cql/query-api).

## An Event-Based API

Services dispatch events to [Event Handlers](event-handlers/), which implement the behaviour of the service. A service can process synchronous as well as asynchronous events and offers a user-friendly API layer around these events.

Every service implements the [Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/Service.html) interface, which offers generic event processing capabilities through its [emit(EventContext)](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/Service.html#emit-com.sap.cds.services.EventContext-) method. The [Event Context](event-handlers/#eventcontext) contains information about the event and its parameters. The `emit` method takes care of dispatching an Event Context to all event handlers registered on the respective event and is the central API to process asynchronous and synchronous events.

Usually, service implementations extend the `Service` interface to provide a custom, user-friendly API layer on top of the `emit()` method.
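The dispatch mechanics behind `emit` can be illustrated with a stylized plain-Java sketch. The classes and event names below are illustrative assumptions, not the actual CAP `Service`/`EventContext` API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Stylized sketch of the dispatch mechanics (plain Java, not the actual
// CAP Service/EventContext classes): emit() hands the event context to
// every handler registered on the respective event, in registration order.
class MiniService {
  private final Map<String, List<Consumer<Map<String, Object>>>> handlers = new HashMap<>();

  // Register a handler for an event
  void on(String event, Consumer<Map<String, Object>> handler) {
    handlers.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
  }

  // Dispatch the context to all handlers registered on the event
  void emit(String event, Map<String, Object> context) {
    handlers.getOrDefault(event, List.of()).forEach(h -> h.accept(context));
  }
}
```

A user-friendly API layer is then just a method that builds the context and calls `emit`, which mirrors how higher-level service APIs are layered on top of generic event processing.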
Examples are the [Application Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/ApplicationService.html), [Persistence Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/persistence/PersistenceService.html), and [Remote Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/RemoteService.html), which offer a common CQN query execution API for their CRUD events. However, technical components are also implemented as services, for example the [AuthorizationService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/authorization/AuthorizationService.html) or the [MessagingService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messaging/MessagingService.html).

### Using Services

Often, your Java code needs to interact with other services. The [ServiceCatalog](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceCatalog.html) provides programmatic access to all available services. The Service Catalog can be accessed from the [Event Context](event-handlers/#eventcontext) or from the [CdsRuntime](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/CdsRuntime.html).

```java
ServiceCatalog catalog = context.getServiceCatalog();
Stream<Service> allServices = catalog.getServices();
Stream<ApplicationService> appServices = catalog.getServices(ApplicationService.class);
```

To look up a service in the Service Catalog, you need to know its name. Application Services are created with the fully qualified name of their CDS definition by default:

```java
ApplicationService adminService = catalog.getService(ApplicationService.class, "AdminService");
```

As of version 2.4.0, the [CAP Java SDK Maven Plugin](./developing-applications/building#cds-maven-plugin) is capable of generating specific interfaces for services in the CDS model.
These service interfaces also provide Java methods for actions and functions, which makes it easy to call actions and functions with their parameters. These specific interfaces can also be used to get access to the service:

```java
AdminService adminService = catalog.getService(AdminService.class, "AdminService");
```

Technical services, like the Persistence Service, have a `DEFAULT_NAME` constant defined in their interface:

```java
PersistenceService db = catalog.getService(PersistenceService.class, PersistenceService.DEFAULT_NAME);
```

When running in Spring, all services are available as Spring beans. Dependency injection can therefore be used to get access to the service objects:

```java
@Component
public class EventHandlerClass implements EventHandler {

    @Autowired
    private PersistenceService db;

    @Autowired
    @Qualifier("AdminService")
    private ApplicationService adminService;

}
```

Instead of the generic service interface, the more specific service interfaces can also be injected:

```java
@Component
public class EventHandlerClass implements EventHandler {

    @Autowired
    private PersistenceService db;

    @Autowired
    private AdminService adminService;

}
```

::: tip
For the injection of specific service interfaces, the annotation `@Qualifier` is usually not required.
:::

## CQN-based Services

The most used services in CAP are the [CQN-based services](cqn-services/), which define APIs accepting CQN queries:

- [Application Services](cqn-services/application-services) expose CDS services to clients.
- [Persistence Services](cqn-services/persistence-services) are CQN-based database clients.
- [Remote Services](cqn-services/remote-services) are CQN-based clients for remote APIs.

## Application Lifecycle Service

The Application Lifecycle Service emits events when the `CdsRuntime` is fully initialized, but the application is not started yet, or when the application is stopped.
Its API and events are defined in the [ApplicationLifecycleService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/application/ApplicationLifecycleService.html) interface. You can use these events to register an event handler which performs custom initialization or shutdown logic. In addition, the Application Lifecycle Service provides an event to globally adapt the error response handling.

[Learn more about adapting the error response handling in section Indicating Errors.](./event-handlers/indicating-errors#errorhandler){.learn-more}

# CQN Services { #cdsservices }

The most used services in CAP are the CQN-based services. The most prominent of these are the Application Service, Persistence Service, and Remote Service. These services can handle CRUD events by accepting CQN statements. They all implement the common interface [CqnService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CqnService.html), which defines the CQN-based APIs.

::: tip
To learn more about how to run queries on these services, see sections [Building CQN Queries](../working-with-cql/query-api) and [Executing CQN Queries](../working-with-cql/query-execution).
:::

## Application Services

[Application Services](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/ApplicationService.html) define the APIs that are exposed by a CAP application to its clients. They're backed by a [CDS Service](../../cds/cdl#services) definition in the CDS model, which defines the structure of the API. Consequently, they only accept CQN statements targeting entities that are defined as part of their service definition. Typically, these services are served by protocol adapters, such as OData V4, which use their CQN-based APIs.
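For illustration, here's a minimal sketch of running a CQN query on an Application Service from custom code. The service and entity names (`AdminService`, `Books`) are assumptions borrowed from the typical bookshop example:

```java
// look up the Application Service by the name of its CDS definition
ApplicationService adminService = context.getServiceCatalog()
        .getService(ApplicationService.class, "AdminService");

// build a CQN query against an entity exposed by this service ...
CqnSelect query = Select.from("AdminService.Books").columns("title");

// ... and let the service (and its generic providers) handle it
Result result = adminService.run(query);
```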
[Learn more about adding business logic to Application Services.](./application-services){.learn-more}

Application Services are automatically augmented with generic providers (built-in event handlers), which handle common aspects such as [authorization](../../guides/security/authorization), [input validation](../../guides/providing-services#input-validation), [implicit pagination](../../guides/providing-services#implicit-pagination), and many more. Their default ON event handler delegates CQN statements to the Persistence Service.

[Learn more about these capabilities in our Cookbooks.](../../guides/){.learn-more}

The creation of Application Services can be customized through configuration. By default, an Application Service is created for every service that is defined in the CDS model. Through configuration, it's also possible to create multiple Application Services based on the same model definition.

[Learn more about the configuration possibilities in our CDS Properties Reference.](../developing-applications/properties){.learn-more}

### Draft Services { #draftservices}

If an Application Service is created based on a service definition that contains a draft-enabled entity, it also implements the [DraftService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/draft/DraftService.html) interface. This interface provides an API layer around the [draft-specific events](../fiori-drafts#draftevents), and allows you to create new draft entities, patch, cancel, or save them, and put active entities back into edit mode.

[Learn more about Draft Services in section Fiori Drafts.](../fiori-drafts){.learn-more}

## Persistence Services { #persistenceservice}

[Persistence Services](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/persistence/PersistenceService.html) are CQN-based database clients. CAP applications most commonly use SQL databases like SAP HANA in production.
For test and development, it's also possible to use a lightweight, in-memory database such as [H2](https://www.h2database.com). The CAP Java SDK therefore provides a JDBC-based Persistence Service implementation out of the box. Other Persistence Service implementations based on NoSQL databases, such as MongoDB, are also possible, although they aren't provided ready to use by the CAP Java SDK.

[Learn more about supported databases and their restrictions.](./persistence-services#database-support){.learn-more}

A Persistence Service isn't bound to a specific service definition in the CDS model. It's capable of accepting CQN statements targeting any entity or view that is stored in the corresponding database.

Transaction management is built into Persistence Services. They take care of lazily initializing and maintaining database transactions as part of the active changeset context.

Some generic providers are registered on Persistence Services instead of on Application Services, like the ones for [managed data](../../guides/domain-modeling#managed-data). This ensures that the functionality is also triggered when directly interacting with a Persistence Service.

The Persistence Service is used when implementing event handlers for Application Services, for example when additional data needs to be read when performing custom validations. Additionally, the default ON event handler of Application Services delegates CQN statements to the default Persistence Service.

[Learn more about how Persistence Services are created and configured.](./persistence-services){.learn-more}

## Remote Services

[Remote Services](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/RemoteService.html) are CQN-based clients for remote APIs, for example OData. They're backed by a [CDS Service](../../cds/cdl#services) definition that reflects the structure of the remote API.
The CDS service definition is usually [imported](../../guides/using-services#external-service-api), for example from an EDMX specification. They can be used when integrating APIs provided by the application with APIs provided by other applications or microservices. This integration can happen synchronously, by delegating CQN statements from Application Services to Remote Services, or asynchronously, by using Remote Services to replicate data into the application's own persistence.

Remote Services need to be explicitly configured and are never created automatically. The configuration of a Remote Service specifies the destination where the remote API is available and its protocol type. It's also possible to create multiple Remote Services with different destinations based on the same model definition. If a Remote Service is created for a service definition in the CDS model, no Application Service is automatically created for that definition.

[Learn more about how to configure and use Remote Services.](./remote-services){.learn-more}

# Persistence Services

Persistence Services are CQN-based database clients. This section describes which database types are supported, how datasources to these databases are created, and how they are turned into Persistence Services.

## Database Support { #database-support}

CAP Java has built-in support for various databases. This section describes the different databases and any differences between them with respect to CAP features. CAP currently provides out-of-the-box support for SAP HANA, as well as H2 and SQLite. However, it's important to note that H2 and SQLite aren't enterprise-grade databases and are recommended for non-productive use like local development or CI tests only. PostgreSQL is supported in addition, but has various limitations in comparison to SAP HANA, most notably in the area of schema evolution.
### SAP HANA Cloud

SAP HANA Cloud is the CAP standard database recommended for productive use with needs for schema evolution and multitenancy. Noteworthy:

1. Write operations through views that can't be resolved by the CAP runtime are passed through to SAP HANA Cloud. Limitations are described in the [SAP HANA Cloud documentation](https://help.sap.com/docs/HANA_CLOUD_DATABASE/c1d3f60099654ecfb3fe36ac93c121bb/20d5fa9b75191014a33eee92692f1702.html#loio20d5fa9b75191014a33eee92692f1702__section_trx_ckh_qdb).
2. [Shared locks](../working-with-cql/query-execution#pessimistic-locking) are supported on SAP HANA Cloud only.
3. When using `String` elements in locale-specific ordering relations (`>`, `<`, ... , `between`), a statement-wide collation is added, which can have a negative impact on performance. If locale-specific ordering isn't required for specific `String` elements, annotate the element with `@cds.collate: false`.

   ```cds
   entity Books : cuid {
     title : localized String(111);
     descr : localized String(1111);
     @cds.collate : false // [!code focus]
     isbn  : String(40); // does not require locale-specific handling // [!code focus]
   }
   ```

   > When disabling locale-specific handling for a String element, binary comparison is used, which is generally faster but results in *case-sensitive* order (A, B, a, b).

   ::: info Disable Collating
   To disable collating for all queries, set [`cds.sql.hana.ignoreLocale`](../developing-applications/properties#cds-sql-hana-ignoreLocale) to `true`.
   :::

4. SAP HANA supports _Perl Compatible Regular Expressions_ (PCRE) for regular expression matching. If you need to match a string against a regular expression and are not interested in the exact number of occurrences, consider using lazy (_ungreedy_) quantifiers in the pattern or the option `U`.

### PostgreSQL

CAP Java SDK is tested on [PostgreSQL](https://www.postgresql.org/) 15 and supports most of the CAP features. Known limitations are:

1. No locale-specific sorting.
The sort order of queries behaves as configured on the database.
2. Write operations through CDS views are only supported for views that can be [resolved](../working-with-cql/query-execution#updatable-views) or are [updatable](https://www.postgresql.org/docs/14/sql-createview.html#SQL-CREATEVIEW-UPDATABLE-VIEWS) in PostgreSQL.
3. The CDS type `UInt8` can't be used with PostgreSQL, as there's no `TINYINT`. Use `Int16` instead.
4. [Multitenancy](../../guides/multitenancy/) and [extensibility](../../guides/extensibility/) aren't yet supported on PostgreSQL.

### H2 Database

[H2](https://www.h2database.com/html/main.html) is the recommended in-memory database for local development and testing with CAP Java. There's no production support for H2 from CAP, and there are the following limitations:

1. H2 only supports database-level collation, and the default sort order is by ASCII code. You can set a [collation](https://www.h2database.com/html/commands.html#set_collation) to sort using dictionary order instead.
2. Case-insensitive comparison isn't yet supported.
3. By default, views aren't updatable on H2. However, the CAP Java SDK supports some views to be updatable as described [here](../working-with-cql/query-execution#updatable-views).
4. Although referential and foreign key constraints are supported, H2 [doesn't support deferred checking](https://www.h2database.com/html/grammar.html#referential_action). As a consequence, schema SQL is never generated with referential constraints.
5. In [pessimistic locking](../working-with-cql/query-execution#pessimistic-locking), _shared_ locks are not supported, but an _exclusive_ lock is used instead.
6. The CDS type `UInt8` can't be used with H2, as there is no `TINYINT`. Use `Int16` instead.
7. For regular expressions, H2's implementation is compatible with Java's: the matching behaviour is equivalent to the `Matcher.find()` call for the given pattern.
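To illustrate the `Matcher.find()` semantics mentioned in the last point (plain Java, independent of CAP): a pattern matches if it occurs anywhere in the input, in contrast to `Matcher.matches()`, which requires the whole input to match.

```java
import java.util.regex.Pattern;

public class RegexSemantics {
    public static void main(String[] args) {
        // find() succeeds on a partial match anywhere in the input ...
        boolean partial = Pattern.compile("an").matcher("banana").find();
        // ... while matches() requires the entire input to match the pattern
        boolean whole = Pattern.compile("an").matcher("banana").matches();
        System.out.println(partial + " " + whole); // prints "true false"
    }
}
```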
::: warning
Support for localized and temporal data via session context variables requires H2 v2.2.x or later.
:::

### SQLite

CAP supports [SQLite](https://www.sqlite.org/index.html) out of the box. When working with Java, it's [recommended](../../guides/databases-sqlite?impl-variant=java#sqlite-in-production) to use SQLite only for development and testing purposes. CAP supports most of the major features on SQLite, although there are a few shortcomings that are listed here:

1. SQLite has only limited support for concurrent database access. You're advised to limit the connection pool to *1* (parameter `maximum-pool-size: 1`), which effectively serializes all database transactions.
2. The predicate function `contains` is supported. However, the search for characters in the word or phrase is case-insensitive in SQLite.
3. SQLite doesn't support [pessimistic locking](../working-with-cql/query-execution#pessimistic-locking).
4. Streaming of large object data isn't supported by SQLite. Hence, when reading or writing data of type `cds.LargeString` and `cds.LargeBinary` as a stream, the framework temporarily materializes the content. Thus, storing large objects on SQLite can impact performance.
5. Sorting of character-based columns is never locale-specific, but if any locale is specified in the context of a query, then case-insensitive sorting is performed.
6. Views in SQLite are read-only. However, the CAP Java SDK supports some views to be updatable as described in [Updatable Views](../working-with-cql/query-execution#updatable-views).
7. Foreign key constraints are supported, but are disabled by default. To activate the feature using the JDBC URL, append the `foreign_keys=on` parameter to the connection URL, for example, `url=jdbc:sqlite:file:testDb?mode=memory&foreign_keys=on`. For more information, visit the [SQLite Foreign Key Support](https://sqlite.org/foreignkeys.html) in the official documentation.
8.
CAP enables regular expressions on SQLite via a Java implementation. The matching behaviour is equivalent to the `Matcher.find()` call for the given pattern.

## Datasources

Java applications usually connect to SQL databases through datasources (`java.sql.DataSource`). The CAP Java SDK can auto-configure datasources from service bindings and pick up datasources configured by Spring Boot. These datasources are used to create Persistence Services, which are CQN-based database clients.

### Datasource Configuration

Datasources are usually backed by a connection pool to ensure efficient access to the database. If datasources are created from a service binding, the connection pool can be configured through the properties `cds.dataSource...*`. An example configuration could look like this:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  dataSource:
    my-service-instance:
      hikari:
        maximum-pool-size: 20
```
:::

Supported pool types for single-tenant scenarios are `hikari`, `tomcat`, and `dbcp2`. For a multitenant scenario, `hikari`, `tomcat`, and `atomikos` are supported. The corresponding pool dependencies need to be available on the classpath. You can find an overview of the available pool properties in the respective documentation of the pool. For example, properties supported by Hikari can be found [here](https://github.com/brettwooldridge/HikariCP#gear-configuration-knobs-baby).

It is also possible to configure the database connection itself. For Hikari, this can be achieved by using the `data-source-properties` section. Properties defined here are passed to the respective JDBC driver, which is responsible for establishing the actual database connection.
The following example sets such a [SAP HANA-specific configuration](https://help.sap.com/docs/SAP_HANA_PLATFORM/0eec0d68141541d1b07893a39944924e/109397c2206a4ab2a5386d494f4cf75e.html):

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  dataSource:
    my-service-instance:
      hikari:
        data-source-properties:
          packetSize: 300000
```
:::

### SAP HANA

#### Service Bindings

SAP HANA can be configured when running locally as well as when running productively in the cloud. The datasource is auto-configured based on available service bindings in the `VCAP_SERVICES` environment variable or, locally, the _default-env.json_. This only works if an application profile is used that doesn't explicitly configure a datasource using `spring.datasource.url`. Such an explicit configuration always takes precedence over service bindings from the environment.

Service bindings of type *service-manager* and, in a Spring-based application, *hana* are used to auto-configure datasources. If multiple datasources are used by the application, you can select one auto-configured datasource to be used by the default Persistence Service through the property `cds.dataSource.binding`.

#### Configure the DDL generation

Advise the CDS Compiler to generate _tables without associations_, as associations on SAP HANA are not used by CAP Java:

::: code-group
```json [.cdsrc.json]
{
  "sql": {
    "native_hana_associations": false
  }
}
```
:::

#### SQL Optimization Mode

By default, the SAP HANA adapter in CAP Java generates SQL that is optimized for the new [HEX engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-performance-guide-for-developers/query-execution-engine-overview) in SAP HANA Cloud.
To generate SQL that is compatible with SAP HANA 2.x ([HANA Service](https://help.sap.com/docs/HANA_SERVICE_CF/6a504812672d48ba865f4f4b268a881e/08c6e596b53843ad97ae68c2d2c237bc.html)) and [SAP HANA Cloud](https://www.sap.com/products/technology-platform/hana.html), set the [CDS property](../developing-applications/properties#cds-properties):

```yaml
cds.sql.hana.optimizationMode: legacy
```

Use the [hints](../working-with-cql/query-execution#hana-hints) `hdb.USE_HEX_PLAN` and `hdb.NO_USE_HEX_PLAN` to overrule the configured optimization mode per statement.

::: warning Rare error in `HEX` mode
In some corner cases, particularly when using [native HANA views](../../advanced/hana#create-native-sap-hana-objects), queries in `HEX` optimization mode may fail with a "hex enforced but cannot be selected" error. This is the case if the statement execution requires the combination of HEX-only features with other features that are not yet supported by the HEX engine. If CAP detects this error, it will, as a fallback, execute the query in _legacy_ mode. If you know upfront that a query can't be executed by the HEX engine, you can add a `hdb.NO_USE_HEX_PLAN` hint to the query, so the SQL generator won't use features that require the HEX engine.
:::

### PostgreSQL { #postgresql-1 }

PostgreSQL can be configured when running locally as well as when running productively in the cloud. Similar to HANA, the datasource is auto-configured based on available service bindings if the feature `cds-feature-postgresql` is added.

#### Initial Database Schema

To generate a `schema.sql` for PostgreSQL, use the dialect `postgres` with the `cds deploy` command: `cds deploy --to postgres --dry`.
The following snippet configures the [cds-maven-plugin](../developing-applications/building#cds-maven-plugin) accordingly:

::: code-group
```xml [srv/pom.xml]
<execution>
	<id>schema.sql</id>
	<goals>
		<goal>cds</goal>
	</goals>
	<configuration>
		<commands>
			<command>deploy --to postgres --dry --out "${project.basedir}/src/main/resources/schema.sql"</command>
		</commands>
	</configuration>
</execution>
```
:::

The generated `schema.sql` can be automatically deployed by Spring if you configure the [sql.init.mode](https://docs.spring.io/spring-boot/how-to/data-initialization.html#howto.data-initialization.using-basic-sql-scripts) to `always`.

Using the `@sap/cds-dk`, you can add PostgreSQL support to your CAP Java project:

```sh
cds add postgres
```

::: warning
Automatic schema deployment isn't suitable for productive use. Consider using production-ready tools like Flyway or Liquibase. See more on that in the [Database guide for PostgreSQL](../../guides/databases-postgres.md?impl-variant=java#deployment-using-liquibase)
:::

#### Configure the Connection Data Explicitly { #postgres-connection }

If you don't have a compatible PostgreSQL service binding in your application environment, you can also explicitly configure the connection data of your PostgreSQL database in the _application.yaml_:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: postgres
  datasource:
    url:
    username:
    password:
    driver-class-name: org.postgresql.Driver
```
:::

### H2

For local development, [H2](https://www.h2database.com/) can be configured to run in-memory or in the file-based mode.

To generate a `schema.sql` for H2, use the dialect `h2` with the `cds deploy` command: `cds deploy --to h2 --dry`. The following snippet configures the [cds-maven-plugin](../developing-applications/building#cds-maven-plugin) accordingly:

::: code-group
```xml [srv/pom.xml]
<execution>
	<id>schema.sql</id>
	<goals>
		<goal>cds</goal>
	</goals>
	<configuration>
		<commands>
			<command>deploy --to h2 --dry --out "${project.basedir}/src/main/resources/schema.sql"</command>
		</commands>
	</configuration>
</execution>
```
:::

In Spring, H2 is automatically initialized in-memory when present on the classpath.
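For that to work, the H2 JDBC driver must be on the classpath. With Maven, this is typically a runtime-scoped dependency; the following is a sketch of what `cds add h2` sets up for you (the version is usually managed by a property or BOM):

```xml
<dependency>
	<groupId>com.h2database</groupId>
	<artifactId>h2</artifactId>
	<scope>runtime</scope>
</dependency>
```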
See the official [documentation](https://www.h2database.com/html/features.html) for H2 for file-based database configuration. Using the `@sap/cds-dk`, you can add H2 support to your CAP Java project:

```sh
cds add h2
```

### SQLite

#### Initial Database Schema

To generate a `schema.sql` for SQLite, use the dialect `sqlite` with the `cds deploy` command: `cds deploy --to sqlite --dry`. The following snippet configures the [cds-maven-plugin](../developing-applications/building#cds-maven-plugin) accordingly:

::: code-group
```xml [srv/pom.xml]
<execution>
	<id>schema.sql</id>
	<goals>
		<goal>cds</goal>
	</goals>
	<configuration>
		<commands>
			<command>deploy --to sqlite --dry --out "${project.basedir}/src/main/resources/schema.sql"</command>
		</commands>
	</configuration>
</execution>
```
:::

Using the `@sap/cds-dk`, you can add SQLite support to your CAP Java project:

```sh
cds add sqlite
```

#### File-Based Storage

The database content is stored in a file, `sqlite.db`, as in the following example. Since the schema is initialized using the `cds deploy` command, the initialization mode is set to `never`:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: sqlite
  sql:
    init:
      mode: never
  datasource:
    url: "jdbc:sqlite:sqlite.db"
    driver-class-name: org.sqlite.JDBC
    hikari:
      maximum-pool-size: 1
```
:::

#### In-Memory Storage

The database content is stored in-memory only. The schema initialization is done by Spring, which executes the `schema.sql` script. Hence, the initialization mode is set to `always`. If Hikari closes the last connection from the pool, the in-memory database is automatically deleted. To prevent this situation, set `max-lifetime` to *0*:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: default
  sql:
    init:
      mode: always
  datasource:
    url: "jdbc:sqlite:file::memory:?cache=shared"
    driver-class-name: org.sqlite.JDBC
    hikari:
      maximum-pool-size: 1
      max-lifetime: 0
```
:::

## Persistence Services

Persistence Services are CQN-based database clients.
You can think of them as a wrapper around a datasource, which translates CQN to SQL. In addition, Persistence Services have built-in transaction management. They take care of lazily initializing and maintaining database transactions as part of the active changeset context.

[Learn more about ChangeSet Contexts and Transactions.](../event-handlers/changeset-contexts){.learn-more}

A Persistence Service isn't bound to a specific service definition in the CDS model. It's capable of accepting CQN statements targeting any entity or view that is stored in the corresponding database. All Persistence Service instances reflect on the same CDS model. It is the responsibility of the developer to decide which artifacts are deployed into which database at deploy time and to access these artifacts with the respective Persistence Service at runtime.

### The Default Persistence Service { #default-persistence-service}

The default Persistence Service is used by the generic handlers of Application Services to offer out-of-the-box CRUD functionality. The name of the default Persistence Service is stored in the global constant [`PersistenceService.DEFAULT_NAME`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/persistence/PersistenceService.html#DEFAULT_NAME).

If only a single datasource exists in the application, the CAP Java SDK creates the default Persistence Service from it. This is usually the case when specifying a datasource through Spring Boot's configuration (`spring.datasource.url` or auto-configured H2) or when having a single database service binding.

If multiple datasources exist in the application, the CAP Java SDK needs to know which one the default Persistence Service should be created from, otherwise the application startup will fail. By setting the property `cds.dataSource.binding`, the datasource created from the specified database service binding is marked as primary.
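A minimal sketch of that configuration, assuming a database service binding named `my-hana-hdi` (the binding name is illustrative):

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  dataSource:
    binding: "my-hana-hdi"
```
:::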
If the datasource to be used is directly created as a bean in Spring Boot, you need to mark it as primary using Spring Boot's `@Primary` annotation.

### Additional Persistence Services

For each non-primary database service binding, a Persistence Service is automatically created. The name of the Persistence Service is the name of the service binding.

It is possible to configure how Persistence Services are created. To change the name of a Persistence Service, you can specify it in your configuration and connect it explicitly with the corresponding database service binding. The following configuration creates a Persistence Service named "my-ps" for the service binding "my-hana-hdi":

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  persistence.services:
    my-ps:
      binding: "my-hana-hdi"
```
:::

You can also disable the creation of a Persistence Service for specific database service bindings. The following configuration disables the creation of a Persistence Service for the service binding "my-hana-hdi":

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  persistence.services:
    my-hana-hdi:
      enabled: false
```
:::

To create a non-default Persistence Service for a datasource explicitly created as a Spring bean, a configuration is required. The following example shows how to register such a datasource bean in Java:

```java
@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource customDataSource() {
        return DataSourceBuilder.create()
            .url("jdbc:sqlite:sqlite.db")
            .build();
    }

}
```

In the configuration, you need to refer to the name of the datasource:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  persistence.services:
    my-ps:
      dataSource: "customDataSource"
```
:::

::: tip
Any usage of non-default Persistence Services needs to happen in custom handlers.
:::

### Example: Multitenant Application with Tenant-independent Datasource

A common scenario for multiple Persistence Services is in multitenant applications, which require an additional tenant-independent database. These applications usually use the Service Manager to maintain a dedicated SAP HANA HDI container for each tenant. However, additional tenant-independent data needs to be stored in a separate HDI container, shared by all tenants.

When running such a scenario productively, it is as easy as binding two database service bindings to your application: the Service Manager binding and the additional HDI container binding. The only configuration required in that scenario is to mark the Service Manager binding as the primary one, in order to create the default Persistence Service from it:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
spring:
  config.activate.on-profile: cloud
cds:
  dataSource:
    binding: "my-service-manager-binding"
```
:::

At deploy time, it is currently recommended to deploy all CDS entities into both the tenant-dependent and the tenant-independent databases. At runtime, you need to make sure to access the tenant-dependent entities through the default Persistence Service and the tenant-independent entities through the additional Persistence Service.

#### Local Development and Testing with MTX

In case you are testing your multitenant application locally with the setup described in [Local Development and Testing](../../guides/multitenancy/#test-locally), you need to perform additional steps to create an in-memory tenant-independent datasource.
To create an in-memory datasource, initialized with the SQL schema, add the following configuration to your Spring Boot application:

```java
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties("app.datasource.tenant-independent")
    public DataSourceProperties tenantIndependentDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource tenantIndependentDataSource() {
        return tenantIndependentDataSourceProperties()
            .initializeDataSourceBuilder()
            .build();
    }

    @Bean
    public DataSourceInitializer tenantIndependentInitializer() {
        ResourceDatabasePopulator resourceDatabasePopulator = new ResourceDatabasePopulator();
        resourceDatabasePopulator.addScript(new ClassPathResource("schema.sql"));

        DataSourceInitializer dataSourceInitializer = new DataSourceInitializer();
        dataSourceInitializer.setDataSource(tenantIndependentDataSource());
        dataSourceInitializer.setDatabasePopulator(resourceDatabasePopulator);
        return dataSourceInitializer;
    }

}
```

You can then refer to that datasource in your Persistence Service configuration and mark the auto-configured MTX SQLite datasource as primary:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
spring:
  config.activate.on-profile: local-mtxs
cds:
  persistence.services:
    tenant-independent:
      dataSource: "tenantIndependentDataSource"
  dataSource:
    binding: "mtx-sqlite"
```
:::

#### Local Development and Testing without MTX

In case you're testing your application in single-tenant mode without MTX sidecar, you need to configure two in-memory databases. The primary one is used for your tenant-dependent persistence and the secondary one for your tenant-independent persistence. Due to the way the Spring Boot DataSource auto-configuration works, you can't use the configuration property `spring.datasource.url` for one of your datasources. Spring Boot doesn't pick up this configuration anymore, as soon as you explicitly define another datasource, which is required in this scenario.
You therefore need to define the configuration for two datasources. In addition, you need to define the transaction manager for the primary datasource.

```java
@Configuration
public class DataSourceConfig {

    /**
     * Configuration of tenant-dependent persistence
     */

    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.tenant-dependent")
    public DataSourceProperties tenantDependentDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource tenantDependentDataSource() {
        return tenantDependentDataSourceProperties()
            .initializeDataSourceBuilder()
            .build();
    }

    @Bean
    @Primary
    public DataSourceTransactionManager tenantDependentTransactionManager() {
        return new DataSourceTransactionManager(tenantDependentDataSource());
    }

    /**
     * Configuration of tenant-independent persistence
     */

    @Bean
    @ConfigurationProperties("app.datasource.tenant-independent")
    public DataSourceProperties tenantIndependentDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public DataSource tenantIndependentDataSource() {
        return tenantIndependentDataSourceProperties()
            .initializeDataSourceBuilder()
            .build();
    }

    @Bean
    public DataSourceInitializer tenantIndependentInitializer() {
        ResourceDatabasePopulator resourceDatabasePopulator = new ResourceDatabasePopulator();
        resourceDatabasePopulator.addScript(new ClassPathResource("schema.sql"));

        DataSourceInitializer dataSourceInitializer = new DataSourceInitializer();
        dataSourceInitializer.setDataSource(tenantIndependentDataSource());
        dataSourceInitializer.setDatabasePopulator(resourceDatabasePopulator);
        return dataSourceInitializer;
    }

}
```

The primary datasource is automatically picked up by the CAP Java SDK.
The secondary datasource needs to be referred to in your Persistence Service configuration:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
spring:
  config.activate.on-profile: local
cds:
  persistence.services:
    tenant-independent:
      dataSource: "tenantIndependentDataSource"
```
:::

## Native SQL

### Native SQL with JDBC Templates { #jdbctemplate}

The JDBC template is the Spring API that, in contrast to the CQN APIs, allows executing native SQL statements and calling stored procedures (an alternative to [Native HANA Objects](../../advanced/hana#create-native-sap-hana-objects)). It seamlessly integrates with Spring's transaction and connection management. The following example shows the usage of `JdbcTemplate` in a custom handler of a Spring Boot enabled application. It demonstrates the execution of a stored procedure and a native SQL statement.

```java
@Autowired
JdbcTemplate jdbcTemplate;

...

public void setStockForBook(int id, int stock) {
  // Run the stored procedure `setStockForBook(id in number, stock in number)`
  jdbcTemplate.update("call setStockForBook(?,?)", id, stock);
}

public int countStock(int id) {
  SqlParameterSource namedParameters = new MapSqlParameterSource().addValue("id", id);
  // Run native SQL
  return jdbcTemplate.queryForObject(
    "SELECT stock FROM Books WHERE id = :id", namedParameters, Integer.class);
}
```

See [Class JdbcTemplate](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jdbc/core/JdbcTemplate.html) for more details.

### Using CQL with a Static CDS Model { #staticmodel}

The static model and accessor interfaces can be generated using the [CDS Maven Plugin](../developing-applications/building#cds-maven-plugin).

::: warning _❗ Warning_
Currently, the generator doesn't support using reserved [Java keywords](https://docs.oracle.com/javase/specs/jls/se13/html/jls-3.html#jls-3.9) as identifiers in the CDS model.
Conflicting element names can be renamed in Java using the [@cds.java.name](../cds-data#renaming-elements-in-java) annotation. For entities, it is recommended to use [@cds.java.this.name](../cds-data#renaming-types-in-java).
:::

#### Static Model in the Query Builder

The [Query Builder API](../working-with-cql/query-api) allows you to dynamically create [CDS Query Language (CQL)](/cds/cql) queries using entity and element names given as strings:

```java
Select.from("my.bookshop.Books")
  .columns("title")
  .where(book -> book.to("author").get("name").eq("Edgar Allan Poe"));
```

This query is constructed dynamically. It's checked only at runtime that the entity `my.bookshop.Authors` actually exists and that it has the element `name`. Moreover, the developer of the query doesn't get any code completion at design time. These disadvantages are avoided by using a static model to construct the query.

#### Model Interfaces

The static model is a set of interfaces that reflects the structure of the CDS model in Java (element references with their types, associations, etc.) and allows you to fluently build queries in a type-safe way. For every entity in the model, the model contains a corresponding `StructuredType` interface, which represents this type.
As an example, for this CDS model the following model interfaces are generated:

CDS model:

```cds
namespace my.bookshop;

entity Books {
  key ID : Integer;
  title  : String(111);
  author : Association to Authors;
}

entity Authors {
  key ID : Integer;
  name   : String(111);
  books  : Association to many Books on books.author = $self;
}
```

[Find this source also in **cap/samples**.](https://github.com/sap-samples/cloud-cap-samples-java/blob/5396b0eb043f9145b369371cfdfda7827fedd039/db/schema.cds#L5-L21){.learn-more}

Java:

```java
@CdsName("my.bookshop.Books")
public interface Books_ extends StructuredType<Books_> {
  ElementRef<Integer> ID();
  ElementRef<String> title();
  Authors_ author();
  Authors_ author(Function<Authors_, CqnPredicate> filter);
}
```

```java
@CdsName("my.bookshop.Authors")
public interface Authors_ extends StructuredType<Authors_> {
  ElementRef<Integer> ID();
  ElementRef<String> name();
  Books_ books();
  Books_ books(Function<Books_, CqnPredicate> filter);
}
```

#### Accessor Interfaces

The corresponding data is captured in a data model similar to JavaBeans. These beans are interfaces generated by the framework, providing the data access methods - getters and setters - and containing the CDS element names as well. The instances of the data model are created by the [CDS Query Language (CQL)](/cds/cql) Execution Engine (see the following example). Note the following naming convention: the model interfaces, which represent the structure of the CDS model, always end with an underscore, for example `Books_`. The accessor interface, which refers to the data model, is simply the name of the CDS entity - `Books`.
The following data model interface is generated for `Books`:

```java
@CdsName("my.bookshop.Books")
public interface Books extends CdsData {
  String ID = "ID";
  String TITLE = "title";
  String AUTHOR = "author";

  Integer getID();
  void setID(Integer id);

  String getTitle();
  void setTitle(String title);

  Authors getAuthor();
  void setAuthor(Map<String, ?> author);
}
```

#### Javadoc comments

The static model and accessor interfaces can be extended with [Javadoc comments](../../cds/cdl#doc-comment). Currently, the generator supports Javadoc comments on the interfaces and getter/setter methods. The following example shows Javadoc comments defined in the CDS model and how they appear in the generated interfaces.

```cds
namespace my.bookshop;

/**
 * The creator/writer of a book, article, or document.
 */
entity Authors {
  key ID : Integer;
  /**
   * The name of the author.
   */
  name : String(30);
}
```

```java
/**
 * The creator/writer of a book, article, or document.
 */
@CdsName("my.bookshop.Authors")
public interface Authors extends CdsData {
  String ID = "ID";
  String NAME = "name";

  Integer getId();
  void setId(Integer id);

  /**
   * The name of the author.
   */
  String getName();

  /**
   * The name of the author.
   */
  void setName(String name);
}
```

#### Usage

In the query builder, the interfaces reference entities. The interface methods can be used in lambda expressions to reference elements or to compose path expressions:

```java
// Note the usage of model interface `Books_` here
Select<Books_> query = Select.from(Books_.class)
  .columns(book -> book.title())
  .where(book -> book.author().name().eq("Edgar Allan Poe"));

// After executing the query the result can be converted to
// a typed representation List<Books>.
List<Books> books = dataStore.execute(query).listOf(Books.class);
```

# Application Services

Application Services define the APIs that a CAP application exposes to its clients, for example through OData.
This section describes how to add business logic to these services, by extending CRUD events and implementing actions and functions.

## Handling CRUD Events { #crudevents}

Application Services provide a [CQN query API](./index#cdsservices). When running a CQN query on an Application Service, CRUD events are triggered. The processing of these events is usually extended when adding business logic to the Application Service. The following table lists the static event name constants that exist for these event names on the [CqnService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CqnService.html) interface and their corresponding [event-specific Event Context interfaces](../event-handlers/#eventcontext). These constants and interfaces should be used when registering and implementing event handlers:

| Event | Constant | Event Context |
| --- | --- | --- |
| CREATE | `CqnService.EVENT_CREATE` | [CdsCreateEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsCreateEventContext.html) |
| READ | `CqnService.EVENT_READ` | [CdsReadEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsReadEventContext.html) |
| UPDATE | `CqnService.EVENT_UPDATE` | [CdsUpdateEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsUpdateEventContext.html) |
| UPSERT | `CqnService.EVENT_UPSERT` | [CdsUpsertEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsUpsertEventContext.html) |
| DELETE | `CqnService.EVENT_DELETE` | [CdsDeleteEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsDeleteEventContext.html) |

The following example shows how these constants and Event Context interfaces can be leveraged when adding an event handler to be run when new books are created:

```java
@Before(event = CqnService.EVENT_CREATE, entity = Books_.CDS_NAME)
public void createBooks(CdsCreateEventContext context, List<Books> books) { }
```

::: tip
To learn more about the entity data argument `List<Books> books` of the event handler method, have a look at [this section](../event-handlers/#pojoarguments).
:::

### OData Requests

Application Services are used by OData protocol adapters to expose the Application Service's API as an OData API on a path with the following pattern:

```txt
http(s)://<host>/<basePath>/<servicePath>
```

| Parameter | Description |
| --- | --- |
| `<basePath>` | For the OData V2 and OData V4 protocol adapters, `<basePath>` can be configured with the application configuration properties `cds.odataV2.endpoint.path` and `cds.odataV4.endpoint.path` respectively. Please see [CDS Properties](../developing-applications/properties) for their default values. |
| `<servicePath>` | The name of the Application Service, which by default is the fully qualified name of its definition in the CDS model. However, you can override this default per service by means of the `@path` annotation (see [Service Definitions in CDL](../../cds/cdl#service-definitions)). |

[Learn more about how OData URLs are configured.](application-services#serve-configuration){.learn-more}

The OData protocol adapters use the CQN query APIs to retrieve a response for the requests they receive. They transform OData-specific requests into a CQN query, which is run on the Application Service. The following table shows which CRUD events are triggered by which kind of OData request:

| HTTP Verb | Event | Hint |
| --- | --- | --- |
| POST | CREATE | |
| GET | READ | The same event is used for reading a collection or a single entity |
| PATCH | UPDATE | If the update didn't find an entity, a subsequent `CREATE` event is triggered |
| PUT | UPDATE | If the update didn't find an entity, a subsequent `CREATE` event is triggered |
| DELETE | DELETE | |

> In CAP Java versions < 1.9.0, the `UPSERT` event was used to implement OData V4 `PUT` requests.
> This has been changed, as the semantics of `UPSERT` didn't really match the semantics of the OData V4 `PUT`.

### Deeply Structured Documents

Events on deeply structured documents are only triggered on the target entity of the CRUD event's CQN statement. This means that if a document is created or updated, events aren't automatically triggered on composition entities. Also, when reading a deep document leveraging `expand` capabilities, `READ` events aren't triggered on the expanded entities. The same applies to the deletion of a document, which doesn't automatically trigger `DELETE` events on composition entities to which the delete is cascaded. When implementing validation logic, this can be handled as shown in the following example:

```java
@Before(event = CqnService.EVENT_CREATE, entity = Orders_.CDS_NAME)
public void validateOrders(List<Orders> orders) {
  for(Orders order : orders) {
    if (order.getItems() != null) {
      validateItems(order.getItems());
    }
  }
}

@Before(event = CqnService.EVENT_CREATE, entity = OrderItems_.CDS_NAME)
public void validateItems(List<OrderItems> items) {
  for(OrderItems item : items) {
    if (item.getQuantity() <= 0) {
      throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid quantity");
    }
  }
}
```

In the example, the `OrderItems` entity exists as a composition within the `Items` element of the `Orders` entity. When creating an order, a deeply structured document can be passed, which contains order items. For this reason, the event handler method to validate order items (`validateItems`) is called as part of the order validation (`validateOrders`). In case an order item is directly created (for example through a containment navigation in OData V4), only the event handler for validation of the order items is triggered.

## Result Handling

`@On` handlers for `READ`, `UPDATE`, and `DELETE` events _must_ set a result, either by returning the result or by using the event context's `setResult` method.
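One common way to satisfy this requirement is to run the incoming query yourself and pass the outcome back. The following sketch (assuming the `Books` entity from the earlier examples and a hypothetical injected `PersistenceService` field named `db`) delegates a `READ` to the database and returns its `Result` directly:

```java
@On(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public Result readBooks(CdsReadEventContext context) {
  // Running the CQN query on the Persistence Service yields a Result,
  // which also carries the inline count if it was requested.
  return db.run(context.getCqn());
}
```

Returning a value from the `@On` handler (or calling `context.setResult(...)`) also completes the event processing, so no further default handling takes place.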
### READ Result

`READ` event handlers must return the data that was read, either as an `Iterable` or as a [Result](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/Result.html) object created via the [ResultBuilder](#result-builder-read). For queries with inline count, a `Result` object _must_ be used, as the inline count is obtained from the `Result` interface.

`READ` event handlers are also called for OData `/$count` requests. These requests determine the total number of entity instances of a specific entity. When handling these requests in a custom `@On` event handler, a `Map` with a single key `count` needs to be returned as a result:

```java
@On(entity = MyEntity_.CDS_NAME)
List<Map<String, Object>> readMyEntity(CdsReadEventContext context) {
  if (CqnAnalyzer.isCountQuery(context.getCqn())) {
    int count = 100; // determine correct count value
    return List.of(Collections.singletonMap("count", count));
  }
  // handle non /$count requests
}
```

### UPDATE and DELETE Results

`UPDATE` and `DELETE` statements have an optional filter condition (where clause), which determines the entities to be updated/deleted. Handlers _must_ return a `Result` object with the number of entities that match this filter condition and have been updated/deleted. Use the [ResultBuilder](#result-builder) to create the `Result` object.

::: warning _❗ Warning_
If an event handler for an `UPDATE` or `DELETE` event does not specify a result, the number of updated/deleted rows is automatically set to 0, and the OData protocol adapter translates this into an HTTP response with status code `404` (Not Found).
:::

### INSERT and UPSERT Results

Event handlers for `INSERT` and `UPSERT` events can return a result representing the data that was inserted/upserted. A failed insert is indicated by throwing an exception, for example, a `UniqueConstraintException` or a `CdsServiceException` with error status `CONFLICT`.

### Result Builder { #result-builder}

When implementing custom `@On` handlers for CRUD events, a `Result` object can be constructed with the [ResultBuilder](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/ResultBuilder.html). The semantics of the constructed `Result` differ between the CRUD events. Clients of Application Services, for example the OData protocol adapters, rely on these specific semantics for each event. It is therefore important that custom `@On` handlers fulfill these semantics as well, when returning or setting a `Result` using the `setResult()` method of the respective event context.
The following table lists the events and the expected `Result`:

| Event | Expected Semantic | `ResultBuilder` method |
| --- | --- | --- |
| CREATE | The data of all created entity rows | `insertedRows` |
| READ | The data of all read entity rows and (if requested) the inline count | `selectedRows` |
| UPDATE | The number of updated entity rows and (optionally) the updated data | `updatedRows` |
| UPSERT | The data of all upserted entity rows | `insertedRows` |
| DELETE | The number of deleted entity rows | `deletedRows` |

Use the `selectedRows` or `insertedRows` method for query and insert results, with the data given as `Map` or list of maps:

```java
import static java.util.Arrays.asList;
import static com.sap.cds.ResultBuilder.selectedRows;

Map<String, Object> row = new HashMap<>();
row.put("title", "Capire");
Result res = selectedRows(asList(row)).result();
context.setResult(res); // CdsReadEventContext
```

{ #result-builder-read}
For query results, the inline count can be set through the `inlineCount` method:

```java
Result r = selectedRows(asList(row)).inlineCount(inlineCount).result();
```

{ #result-builder-update}
For update results, use the `updatedRows` method with the update count and the update data:

```java
import static com.sap.cds.ResultBuilder.updatedRows;

int updateCount = 1; // number of updated rows
Map<String, Object> data = new HashMap<>();
data.put("title", "CAP Java");
Result r = updatedRows(updateCount, data).result();
```

For delete results, use the `deletedRows` method and provide the number of deleted rows:

```java
import static com.sap.cds.ResultBuilder.deletedRows;

int deleteCount = 7;
Result r = deletedRows(deleteCount).result();
```

## Actions and Functions { #actions}

[Actions](../../cds/cdl#actions) and [Functions](../../cds/cdl#actions) enhance the API provided by an Application Service with custom operations. They have well-defined input parameters and a return value that are modeled in CDS.
Actions or functions are handled - just like CRUD events - using event handlers. To trigger an action or function on an Application Service, an event with the action's or function's name is emitted on it.

### Implement Event Handler

The CAP Java runtime doesn't provide any default `On` handlers for actions and functions. For each action or function, an event handler of the [`On`](../event-handlers/#on) phase should be defined, which implements the business logic and provides the return value of the operation, if applicable. The event handler needs to take care of [completing the event processing](../event-handlers/#eventcompletion). If an action or function is __bound to an entity__, the entity needs to be specified while registering the event handler.

The following example shows how to implement an event handler for an action, given this CDS model:

```cds
service CatalogService {
  entity Books {
    key ID: UUID;
    title: String;
  } actions {
    action review(stars: Integer) returns Reviews;
  };

  entity Reviews {
    book : Association to Books;
    stars: Integer;
  }
}
```

The `cds-maven-plugin` generates event context interfaces for the action or function, based on its CDS model definition. These event context interfaces provide direct access to the parameters and the return value of the action or function. For bound actions or functions, the event context interface provides a [CqnSelect](../working-with-cql/query-api#select) statement, which targets the entity on which the action or function was triggered.
Action-specific event context, generated by the CAP Java SDK Maven Plugin:

```java
@EventName("review")
public interface ReviewEventContext extends EventContext {

  // CqnSelect that points to the entity the action was called on
  CqnSelect getCqn();
  void setCqn(CqnSelect select);

  // The 'stars' input parameter
  Integer getStars();
  void setStars(Integer stars);

  // The return value
  void setResult(Reviews review);
  Reviews getResult();
}
```

The event handler registration and implementation is as follows:

```java
@Component
@ServiceName(CatalogService_.CDS_NAME)
public class CatalogServiceHandler implements EventHandler {

  @On(event = "review", entity = Books_.CDS_NAME)
  public void reviewAction(ReviewEventContext context) {
    CqnSelect selectBook = context.getCqn();
    Integer stars = context.getStars();
    Reviews review = ...; // create the review
    context.setResult(review);
  }
}
```

### Trigger Action or Function

As of version 2.4.0, the [CAP Java SDK Maven Plugin](../developing-applications/building#cds-maven-plugin) is capable of generating specific interfaces for services in the CDS model. These service interfaces also provide Java methods for actions and functions, which allow direct access to the action's or function's parameters. You can just call them in custom Java code. If an action or function is bound to an entity, the first argument of the method is an entity reference providing the required information to address the entity instance.
Given the same CDS model as in the previous section, the corresponding generated Java service interface looks like the following:

```java
@CdsName(CatalogService_.CDS_NAME)
public interface CatalogService extends CqnService {

  @CdsName(ReviewContext.CDS_NAME)
  Reviews review(Books_ ref, @CdsName(ReviewContext.STARS) Integer stars);

  interface Application extends ApplicationService, CatalogService { }

  interface Remote extends RemoteService, CatalogService { }
}
```

In the custom handler class, the specific service interface can be injected, as it is already known for generic service interfaces:

```java
...
@Autowired
private CatalogService catService;
...
```

Now, just call the review action from custom handler code:

```java
...
private void someCustomMethod() {
  String bookId = "myBookId";
  Books_ ref = CQL.entity(Books_.class).filter(b -> b.ID().eq(bookId));
  this.catService.review(ref, 5);
}
...
```

Alternatively, the event context can be used to trigger the action or function. This approach is useful for generic use cases, where typed interfaces are not available. The event context needs to be filled with the parameter values and emitted on the service:

```java
EventContext context = EventContext.create("review", "CatalogService.Books");
context.put("cqn", Select.from("CatalogService.Books").byId("myBookId"));
context.put("stars", 5);
this.catService.emit(context);
Map<String, Object> result = (Map<String, Object>) context.get("result");
```

## Best Practices and FAQs

This section summarizes some best practices for implementing event handlers and provides answers to frequently asked questions.

1. On which service should I register my event handler?

   Event handlers implementing business or domain logic should be registered on an Application Service. When implementing rather technical requirements, like triggering some code whenever an entity is written to the database, you can register event handlers on the Persistence Service.

2. Which services should my event handlers usually interact with?

   The CAP Java SDK provides [APIs](../services) that can be used in event handlers to interact with other services. These other services can be used to request data that is required by the event handler implementation.

   If you're implementing an event handler of an Application Service and require additional data of other entities that are part of that service for validation purposes, it's a good practice to read this data from the database using the [Persistence Service](../cqn-services/#persistenceservice). When using the Persistence Service, no user authentication checks are performed.

   If you're mashing up your service with another Application Service and also return data from that service to the client, it's a good practice to consume the other service through its service API. This keeps you decoupled from the possibility that the service might be moved into a dedicated microservice in the future ([late-cut microservices](../../about/best-practices#agnostic-by-design)) and automatically lets you consume the business or domain logic of that service. If you do not require this decoupling, you can also access the service's entities directly from the database.

   In case you're working with draft-enabled entities and your event handler requires access to draft states, you should use the [Draft Service](../fiori-drafts#draftservices) to query and interact with drafts.

3. How should I implement business or domain logic shared across services?

   In general, it's a good practice to design your services with specific use cases in mind. Nevertheless, it might be necessary to share certain business or domain logic across multiple services. To achieve this, simple utility methods can be implemented, which can be called from different event handlers.
   If the entities for which a utility method is implemented are different projections of the same database-level entity, you can manually map the entities to the database-level representation and use this to implement your utility method. If they're independent from each other, a suitable self-defined representation needs to be found to implement the utility method.

## Serve Configuration

Configure how application services are served. You can define per service which ones are served by which protocol adapters. In addition, you configure on which path they are available. The combined path an application service is served on is composed of the base path of a protocol adapter and the relative path of the application service.

### Configure Base Path { #configure-base-path}

Each protocol adapter has its own unique base path. By default, the CAP Java SDK provides protocol adapters for OData V4 and V2, and the base paths of both can be configured with [CDS Properties](../developing-applications/properties) in the _application.yaml_:

| Protocol | Default base path | CDS Property |
|----------|-------------------|--------------|
| OData V4 | `/odata/v4` | [`cds.odataV4.endpoint.path`](../developing-applications/properties#cds-odataV4-endpoint-path) |
| OData V2 | `/odata/v2` | [`cds.odataV2.endpoint.path`](../developing-applications/properties#cds-odataV2-endpoint-path) |

The following example shows how to deviate from the defaults:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  odataV4.endpoint.path: '/api'
  odataV2.endpoint.path: '/api-v2'
```
:::

### Configure Path and Protocol

With the annotation `@path`, you can configure the relative path of a service under which it's served by protocol adapters. The path is appended to the protocol adapter's base path.
With the annotations `@protocol` or `@protocols`, you can configure a list of protocol adapters a service should be served by. By default, a service is served by all installed protocol adapters. If you explicitly define a protocol, the service is only served by that protocol adapter. In the following example, the service `CatalogService` is available on the combined paths `/odata/v4/browse` with OData V4 and `/odata/v2/browse` with OData V2:

```cds
@path     : 'browse'
@protocols: [ 'odata-v4', 'odata-v2' ]
service CatalogService { ... }
```

The same can also be configured in the _application.yaml_ in the `cds.application.services.<key>.serve` section. Replace `<key>` with the service name to configure path and protocols:

```yml
cds.application.services.CatalogService.serve:
  path: 'browse'
  protocols:
    - 'odata-v4'
    - 'odata-v2'
```

You can also disable serving a service if needed:

```cds
@path    : 'browse'
@protocol: 'none'
service InternalService { ... }
```

[Learn more about all `cds.application.services.<key>.serve` configuration possibilities.](../developing-applications/properties#cds-application-services--serve){.learn-more}

### Configure Endpoints

With the annotations `@endpoints.path` and `@endpoints.protocol`, you can provide more complex service endpoint configurations. Use them to serve an application service on different paths for different protocols. The value of `@endpoints.path` is appended to the [protocol adapter's base path](#configure-base-path). In the following example, the service `CatalogService` is available on different paths for the different OData protocols:

```cds
@endpoints: [
  { path: 'browse', protocol: 'odata-v4' },
  { path: 'list',   protocol: 'odata-v2' }
]
service CatalogService { ... }
```

The `CatalogService` is accessible on the combined path `/odata/v4/browse` with the OData V4 protocol and on `/odata/v2/list` with the OData V2 protocol. The same can also be configured in the _application.yaml_ in the `cds.application.services.<key>.serve.endpoints` section.
Replace `<key>` with the service name to configure the endpoints:

```yml
cds.application.services.CatalogService.serve.endpoints:
  - path: 'browse'
    protocol: 'odata-v4'
  - path: 'list'
    protocol: 'odata-v2'
```

[Learn more about all `cds.application.services.<key>.serve.endpoints` configuration possibilities.](../developing-applications/properties#cds-application-services--serve-endpoints){.learn-more}

# Remote Services

Remote Services are CQN-based clients to remote APIs that a CAP application consumes. This section describes how to configure and use these services.

The CAP Java SDK supports _Remote Services_ for OData V2 and V4 APIs out of the box. The CQN query APIs enable [late-cut microservices](../../guides/providing-services#late-cut-microservices) with simplified mocking capabilities. Regarding multitenant applications, these APIs keep you extensible, even towards remote APIs. In addition, they free developers from having to map CQN to OData themselves.

Cross-cutting aspects like security are provided by configuration. Applications do not need to provide additional code. The CAP Java SDK leverages the [SAP Cloud SDK](https://sap.github.io/cloud-sdk) and in particular its destination capabilities to cover these aspects. Destinations in the Cloud SDK are the means to express and define connectivity to a remote endpoint including authentication details. Cloud SDK destinations can be created from various sources such as [SAP BTP Destination Service](#destination-based-scenarios) or [Service Bindings](#service-binding-based-scenarios). They can also be defined and registered [programmatically](#programmatic-destination-registration) in code. The application can choose the best fitting option for their scenario. Every Remote Service internally uses a destination for connectivity. On top of that, CAP integrates nicely with Cloud SDK, for example, ensuring automatic propagation of tenant and user information from the _Request Context_ to the Cloud SDK.
![This graphic depicts the integration of SAP Cloud SDK into SAP CAP Java.](../assets/remote%20services.drawio.svg){ class="mute-dark"}

CAP's clear recommendation is to use _Remote Services_ over directly using the SAP Cloud SDK. However, if you can't leverage CQN-based _Remote Services_, refer to [native consumption with Cloud SDK](#native-consumption) for details.

::: tip
To learn more about how to use _Remote Services_ end to end, read the [Consuming Services cookbook](../../guides/using-services).
:::

## Configuring Remote Services

To enable _Remote Services_ for OData V2 or V4 APIs in an application, add the following Maven dependency to your project:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-remote-odata</artifactId>
  <scope>runtime</scope>
</dependency>
```

_Remote Services_ need to be configured explicitly in your application configuration. The configuration needs to define two main aspects:

1. The CDS service definition of the remote API from the CDS model.
1. The (BTP or programmatic) destination or service binding of the remote API and its protocol type.

The following example shows how you can configure _Remote Services_ in Spring Boot's _application.yaml_ based on a destination:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    API_BUSINESS_PARTNER:
      type: "odata-v2"
      destination:
        name: "s4-business-partner-api"
```
:::

Remote Services use a CDS service definition from the CDS model as a specification of the remote API. This API specification is required to properly translate CQN statements into respective OData V2 and V4 requests. By default, the CDS service definition is looked up in the CDS model using the name of the _Remote Service_. The name can be explicitly configured using the `name` property. It defaults to the YAML key of the remote service configuration section (here: `API_BUSINESS_PARTNER`). The `type` property defines the protocol used by the remote API. The CAP Java SDK currently supports `odata-v4` (default) or `odata-v2`.
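Once configured, the _Remote Service_ can be injected and queried through the generic `CqnService` API like any local service. The following is a minimal sketch; the entity set `A_BusinessPartner` and the element `BusinessPartner` are taken from the imported `API_BUSINESS_PARTNER` model and serve as illustrative assumptions here:

```java
@Autowired
@Qualifier("API_BUSINESS_PARTNER") // the name of the configured Remote Service
private CqnService bupa;

public Map<String, Object> readBusinessPartner(String id) {
  // The CQN query is transparently translated into an OData V2 request
  // against the destination configured for this Remote Service.
  CqnSelect select = Select.from("API_BUSINESS_PARTNER.A_BusinessPartner")
      .where(b -> b.get("BusinessPartner").eq(id));
  return bupa.run(select).single();
}
```

Because the consumption happens through CQN, the same query code works regardless of whether the target is a remote OData API or a locally mocked service.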
::: tip
You can use the `cds import` command to generate a CDS service definition from an EDMX API specification. To learn more about this, have a look at the section [Importing Service Definitions](../../guides/using-services#import-api).
:::

[Learn about all `cds.remote.services` configuration possibilities in our **CDS Properties Reference**.](../developing-applications/properties#cds-remote-services){.learn-more}

### Configuring CDS Service Name

The CDS service definition is, by default, looked up in the CDS model using the name of the _Remote Service_. However, the name of the _Remote Service_ needs to be unique, as it's also used to look up the service in Java. Therefore, it's possible to explicitly configure the name of the CDS service definition from the CDS model using the `model` property. This is especially useful when creating multiple _Remote Services_ for the same API with different destinations:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    bupa-abc:
      model: "API_BUSINESS_PARTNER"
      destination:
        name: "s4-business-partner-api-abc"
    bupa-def:
      model: "API_BUSINESS_PARTNER"
      destination:
        name: "s4-business-partner-api-def"
```
:::

### Using Service Bindings { #service-binding-based-scenarios }

If the remote API is running on SAP BTP, it's likely that you can leverage Service Binding-based _Remote Services_. The CAP Java SDK extracts the relevant information from the service binding to connect to the remote API. Service-binding-based _Remote Services_ are simple to use, as the service binding abstracts from several aspects of remote service communication. For instance, it provides authentication information and the URL of the service. In contrast to destinations, it can be created and refreshed as part of the application lifecycle, that is, application deployment. Hence, the location and security aspects of remote services are transparent to CAP applications in the case of service bindings.
#### Binding to a Reuse Service

If the remote API is exposed by a BTP reuse service, a service broker typically provides means to create service instances of the BTP service. The CAP application requires a service binding to this service to consume the remote API as a _Remote Service_.

These service instances of BTP services provide the URL of the remote API in their service binding. Therefore, you only need to specify the binding name in the `application.yaml` configuration, like in the following example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    SomeReuseService:
      binding:
        name: some-service-binding
```
:::

::: details If the binding structure isn't understood ...

In some cases, SAP Cloud SDK doesn't understand the service binding structure of the specific BTP service. In that case, it's required to contribute a mapping by means of Cloud SDK's `PropertySupplier`. This `PropertySupplier` needs to be registered with the Cloud SDK once at application startup.

```java
static {
  OAuth2ServiceBindingDestinationLoader.registerPropertySupplier(
    options -> options.getServiceBinding().getTags().contains("<tag-name>"),
    SomeReuseServiceOAuth2PropertySupplier::new);
}
```

The `<tag-name>` needs to be replaced by the concrete name of the tag provided in the binding of the BTP service. Alternatively, a check on the service name can be chosen as well. The class `SomeReuseServiceOAuth2PropertySupplier` needs to be provided by you, extending the Cloud SDK base class `DefaultOAuth2PropertySupplier`.

[Learn more about registering OAuth2PropertySupplier in the **SAP Cloud SDK documentation**.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/service-bindings#customization){.learn-more}

:::

#### Binding to a Service with Shared Identity

If the remote API is available within the same SaaS application and uses the same (shared) service instance of XSUAA or Identity (IAS) for authentication, no service broker-based reuse service is required.
The _Remote Service_ can be configured using the shared service instance as binding (here: `shared-xsuaa`): ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: remote.services: OtherCapService: binding: name: shared-xsuaa options: url: https://url-of-the-second-cap-application ``` ::: The plain service binding of XSUAA or IAS does not contain the URL of the remote API. Therefore, it needs to be explicitly configured in the `options` section. Since the URL is typically not known during development, you can define it as an environment variable. For the previous example, use `CDS_REMOTE_SERVICES_OTHERCAPSERVICE_BINDING_OPTIONS_URL`. [Learn more about Binding From Environment Variables in the Spring Boot documentation.](https://docs.spring.io/spring-boot/reference/features/external-config.html#features.external-config.typesafe-configuration-properties.relaxed-binding.environment-variables){.learn-more} :::tip Remote APIs which require IAS-based authentication might expect certificate based client authentication in addition to the IAS-based JWT token, see [ProofOfPossession validation](https://github.com/SAP/cloud-security-services-integration-library/tree/main/java-security#proofofpossession-validation). CAP _Remote Services_ automatically takes care of this by initiating a mutual TLS handshake with the remote API. ::: #### Configuring the Authentication Strategy While service bindings typically provide authentication details, they don't predetermine the user propagation and authentication strategy, for example, technical user or named user flow. The parameter `onBehalfOf` in the `binding` configuration section allows to define these strategies. The following options are available: - `currentUser`: Use the user of the current [Request Context](/java/event-handlers/request-contexts). This propagates the named user if available or falls back to a (tenant-specific) technical user otherwise. 
(default)
- `systemUser`: Use a (tenant-specific) technical user, based on the tenant set in the current Request Context.
- `systemUserProvider`: Use a technical user of the provider tenant. This is especially helpful on an internal communication channel that is not authorized tenant-specifically.

### Using Destinations { #destination-based-scenarios }

If your _remote API_ is not using Service Bindings, you typically need to separately obtain the URL and additional metadata, like credentials, from the service provider. You can store these in destinations of SAP BTP Destination Service or [programmatically register a destination](#programmatic-destination-registration) with Cloud SDK to make them available for usage in your CAP application.

Based on the following configuration, a destination with name `s4-business-partner-api` is looked up using the Cloud SDK:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    API_BUSINESS_PARTNER:
      type: "odata-v2"
      destination:
        name: s4-business-partner-api
```
:::

If your CAP application is using IAS and you want to call a _remote API_ that is provided by another IAS-based application (that is, an Application2Application scenario), you can utilize a simplified security configuration in the destination. As a prerequisite, your CAP application and the called application need to trust the same IAS tenant, and you need to define a dependency in IAS to consume the respective API provided by the _remote API_.

Create a destination configuration with the following parameters:

- _URL_: `<URL of the remote API>`
- _Authentication_: `NoAuthentication`
- Additional Properties:
  - _cloudsdk.ias-dependency-name_: `<name of the API dependency>`

At runtime, this destination configuration will use the bound `identity` service instance's credentials to request a token for the _remote API_.
[Learn more about consuming APIs from Other IAS-Applications in the **SAP Cloud Identity Services documentation**.](https://help.sap.com/docs/cloud-identity-services/cloud-identity-services/consume-apis-from-other-applications){.learn-more}

The CAP Java SDK obtains the destination for a _Remote Service_ from the `DestinationAccessor` using the name that is configured in the _Remote Service_'s destination configuration. If you're using the SAP BTP Destination Service, this is the name you used when you defined the destination there. To properly resolve the destination from SAP BTP Destination Service, [additional Cloud SDK dependencies](#cloud-sdk-dependencies) are required.

In multitenant scenarios, the SAP BTP Destination Service tries to look up the destination from the subaccount of the current tenant, set on the `RequestContext`. This is not restricted to subscriber tenants, but also includes the provider tenant. Retrieval strategies are part of a set of configuration options provided by the Cloud SDK, which are exposed by CAP Java as part of the configuration for _Remote Services_. For details, refer to the section about [destination strategies](#destination-strategies).

::: tip
As a prerequisite for destination lookup in subscriber accounts, the CAP application needs to define a dependency to the Destination service for their subscriptions, for example, in the SaaS registry. This can be enabled by setting `cds.multiTenancy.dependencies.destination` to `true` in the configuration.
:::

[Learn more about destinations in the **SAP Cloud SDK documentation**.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/sdk-connectivity-destination-service){.learn-more}

### Configuring the URL

The destination or service binding configuration provides the base URL to the OData V2 or V4 service that should be used by the _Remote Service_. The full service URL, however, is built from three parts:

1.
The URL provided by the destination or the service binding configuration.
1. An optional URL suffix provided in the _Remote Service_ HTTP configuration under the `suffix` property.
1. The name of the service, either obtained from the optional `service` configuration property or from the fully qualified name of the CDS service definition.

Consider this example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    API_BUSINESS_PARTNER:
      http:
        suffix: "/sap/opu/odata/sap"
      destination:
        name: s4-business-partner-api
```
:::

In this case, the destination with name `s4-business-partner-api` would be obtained from the `DestinationAccessor`. Given that this destination holds the URL `https://s4.sap.com`, the resulting service URL for OData requests would be `https://s4.sap.com/sap/opu/odata/sap/API_BUSINESS_PARTNER`.

## Consuming Remote Services

_Remote Services_ can be used in your CAP application just like any other [service that accepts CQN queries](/java/cqn-services/):

```java
@Autowired
@Qualifier(ApiBusinessPartner_.CDS_NAME)
CqnService bupa;

CqnSelect select = Select.from(ABusinessPartnerAddress_.class)
    .where(a -> a.BusinessPartner().eq("4711"));

ABusinessPartnerAddress address = bupa.run(select)
    .single(ABusinessPartnerAddress.class);
```

::: tip
To learn more about how to build and run CQN queries, see sections [Building CQN Queries](../working-with-cql/query-api) and [Executing CQN Queries](../working-with-cql/query-execution).
:::

Keep in mind that _Remote Services_ are simply clients to remote APIs. CAP doesn't automatically forward CQN queries to these services. Developers need to explicitly call and use these _Remote Services_ in their code. However, as _Remote Services_ are based on the common CQN query APIs, it's easy to use them in event handlers of your [Application Services](application-services).

::: warning
In case data from _Remote Services_ should be combined with data from the database, custom coding is required.
Refer to the [Integrate and Extend guide](../../guides/using-services#integrate-and-extend) for more details.
:::

## Cloud SDK Integration

### Maven Dependencies {#cloud-sdk-dependencies}

The CAP Java SDK only includes the minimum SAP Cloud SDK dependencies required out of the box. In case you want to leverage features from SAP Cloud SDK, like the [programmatic destination registration](#programmatic-destination-registration) or integration with SAP BTP Destination Service, you need to add additional dependencies.

It's recommended to add the SAP Cloud SDK BOM to the dependency management section of your application's parent POM. If you're also using the CDS Services BOM or the Spring Boot dependencies BOM, it's recommended to add the SAP Cloud SDK BOM after these:

```xml
<dependency>
  <groupId>com.sap.cloud.sdk</groupId>
  <artifactId>sdk-bom</artifactId>
  <version>use-latest-version-here</version>
  <type>pom</type>
  <scope>import</scope>
</dependency>
```

[Learn more about dependency management of **SAP Cloud SDK**.](https://sap.github.io/cloud-sdk/docs/java/guides/manage-dependencies/){.learn-more}

To enable [programmatic destination registration](#programmatic-destination-registration), add this additional dependency to your project:

```xml
<dependency>
  <groupId>com.sap.cloud.sdk.cloudplatform</groupId>
  <artifactId>cloudplatform-connectivity</artifactId>
</dependency>
```

To integrate with SAP BTP Destination Service on Cloud Foundry, add this additional dependency to your project:

```xml
<dependency>
  <groupId>com.sap.cloud.sdk.cloudplatform</groupId>
  <artifactId>scp-cf</artifactId>
</dependency>
```

### Configuring Destination Strategies { #destination-strategies }

When loading destinations from SAP BTP Destination Service, you can specify a [destination retrieval strategy](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/sdk-connectivity-destination-service#retrieval-strategy-options) and a [token exchange strategy](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/sdk-connectivity-destination-service#token-exchange-options).
These strategies can be set in the destination configuration of the _Remote Service_: ```yml [srv/src/main/resources/application.yaml] cds: remote.services: API_BUSINESS_PARTNER: destination: name: "s4-business-partner-api" retrievalStrategy: "AlwaysProvider" tokenExchangeStrategy: "ExchangeOnly" ``` ::: tip Values for destination strategies have to be provided in pascal case. ::: ### Programmatic Destination Registration You can also programmatically build destinations and add them to the `DestinationAccessor` to make them available for _Remote Services_. You can easily register an event handler that is executed during startup of the application and build custom destinations: ```java @Component @ServiceName(ApplicationLifecycleService.DEFAULT_NAME) public class DestinationConfiguration implements EventHandler { @Value("${api-hub.api-key:}") private String apiKey; @Before(event = ApplicationLifecycleService.EVENT_APPLICATION_PREPARED) public void initializeDestinations() { if(apiKey != null && !apiKey.isEmpty()) { DefaultHttpDestination httpDestination = DefaultHttpDestination .builder("https://sandbox.api.sap.com/s4hanacloud") .header("APIKey", apiKey) .name("s4-business-partner-api").build(); DestinationAccessor.prependDestinationLoader( new DefaultDestinationLoader().registerDestination(httpDestination)); } } } ``` [Find out how to create destinations for different authentication types](#programmatic-destinations){.learn-more} [Learn more about using destinations](../../guides/using-services#using-destinations){.learn-more} Note that you can leverage Spring Boot's configuration possibilities to inject credentials into the destination configuration. The same mechanism can also be used for the URL of the destination by also reading it from your application configuration (for example environment variables or _application.yaml_). This is especially useful when integrating micro-services, which may have different URLs in productive environments and test environments. 
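As noted above, reading the URL from the application configuration keeps environment-specific values out of the code. A minimal, self-contained sketch of that pattern in plain Java follows; the environment variable name `BUPA_API_URL` and the fallback URL are illustrative assumptions, not CAP conventions:

```java
import java.util.Map;

// Hypothetical sketch: resolve the URL of a remote API from an environment
// variable, falling back to a local default for development and tests.
public class RemoteUrlResolver {

    // Variable name and fallback are illustrative assumptions.
    static String resolveUrl(Map<String, String> env) {
        return env.getOrDefault("BUPA_API_URL", "http://localhost:8080");
    }

    public static void main(String[] args) {
        // In production, the variable would be set by the deployment environment.
        System.out.println(resolveUrl(System.getenv()));
    }
}
```

In a Spring Boot application, you would typically achieve the same with a `@Value` annotation or an `application.yaml` property instead of reading the environment directly.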
## Native Service Consumption { #native-consumption } If you need to call an endpoint that you cannot consume as a _Remote Service_, you can fall back to leverage Cloud SDK APIs. Based on the Cloud SDK's `HttpClientAccessor` API, you can resolve an `HttpClient` that you can use to execute plain HTTP requests against the remote API. However, this involves low-level operations like payload de-/serialization. Usage of CAP's _Remote Service_ is encouraged whenever possible to free the developer from these. [Learn more about HttpClientAccessor in the **SAP Cloud SDK documentation**.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/http-client){.learn-more} ### Using Service Bindings { #native-bindings } If the URL and credentials of the remote API are available as a service binding, you can create a Cloud SDK destination for the service binding using the `ServiceBindingDestinationLoader` API. Based on this, it's possible to create an instance of `HttpClient` using the `HttpClientAccessor`: ```java ServiceBinding binding = ...; HttpDestination destination = ServiceBindingDestinationLoader.defaultLoaderChain().getDestination( ServiceBindingDestinationOptions .forService(binding) .onBehalfOf(OnBehalfOf.TECHNICAL_USER_CURRENT_TENANT) .build()); HttpClient httpClient = HttpClientAccessor.getHttpClient(destination); ... ``` [Learn more about HttpClientAccessor in the **SAP Cloud SDK documentation**.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/http-client){.learn-more} To be able to resolve a service binding into a Cloud SDK destination, a `OAuth2PropertySupplier` might need to be registered with Cloud SDK. 
```java
static {
  OAuth2ServiceBindingDestinationLoader.registerPropertySupplier(
    options -> options.getServiceBinding().getTags().contains("<tag-name>"),
    BizPartnerOAuth2PropertySupplier::new);
}
```

[Learn more about registering OAuth2PropertySupplier in the **SAP Cloud SDK documentation**.](https://sap.github.io/cloud-sdk/docs/java/features/connectivity/service-bindings#customization){.learn-more}

### Using Destinations { #native-destinations }

If the URL and credentials of the remote API are configured as a destination in SAP BTP Destination Service, you can use Cloud SDK's `DestinationAccessor` API to load the destination based on its name. In a second step, `HttpClientAccessor` is used to create an instance of `HttpClient`:

::: code-group
```java [Cloud SDK v4]
HttpDestination destination = DestinationAccessor.getDestination("<destination-name>").asHttp();
HttpClient httpClient = HttpClientAccessor.getHttpClient(destination);
...
```
```java [Cloud SDK v5]
Destination destination = DestinationAccessor.getDestination("<destination-name>");
HttpClient httpClient = HttpClientAccessor.getHttpClient(destination);
...
```
:::

### Programmatic Destinations { #programmatic-destinations }

The following example code snippets show how to programmatically create a destination for different authentication types. You can [register](#programmatic-destination-registration) these destinations with the `DestinationAccessor` to use them with _Remote Services_ or use them natively with the `HttpClientAccessor` to obtain `HttpClient` instances.
Use the following example if the remote API supports basic authentication: ```java DefaultHttpDestination .builder("https://example.org") .basicCredentials("user", "password") .name("my-destination").build(); ``` Use the following example if you can directly forward the token from the current security context: ```java DefaultHttpDestination .builder("https://example.org") .authenticationType(AuthenticationType.TOKEN_FORWARDING) .name("my-destination").build(); ``` Use the following example if you want to call the remote API using a technical user: ```java ClientCredentials clientCredentials = new ClientCredentials("clientid", "clientsecret"); OAuth2DestinationBuilder .forTargetUrl("https://example.org") .withTokenEndpoint("https://xsuaa.url") .withClient(clientCredentials, OnBehalfOf.TECHNICAL_USER_CURRENT_TENANT) .property("name", "my-destination") .build(); ``` Use the following example if you need to exchange the token from the security context (that is, user token exchange): ```java ClientCredentials clientCredentials = new ClientCredentials("clientid", "clientsecret"); OAuth2DestinationBuilder .forTargetUrl("https://example.org") .withTokenEndpoint("https://xsuaa.url") .withClient(clientCredentials, OnBehalfOf.NAMED_USER_CURRENT_TENANT) .property("name", "my-destination") .build(); ``` # Event Handlers This section describes how to register event handlers on services. In CAP everything that happens at runtime is an [event](../../about/best-practices#events) that is sent to a [service](../../about/best-practices#services). With event handlers the processing of these events can be extended or overridden. Event handlers can be used to handle CRUD events, implement actions and functions and to handle asynchronous events from a messaging service. ## Introduction to Event Handlers CAP allows you to register event handlers for [events](../../about/best-practices#events) on [services](../../about/best-practices#services). An event handler is simply a Java method. 
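To build intuition for this model, here is a deliberately simplified plain-Java sketch of a service dispatching named events to registered handler methods. It is an illustration of the concept only, not CAP's actual API; all names in it are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simplified illustration (not CAP's API): a "service" dispatches named
// events to all handler methods registered for that event name.
public class EventDispatchSketch {

    private final Map<String, List<Consumer<Map<String, Object>>>> handlers = new HashMap<>();

    // Register a handler method for an event name.
    public void on(String event, Consumer<Map<String, Object>> handler) {
        handlers.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
    }

    // Emit an event: every registered handler sees the same context map.
    public Map<String, Object> emit(String event) {
        Map<String, Object> context = new HashMap<>();
        handlers.getOrDefault(event, List.of()).forEach(h -> h.accept(context));
        return context;
    }

    public static void main(String[] args) {
        EventDispatchSketch srv = new EventDispatchSketch();
        srv.on("READ", ctx -> ctx.put("result", List.of("Wuthering Heights")));
        System.out.println(srv.emit("READ").get("result"));
    }
}
```

In CAP, the context map corresponds to the Event Context, and handlers are additionally organized into phases, as described in the following sections.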
Event handlers enable you to add custom business logic to your application by either extending the processing of an event, or by completely overriding its default implementation. ::: tip Event handlers are a powerful means to extend CAP. Did you know, that most of the built-in features provided by CAP are implemented using event handlers? ::: Common events are the CRUD events (`CREATE`, `READ`, `UPDATE`, `DELETE`), which are handled by the different kinds of [CQN-based services](../cqn-services/#cdsservices). These events are most typically triggered, when an HTTP-based protocol adapter (for example OData V4) executes a CQN statement on an Application Service to fulfill the HTTP request. The CAP Java SDK provides a lot of built-in event handlers (also known as [Generic Providers](../../guides/providing-services)) that handle CRUD operations out of the box and implement the handling of many CDS annotations. Applications most commonly use event handlers on CRUD events to _extend_ the event processing by using the [`Before`](#before) and [`After`](#after) phase. [Actions](../../cds/cdl#actions) and [Functions](../../cds/cdl#actions) that are defined by an Application Service in its model definition are mapped to events as well. Therefore, to implement the business logic of an action or function, you need to register event handlers as well. Event handlers that implement the core processing of an event should be registered using the [`On`](#on) phase. Events in CAP can have parameters and - in case they are synchronous - a return value. The CAP Java SDK uses [Event Contexts](#eventcontext) to provide a type-safe way to access parameters and return values. In the case of CRUD events the corresponding Event Contexts provide for example access to the CQN statement. Event Contexts can be easily obtained in an event handler. ## Event Phases { #phases} Events are processed in three phases that are executed consecutively: `Before`, `On`, and `After`. 
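The consecutive execution of these phases can be sketched in plain Java. This is a simplified illustration of the semantics only, not CAP's implementation; in particular, it shows that completing the event stops the remaining `Before` and `On` handlers:

```java
import java.util.List;
import java.util.function.Consumer;

// Simplified illustration (not CAP's implementation) of the three event phases.
public class PhaseSketch {

    static class Context {
        Object result;
        boolean completed;
        void setResult(Object r) { result = r; completed = true; }
    }

    static Context process(List<Consumer<Context>> before,
                           List<Consumer<Context>> on,
                           List<Consumer<Context>> after) {
        Context ctx = new Context();
        // Before: preprocessing, for example validation
        for (Consumer<Context> h : before) {
            h.accept(ctx);
            if (ctx.completed) break; // completion skips remaining Before and On handlers
        }
        // On: core processing, until the first handler completes the event
        if (!ctx.completed) {
            for (Consumer<Context> h : on) {
                h.accept(ctx);
                if (ctx.completed) break;
            }
        }
        // After: post-processing of the result
        after.forEach(h -> h.accept(ctx));
        return ctx;
    }

    public static void main(String[] args) {
        Context ctx = process(
            List.of(c -> System.out.println("validating")),
            List.of(c -> c.setResult(42), c -> c.setResult(-1)), // second On handler is skipped
            List.of(c -> System.out.println("result: " + c.result)));
        System.out.println(ctx.result);
    }
}
```

Exception handling is omitted in this sketch; in CAP, a handler throwing an exception terminates the event processing immediately, as described in the following subsections.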
When registering an event handler the phase in which the event handler should be called, needs to be specified. The CAP Java SDK provides an annotation for each event phase ([`@Before`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/Before.html), [`@On`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/On.html), and [`@After`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/After)). These [annotations](#handlerannotations) can be used on event handler methods to indicate which phase of the event processing the method handles. It's possible to register multiple event handlers for each event phase. Handlers within the same event phase are never executed concurrently. In case concurrency is desired, it needs to be explicitly implemented within an event handler. Note that by default there is no guaranteed order in which the handlers of the same phase are called. The following subsections describe the semantics of the three phases in more detail. ### Before { #before} The `Before` phase is the first phase of the event processing. This phase is intended for filtering, validation, and other types of preprocessing of the incoming parameters of an event. There can be an arbitrary number of `Before` handlers per event. The processing of the `Before` phase is completed when one of the following conditions applies: - All registered `Before` handlers were successfully called. Execution continues with the `On` phase. - A handler [completes the event processing](#eventcompletion) by setting a return value or setting the state of an event to completed. In this case, any remaining registered `Before` and `On` handlers are skipped and execution continues with the `After` phase. - A handler throws an exception. In this case, event processing is terminated immediately. 
### On { #on} The `On` phase is started after the `Before` phase, as long as no return value is yet provided and no exception occurred. It's meant to implement the core processing of the event. There can be an arbitrary number of `On` handlers per event, although as soon as the first `On` handler successfully completes the event processing, all remaining `On` handlers are skipped. The `On` phase is completed when one of the following conditions applies: - A handler [completes the event processing](#eventcompletion) by setting a result value or setting the state of an event to completed. In this case, any remaining registered `On` handlers are skipped and execution continues with the `After` phase. - A handler throws an exception. In this case, event processing is terminated immediately. In case of synchronous events, if after the `On` phase, no handler completed the event processing, it's considered an error and the event processing is aborted with an exception. However when registering an `On` handler for an asynchronous event it is not recommended to complete the event processing, as other handlers might not get notified of the event anymore. In that case CAP ensures to auto-complete the event, once all `On` handlers have been executed. ### After { #after} The `After` phase is only started after the `On` phase is completed successfully. Handlers are therefore guaranteed to have access to the result of the event processing. This phase is useful for post-processing of the return value of the event or triggering side-effects. A handler in this phase can also still abort the event processing by throwing an exception. No further handlers of the `After` phase are called in this case. ## Event Contexts { #eventcontext} The [EventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/EventContext.html) is the central interface, that provides information about the event to the event handler. 
The EventContext interface is a general interface that can be used with every event, it provides: - Name of the event - Entity targeted by the event - Service the event was sent to - Parameters and return value - Request Context: User information, tenant-specific CDS model, headers and query parameters - ChangeSet Context: Transactional boundaries of the event - Service Catalog - CDS Runtime Parameters and the return value can be obtained and stored as key-value pairs in the Event Context using its `get` and `put` methods. ```java EventContext context = EventContext.create("myEvent", null); // set parameters context.put("parameter1", "MyParameter1"); context.put("parameter2", 2); srv.emit(context); // process event // access return value Object result = context.get("result"); ``` Using the `get` and `put` methods has several drawbacks: The API is neither type-safe nor is it clear what the correct keys for different event parameters are. To solve these issues it is possible to overlay the general Event Context with an event-specific Event Context, which provides typed getters and setters for the parameters of a specific event. For each event that the CAP Java SDK provides out-of-the-box (for example the [CRUD events](../cqn-services/application-services#crudevents)) a corresponding Event Context is provided. Let's have a look at an example. The [CdsReadEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CdsReadEventContext.html) interface is the `READ` event-specific Event Context. As one of the parameters of the `READ` event is a [CqnSelect](../../cds/cqn#select) it provides a `CqnSelect getCqn()` method. The return value of a `READ` event is a [Result](../working-with-cql/query-execution#result). The context therefore also provides a `Result getResult()` and a `setResult(Result r)` method. 
You can use the `as` method provided by the general Event Context to overlay it: ```java CdsReadEventContext context = genericContext.as(CdsReadEventContext.class); CqnSelect select = context.getCqn(); context.setResult(Collections.emptyList()); Result result = context.getResult(); ``` The getter and setter methods, still operate on the simple get/put API shown in the previous example. They just provide a type-safe layer on top of it. The `as` method makes use of Java Proxies behind the scenes. Therefore, an interface definition is all that is required to enable this functionality. ::: tip Use these event-specific type-safe Event Context interfaces whenever possible. ::: For actions or functions defined in the CDS model the [CAP Java SDK Maven Plugin](../developing-applications/building#cds-maven-plugin) can automatically generate Event Context objects, which provide type-safe access to the action or function parameters and allow to set the return values. ### Completing the Event Processing { #eventcompletion} The Event Context also provides means to indicate the completion of the core processing of the event. This is important to finish the [`On`](#on) phase of a synchronous event. In case the synchronous event does not have a return value the `setCompleted()` method should be used to indicate the completion of the core processing of the event. ```java context.setCompleted(); ``` In case the synchronous event has a return value the `setResult(...)` method of the event-specific Event Context automatically triggers the `setCompleted()` method as well. ```java context.setResult(myResult); ``` ### Explicitly Proceeding the On Handler Execution { #proceed-on } An event handler registered to the [`On phase`](#on) can call `proceed()` on the Event Context to explicitly proceed executing the remaining registered [`On`](#on) handlers. 
This allows the handler to pre- and post-process the Event Context in a single method, without fully overwriting the core processing of the event. It also enables catching and handling exceptions thrown by an underlying handler. ```java @On(event = "myEvent") void wrapMyEvent(EventContext context) { context.put("param", "Adjusted"); // pre-process context.proceed(); // delegate to underlying handler context.put("result", 42); // post-process } ``` Calling `proceed()` from a [`Before`](#before) or [`After`](#after) event handler is not allowed and will raise an exception. If an [`On`](#on) handler has already [completed](#eventcompletion) the event processing, calling `proceed()` will not have any effects. ### Defining Custom EventContext Interfaces { #customeventcontext} In certain cases you might want to define your own custom event-specific Event Context interfaces. Simply define an interface, which extends the general `EventContext` interface. Use the `@EventName` annotation to indicate for which event this context should be used. Getters and setters defined in the interface automatically operate on the `get` and `put` methods of the general Event Context. In case you want to define the key they use for this, you can use the `@CdsName` annotation on the getter and setter method. ```java @EventName("myEvent") public interface MyEventContext extends EventContext { static MyEventContext create() { return EventContext.create(MyEventContext.class, null); } @CdsName("Param") String getParam(); void setParam(String param); void setResult(Integer result); Integer getResult(); } ``` ::: tip For actions or functions defined in the CDS model the [CAP Java SDK Maven Plugin](../developing-applications/building#cds-maven-plugin) can automatically generate Event Context objects, which provide type-safe access to the action or function parameters and allow to set the return values. 
:::

## Event Handler Classes { #handlerclasses}

Event handler classes contain one or multiple event handler methods. You can use them to group event handlers, for example for a specific service. The class can also define arbitrary methods, which aren't event handler methods, to provide functionality reused by multiple event handlers.

In Spring Boot, event handler classes are Spring beans. This enables you to use the full range of Spring Boot features in your event handlers, such as [Dependency Injection](https://www.baeldung.com/spring-dependency-injection) or [Scopes](https://www.baeldung.com/spring-bean-scopes).

The following [example](https://github.com/SAP-samples/cloud-cap-samples-java/blob/f1f18b8fd015257d33606864481ac5e6ec082b45/srv/src/main/java/my/bookshop/handlers/AdminServiceHandler.java) defines an event handler class:

::: code-group
```java [AdminServiceHandler.java]
import org.springframework.stereotype.Component;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.ServiceName;

@Component
@ServiceName("AdminService")
public class AdminServiceHandler implements EventHandler {
    // ...
}
```
:::

- The annotation `@Component` instructs Spring Boot to create a bean instance from this class.
- The `EventHandler` marker interface is required for CAP to identify the class as an event handler class among all beans and scan it for event handler methods.
- The optional `@ServiceName` annotation can be used to specify the default service, which event handlers are registered on. It is possible to override this value for specific event handler methods.

::: tip
The CAP Java SDK Maven Plugin generates interfaces for services in the CDS model. These interfaces provide String constants with the fully qualified name of the service. In case the service name is based on the CDS model, it is recommended to use these constants with the `@ServiceName` annotation.
:::

It is possible to specify multiple service names.
Event handlers are registered on all of these services.

```java
@ServiceName({"AdminService", "CatalogService"})
```

The `type` attribute of the `@ServiceName` annotation can be used to register event handlers on all services of a certain type:

```java
@ServiceName(value = "*", type = ApplicationService.class)
```

## Event Handler Annotations { #handlerannotations}

Event handler methods need to be annotated with one of the following annotations: [`@Before`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/Before.html), [`@On`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/On.html), or [`@After`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/After.html). The annotation defines during which [phase](#phases) of the event processing the event handler is called.

Each of these annotations can define the following attributes:

- `service`: The services the event handler is registered on. It's optional if a `@ServiceName` annotation is specified on class level.
- `serviceType`: The type of services the event handler is registered on, for example, `ApplicationService.class`. Can be used together with `service = "*"` to register an event handler on all services of a certain type.
- `event`: The events the event handler is registered on. The event handler is invoked in case any of the events specified matches the current event. Use `*` to match any event. It's optional if the event can be inferred through an [Event Context argument](#contextarguments) in the handler signature.
- `entity`: The target entities the event handler is registered on. The event handler is invoked in case any of the entities specified matches the current entity. Use `*` to match any entity. It's optional if the entity can be inferred through a [POJO-based argument](#pojoarguments) in the handler signature.
If no value is specified or can be inferred, it defaults to `*`.

::: tip
The interfaces of different service types provide String constants for the events they support (see for example the [CqnService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/CqnService.html)). The CAP Java SDK Maven Plugin generates interfaces for entities in the CDS model, which provide String constants with their fully qualified name. It is recommended to use these constants with the `event` or `entity` attributes of the annotations.
:::

```java
// registers on multiple events
@Before(event = { "CREATE", "UPDATE" }, entity = "AdminService.Books")

// overrides the default service on class-level
// registers on any entity
@On(service = "CatalogService", event = "READ")

// usage of String constants is recommended
@After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
```

## Event Handler Method Signatures { #handlersignature}

The most basic signature of an event handler method is `public void process(EventContext context)`. However, event-specific Event Context and entity data arguments, as well as certain return values, are supported and can be freely combined. It is even valid for event handler methods to have no arguments at all. Handler methods don't necessarily have to be public methods. They can also be methods with protected, private, or package visibility.

### Event Context Arguments { #contextarguments}

The [Event Context](#eventcontext) is the central interface that provides information about the event to the event handler. An event handler can get access to the general `EventContext` by simply declaring an argument of that type in its method:

```java
@Before(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public void readBooks(EventContext context) {
}
```

It is also possible to directly refer to event-specific Event Context interfaces in your arguments.
In that case the general Event Context is automatically overlaid with the event-specific one:

```java
@Before(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public void readBooks(CdsReadEventContext context) {
}
```

If an event-specific Event Context argument is used and the event handler annotation declares an event as well, the argument is automatically validated during startup of the application. Alternatively, it is possible to let CAP infer the event for the event handler registration from the Event Context argument:

```java
@Before(entity = Books_.CDS_NAME)
public void readBooks(CdsReadEventContext context) {
}
```

::: tip
The mapping between an Event Context interface and an event is based on the `@EventName` annotation of the Event Context interface.
:::

In case an event handler is registered on multiple events, only the general Event Context argument can be used. At runtime, the corresponding event-specific Event Context can be overlaid explicitly, if access to event-specific parameters is required:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE }, entity = Books_.CDS_NAME)
public void changeBooks(EventContext context) {
  if (context.getEvent().equals(CqnService.EVENT_CREATE)) {
    CdsCreateEventContext ctx = context.as(CdsCreateEventContext.class);
    // ...
  } else {
    CdsUpdateEventContext ctx = context.as(CdsUpdateEventContext.class);
    // ...
  }
}
```

### Entity Data Arguments { #pojoarguments}

When adding business logic to an Application Service, event handlers most commonly need to access entity data. Entity data can be directly accessed in the event handler method by using an argument of type `CdsData`:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE }, entity = Books_.CDS_NAME)
public void changeBooks(List<CdsData> data) {
}
```

> The `CdsData` interface extends `Map<String, Object>` with some additional JSON serialization capabilities and therefore provides a generic data access capability.
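
The idea behind such map-backed typed access — getters and setters that transparently operate on a generic key-value store — can be illustrated with a small stand-alone sketch. This is not CAP's implementation; `MyContext` and `ContextProxy` are hypothetical names, and `java.lang.reflect.Proxy` is just one way such access can be realized:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical typed view over a generic map store.
interface MyContext {
    String getParam();
    void setParam(String param);
}

class ContextProxy {
    // Backs getters/setters of an interface with a plain map,
    // deriving the key from the method name (getParam -> "param").
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> iface, Map<String, Object> store) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface }, (proxy, method, args) -> {
                String name = method.getName();
                if (name.startsWith("get")) {
                    return store.get(decapitalize(name.substring(3)));
                }
                if (name.startsWith("set")) {
                    store.put(decapitalize(name.substring(3)), args[0]);
                }
                return null;
            });
    }

    private static String decapitalize(String s) {
        return Character.toLowerCase(s.charAt(0)) + s.substring(1);
    }
}
```

Calling `setParam("x")` on the proxy writes the value under key `"param"` into the backing map, mirroring how a typed interface can overlay a generic data container.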
The CAP Java SDK Maven Plugin can generate data accessor interfaces for entities defined in the CDS model. These interfaces allow for a [typed access](../cds-data#typed-access) to data and can be used in arguments as well:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE }, entity = Books_.CDS_NAME)
public void changeBooks(List<Books> books) {
}
```

::: tip
To learn more about typed access to data and how entity data is handled in CAP Java SDK, have a look at [Working with Data](../cds-data).
:::

If an entity data argument is used and the event handler annotation declares an entity as well, the argument is automatically validated during startup of the application. Alternatively, it is possible to let CAP infer the entity for the event handler registration from the entity data argument:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE })
public void changeBooks(List<Books> books) {
}
```

::: tip
The mapping between a data accessor interface and an entity is based on the `@CdsName` annotation of the accessor interface.
:::

Entity data arguments only work on [CRUD events](../cqn-services/application-services#crudevents) of [CQN-based services](../cqn-services/#cdsservices). In addition they work with the [draft-specific CRUD events](../fiori-drafts#draftevents) provided by Draft Services.

The origin from which the entity data is provided depends on the phase of the event processing. During the `Before` and `On` phase it is obtained from the CQN statement. The CQN statement contains the entity data that was provided by the service client. During the `After` phase, however, the entity data is obtained from the `Result` object, which is provided as the return value of the event to the service client. Some CQN statements, such as the `CqnSelect` used with `READ` events, cannot carry data. In these cases entity data arguments are set to `null`.

There are different flavours of entity data arguments.
Besides using `List<Books>` it is also possible to use `Stream<Books>`:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE })
public void changeBooks(Stream<Books> books) {
}
```

It is also possible to use non-collection-based entity arguments, such as `Books`. However, if multiple data rows are available at runtime, an exception is thrown in that case:

```java
@Before(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPDATE })
public void changeBook(Books book) {
}
```

::: tip
Entity data arguments are safely modifiable. During the `Before` and `On` phase changes affect the data carried by the CQN statement. During the `After` phase changes affect the return value of the event.
:::

### Return Values

The return value of an event can be set by returning a value in an event handler method:

```java
@On(entity = Books_.CDS_NAME)
public Result readBooks(CdsReadEventContext context) {
  return db.run(context.getCqn());
}
```

In case an event handler method of the `Before` or `On` phase has a return value, it automatically [completes the event processing](#eventcompletion) once it is executed. Event handler methods of the `After` phase that have a return value replace the return value of the event.

Only return values whose type extends `Iterable<? extends Map<String, ?>>` are supported. The `Result` object or a list of entity data (for example `List<Books>`) fulfill this requirement.

```java
@On(entity = Books_.CDS_NAME)
public List<Books> readBooks(CdsReadEventContext context) {
  Books book = Struct.create(Books.class);
  // ...
  return Arrays.asList(book);
}
```

Event handler methods with return values only work on [CRUD events](../cqn-services/application-services#crudevents) of [CQN-based services](../cqn-services/#cdsservices) or the [draft-specific CRUD events](../fiori-drafts#draftevents) provided by Draft Services.
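
Why both variants satisfy such a bound can be seen in a small stand-alone sketch (`RowDemo` is a hypothetical name, not a CAP type): since accessor interfaces and result rows are themselves `Map`s, any list of them is assignable to `Iterable<? extends Map<String, ?>>`:

```java
import java.util.List;
import java.util.Map;

class RowDemo {
    // Accepts anything iterable over map-like rows, for example a
    // List<HashMap<String, Object>> or a list of map-backed accessor rows.
    static int countTitled(Iterable<? extends Map<String, ?>> rows) {
        int n = 0;
        for (Map<String, ?> row : rows) {
            if (row.get("title") != null) n++; // count rows carrying a title
        }
        return n;
    }
}
```

A `List<Map<String, Object>>` passes directly, because `Map<String, Object>` matches the wildcard `? extends Map<String, ?>`.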
::: tip
To learn how to build your own Result objects, have a look at the [Result Builder API](../cqn-services/application-services#result-builder).
:::

### Ordering of Event Handler Methods

You can influence the order in which the event handlers are executed by means of the CAP annotation [`@HandlerOrder`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/handler/annotations/HandlerOrder.html). It defines the order of handler methods within each phase of events. You may use the constants `HandlerOrder.EARLY` or `HandlerOrder.LATE` to place one handler earlier or later relative to the handlers without the annotation. Note that handlers with the same `@HandlerOrder` are executed in a deterministic, but arbitrary sequence. Generic handlers are typically executed by the framework before `HandlerOrder.EARLY` and after `HandlerOrder.LATE`:

1. Generic framework handlers
2. Custom handlers, annotated with `HandlerOrder.EARLY`
3. Custom handlers for phases `@Before`, `@On`, and `@After`
4. Custom handlers, annotated with `HandlerOrder.LATE`
5. Generic framework handlers

For example, in the following snippet, several methods are bound to the same phase of the `READ` event for the same entity and are executed one after another:

```java
@After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
@HandlerOrder(HandlerOrder.EARLY)
public void firstHandler(EventContext context) {
  // This handler is executed first
}

@After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public void defaultHandler(EventContext context) {
  // This one is the second
}

@After(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
@HandlerOrder(HandlerOrder.LATE)
public void lastHandler(EventContext context) {
  // This one is the last
}
```

CAP Java always executes event handlers in the order specified by the annotations, even if the handlers are defined in separate classes.
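
How such order values translate into an execution sequence within a phase can be sketched stand-alone (this is not CAP's dispatcher; `Handlers`, `EARLY`, and `LATE` here are illustrative stand-ins): handlers are sorted by an integer order, and a stable sort keeps registration order for handlers with equal values:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Handlers {
    // Illustrative ordering constants: lower values run earlier within a phase.
    static final int EARLY = -10;
    static final int DEFAULT = 0;
    static final int LATE = 10;

    record Handler(String name, int order) {}

    // Returns handler names sorted by order; List.sort is stable, so
    // handlers with equal order keep their registration sequence.
    static List<String> executionOrder(List<Handler> handlers) {
        List<Handler> sorted = new ArrayList<>(handlers);
        sorted.sort(Comparator.comparingInt(Handler::order));
        return sorted.stream().map(Handler::name).toList();
    }
}
```

Registering the three handlers from the snippet above in reverse order still yields `firstHandler`, `defaultHandler`, `lastHandler` as the execution sequence.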
In addition, CAP Java respects the [Spring Framework annotation `@Order`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/core/annotation/Order.html) and executes the handlers registered in such annotated beans in the order defined by that annotation. If the `@HandlerOrder` annotation is specified, it overrides the order defined by `@Order`.

# Indicating Errors

Learn about the error handling capabilities provided by the CAP Java SDK.

## Overview

The CAP Java SDK provides two different ways to indicate errors:

- By throwing an exception: This completely aborts the event processing and rolls back the transaction.
- By using the [Messages](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messages/Messages.html) API: This adds errors, warnings, info, or success messages to the currently processed request, but doesn't affect the event processing or the transaction.

The message texts for both exceptions and the Messages API can use formatting and localization.

## Exceptions

Any exception that is thrown by an event handler method aborts the processing of the current event and causes any active transaction to be rolled back. To indicate further details about the error, such as a suggested mapping to an HTTP response code, the CAP Java SDK provides a generic unchecked exception class called [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html). It's recommended to use this exception class when throwing an exception in an event handler.

When creating a new instance of `ServiceException` you can specify an [ErrorStatus](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ErrorStatus.html) object, through which an internal error code and a mapping to an HTTP status code can be indicated.
The enum [ErrorStatuses](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ErrorStatuses.html) already lists many useful HTTP error codes. If no such error status is set when creating the ServiceException, it defaults to an internal server error (HTTP status code 500).

```java
// default error status
throw new ServiceException("An internal server error occurred", originalException);

// specifying an error status
throw new ServiceException(ErrorStatuses.CONFLICT, "Not enough stock available");

// specifying an error status and the original exception
throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No book title specified", originalException);
```

The OData adapters turn all exceptions into an OData error response to indicate the error to the client.

## Messages

The [Messages](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messages/Messages.html) API allows event handlers to add errors, warnings, info, or success messages to the currently processed request. Adding info, warning or success messages doesn't affect the event processing or the transaction. For error messages, by default a `ServiceException` is thrown at the end of the `Before` handler phase. You can change this by setting [`cds.errors.combined`](../developing-applications/properties#cds-errors-combined) to `false`.

The `Messages` interface provides a logger-like API to collect these messages. Additional optional details can be added to the [Message](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messages/Message.html) using a builder API.
You can access the `Messages` API from the Event Context:

```java
context.getMessages().success("The order was successfully placed");
```

In Spring, you can also access it using Dependency Injection:

```java
@Autowired
Messages messages;

messages.warn("No book title specified");
messages.error("The book is no longer available").code("BNA").longTextUrl("/help/book-not-available");
```

The OData V4 adapter collects these messages and writes them into the `sap-messages` HTTP header by default. However, when an OData V4 error response is returned, because the request was aborted by an exception, the messages are instead written into the `details` section of the error response. Writing the messages into explicitly modeled messages properties isn't yet supported.

SAP Fiori uses these messages to display detailed information on the UI. How a message appears on the UI depends on the severity of the message.

### Throwing a ServiceException from Error Messages { #throwing-a-serviceexception-from-messages}

It is also possible to throw a [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html) from error messages. This can, for example, be useful to cancel a request after collecting multiple validation errors. The individual validation checks collect error messages in the `Messages` API. After the validation checks have been run, you call the `throwIfError()` method.
This method cancels the request with a [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html) only if error messages have been collected:

```java
// throw a ServiceException, if any error messages have been added to the current request
messages.throwIfError();
```

If there are any collected error messages, this method creates a [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html) from _one_ of these error messages. The OData adapter turns this exception into an OData error response to indicate the error to the client. The remaining error messages are written into the `details` section of the error response.

If the CDS property [`cds.errors.combined`](../developing-applications/properties#cds-errors-combined) is set to `true` (the default), `Messages.throwIfError()` is automatically called at the end of the `Before` handler phase to abort the event processing in case of errors. It is recommended to use the Messages API for validation errors and rely on the framework calling `Messages.throwIfError()` automatically, instead of throwing a `ServiceException`.

## Formatting and Localization

Texts passed to both `ServiceException` and the `Messages` API can be formatted and localized. By default, you can use [SLF4J's message formatting style](https://www.slf4j.org/api/org/slf4j/helpers/MessageFormatter.html) to format strings passed to both APIs.

```java
// message with placeholders
messages.warn("Can't order {} books: Not enough on stock", orderQuantity);

// on ServiceException the last argument can always be the causing exception
throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid number: '{}'", wrongNumber, originalException);
```

You can localize these strings by putting them into property files and passing the key of the message from the properties file to the API instead of the message text.
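
The collect-then-throw pattern behind `throwIfError()` can be sketched in plain Java (this is a hypothetical `MessageCollector`, not CAP's `Messages` API): warnings accumulate harmlessly, and the request is only aborted once at least one error was recorded:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustrative collector: errors and warnings are recorded
// separately; only errors can abort the request.
class MessageCollector {
    private final List<String> errors = new ArrayList<>();
    private final List<String> warnings = new ArrayList<>();

    void error(String text) { errors.add(text); }
    void warn(String text) { warnings.add(text); }

    // Throws only if at least one error message was collected;
    // warnings alone never abort the request.
    void throwIfError() {
        if (!errors.isEmpty()) {
            throw new IllegalStateException(errors.get(0));
        }
    }
}
```

This mirrors the recommendation above: validation checks record errors as they run, and a single call at the end decides whether the request is cancelled.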
When running your application on Spring, the CAP Java SDK integrates with [Spring's support for handling text resource bundles](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.internationalization). This handling by default expects translated texts in a `messages.properties` file under `src/main/resources`.

The texts defined in the resource bundles can be formatted based on the syntax defined by `java.text.MessageFormat`. When the message or exception text is sent to the client, it's localized using the client's locale, as described [in the Localization Cookbook](../../guides/i18n#user-locale).

::: code-group

```properties [messages.properties]
my.message.key = This is a localized message with {0} parameters
```

```properties [messages_de.properties]
my.message.key = Das ist ein übersetzter Text mit {0} Parametern
```

:::

```java
// localized message with placeholders
messages.warn("my.message.key", paramNumber);

// localized message with placeholders and additional exception
throw new ServiceException(ErrorStatuses.BAD_REQUEST, "my.message.key", paramNumber, originalException);
```

### Translations for Validation Error Messages { #ootb-translated-messages }

CAP Java provides out-of-the-box translation for error messages that originate from input validation annotations such as `@assert...` or `@mandatory` and the security annotations `@requires` and `@restrict`. The error messages are optimized for UI scenarios and avoid any technical references to entity names or element names. Message targets are used where appropriate to allow the UI to show the error message next to the affected UI element.

You can enable these translated error messages by setting [cds.errors.defaultTranslations.enabled: true](../developing-applications/properties#cds-errors-defaultTranslations-enabled).
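
The `java.text.MessageFormat` syntax used in the resource bundles above can be tried out in isolation. The following stand-alone snippet (the `LocalizedMessages` helper is illustrative, not a CAP API) formats the English and German templates with a positional `{0}` argument:

```java
import java.text.MessageFormat;
import java.util.Locale;

class LocalizedMessages {
    // Formats a MessageFormat template with positional arguments,
    // using the given locale for locale-sensitive formatting.
    static String format(String template, Locale locale, Object... args) {
        return new MessageFormat(template, locale).format(args);
    }
}
```

In the full framework, the template would be resolved from the bundle matching the client's locale before formatting; here only the formatting step is shown.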
### Exporting the Default Messages

As of CAP Java 1.10.0, you can extract the available default messages as a resource bundle file for further processing (for example, translation). For this purpose, the delivery artifact [cds-services-utils](https://search.maven.org/artifact/com.sap.cds/cds-services-utils) contains a resource bundle `cds-messages-template.properties` with all available error codes and default messages. Application developers can use this template to customize error messages thrown by the CAP Java SDK in the application.

1. [Download](https://search.maven.org/artifact/com.sap.cds/cds-services-utils) the artifact or get it from the local Maven repository in `~/.m2/repository/com/sap/cds/cds-services-utils/<version>/cds-services-utils-<version>.jar`.
1. Extract the file.

   ```sh
   jar -f cds-services-utils-<version>.jar -x cds-messages-template.properties
   ```

   ::: tip
   `<version>` is the version of CAP Java you're using in your project.
   :::
1. Rename the extracted file `cds-messages-template.properties` appropriately (for example, to `cds-messages.properties`) and move it to the resource directory of your application.
1. In your Spring Boot application, register this additional [resource bundle](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.internationalization) accordingly.

> Now, you're able to customize the stack error messages in your application.

With new CAP Java versions, there could also be new or changed error messages in the stack. To identify these changes, export `cds-messages-template.properties` from the new CAP Java version and compare it with the previous version using a diff tool.

## Target

When SAP Fiori interprets messages, it can handle an additional `target` property, which, for example, specifies which element of an entity the message refers to. SAP Fiori can use this information to display the message along the corresponding field on the UI.
When specifying messages in the `sap-messages` HTTP header, SAP Fiori mostly ignores the `target` value. Therefore, specifying the `target` is only useful when throwing a `ServiceException`, as SAP Fiori correctly handles the `target` property in OData V4 error responses.

A message target is always relative to an input parameter of the event context. For CRUD-based events this is always the `cqn` parameter, which represents and carries the payload of the request. For actions or functions, a message target can either be relative to the entity to which the action or function is bound (represented by the `cqn` parameter) or relative to a parameter of the action or function.

In case of actions and functions SAP Fiori also requires the message target to be prefixed with the action or function's binding parameter or parameter names. When creating a message target, the correct parameter needs to be selected to specify what the relative message target path refers to.

By default a message target refers to the CQN statement of the event. In case of CRUD events this is the targeted entity. In case of bound actions and functions this is the entity that the action or function was bound to. As CRUD event handlers are often called from within bound actions or functions (for example, `draftActivate`), CAP's OData adapter adds a parameter prefix to a message target referring to the `cqn` parameter only when required.

::: info
When using the `target(String)` API, which specifies the full target as a `String`, no additional parameter prefixes are added by CAP's OData adapter. The `target` value is used as specified.
:::

Let's illustrate this with the following example:

```cds
entity Books : cuid, managed {
  title  : localized String(111);
  descr  : localized String(1111);
  author : Association to Authors;
}

entity Authors : cuid, managed {
  name         : String(111);
  dateOfBirth  : Date;
  placeOfBirth : String;
  books        : Association to many Books on books.author = $self;
}

entity Reviews : cuid, managed {
  book   : Association to Books;
  rating : Rating;
  title  : String(111);
  text   : String(1111);
}

service CatalogService {
  type Reviewer {
    firstName : String;
    lastName  : String;
  }

  entity Books as projection on my.Books excluding { createdBy, modifiedBy } actions {
    action addReview(reviewer : Reviewer, rating : Integer, title : String, text : String) returns Reviews;
  };
}
```

Here, we have a `CatalogService` that exposes, among others, the `Books` entity and a `Books` bound action `addReview`.

### CRUD Events

Within a `Before` handler that triggers on inserts of new books, a message target can only refer to the `cqn` parameter:

```java
@Before
public void validateTitle(CdsCreateEventContext context, Books book) {
  // ...

  // event context contains the "cqn" key

  // implicitly referring to cqn
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No title specified")
    .messageTarget(b -> b.get("title"));

  // which is equivalent to explicitly referring to cqn
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No title specified")
    .messageTarget("cqn", b -> b.get("title"));

  // which is the same as using plain string
  // assuming direct POST request
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No title specified")
    .messageTarget("title");

  // which is the same as using plain string
  // assuming surrounding bound action request with binding parameter "in",
  // e.g. draftActivate
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No title specified")
    .messageTarget("in/title");
}
```

Instead of using the generic API for creating the relative message target path, the CAP Java SDK also provides a typed API backed by the CDS model:

```java
@Before
public void validateTitle(CdsCreateEventContext context, Books book) {
  // ...

  // implicitly referring to cqn
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No title specified")
    .messageTarget(Books_.class, b -> b.title());
}
```

This also works for nested paths with associations:

```java
@Before
public void validateAuthorName(CdsCreateEventContext context, Books book) {
  // ...

  // using un-typed API
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No author name specified")
    .messageTarget(b -> b.to("author").get("name"));

  // using typed API
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "No author name specified")
    .messageTarget(Books_.class, b -> b.author().name());
}
```

### Bound Actions and Functions

The same applies to message targets that refer to an action or function input parameter:

```java
@Before
public void validateReview(BooksAddReviewContext context) {
  // ...

  // event context contains the keys "reviewer", "rating", "title", "text",
  // which are the input parameters of the action "addReview"

  // referring to action parameter "reviewer", targeting "firstName"
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid reviewer first name")
    .messageTarget("reviewer", r -> r.get("firstName"));

  // which is equivalent to using the typed API
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid reviewer first name")
    .messageTarget(BooksAddReviewContext.REVIEWER, Reviewer_.class, r -> r.firstName());

  // targeting "rating"
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid review rating")
    .messageTarget("rating");

  // targeting "title"
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid review title")
    .messageTarget("title");

  // targeting "text"
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid review text")
    .messageTarget("text");
}
```

If a message target refers to the `cqn` of the event context, for bound actions and functions this means that the message target path is relative to the bound entity. For the `addReview` action that is the `Books` entity, as in the following example:

```java
@Before
public void validateReview(BooksAddReviewContext context) {
  // ...

  // referring to the bound entity `Books`
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid book description")
    .messageTarget(b -> b.get("descr"));

  // or (using the typed API, referring to "cqn" implicitly)
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid book description")
    .messageTarget(Books_.class, b -> b.descr());

  // which is the same as using plain string
  throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid book description")
    .messageTarget("in/descr");
}
```

::: tip
The previous examples showcase the target creation with the `ServiceException` API, but the same can be done with the `Message` API and the respective `target(...)` methods.
:::

## Error Handler { #errorhandler}

An [exception](#exceptions) thrown in an event handler stops the processing of the request. As part of that, protocol adapters trigger the `ERROR_RESPONSE` event of the [Application Lifecycle Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/application/ApplicationLifecycleService.html). By default, this event combines the thrown exception and the [messages](#messages) from the `RequestContext` in a list to produce the error response. The OData V4 and V2 protocol adapters use this list to create an OData error response, with the first entry being the main error and the remaining entries in the details section.

You can add event handlers using the `@After` phase for the `ERROR_RESPONSE` event to augment or change the error responses:

- Method `getException()` of [ErrorResponseEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/application/ErrorResponseEventContext.html) returns the exception that triggered the event.
- Method `getEventContexts()` of [ServiceException](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceException.html) contains the list of [event contexts](../event-handlers/#eventcontext), identifying the chain of processed events that led to the error. The first entry in the list is the context closest to the origin of the exception.

You can use the exception and the list of event contexts (with service, entity and event name) to selectively apply your custom error response handling. Some exceptions, however, may not be associated with a context, and the list of contexts will be empty for them.
The list of messages available via `getResult().getMessages()` of the `ErrorResponseEventContext` contains the messages (see [Messages API](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messages/Message.html)) that the protocol adapter uses to generate the final error response. You can remove or reorder messages in this list, or add new ones using `Message.create()`. You can also override the resulting HTTP status with the method `getResult().setHttpStatus()`. Use only statuses that indicate errors, meaning status code 400 or higher.

::: warning
Don't create new messages in the `Messages` of the `RequestContext` (also available through `context.getMessages()`). They will not be included in the response. Only the result provided by the `ErrorResponseEventContext` is considered by the protocol adapter.
:::

In case your implementation of the error handler throws an exception, returns no messages, or sets a non-error HTTP status, the error response defaults to a generic internal server error with HTTP status 500 and doesn't display any error details.

The following example of a simple error handler overrides the standard message text of authorization errors. Technically, it replaces the first message, that is the main error in OData, in the response with a new message that has a custom text, **only** for exceptions with error code `CdsErrorStatuses.EVENT_FORBIDDEN`.

```java
@Component
@ServiceName(ApplicationLifecycleService.DEFAULT_NAME)
public class SimpleExceptionHandler implements EventHandler {

  @After
  public void overrideMissingAuthMessage(ErrorResponseEventContext context) {
    if (context.getException().getErrorStatus().equals(CdsErrorStatuses.EVENT_FORBIDDEN)) {
      context.getResult().getMessages().set(0,
        Message.create(Message.Severity.ERROR, "You cannot execute this action"));
    }
  }
}
```

The second example shows how to override validation messages triggered by the annotation `@assert.range` for a certain entity.
The exception [triggered by CAP](#throwing-a-serviceexception-from-messages) contains a reference to the event context that can be used to identify the target entity. The target of each message can be used to identify the affected field, but keep in mind that targets are always relative to the root entity of the request. That means in case of deep inserts or updates, you need to match not only the entity that has annotations but also the parent entities.

```java
@Component
@ServiceName(ApplicationLifecycleService.DEFAULT_NAME)
public class ExceptionServiceErrorMessagesHandler implements EventHandler {

  @After
  public void overrideValidationMessages(ErrorResponseEventContext context) {
    context.getException().getEventContexts().stream().findFirst().ifPresent(originalContext -> {
      if (Books_.CDS_NAME.equals(originalContext.getTarget().getQualifiedName())) { // filter by entity
        List<Message> messages = context.getResult().getMessages();
        for (int i = 0; i < messages.size(); i++) { // iterate over the messages of the response
          Message message = messages.get(i);
          // optionally inspect message.getTarget() to filter by the affected field
          if (CdsErrorStatuses.VALUE_OUT_OF_RANGE.getCodeString().equals(message.getCode())) { // filter by error code
            messages.set(i, Message.create(Message.Severity.ERROR, "The provided value is out of range")); // replace with custom text
          }
        }
      }
    });
  }
}
```

# Request Contexts

[UserInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/UserInfo.html) provides information about the user of the request, such as its name, tenant, and roles. Additional attributes of the user can be accessed via `getAdditionalAttribute("<attribute-name>")`. To establish type-safe access, additional attributes may also be accessed via custom extensions of `UserInfo`. To map XSUAA users, interface `XsuaaUserInfo` is available by default. You can create `XsuaaUserInfo` instances either by calling `userInfo.as(XsuaaUserInfo.class)` or by Spring injection:

```java
@Autowired
XsuaaUserInfo xsuaaUserInfo;

@Before(event = CqnService.EVENT_READ)
public void beforeRead() {
  boolean isAuthenticated = xsuaaUserInfo.isAuthenticated();
  String email = xsuaaUserInfo.getEmail();
  String givenName = xsuaaUserInfo.getGivenName();
  String familyName = xsuaaUserInfo.getFamilyName();
  // ...
}
```

The same functionality is provided for arbitrary custom interfaces, which are extensions of `UserInfo`.

[ParameterInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/ParameterInfo.html) provides access to request-specific information.
For example, if the request is processed by an HTTP-based protocol adapter, `ParameterInfo` provides access to the HTTP request information. It exposes the [correlation ID](../operating-applications/observability#correlation-ids), the locale, the headers, and the query parameters of a request.

[AuthenticationInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/authentication/AuthenticationInfo.html) stores the authentication claims of the authenticated user. For instance, if OAuth2-based authentication is used, this is a JWT token (for example, XSUAA or IAS). You can call `is(Class)` to find the concrete `AuthenticationInfo` type. [JwtTokenAuthenticationInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/authentication/JwtTokenAuthenticationInfo.html) represents a JWT token, but [BasicAuthenticationInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/authentication/BasicAuthenticationInfo.html) can be observed on requests with basic authentication (for example, test scenarios with mock users). The method `as(Class)` helps to perform the downcast to a concrete subtype.

## Defining New Request Contexts { #defining-requestcontext}

The CAP Java SDK allows you to create new Request Contexts and define their scope. This helps you control which set of parameters is used when events are processed by services.

There are a few typical use cases in a CAP-based, multitenant application on SAP BTP in which the creation of new Request Contexts is necessary. These scenarios are identified by a combination of the user (technical or named) and the tenant (provider or subscribed).

![A named user can switch to a technical user in the same/subscriber tenant using the systemUser() method. Also, a named user can switch to a technical user in the provider tenant using the systemUserProvider() method.
In addition, technical users on provider/subscriber tenants can switch to technical users on provider/subscriber tenants using the methods systemUserProvider() or systemUser(tenant).](./assets/requestcontext.drawio.svg)

When calling CAP services, it's important to call them in an appropriate Request Context. Services might, for example, trigger HTTP requests to external services by deriving the target tenant from the current Request Context. The `RequestContextRunner` API offers convenience methods that allow an easy transition from one scenario to the other.

| Method | Description |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| systemUserProvider() | Switches to a technical user targeting the provider account. |
| systemUser() | Switches to a technical user and preserves the tenant from the current `UserInfo` (for example, downgrade of a named user Request Context). |
| systemUser(tenant) | Switches to a technical user targeting a given subscriber account. |
| anonymousUser() | Switches to an anonymous user. |
| privilegedUser() | Elevates the current `UserInfo` to bypass all authorization checks. |

::: info Note
The [RequestContextRunner](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/RequestContextRunner.html) API doesn't allow you to create a Request Context based on a named user. Named user contexts are only created by the CAP Java framework, as the initial Request Context is based on appropriate authentication information (for example, a JWT token) attached to the incoming HTTP request.
:::

The following sections give a few concrete examples:

- [Switching to Technical User](#switching-to-technical-user)
- [Switching to Provider Tenant](#switching-to-provider-tenant)
- [Switching to a Specific Technical Tenant](#switching-to-a-specific-technical-tenant)

### Switching to Technical User

![The graphic is explained in the accompanying text.](./assets/nameduser.drawio.svg)

The incoming JWT token triggers the creation of an initial Request Context with a named user. Accesses to the database in the OData adapter as well as in the custom `On` handler are executed within tenant1, and authorization checks are performed for user JohnDoe. An additionally defined `After` handler wants to call out to an external service using a technical user, without propagating the named user JohnDoe. Therefore, the `After` handler needs to create a new Request Context. To achieve this, call `requestContext()` on the current `CdsRuntime` and use the `systemUser()` method to remove the named user from the new Request Context:

```java
@After(entity = Books_.CDS_NAME)
public void afterHandler(EventContext context) {
  runtime.requestContext().systemUser().run(reqContext -> {
    // call technical service ...
  });
}
```

### Switching to Technical Provider Tenant {#switching-to-provider-tenant}

![The graphic is explained in the accompanying text.](./assets/switchprovidertenant.drawio.svg)

The application offers an action for one of its CDS entities. Within the action, the application communicates with a remote CAP service using an internal technical user from the provider account. The corresponding `on` handler of the action needs to create a new Request Context by calling `requestContext()`. Using the `systemUserProvider()` method, the existing user information is removed and the tenant is automatically set to the provider tenant. This allows the application to perform an HTTP call to the remote CAP service, which is secured using the pseudo-role `internal-user`.
```java
@On(entity = Books_.CDS_NAME)
public void onAction(AddToOrderContext context) {
  runtime.requestContext().systemUserProvider().run(reqContext -> {
    // call remote CAP service ...
  });
}
```

### Switching to a Specific Technical Tenant

![The graphic is explained in the accompanying text.](./assets/switchtenant.drawio.svg)

The application is using a job scheduler that needs to regularly perform tasks on behalf of a certain tenant. By default, background executions (for example, in a dedicated thread pool) aren't associated with any subscriber tenant and user. In this case, it's necessary to explicitly define a new Request Context based on the subscribed tenant by calling `systemUser(tenantId)`. This ensures that the Persistence Service performs the query for the specified tenant.

```java
runtime.requestContext().systemUser(tenant).run(reqContext -> {
  return persistenceService.run(Select.from(Books_.class))
      .listOf(Books.class);
});
```

## Modifying Request Contexts { #modifying-requestcontext}

Besides the described common use cases, it's possible to modify parts of an existing Request Context. To manually add, modify, or reset specific attributes within the scope of a new Request Context, you can use the [RequestContextRunner](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/RequestContextRunner.html) API.

```java
List<Books> readBooksNotLocalized(EventContext context) {
  return context.getCdsRuntime().requestContext()
      .modifyParameters(param -> param.setLocale(null))
      .run(newContext -> {
        return persistenceService.run(Select.from(Books_.class))
            .listOf(Books.class);
      });
}
```

In the example, executing the CQN Select query on the Persistence Service needs to run inside a Request Context without a locale in order to retrieve unlocalized data. Before the execution, the newly created context that wraps the functional code can be modified arbitrarily:

- `modifyParameters()`: Add, modify, or remove (single) parameters.
- `clearParameters()`: Resets all parameters.
- `providedParameters()`: Resets the parameters according to the registered `ParameterInfoProvider`.

Similarly, it's possible to fully control the `UserInfo` instance provided in the Request Context. It's guaranteed that the original parameters aren't touched by the nested `RequestContext`. In addition, all original parameter values that aren't removed or modified are visible in the nested scope. This enables you to either define the parameters from scratch or just put a modification layer on top. Some more examples:

- `modifyUser(user -> user.removeRole("read").setTenant(null)).run(...)`: Creates a context with a user that is similar to the outer context but without role `read` and tenant.
- `modifyParameters(param -> param.setHeader("MY-HEADER", "my value"))`: Adds or sets a header parameter `MY-HEADER:my value`.

The modifications can be combined arbitrarily in fluent syntax.

### Request Context Inheritance

When creating a new Request Context, all information that is stored in it is obtained through _providers_, see also [Registering Global Providers](#global-providers). Any modifications that you perform are applied on the information obtained by these providers. However:

- A new nested Request Context, created within a scope that already has a Request Context, inherits copies of all values from its parent Request Context.
- Modifications in that scenario are applied on the inherited information.

Special care needs to be taken with regard to the CDS model and feature toggles:

- Both of these are _only_ determined in the initial Request Context.
- It's not possible to modify the CDS model and feature toggles when creating a nested Request Context.

There's one exception to that rule: When modifying the user's tenant, the CDS model is also redetermined.
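As a minimal sketch of such a tenant switch, the following snippet opens a nested Request Context with a modified tenant and, inside it, a nested ChangeSet Context so that database access is directed to that tenant. The tenant ID is a made-up placeholder; this isn't a complete handler, just an illustration of the fluent API.

```java
runtime.requestContext()
    .modifyUser(user -> user.setTenant("subscriber-tenant-id")) // hypothetical tenant ID
    .run(reqContext -> {
      // open a new ChangeSet so that transactions and connections
      // target the new tenant; the CDS model is redetermined as well
      runtime.changeSetContext().run(changeSet -> {
        // interact with services on behalf of the new tenant
      });
    });
```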
::: tip
When changing the user's tenant, it's required to open a new [ChangeSet](./changeset-contexts#changeset-contexts) to ensure that database transactions and connections are directed to the new tenant. In case you miss this step, the CAP Java SDK detects this error and prevents any database access to avoid leaking information between tenants.
:::

## Registering Global Providers { #global-providers}

The CAP Java SDK ensures that each Request Context provides non-null values for the objects stored in it. Hence, if a service is called outside the scope of an existing Request Context, the runtime implicitly creates a Request Context for that service call. To accomplish the initialization of a Request Context, the CAP Java SDK uses provider APIs, such as [UserInfoProvider](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/UserInfoProvider.html) or [ParameterInfoProvider](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/ParameterInfoProvider.html). The default providers registered with the `CdsRuntime` usually derive the required information from the HTTP request, if available.

These provider interfaces allow for customization. That means the way `UserInfo` or `ParameterInfo` is initially determined can be modified or replaced. For example, in some scenarios the user information can't be derived from a principal attached to the current thread, as done in the default `UserInfoProvider`: Authentication is done outside the service, and user information is passed via dedicated header parameters.
A custom provider to support this could look like this sketch:

```java
@Component
@Order(1)
public class HeaderBasedUserInfoProvider implements UserInfoProvider {

  @Autowired
  HttpServletRequest req; // accesses current HTTP request

  @Override
  public UserInfo get() {
    if (RequestContextHolder.getRequestAttributes() != null) {
      // req is only available within the request thread
      return UserInfo.create()
          .setTenant(req.getHeader("custom-tenant-header"))
          .setName(req.getHeader("custom-username-header"));
    }
    return UserInfo.create();
  }
}
```

You can define several providers of the same type. In Spring, the provider with the lowest `@Order` comes first. In plain Java, the order is given by the registration order. You can reuse the provider with lower priority and build a modified result. To accomplish this, remember the previous provider instance, which is passed during registration via the `setPrevious()` method call. Such a chain of providers can be used to normalize user names or adjust user roles to match specific needs.

```java
@Component
public class CustomUserInfoProvider implements UserInfoProvider {

  private UserInfoProvider previousProvider;

  @Override
  public UserInfo get() {
    ModifiableUserInfo userInfo = UserInfo.create();
    if (previousProvider != null) {
      UserInfo previous = previousProvider.get();
      if (previous != null) {
        userInfo = previous.copy();
      }
    }
    if (userInfo != null) {
      // normalize the user name
      userInfo.setName(userInfo.getName().toLowerCase(Locale.ENGLISH));
    }
    return userInfo;
  }

  @Override
  public void setPrevious(UserInfoProvider previous) {
    this.previousProvider = previous;
  }
}
```

## Passing Request Contexts to Threads { #threading-requestcontext}

CAP service calls can be executed in different threads. In most cases, the Request Context from the parent thread - typically the worker thread executing the request - needs to be propagated to one or more child threads.
Otherwise, required parameter and user information might be missing, for example, when authorizing CRUD events or creating tenant-specific database connections.

To propagate the parent context, create an instance of `RequestContextRunner` in the *parent* thread and open a new `RequestContext` with the `run()` method in the *child* thread. This way, all parameters from the parent context are also available in the context of one or more spawned threads, as demonstrated in the following example:

```java
RequestContextRunner runner = runtime.requestContext();
Future<Result> result = Executors.newSingleThreadExecutor().submit(() -> {
  return runner.run(threadContext -> {
    return persistenceService.run(Select.from(Books_.class));
  });
});
```

Even though the `threadContext` variable isn't directly used in the example, executing the `run` method takes care of propagating the Request Context to the thread-local store of the child thread. The Persistence Service then internally uses the thread-local store to access the Request Context and determine the currently active tenant. In addition, you're free to modify the parameters by means of the API described in [Defining Request Contexts](#defining-requestcontext). But be aware that `providedParameters()` or `providedUser()` might lead to unexpected behavior, as the [standard providers](#global-providers) typically need to run in the context of the original worker thread to access request-local data.

# ChangeSet Contexts

ChangeSet Contexts are an abstraction around transactions. This chapter describes how ChangeSets are related to transactions and how to manage them with the CAP Java SDK.

## Overview

ChangeSet Contexts are used in the CAP Java SDK as a lightweight abstraction around transactions. They are represented by the [ChangeSetContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/changeset/ChangeSetContext.html) interface.
ChangeSet Contexts only define transactional boundaries, but don't define themselves how a transaction is started, committed, or rolled back. They are therefore well suited to plug in different kinds of transaction managers to integrate with different kinds of transactional resources.

The currently active ChangeSet Context can be accessed from the [Event Context](../event-handlers/#eventcontext):

```java
context.getChangeSetContext();
```

## Defining ChangeSet Contexts { #defining-changeset-contexts}

When [events](../../about/best-practices#events) are processed on [services](../services), the CAP Java SDK ensures that a ChangeSet Context is opened. If no ChangeSet Context is active, the processing of an event opens a new one. As a result, by default a ChangeSet Context is opened around the outermost event that was triggered on any service. This ensures that every top-level event is executed with its own transactional boundaries.

For example, if a `CREATE` event triggered on an Application Service is split into multiple `CREATE` events to different entities on the Persistence Service, the processing of the `CREATE` event on the Application Service opens a new ChangeSet Context around all of these events. All interactions with the Persistence Service, and therefore all interactions with the database, happen in a single transaction, which is committed when the processing of the `CREATE` event on the Application Service finishes.

In general, this frees event handler implementations from worrying about transactions. Nevertheless, you can explicitly define ChangeSet Contexts. It's also possible to nest these ChangeSet Contexts, allowing for suspending previous transactions.
The [CdsRuntime](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/CdsRuntime.html) provides an API to define a new ChangeSet Context:

```java
runtime.changeSetContext().run(context -> {
  // executes inside a dedicated ChangeSet Context
});
```

The code executed inside the `java.util.function.Function` or `java.util.function.Consumer` passed to the `run()` method runs in a dedicated ChangeSet Context.

## Reacting on ChangeSets

It's possible to register listeners on the ChangeSet Context to perform certain actions shortly before the transaction is committed, or after the transaction was committed or rolled back. The [ChangeSetListener](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/changeset/ChangeSetListener.html) interface can be used for this case. It allows registering a listener that is executed shortly before the ChangeSet is closed (`beforeClose()`) or one that is executed after the ChangeSet was closed (`afterClose(boolean)`). The `afterClose` method has a boolean parameter, which indicates if the ChangeSet was completed successfully (`true`) or failed and was rolled back (`false`).

```java
ChangeSetContext changeSet = context.getChangeSetContext();

changeSet.register(new ChangeSetListener() {

  @Override
  public void beforeClose() {
    // do something before changeset is closed
  }

  @Override
  public void afterClose(boolean completed) {
    // do something after changeset is closed
  }
});
```

## Cancelling ChangeSets

The ChangeSet Context can be used to cancel a ChangeSet without throwing an exception. All events in the ChangeSet are processed in that case, but the transaction is rolled back at the end. A ChangeSet can still be canceled from within the `beforeClose()` listener method.
```java
ChangeSetContext changeSet = context.getChangeSetContext();
// cancel changeset without throwing an exception
changeSet.markForCancel();
```

## Database Transactions in Spring Boot

Database transactions in CAP are always started and initialized lazily during the first interaction with the Persistence Service. When running in Spring Boot, CAP Java completely integrates with Spring's transaction management. As a result, you can use Spring's `@Transactional` annotations or the `TransactionTemplate` to control transactional boundaries as an alternative to using the ChangeSet Context. This integration with Spring's transaction management also comes in handy in case you need to perform plain JDBC connections in your event handlers. This might be necessary when calling SAP HANA procedures or selecting from tables not covered by CDS and the Persistence Service.

When annotating an event handler with `@Transactional`, Spring ensures that a transaction is initialized. CAP in that case ensures that this transaction is managed as part of an existing ChangeSet Context for which the transaction wasn't yet initialized. If no such ChangeSet Context exists, a new ChangeSet Context is created. In case the transaction propagation is specified as `REQUIRES_NEW`, Spring and CAP ensure that a new transaction and a new ChangeSet Context are initialized. This mechanism suspends existing transactions and ChangeSet Contexts until the newly created one is closed. Spring's transaction management can therefore be used to control transactional boundaries and to initialize transactions more eagerly than CAP.
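To illustrate the `REQUIRES_NEW` propagation, the following hypothetical helper writes an audit record in its own transaction and ChangeSet Context. The class name, entity name, and method are made up for this sketch; only the `@Transactional` semantics are the point.

```java
@Service
public class AuditLogWriter { // hypothetical helper class

  @Autowired
  private PersistenceService db;

  @Transactional(propagation = Propagation.REQUIRES_NEW)
  public void writeLog(Map<String, Object> entry) {
    // Runs in a new transaction and a new ChangeSet Context, suspending
    // the caller's transaction: the inserted entry is committed even if
    // the outer ChangeSet is rolled back afterwards.
    db.run(Insert.into("my.model.AuditLog").entry(entry)); // hypothetical entity
  }
}
```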
This can be combined with Spring's standard capabilities to get access to a plain JDBC connection:

```java
@Autowired
private JdbcTemplate jdbc;

@Autowired
private DataSource ds;

@Before(event = CqnService.EVENT_CREATE, entity = Books_.CDS_NAME)
@Transactional // ensure transaction is initialized
public void beforeCreateBooks(List<Books> books) {
  // JDBC template
  jdbc.queryForList("SELECT 1 FROM DUMMY");

  // Connection object
  Connection conn = DataSourceUtils.getConnection(ds);
  conn.prepareCall("SELECT 1 FROM DUMMY").executeQuery();
}
```

### Setting Session Context Variables

You can leverage the simplified access to JDBC APIs in Spring Boot to set session context variables on the JDBC connection. When set this way, the variables also influence statements executed by CAP itself through the Persistence Service APIs. The following example shows how to set session context variables by means of a custom event handler that is called on all interactions with the Persistence Service. If setting session context variables is needed only for specific queries, it's also possible to narrow down the invocation of the event handler by providing a more specific `@Before` annotation:

```java
@Component
@ServiceName(value = "*", type = PersistenceService.class)
public class SessionContextHandler implements EventHandler {

  private final static Set<ChangeSetContext> handled = Collections.synchronizedSet(new HashSet<>());

  @Autowired
  private DataSource dataSource;

  @Before
  protected void setSessionContextVariables(EventContext context) {
    ChangeSetContext changeSet = context.getChangeSetContext();
    // handle every transaction only once
    if (handled.add(changeSet)) {
      // set the session variable
      setSessionContextVariable("foo", "bar");
      changeSet.register(new ChangeSetListener() {
        @Override
        public void beforeClose() {
          // clear the session variable
          setSessionContextVariable("foo", null);
          handled.remove(changeSet);
        }
      });
    }
  }

  private void setSessionContextVariable(String name, String value) {
    Connection con = null;
    try {
      // obtains the transaction connection
      con = DataSourceUtils.getConnection(dataSource);
      con.setClientInfo(name, value);
    } catch (SQLClientInfoException e) {
      // handle appropriately
    } finally {
      // only releases the obtained connection
      // the transaction connection is still kept open with the
      // session variables set
      DataSourceUtils.releaseConnection(con, dataSource);
    }
  }
}
```

## Avoiding Transactions for Select { #avoid-transactions }

CAP ensures that every interaction with a service happens inside of a ChangeSet Context. However, transactions aren't started at that point in time yet. By default, any kind of first interaction with the Persistence Service begins the transaction. Once a transaction has been started, a connection for that transaction is reserved from the connection pool. This connection is only returned to the connection pool on commit or rollback of the transaction.

However, `READ` events that run simple Select queries don't actually require transactions in most cases. When setting the property `cds.persistence.changeSet.enforceTransactional` to `false`, most Select queries no longer cause a transaction to be started. A connection for these queries is obtained from the connection pool and returned immediately after executing the queries on the database. This can increase the throughput of an application by making connections available for concurrent requests faster. As soon as a modifying statement is executed on the Persistence Service, a transaction is started, and all subsequent Select queries participate in that transaction. Note that this behavior is only transparent when using the default transaction isolation level "Read Committed".

A ChangeSet Context can always be marked as requiring a transaction by calling `markTransactional()` on the `ChangeSetContext` or `ChangeSetContextRunner`. In that case, the next interaction with the Persistence Service is guaranteed to start a transaction.
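Assuming `cds.persistence.changeSet.enforceTransactional` is set to `false`, explicitly requiring a transaction could be sketched like this minimal example using `markTransactional` on the ChangeSet Context:

```java
runtime.changeSetContext().run(changeSet -> {
  // explicitly require a transaction, for example before
  // reading streamed media data
  changeSet.markTransactional();
  return persistenceService.run(Select.from(Books_.class))
      .listOf(Books.class);
});
```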
Alternatively, the Spring Boot annotation `@Transactional` can be used to eagerly start a transaction.

Some Select queries still require a transaction:

- Select queries with a lock: These are treated like a modifying statement and start a transaction.
- Select queries reading streamed media data: These are currently not automatically detected. The surrounding ChangeSet Context needs to be marked as transactional explicitly. If this isn't done, `InputStream`s might be corrupted or closed when trying to read them after the connection was already returned to the connection pool.

# Fiori Drafts

This section describes which events occur in combination with SAP Fiori Drafts.

## Overview { #draftevents}

See [Cookbook > Serving UIs > Draft Support](../advanced/fiori#draft-support) for an overview on SAP Fiori draft support in CAP.

## Reading Drafts

When enabling an entity for drafts, an additional set of database tables is created for the entity composition tree. These database tables are used to store the drafts. When reading draft-enabled entities, data from the active entity and the drafts is merged into a joint result. As part of this, draft-specific elements like `IsActiveEntity`, `HasActiveEntity`, or `HasDraftEntity` are calculated.

The standard `READ` event of a `CqnService` orchestrates the delegation of the query to the active entity and the drafts. It might execute multiple queries for this internally. As part of this orchestration, the additional events `ACTIVE_READ` and `DRAFT_READ` are triggered. They allow custom handlers to override reading of active entities or reading of drafts:

| HTTP / OData request | Event constant name | Default implementation |
| -------------------- | ------------------- | ---------------------- |
| GET | `CqnService.EVENT_READ` | Reads and merges data from active entities with their drafts. Internally triggers `ACTIVE_READ` and `DRAFT_READ`. |
| n/a | `DraftService.EVENT_ACTIVE_READ` | Reads data from active entities. |
| n/a | `DraftService.EVENT_DRAFT_READ` | Reads data from drafts. |

::: tip
`@Before` or `@After` handlers that modify queries or read data are best registered on the `READ` event. The events `ACTIVE_READ` and `DRAFT_READ` are preferable for custom `@On` handlers of draft-enabled entities.
:::

By default, queries executed internally by the `READ` event are optimized for performance. In certain scenarios, queries rely on the possibility of joining between tables of the active entity and drafts on the database. Active entity data and draft data are usually stored in tables in the same database schema. However, it's also possible to enable remote entities or entities stored in a different persistence for drafts. In that case, set the property `cds.drafts.persistence` to `split` (default: `joint`). This enforces the following behavior:

- Queries strictly separate active entities and drafts.
- Queries to active entities don't contain draft-specific elements like `IsActiveEntity`.

You can then delegate reading of active entities, for example to a remote S/4 system:

```java
@On(entity = MyRemoteDraftEnabledEntity_.CDS_NAME)
public Result delegateToS4(ActiveReadEventContext context) {
  return remoteS4.run(context.getCqn());
}
```

> Note that this is only useful when also delegating the `CREATE`, `UPDATE`, and `DELETE` events, which always operate on active entities only, to the remote S/4 system.

::: warning
When setting `cds.drafts.persistence` to `split`, only queries that are specified by the SAP Fiori draft orchestration are supported.
:::

## Editing Drafts

When users edit a draft-enabled entity in the frontend, the following requests are sent to the CAP Java backend. As an effect, draft-specific events are triggered, as described in the following table.
The draft-specific events are defined by the [DraftService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/draft/DraftService.html) interface.

::: tip
Draft-enabled entities have an extra key `IsActiveEntity` by which you can access either the active entity or the draft (inactive entity).
:::

| HTTP / OData request | Event constant name | Default implementation |
| -------------------- | ------------------- | ---------------------- |
| POST | `DraftService.EVENT_DRAFT_NEW` | Creates a new empty draft. Internally triggers `DRAFT_CREATE`. |
| PATCH with key `IsActiveEntity=false` | `DraftService.EVENT_DRAFT_PATCH` | Updates an existing draft. |
| DELETE with key `IsActiveEntity=false` | `DraftService.EVENT_DRAFT_CANCEL` | Deletes an existing draft. |
| DELETE with key `IsActiveEntity=true` | `CqnService.EVENT_DELETE` | Deletes an active entity *and* the corresponding draft. |
| POST with action `draftPrepare` | `DraftService.EVENT_DRAFT_PREPARE` | Empty implementation. |
| POST with action `draftEdit` | `DraftService.EVENT_DRAFT_EDIT` | Creates a new draft from an active entity. Internally triggers `DRAFT_CREATE`. |
| POST with action `draftActivate` | `DraftService.EVENT_DRAFT_SAVE` | Activates a draft and updates the active entity. Triggers a `CREATE` or `UPDATE` event on the affected entity. |
| n/a | `DraftService.EVENT_DRAFT_CREATE` | Stores a new draft in the database. |

You can use these events to add custom logic to the SAP Fiori draft flow, for example to interact with drafts or to validate user data.
The following example registers a `@Before` handler to fill in default values into a draft before the user starts editing:

```java
@Before
public void prefillOrderItems(DraftNewEventContext context, OrderItems orderItem) {
  // pre-fill fields with default values
}
```

`DRAFT_CREATE` is an internal event that isn't triggered by OData requests directly. It can be used to set default or calculated values on new drafts, regardless of whether they were created from scratch (`DRAFT_NEW` flow) or based on an existing active entity (`DRAFT_EDIT` flow).

For more examples, see the [Bookshop sample application](https://github.com/SAP-samples/cloud-cap-samples-java/tree/master/srv/src/main/java/my/bookshop/handlers/AdminServiceHandler.java).

## Activating Drafts

When you finish editing a draft by pressing the *Save* button, the draft gets activated. That means either a single `CREATE` or `UPDATE` event is triggered to create or update the active entity with all of its compositions through a deeply structured document. You can register handlers for these events to validate the activated data. The following example shows how to validate user input right before an active entity gets created:

```java
@Before
public void validateOrderItem(CdsCreateEventContext context, OrderItems orderItem) {
  // add validation logic
}
```

During activation, the draft data is deleted from the database. This happens before the active entity is created or updated within the same transaction. In case the create or update operation raises an error, the transaction is rolled back and the draft data is restored.

## Working with Draft-Enabled Entities

When deleting active entities that have a draft, the draft is deleted as well. In this case, a `DELETE` and a `DRAFT_CANCEL` event are triggered.
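A handler reacting on draft cancellation could be sketched as follows, mirroring the `@Before` handler pattern shown for `DRAFT_NEW`; the cleanup logic is a made-up placeholder:

```java
@Before
public void beforeDraftCancel(DraftCancelEventContext context, OrderItems orderItem) {
  // placeholder: release resources that were reserved while editing the draft
}
```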
To read an active entity, send a `GET` request with key `IsActiveEntity=true`, for example:

```http
GET /v4/myservice/myentity(IsActiveEntity=true,ID=<id>)
```

Likewise, to read the corresponding draft, call:

```http
GET /v4/myservice/myentity(IsActiveEntity=false,ID=<id>)
```

To get all active entities, you could use a filter, as illustrated by the following example:

```http
GET /v4/myservice/myentity?$filter=IsActiveEntity eq true
```

## Bypassing the SAP Fiori Draft Flow { #bypassing-draft-flow }

It's possible to create and update data directly without creating intermediate drafts. For example, this is useful when prefilling draft-enabled entities with data or, in general, when technical components deal with the API exposed by draft-enabled entities. To achieve this, use the following requests. You can register event handlers for the corresponding events to validate incoming data:

| HTTP / OData request                        | Event constant name                                     | Default implementation                               |
| ------------------------------------------- | ------------------------------------------------------- | ---------------------------------------------------- |
| POST with `IsActiveEntity: true` in payload | `CqnService.EVENT_CREATE`                               | Creates the active entity                            |
| PUT with key `IsActiveEntity=true` in URI   | `CqnService.EVENT_CREATE` or `CqnService.EVENT_UPDATE`  | Creates or updates the active entity (full update)   |
| PATCH with key `IsActiveEntity=true` in URI | `CqnService.EVENT_UPDATE`                               | Creates or updates the active entity (sparse update) |

These events have the same semantics as described in section [Handling CRUD events](./cqn-services/application-services#crudevents).

## Draft Lock { #draft-lock }

An entity with a draft is locked from being edited by other users until either the draft is saved or a timeout is hit (15 minutes by default). You can configure this timeout with the following application configuration property:

```yaml
cds.drafts.cancellationTimeout: 1h
```

You can turn off this feature completely with the application configuration property:

```yaml
cds.security.draftProtection.enabled: false
```

## Draft Garbage Collection { #draft-gc }

Stale drafts are automatically deleted after a timeout (30 days by default). You can configure the timeout with the following application configuration property:

```yaml
cds.drafts.deletionTimeout: 8w
```

In this example, the draft timeout is set to 8 weeks. This feature can also be turned off completely by setting the application configuration:

```yaml
cds.drafts.gc.enabled: false
```

::: tip
To get notified when a particular draft-enabled entity is garbage collected, you can register an event handler on the `DRAFT_CANCEL` event.
:::

## Overriding SAP Fiori's Draft Creation Behaviour { #fioridraftnew}

By default, SAP Fiori triggers a POST request with an empty body to the entity collection to create a new draft. This behavior can be overridden [by implementing a custom action](./cqn-services/application-services#actions), which SAP Fiori will trigger instead.

1. Define an action bound to the draft-enabled entity with an explicit binding parameter typed with `many $self`. This way, the action used to create a new draft is bound to the draft-enabled entity collection.
1. Annotate the draft-enabled entity with `@Common.DraftRoot.NewAction: ''`.
   This indicates to SAP Fiori that this action should be used when creating a new draft.
1. Implement the action in Java. The implementation of the action must call the `newDraft(CqnInsert)` method of the `DraftService` interface to create the draft. In addition, it must return the created draft entity.

The following code summarizes all of these steps in an example:

```cds
service AdminService {
  @odata.draft.enabled
  @Common.DraftRoot.NewAction: 'AdminService.createDraft'
  entity Orders as projection on my.Orders actions {
    action createDraft(in: many $self, orderNo: String) returns Orders;
  };
}
```

```java
@On(entity = Orders_.CDS_NAME)
public void createDraft(CreateDraftContext context) {
    Orders order = Orders.create();
    order.setOrderNo(context.getOrderNo());
    context.setResult(adminService.newDraft(Insert.into(Orders_.class).entry(order)).single(Orders.class));
}
```

## Consuming Draft Services { #draftservices}

If an [Application Service](cqn-services/application-services#application-services) is created based on a service definition that contains a draft-enabled entity, it also implements the [DraftService](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/draft/DraftService.html) interface. This interface provides an API layer around the [draft-specific events](fiori-drafts#draftevents) and allows you to create new draft entities, patch, cancel, or save them, and put active entities back into edit mode.

The Draft-Service-specific APIs only operate on entities in draft mode. The CQN query APIs (`run` methods) provided by any Application Service operate on active entities only. However, there's one exception to this behavior, which is the `READ` event: When reading from a Draft Service, active entities and draft entities are both queried and the results are combined.

::: warning
Persistence Services aren't draft-aware. Use the respective Draft Service or Application Service when running draft-aware queries.
:::

The following example shows the usage of the Draft-Service-specific APIs:

```java
import static bookshop.Bookshop_.ORDERS;

DraftService adminService = ...;

// create draft
Orders order = adminService.newDraft(Insert.into(ORDERS)).single(Orders.class);

// set values
order.setOrderNo("DE-123456");

// patch draft
adminService.patchDraft(Update.entity(ORDERS).data(order)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(false))));

// save draft
CqnSelect orderDraft = Select.from(ORDERS)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(false)));
adminService.saveDraft(orderDraft);

// read draft
Orders draftOrder = adminService.run(orderDraft).single(Orders.class);

// put draft back to edit mode
CqnSelect orderActive = Select.from(ORDERS)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(true)));
adminService.editDraft(orderActive, true);

// read entities in draft mode and activated entities
adminService.run(Select.from(ORDERS).where(o -> o.ID().eq(order.getId())));
```

CAP Messaging provides support for publish-subscribe-based messaging, which is an asynchronous communication pattern well suited for scenarios where a sender wants to send out information to one or many receivers that are potentially unknown and/or unavailable at the time of sending.

In contrast, the nature of synchronous communication between services can be disadvantageous depending on the desired information flow: sender and receiver need to be available at the time of the request, the sender needs to know the receiver and how to call it, and communication per request is usually point-to-point only.

In the following, we provide a basic introduction to publish-subscribe-based messaging and then explain how to use it in CAP. If you're already familiar with publish-subscribe-based messaging, feel free to skip the following introduction section.
## Pub-Sub Messaging

In a publish-subscribe-based messaging scenario (pub-sub messaging), senders send a message tagged with a topic to a message broker. Receivers can create queues at the message broker and subscribe these queues to the topics they're interested in. The message broker then copies incoming messages matching the subscribed topics to the corresponding queues. Receivers can now consume these messages from their queues. If a receiver is offline, no messages are lost, as the message broker safely stores messages in the queue until a receiver consumes them. After the receiver acknowledges the successful processing of a message, the message broker deletes the acknowledged message from the queue.

![The graphic is explained in the accompanying text.](./assets/messaging_foundation.png){}

CAP makes sending and receiving messages easy by providing an API that is agnostic of specific message brokers and by taking care of broker-specific mechanics like connection handling, protocols to use, creating queues, subscriptions, and so on. The API seamlessly blends into the common event API of CAP services, so that event messages can be sent using `emit`, and handlers to execute when receiving event messages can be declared with the `@On` annotation.

CAP provides support for different message brokers through several messaging services implementing this API for the different brokers. Messaging support as such is built into the core of CAP, as is a "file-based message broker" for local testing that mocks a message broker and persists messages to a file on the local file system. Support for other message brokers can be added by including separate Maven dependencies specifically for that broker. See [Supported Message Brokers](#supported-message-brokers) for more details.

In the following, we'll first describe how to send and receive messages in general, before we explain more complex scenarios and configuration options.
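The broker mechanics described in this introduction (topics, queues, subscriptions, acknowledgements) can be illustrated with a small standalone sketch. This is a toy in-memory model for illustration only, not a CAP or broker API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Toy model of a message broker: queues subscribe to topics, incoming
// messages are copied into every subscribed queue, and a message stays
// in its queue until a consumer takes (acknowledges) it.
class ToyBroker {
    private final Map<String, Queue<String>> queues = new HashMap<>();
    private final Map<String, List<String>> subscriptions = new HashMap<>(); // topic -> queue names

    void createQueue(String name) {
        queues.put(name, new ArrayDeque<>());
    }

    void subscribe(String queueName, String topic) {
        subscriptions.computeIfAbsent(topic, t -> new ArrayList<>()).add(queueName);
    }

    void publish(String topic, String message) {
        // copy the message into every queue subscribed to the topic
        for (String queueName : subscriptions.getOrDefault(topic, List.of())) {
            queues.get(queueName).add(message);
        }
    }

    // consuming removes the message from the queue, like an acknowledgement
    String consume(String queueName) {
        return queues.get(queueName).poll();
    }
}
```

Even if a receiver is offline at publish time, the message waits in its queue until it is consumed; real brokers add persistence, redelivery on missing acknowledgements, and wildcard topic matching on top of this basic model.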
## Sending

For a quick start in a local development scenario, use the file-based messaging service (mocking a message broker on the local file system), so you don't need to set up a real message broker first. Later, the configured messaging service can be changed to a real one without changing the code.

CAP services can be configured in the file _application.yaml_. Here, enable the `file-based-messaging` message service with a specific file to store messages in, as shown in the following example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    services:
      - name: "messaging-name"
        kind: "file-based-messaging"
        binding: "/any/path/to/file.txt"
```
:::

With the availability of a messaging service, you can now use it to send messages, as illustrated in the following example:

```java
@Autowired
MessagingService messagingService;

// Sending via the technical API of the messaging service
messagingService.emit("My/Destination/Messaging/Topic", "raw message payload");

// Sending by emitting a context via CAP service API
TopicMessageEventContext context = TopicMessageEventContext.create("My/Destination/Messaging/Topic");
context.setData("raw message payload");
messagingService.emit(context);
```

As shown in the example, there are two flavors of sending messages with the messaging service:

- Directly using the `emit` method of the technical messaging service API. This is a convenient way of sending messages, in case you don't need the context object as such and just want to send a message with a given topic and payload.
- A CAP messaging service is also a normal CAP service and as such provides an `EventContext`-based `emit` method. The standard way of creating and dispatching a context via `emit` can therefore also be used to send a message. As shown, create a `TopicMessageEventContext` with the desired topic for the message (in terms of CAP, the topic represents an event), set the payload of the message, and emit the context.
In section [CDS-Declared Events](#cds-declared-events), we show how to declare events in CDS models and thereby let CAP generate `EventContext` interfaces tailored to the defined payload, which allow type-safe access to it.

::: tip Using an outbox
The messages are sent once the transaction is successful. By default, an in-memory outbox is used, but there's also support for a [persistent outbox](./outbox#persistent). You can configure a [custom outbox](./outbox#custom-outboxes) for a messaging service by setting the property `cds.messaging.services..outbox.name` to the name of the custom outbox. This is especially useful when [using multiple channels](../guides/messaging/#using-multiple-channels).
:::

## Receiving

To receive messages matching a desired topic from a message broker, you just need to define a custom handler for the topic on the messaging service. Example:

```java
@On(service = "messaging-name", event = "My/Destination/Messaging/Topic")
public void receiveMyTopic(TopicMessageEventContext context) {
    // get ID and payload of message
    String msgId = context.getMessageId();
    String payload = context.getData();
    // ...
}
```

As you can see in the example, the event context not only provides access to the raw message, but also to a unique message ID.

::: tip
For messaging services, `@On` handlers don't need to complete the event context via `context.setCompleted()`. As there can be numerous use cases in which different components of a CAP application want to be notified of messaging events, CAP supports parallel handling of these events and completes the context automatically. In fact, you should not complete the context manually in your handler; otherwise, not all registered handlers can be notified.
:::

::: warning _❗ Warning_
If any exceptions occur in the handler, the messaging service will not acknowledge the message as successfully processed to the broker.
In consequence, the broker will deliver this message again.
:::

## CDS-Declared Events

In CDS models, services can declare events and the structure of their payload. When compiling, CAP automatically generates interfaces to access the event message and its payload in a type-safe fashion. Example:

```cds
service ReviewService {
  // ...
  event reviewed : { subject: String; rating: Decimal(2,1) }
  // ...
}
```

**Sending**

The `ReviewService` of the example is now able to construct the payload of the event message and emit the event in a type-safe fashion. In the following example, the `ReviewService` does this whenever a review changes:

```java
@Component
@ServiceName(ReviewService_.CDS_NAME)
public class ReviewServiceHandler implements EventHandler {

    @Autowired
    @Qualifier(ReviewService_.CDS_NAME)
    CqnService reviewService;

    @After(event = { CqnService.EVENT_CREATE, CqnService.EVENT_UPSERT, CqnService.EVENT_UPDATE })
    public void afterReviewChanged(Stream<Reviews> reviews) {
        reviews.forEach(review -> {
            // Calculate the new average rating
            BigDecimal avg = ...;

            // Set event payload
            Reviewed event = Reviewed.create();
            event.setSubject(review.getSubject());
            event.setRating(avg);

            // Set event context to emit
            ReviewedContext evContext = ReviewedContext.create();
            evContext.setData(event);

            // Emit event context
            reviewService.emit(evContext);
        });
    }
}
```

Note that the `ReviewService` itself emits the event context. The `ReviewService` doesn't explicitly need to use a technical messaging service. If no messaging service has been bound to your application, then the event will be dispatched purely within the runtime of this service. As soon as a messaging service has been bound, the event message will also be sent via the respective message broker, to allow other, external consumers to subscribe to this event message.

When sending the event message, CAP chooses the fully qualified name (FQN) of the event according to the CDS model as the default topic name to use.
In the case of the example, this would be `ReviewService.reviewed`. If you want to manually override the automatically derived topic name, you can use the `@topic` annotation in the CDS model. Example:

```cds
service ReviewService {
  // ...
  @topic: 'sap.cap.reviews.v1.ReviewService.changed.v1'
  event reviewed : { subject: String; rating: Decimal(2,1) }
  // ...
}
```

**Receiving**

Other CAP services can register handlers for this declared event by using the `@On` annotation, referencing the `ReviewService`, on a method that takes the automatically generated `ReviewedContext` as input parameter. Example:

```java
@On(service = ReviewService_.CDS_NAME)
private void ratingChanged(ReviewedContext context) {
    // Extract payload from message
    Reviewed event = context.getData();

    // Access payload structure in typed fashion
    System.out.println("Rating changed for: '" + event.getSubject() + "'");
}
```

When a CAP service declares such a handler on the `reviewed` event of the `ReviewService` and this service is served within the same runtime, then the event message is dispatched within this runtime, without being technically transported via a message broker. As soon as the `ReviewService` is declared to be a remote service and a messaging service is declared, a subscription for this event message is automatically created at the configured message broker. Example (excerpt of _application.yaml_):

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  remote.services:
    - name: ReviewService
  messaging.services:
    messaging-em:
      kind: enterprise-messaging
      subscribePrefix: '$namespace/'
```
:::

In this example, the `ReviewService` is declared as a remote service and thus will not be served by the current runtime. SAP Event Mesh is declared as the message broker to use.
In addition, the configuration parameter `subscribePrefix` was set to `'$namespace/'` to define that, whenever subscribing to topics at SAP Event Mesh, the technical topic used for subscribing is always prefixed with the namespace of the bound SAP Event Mesh instance. SAP Event Mesh instances may define their own rules for valid topic names, and a common pattern is to require all topics to start with the namespace of the used instance, which the example automatically fulfills by always prefixing the namespace.

[Learn more about **Topic Prefixing**.](#topic-prefixing){.learn-more}

## Supported Message Brokers

### Local Testing

The local messaging service is the simplest way to test messaging in a single process. It is especially useful for automated tests, as emitting an event blocks until all receivers have processed the event.

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging.services:
    - name: "messaging-name"
      kind: "local-messaging"
```
:::

Alternatively, you can use the file-based messaging service, which mocks a message broker on the local file system. With it, emitting an event is completely decoupled from the receivers of the event, as with real message brokers. If you want two services served in different processes to exchange messages, you can achieve this by configuring both services to use the same file for storing and receiving messages. The file is defined by the parameter `binding`, as can be seen in the following example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging.services:
    - name: "messaging-name"
      kind: "file-based-messaging"
      binding: "/any/path/to/file.txt"
```
:::

::: tip
In local testing scenarios, it might be useful to inject messages into the file-based message broker manually, by editing its storage file and adding lines to it.
New lines will be interpreted as new messages and will be consumed by services subscribed to matching topics. Have a look at the contents of the storage file with a normal text editor after having sent a few messages to get an example of the syntax, which should be self-explanatory.
:::

### Using Real Brokers

Besides the built-in support for `local-messaging` and `file-based-messaging`, all other implementations of technical messaging services are provided as separate CAP features that can be included as Maven dependencies. This way, you can include only the implementations needed for the message brokers you want to address.

#### Configuring SAP Event Mesh Support: { #configuring-sap-event-mesh-support}

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-enterprise-messaging</artifactId>
  <scope>runtime</scope>
</dependency>
```
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging.services:
    - name: "messaging-name"
      kind: "enterprise-messaging"
```
:::

#### Configuring Redis PubSub Support: { #configuring-redis-pubsub-support-beta}

::: warning
This is a beta feature. Beta features aren't part of the officially delivered scope that SAP guarantees for future releases.
:::

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-redis</artifactId>
  <scope>runtime</scope>
</dependency>
```
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging.services:
    - name: "messaging-name"
      kind: "redis-pubsub"
```
:::

::: tip
In contrast to SAP Event Mesh, the Redis feature is a pub-sub service, which means that Redis events are broadcast and delivered to all subscribed clients simultaneously: each instance of your application receives the same event sent by the Redis service. Moreover, it's not guaranteed that all events are delivered by the infrastructure. For example, if an application instance isn't connected to the Redis service, the emitted events are lost.
:::

#### Injecting Messaging Services

The included broker dependencies create technical CAP messaging services at application start, corresponding to the bound platform services. You can access these messaging services at runtime either via the CAP service catalog, or conveniently use autowiring to inject an instance. If you're using autowiring and there is only one messaging service bound, then this single instance can be injected without further parameterization. But if several messaging services are bound to your application, you need to define which of them to inject.

In the following, we explain different ways of declaring CAP messaging service instances and how they relate to specific message broker service instances. After that, we show how to inject a specific CAP messaging service using its name, to avoid ambiguity in case multiple such services exist.

Example: You have two SAP Event Mesh service instances on Cloud Foundry named `messaging-01` and `messaging-02`. If you bind both of these instances to your application, then CAP automatically detects them at startup and creates two CAP messaging services with the same names as the broker service instances have on the platform. This mechanism works automatically, even without declaring any of these service instances in the _application.yaml_. The result is equivalent to declaring the CAP messaging services in _application.yaml_ using the names of their broker service instances on the platform, like this:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    services:
      - name: "messaging-01"
      - name: "messaging-02"
```
:::

In this case, you don't even need to provide configuration parameters for the kind of the services, as CAP simply detects which kind these service instances are. After startup, the technical CAP messaging services matching these services will be available under the names `messaging-01` and `messaging-02`.
If you want to abstract from the technical names of the services on the used platform, you can introduce your own names for the CAP messaging service instances in the configuration and reference the concrete service instances on the platform via the configuration parameter `binding`:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    services:
      - name: "messaging1"
        binding: "messaging-01"
      - name: "messaging2"
        binding: "messaging-02"
```
:::

In this case, the technical CAP messaging services will be available under the names `messaging1` and `messaging2`, which will be using the services named `messaging-01` and `messaging-02` on the platform. This way, you can easily switch the names of the used platform services, while keeping the names of the technical CAP services stable.

If you want to configure the usage of a single messaging service that is of a specific kind, but don't want to specify its service name on the platform, this can be done like this:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    services:
      - name: "messaging-name"
        kind: "enterprise-messaging"
```
:::

In this case, CAP searches for an SAP Event Mesh service instance that is bound to the application and provides a CAP messaging service to use it under the name `messaging-name`, regardless of the name the service instance has on the platform. But if CAP finds multiple SAP Event Mesh service instances bound to your application at runtime, you get an exception, as CAP cannot decide which of them to use. In this case, you need to be more explicit and define the service instance name to use in one of the ways shown above.

Regardless of the way you have chosen to define the name of the technical CAP messaging service, you can always specify which CAP messaging service you want to inject, by using the `@Qualifier` annotation in your code.
Considering the second example above, if you want to explicitly inject the CAP messaging service named `messaging2` (which uses the platform service named `messaging-02`, regardless of its kind), you can do this as follows:

```java
@Autowired
@Qualifier("messaging2")
MessagingService messagingService;
```

### Using Message Brokers in Cloud Foundry

In the Cloud Foundry environment, you can create service instances for message brokers and bind these to your application. CAP maps all bound messaging service instances to technical CAP services that can then be used within your CAP application.

::: tip
If you want to use message broker services you created in the Cloud Foundry environment while testing on your local machine, then you need to manually provide binding information when starting your application. How this works is explained in the following section.
:::

As a prerequisite for using message brokers from the Cloud Foundry environment, you need to include the Maven dependency for CAP's Cloud Foundry support in your _pom.xml_ file, as well as the dependency for the desired message broker, and a dependency for a messaging adapter if you not only want to send, but also receive messages.

#### Maven Dependency for Cloud Foundry Support:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-cloudfoundry</artifactId>
  <scope>runtime</scope>
</dependency>
```

The Cloud Foundry environment provides information about bound services to the application via the `VCAP_SERVICES` environment variable.

#### Running on the Local System

For a local development scenario, it would be cumbersome to deploy the application to the cloud after each change to test messaging with real brokers. You can let your local application use the message broker services in Cloud Foundry by locally mocking the `VCAP_SERVICES` and `VCAP_APPLICATION` environment variables, which Cloud Foundry uses to parameterize your application and bound services.
In the `VCAP_APPLICATION` environment variable, you need to set `application_id` and `application_name`, as this information is used when automatically generating queue names in case no names have been configured explicitly. You can set these values to arbitrary values for testing, for example, as shown here:

```sh
VCAP_APPLICATION = {
  "application_id" : "any unique id",
  "application_name" : "any name"
}
```

Cloud Foundry message broker services that should be bound to your application need to be provided in the `VCAP_SERVICES` environment variable. In the following sections, you find templates for specific message brokers that are supported. You can define multiple service bindings there. The concrete values to set in the templates can be found by creating a service key for the desired service in Cloud Foundry, then viewing the service key and extracting the relevant information from it.

Instead of setting the `VCAP_SERVICES` and `VCAP_APPLICATION` environment variables manually, you can provide a file called _default-env.json_ in a service project's root directory to define their values. Example:

```json
{
  "VCAP_SERVICES": {
    ...
  },
  "VCAP_APPLICATION": {
    ...
  }
}
```

[Learn more about _default-env.json_.](../node.js/cds-env#in-default-env-json){.learn-more}

#### VCAP_SERVICES Template for SAP Event Mesh

```sh
VCAP_SERVICES = {
  "enterprise-messaging": [{
    "label": "enterprise-messaging",
    "credentials": {
      ... Insert content of service key here! ...
    }
  }]
}
```

## Composite Messaging Service

In some scenarios, you need to deal with multiple message brokers and thus have multiple messaging services bound to your application. Unfortunately, at the time of development, it isn't always known how many and what kind of messaging services will be bound to the application later, and which messages should be sent to or received from which of these brokers.
In such scenarios, the "Composite Messaging Service" can be used, which allows you to change the routing of messages to/from different brokers later on by means of configuration alone, without the need to change your code. Let's start with a configuration example (excerpt from _application.yaml_):

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    routes:
      - service: "em-instance-01"
        events:
          - "My/Destination/Messaging/A"
      - service: "em-instance-02"
        events:
          - "My/Destination/Messaging/B"
          - "My/Destination/Messaging/C*"
```
:::

To use such a configuration, you need to use the composite messaging service for message handling. You can get hold of an instance of such a service in your code by using the qualifier `MessagingService.COMPOSITE_NAME` when autowiring the messaging service, as shown in the following example:

```java
@Autowired
@Qualifier(MessagingService.COMPOSITE_NAME)
MessagingService messagingService;

messagingService.emit("My/Destination/Messaging/A", "raw message payload to em-instance-01");
messagingService.emit("My/Destination/Messaging/B", "raw message payload to em-instance-02");
```

As you can see in the configuration, the usage and routing of two messaging services are defined (`em-instance-01` and `em-instance-02`), each with different topics that should be routed via the service (for example, topic `My/Destination/Messaging/A` is sent/received via `em-instance-01`, and topic `My/Destination/Messaging/B` is sent/received via `em-instance-02`). The composite service uses the routing configuration to dispatch messages as well as subscriptions to the appropriate messaging service. As shown in the sample code, you can simply use the composite messaging service and submit messages to topics as desired. The messages are automatically routed to the appropriate messaging services as defined in the configuration.
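The trailing-wildcard pattern in the routes above (`My/Destination/Messaging/C*`) can be illustrated with a small standalone sketch of prefix-based route matching. This is illustrative only and not the actual CAP implementation; the patterns and service names mirror the configuration example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative route matching: exact topic names, plus patterns with a
// trailing '*' that match every topic sharing the prefix before the '*'.
class RouteTable {
    private final Map<String, String> routes = new LinkedHashMap<>(); // pattern -> service name

    void addRoute(String pattern, String service) {
        routes.put(pattern, service);
    }

    // returns the target service for a topic, or null if no route matches
    String resolve(String topic) {
        for (Map.Entry<String, String> route : routes.entrySet()) {
            String pattern = route.getKey();
            boolean matches = pattern.endsWith("*")
                    ? topic.startsWith(pattern.substring(0, pattern.length() - 1))
                    : topic.equals(pattern);
            if (matches) {
                return route.getValue();
            }
        }
        return null; // no route configured: delivery would fail
    }
}
```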
To change the routing of messages, you can simply change the configuration, without changing your code.

::: tip
If you emit a message to the composite messaging service with a topic that isn't defined in its routing configuration, the delivery will fail. Carefully review your configuration when you start sending or receiving messages for new topics.
:::

Example for receiving messages with a given topic via the composite messaging service:

```java
@On(service = MessagingService.COMPOSITE_NAME, event = "My/Destination/Messaging/A")
public void receiveA(TopicMessageEventContext context) { ... }

@On(service = MessagingService.COMPOSITE_NAME, event = "My/Destination/Messaging/B")
public void receiveB(TopicMessageEventContext context) { ... }
```

The configuration of the composite service is used to determine for which messaging service the handlers are registered, and thus in which message broker the subscription for that topic is made.

## Details and Advanced Concepts

### Queue Configuration

By default, each messaging service uses one queue in the broker for all its subscriptions. If no queue exists, a queue with an autogenerated name is created. If you want a service to use a specific queue name, you can configure it in your _application.yaml_:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging:
    services:
      - name: "messaging-name"
        queue:
          name: "my-custom-queue"
```
:::

If a queue with the given name already exists on the broker, then this queue is used. If a queue with that name doesn't exist yet, it is created.

::: tip
Depending on the used message broker, there can be restrictions on what names can be used for queues. Check the documentation of the used broker to ensure you're using a valid name.
See [Syntax for Naming Queues, Topics, and Topic Patterns](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/72ac1fad2dd34c4886d672e66b22b54b.html) in the SAP Event Mesh documentation for more details. ::: At the time of queue creation, configuration parameters can be passed to the queue. As options and parameters that can be set for a queue depend on the used message broker, custom key value pairs can be defined that will be passed as queue configuration to the broker at time of queue creation. Check the documentation of the used message broker to see which options can be set. Here is an example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "messaging-name" queue: name: "my-custom-queue" config: accessType: "EXCLUSIVE" ``` ::: [Learn more about SAP Event Mesh configuration options.](https://hub.sap.com/api/SAPEventMeshDefaultManagementAPIs/path/putQueue){.learn-more}
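The use-or-create behavior described above can be sketched as follows. This is an illustration only: the in-memory map stands in for a real broker's management API, and all names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the queue provisioning rule: an existing queue is
// reused unchanged; a missing queue is created with the configured options.
public class QueueProvisioningSketch {

    // queue name -> configuration options the queue was created with
    private final Map<String, Map<String, String>> broker = new HashMap<>();

    /** Returns true if the queue had to be created, false if it already existed. */
    public boolean ensureQueue(String name, Map<String, String> config) {
        if (broker.containsKey(name)) {
            return false; // existing queue is used as-is; config is not re-applied
        }
        broker.put(name, new HashMap<>(config));
        return true;
    }

    public Map<String, String> configOf(String name) {
        return broker.get(name);
    }
}
```

Note that, as in the sketch, configuration options only take effect at creation time; an already existing queue keeps its original settings.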
### Queue Configuration Changes

Depending on the configuration, queues can be set up on the broker with autogenerated or given names. It is also possible to manually create queues on the broker and then configure their names, so that already existing queues can be used. When the configuration of CAP messaging changes, CAP can't perform an automated cleanup on the broker, for example, remove previously used, but now unused queues.

::: warning
Queues are not deleted when removed from the configuration, as you might have configured a manually created queue that should not be deleted, and messages in the queue must not be deleted.
:::

At startup of your application, CAP messaging makes sure that all configured queues exist, creating missing ones with the specified configuration options. Then subscriptions to the queues are made as defined. As a queue is never automatically deleted or renamed, and messages within it are never deleted automatically (as this could cause unintentional, catastrophic loss of messages), unused queues and their subscriptions need to be removed manually. SAP Event Mesh provides a management console to manage queues on the broker as well as a REST API to do so. See [Use REST APIs to Manage Queues and Queue Subscriptions](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/00160292a8ed445daa0185589d9b43c5.html) in the SAP Event Mesh documentation for more details.

### Using Multiple Queues

Each CAP messaging service instance uses by default one queue for all its topic subscriptions. For some use cases, you might want to separate incoming messages into different queues. This can be achieved by configuring multiple CAP messaging services for one message broker instance – each using its own queue.
Example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "first-messaging" binding: "cf-messaging-service-instance-name" - name: "second-messaging" binding: "cf-messaging-service-instance-name" ``` ::: ### Consuming from a Queue All handlers registered to a messaging service cause a subscription to a specified handler event. But in some scenarios, the broker sends the messages straight to the queue without topic or event origin. In this case, you can register a handler using the queue name as a handler event to receive queue messages. Example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "messaging-name" queue: name: "my-custom-queue" ``` ::: ```java @On(service = "messaging-name", event = "my-custom-queue") public void receiveMyCustomQueueMessage(TopicMessageEventContext context) { // access the message as usual String payload = context.getData(); } ``` Furthermore, some messaging brokers support forwarding of messages to another queue. A typical use case is a dead-letter queue that receives messages that could not be delivered from another queue. For those messages you can't register a queue event as the messages have a different topic or event origin. To receive messages from a dead-letter queue, you need to register a `*`-handler in order to receive all topic or event messages. As a `*`-handler is not an explicit subscription it doesn't start queue listening by default. You need to explicitly enable queue listening and set property `forceListening` to `true`. Example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "messaging-name" queue: forceListening: true ``` ::: ```java @On(service = "messaging-name") public void receiveMyCustomQueueAllMessages(TopicMessageEventContext context) { // access the message as usual String payload = context.getData(); // ... 
} ``` ### Dedicated Connections To keep the number of simultaneous connections to message brokers as low as possible, only one connection for all CAP messaging services bound to the same message broker service instance is used. All the incoming and outgoing messages are handled via this single connection to the broker. There can be scenarios in which using multiple queues with dedicated connections for each of them is desired, for example, to balance data throughput. The following example shows how you can use the `dedicated: true` parameter to create a second messaging service bound to the same message broker, but using a different queue with a dedicated connection to it (instead of sharing the same connection with the first messaging service). Example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "first-messaging" binding: "cf-messaging-service-instance-name" queue: name: "my-first-queue" - name: "second-messaging" binding: "cf-messaging-service-instance-name" connection: dedicated: true queue: name: "my-second-queue" ``` ::: In this example, the `first-messaging` service uses the default connection and the `second-messaging` service uses a new, separate connection. ### Error Handling To ensure successful delivery of messages, some message brokers require that consumers acknowledge successfully received and processed messages. Otherwise, they redeliver the message. By default, messages are only acknowledged if they have been successfully processed by the CAP handler. Hence, if the message handling fails with an exception, it's redelivered by the messaging broker which can end up in an endless loop. To avoid this, you can register an error handler on the corresponding messaging service. The error handler is called when an exception is thrown during message processing and it allows you to explicitly control whether the message should be acknowledged or not. 
The following example demonstrates how the error handler can be used to catch messaging errors. Based on the error code you can differentiate between CAP infrastructure errors and application errors, inspect them, and decide whether the message should be acknowledged by the broker or not.

```java
@On(service = "messaging-name")
private void handleError(MessagingErrorEventContext ctx) {
  String errorCode = ctx.getException().getErrorStatus().getCodeString();
  if (errorCode.equals(CdsErrorStatuses.NO_ON_HANDLER.getCodeString()) ||
      errorCode.equals(CdsErrorStatuses.INVALID_DATA_FORMAT.getCodeString())) {
      // error handling for infrastructure errors
      ctx.setResult(false); // no acknowledgement
  } else {
      // error handling for application errors

      // how to access the event context of the raised exception:
      // ctx.getException().getEventContexts().stream().findFirst().ifPresent(e -> {
      //   String event = e.getEvent();
      //   Object payload = e.get("data");
      // });

      ctx.setResult(true); // acknowledge
  }
}
```

In a multi-tenant setup with several microservices, messages of a tenant that isn't yet subscribed to your own microservice might already be received from the message queue. In this case, the message can't be processed for that tenant because the tenant context isn't available yet. By default, the standard error handler still acknowledges the message to prevent it from getting stuck in the message sequence. To change this behavior, the custom error handler from the example above can be extended by checking for the unknown-tenant exception type.
```java
@On(service = "messaging")
private void handleError(MessagingErrorEventContext ctx) {
  String errorCode = ctx.getException().getErrorStatus().getCodeString();
  if (errorCode.equals(CdsErrorStatuses.NO_ON_HANDLER.getCodeString()) ||
      errorCode.equals(CdsErrorStatuses.INVALID_DATA_FORMAT.getCodeString())) {
      // error handling for infrastructure errors
      ctx.setResult(false); // no acknowledgement
  } else if (errorCode.equals(CdsErrorStatuses.TENANT_NOT_EXISTS.getCodeString())) {
      // error handling for unknown tenant context

      // tenant of the received message
      String tenant = ctx.getTenant();

      // received message
      Map<String, Object> headers = ctx.getMessageHeaders();
      Map<String, Object> message = ctx.getMessageData();

      ctx.setResult(true); // acknowledge
  } else {
      // error handling for application errors

      // how to access the event context of the raised exception:
      // ctx.getException().getEventContexts().stream().findFirst().ifPresent(e -> {
      //   String event = e.getEvent();
      //   Object payload = e.get("data");
      // });

      ctx.setResult(true); // acknowledge
  }
}
```

::: warning _❗ Warning_
How unsuccessfully delivered messages are treated depends entirely on the messaging broker. Check in section [Acknowledgement Support](#acknowledgement-support) whether the messaging broker you are using is suitable for your error handler implementation.
:::

#### Acknowledgement Support

Not all messaging brokers provide acknowledgement support. For brokers without it, the result of the error handler has no effect.
| Messaging Broker | Support | Cause | | ------------------------------------------------------ | :-----: | :--------------------: | | [File Base Messaging](#local-testing) | | | | [Event Mesh](#configuring-sap-event-mesh-support) | | removed from the queue | | [Message Queuing](#configuring-sap-event-mesh-support) | | removed from the queue | | [Redis PubSub](#configuring-redis-pubsub-support-beta) | | | ::: tip If a broker supports the message acknowledgement and a message is not acknowledged by the application, it will be redelivered. ::: ### Sending and Receiving in the Same Instance Consider the following scenario: You send messages with a certain topic in your application. Now, you are also registering an `@On` handler for the same event or message topic in the same application. In such a situation the message will be sent to the broker, and once it is received back from the message broker, your registered handler will be called. If you want to consume the message purely locally, and prevent the message from being sent out to the message broker, you need to register your handler with a higher priority than the default handler that sends out the message, and set the message's context to completed, so it won't be processed further (and thus sent out by the default handler). You can register your handler with a higher priority than the default handler like this: ```java @On(service = "messaging-name", event = "My/Destination/Messaging/Topic") @HandlerOrder(HandlerOrder.EARLY) public void receiveMyTopic(TopicMessageEventContext context) { // Check if message is outgoing to message broker, or incoming from message broker if (!context.getIsInbound()) { // Process message locally … // Prevent further processing of message (i.e. prevent sending to broker) context.setCompleted(); } } ``` In the handler, we now receive the message regardless if it is incoming from a real broker, or if it is outgoing because it was emitted in the local application. 
We can now check if the message is incoming or outgoing, and set the context to completed, if we want to stop further processing (that means "sending") of the message. ### Topic Prefixing Before a CAP messaging service finally submits the topic of an event message to the message broker, it provides the configuration option to prefix the topic with an arbitrary string. Example: ::: code-group ```yaml [srv/src/main/resources/application.yaml] ... cds: messaging.services: messaging-em: kind: "enterprise-messaging" publishPrefix: '$namespace/' subscribePrefix: '$namespace/' ... ``` ::: `publishPrefix` will prefix the topic when sending messages, while `subscribePrefix` will prefix the topic when subscriptions to a topic are made. If a service is only sending events, defining `publishPrefix` would be sufficient. If a service is only receiving events, defining `subscribePrefix` would be sufficient. When using SAP Event Mesh, the placeholder `$namespace` can be used to dynamically use the namespace of the bound SAP Event Mesh instance. In the case of subscribing to messages `+/+/+/` can be used as a prefix to subscribe to all SAP Event Mesh namespaces (needs to be allowed in the defined "topic rules" of the used SAP Event Mesh instance). Regardless of the used messaging service, a fixed string, like `default/my.app/1/` can be used for prefixing. Besides these kinds of topic manipulations, additional topic manipulations might occur, depending on the used message broker or the chosen format of the event message. ### Enhanced Messages Representation The configuration property `structured` determines if messages are represented as a plain String (`false`) or always structured as two separate maps, representing data and headers (`true`). Setting this property enables handling of message headers, like `cloudevents` headers, separately from the message itself. This works for all messaging brokers supported by CAP. 
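For intuition, the structured representation keeps the two maps apart until a single JSON message is assembled from them, with the headers at the top level and the business data nested under a `data` attribute. The following is a minimal, hypothetical sketch of that assembly, assuming plain string values only; it is not CAP's actual serialization, and all names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: merge a headers map and a data map into one JSON
// object, headers first, business data nested under "data".
public class MessageCombineSketch {

    public static String combine(Map<String, String> data, Map<String, String> headers) {
        Map<String, Object> message = new LinkedHashMap<>(headers);
        message.put("data", data);
        return toJson(message);
    }

    // minimal JSON writer for string values and nested maps (sketch only)
    @SuppressWarnings("unchecked")
    private static String toJson(Map<String, ?> map) {
        return map.entrySet().stream()
            .map(e -> "\"" + e.getKey() + "\":" + (e.getValue() instanceof Map
                ? toJson((Map<String, ?>) e.getValue())
                : "\"" + e.getValue() + "\""))
            .collect(Collectors.joining(",", "{", "}"));
    }
}
```

In a real application the headers would typically carry metadata such as `cloudevents` attributes, cleanly separated from the business payload.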
If you use a message broker that supports native headers, for example Kafka, the headers are separated from the business data. For incoming messages, the flag determines the internal representation of the message: either a plain string, or two maps holding message data and message headers. Keeping the header data separate avoids adding extra information or metadata to the business data when sending messages to the message broker. Additionally, the header data is clearly separated on the consumer side, because it is provided via distinct data and headers maps. The default value of the configuration property `structured` is `true`.

Configuration example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  messaging.services:
    - name: "messaging-name"
      kind: "enterprise-messaging"
      structured: true
```
:::

#### Emitting Events

The interface `MessagingService` provides a new method for emitting events with a data map and a headers map:

```java
void emit(String topic, Map<String, Object> data, Map<String, Object> headers);
```

This method takes a (`cloudevents`) message, separated into data and headers, and sends it to the specified topic of this message broker. It produces the same final message, regardless of the structured flag. Usually, data and headers are combined into a final JSON string message following the rule: `{...headers, data: data}`. Brokers that natively support headers, for example Kafka, are able to separate headers from data when using this method.

```java
String topic;
MessagingService messagingService;

messagingService.emit(topic,
    Map.of("firstname", "John", "lastname", "Doe"),
    Map.of("timestamp", Instant.now()));
```

The semantics of the method `MessagingService.emit(String topic, String message)` depend on the structured flag: If the service is not configured with the structured flag, the message is sent to the specified topic of this message broker as is.
If the service is configured with the structured flag, the message string is converted into a map following the rule: `{message: message}`. The map is then interpreted as the data map and passed to `MessagingService.emit(String topic, Map<String, Object> dataMap)`. Usually this results in a final message like: `{data: {message: message}}`.

Example:

```java
String topic;
MessagingService messagingService;

messagingService.emit(topic, "hello world");
```

If the service is not configured with the structured flag, the message is sent as is, and on the consumer side `TopicMessageEventContext.getData()` returns:

```
hello world
```

If the service is configured with the structured flag, the message is converted to a map, and on the consumer side `TopicMessageEventContext.getData()` returns:

```json
{"data": {"message": "hello world"}}
```

#### Handling Events

The structured flag of the consumer determines how the event payload is provided by the event context. If set to `false`, the event payload can be accessed as a string using `getData()`; `getDataMap()` and `getHeadersMap()` return `null`:

```java
@On(event = "myEvent")
public void handleMyEvent(EventContext context) {
  TopicMessageEventContext ctx = context.as(TopicMessageEventContext.class);
  String data = ctx.getData();
  // ...
}
```

If set to `true`, the event payload can be accessed using `getDataMap()` and `getHeadersMap()`; `getData()` returns `null`:

```java
@On(event = "myEvent")
public void handleMyEvent(EventContext context) {
  TopicMessageEventContext ctx = context.as(TopicMessageEventContext.class);
  Map<String, Object> data = ctx.getDataMap();
  Map<String, Object> headers = ctx.getHeadersMap();
  // ...
}
```

::: tip
Handling of CDS-defined events is independent of the value of the `structured` property.
:::

### CloudEvents

CAP is able to produce event messages compatible with the [CloudEvents](https://cloudevents.io/) standard in JSON format.
To enable this feature, set the configuration parameter `format` of the messaging service to `cloudevents`, for example, like: Excerpt from _application.yaml_: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: messaging: services: - name: "messaging-name" kind: [...] format: "cloudevents" ``` ::: With this setting, basic header fields (like `type`, `source`, `id`, `datacontenttype`, `specversion`, `time`) of the JSON-based message format will be populated with sensible data (if they have not been set manually before). The event name will be used as-is (without prefixing or any other modifications) as `type` and set in the according CloudEvents header field. When using CloudEvents format with SAP Event Mesh, the following default prefixing of topics will be applied (if not manually declared differently): Default for `publishPrefix` is set to `$namespace/ce/` and default for `subscribePrefix` is set to `+/+/+/ce/`. Make sure that these prefixes are allowed topic prefixes in your SAP Event Mesh service configuration (especially its topic rules section). When using a CAP service that has [events declared in its CDS model](#cds-declared-events), then the event's payload structure will automatically be embedded in the `data` attribute of a valid JSON-based CloudEvents message. ::: tip Headers of the CloudEvents message can be accessed using the EventContext generated for this event by using its generic `get(String)` API. 
:::

The following example shows how to access headers of a CloudEvents message:

```java
@On(service = ReviewService_.CDS_NAME)
private void ratingChanged(ReviewedContext context) {
  // Access the CloudEvents header named "type"
  String eventType = (String) context.get("type");

  // Or access in a type-safe way (only for common CloudEvents headers)
  eventType = context.as(CloudEventMessageEventContext.class).getType();
}
```

When using a CAP messaging service directly to emit the raw message payload as a String, make sure to emit a valid JSON object representation in this String that has the message payload embedded in the `data` attribute (for example `messagingService.emit("sap.cap.reviews.v1.ReviewService.changed.v1", "{\"data\":{\"subject\":\"4711\",\"rating\":3.6}}");`). Then all missing header fields that are required for a valid CloudEvents message are added. If you emit a String that isn't valid JSON, the message can't be extended, and the String is emitted as-is as the event message content.

[Learn more about **CloudEvents**.](../guides/messaging/#cloudevents){.learn-more}

# Audit Logging

Find here information about the AuditLog service in CAP Java.

## AuditLog Service

### Overview

As of CAP Java 1.18.0, an AuditLog service is provided for CAP Java applications. The AuditLog service can be used to emit AuditLog related events to registered handlers. The following events can be emitted with the [AuditLogService](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/AuditLogService.html) to the registered handlers:

- [Personal data accesses](#data-access)
- [Personal data modifications](#data-modification)
- [Configuration changes](#config-change)
- [Security events](#security-event)

AuditLog events typically are bound to business transactions.
In order to handle the events transactionally and also to decouple the request from outbound calls to a consumer, for example a central audit log service, the AuditLog service internally leverages the [outbox](./outbox) service, which allows [deferred](#deferred) sending of events.

### Use AuditLogService

#### Get AuditLogService Instance

The `AuditLogService` can be injected into a custom handler class if the CAP Java project uses Spring Boot:

```java
import com.sap.cds.services.auditlog.AuditLogService;

@Autowired
private AuditLogService auditLogService;
```

Alternatively, the AuditLog service can be retrieved from the `ServiceCatalog`:

```java
ServiceCatalog catalog = context.getServiceCatalog();
auditLogService = (AuditLogService) catalog.getService(AuditLogService.DEFAULT_NAME);
```

[See section **Using Services** for more details about retrieving services.](./services#using-services){.learn-more}

#### Emit Personal Data Access Event { #data-access}

To emit a personal data access event, use method [logDataAccess](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/AuditLogService.html#logDataAccess-java.util.List-java.util.List-) of the `AuditLogService`.

```java
List<Access> accesses = new ArrayList<>();
Access access = Access.create();
// fill access object with data
accesses.add(access);

auditLogService.logDataAccess(accesses);
```

#### Emit Personal Data Modification Event { #data-modification}

To emit a personal data modification event, use method [logDataModification](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/AuditLogService.html#logDataModification-java.util.List-) of the `AuditLogService`.
```java
List<DataModification> dataModifications = new ArrayList<>();
DataModification modification = DataModification.create();
// fill data modification object with data
dataModifications.add(modification);

auditLogService.logDataModification(dataModifications);
```

#### Emit Configuration Change Event { #config-change}

To emit a configuration change event, use method [logConfigChange](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/AuditLogService.html#logConfigChange-java.lang.String-java.util.List-) of the `AuditLogService`.

```java
List<ConfigChange> configChanges = new ArrayList<>();
ConfigChange configChange = ConfigChange.create();
// fill config change object with data
configChanges.add(configChange);

auditLogService.logConfigChange(Action.UPDATE, configChanges);
```

#### Emit Security Event { #security-event}

Use method [logSecurityEvent](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/AuditLogService.html#logSecurityEvent-java.lang.String-java.lang.String-) of the `AuditLogService` to emit a security event.

```java
String action = "login";
String data = "user-name";

auditLogService.logSecurityEvent(action, data);
```

### Deferred AuditLog Events { #deferred}

Instead of processing the audit log events synchronously in the [audit log handler](#auditlog-handlers), the `AuditLogService` can store the event in the [outbox](./outbox). This is done in the *same* transaction as the business request. Hence, a cancelled business transaction won't send any audit log events that are bound to it. To gain fine-grained control, for example to isolate a specific event from the current transaction, you may refine the transaction scope. See [ChangeSetContext API](./event-handlers/changeset-contexts#defining-changeset-contexts) for more information.
As the stored events are processed asynchronously, the business request is also decoupled from the audit log handler, which typically sends the events synchronously to a central audit log service. This improves resilience and performance. By default, the outbox comes in an [in-memory](./outbox#in-memory) flavour, which has the drawback that it can't guarantee that all events are processed after the transaction has been successfully closed. To close this gap, a sophisticated [persistent outbox](./outbox#persistent) service can be configured.

By default, not all events are sent asynchronously via the (persistent) outbox:

* [Security events](#security-event) are always sent synchronously.
* All other events are stored in the persistent outbox, if available. The in-memory outbox acts as a fallback otherwise.

::: warning _❗ Compliance & Data Privacy_
* It is up to the application developer to make sure that audit log events stored in the persistent outbox don't violate given **compliance rules**. For instance, it might be appropriate not to persist audit log events triggered by users who have operator privileges. Such logs could be modified on DB level by the same user afterward.
* For technical reasons, the AuditLog service temporarily stores audit log events enhanced with personal data such as the request's _user_ and _tenant_. In case of persistent outbox, this needs to be handled individually by the application to comply with **data privacy rules**.
:::

## AuditLog Handlers { #auditlog-handlers}

### Default Handler

By default, the CAP Java SDK provides an AuditLog handler that writes the AuditLog messages to the application log. This default handler is registered on all AuditLog events and writes `DEBUG` log entries. However, the application log does not log `DEBUG` entries by default.
To enable audit logging to the application log, the log level of the default handler needs to be set to `DEBUG`:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
logging:
  level:
    com.sap.cds.auditlog: DEBUG
```
:::

### AuditLog v2 Handler { #handler-v2}

Additionally, the CAP Java SDK provides an _AuditLog v2_ handler that writes the audit messages to the SAP Audit Log service via its API version 2. To enable this handler, an additional feature dependency must be added to the `srv/pom.xml` of the CAP Java project:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-auditlog-v2</artifactId>
  <scope>runtime</scope>
</dependency>
```

Also, a service binding to the AuditLog v2 service has to be added to the CAP Java application; then this handler is activated. The AuditLog v2 handler supports the `premium` plan of the AuditLog service as described [here](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-write-api-for-customers?#prerequisites-for-using-the-audit-log-write-api-for-customers).
If it's required to disable the AuditLog v2 handler for some reason, this can be achieved by setting the CDS property [`cds.auditLog.v2.enabled`](../java/developing-applications/properties#cds-auditLog-v2-enabled) to `false` in _application.yaml_: ::: code-group ```yaml [srv/src/main/resources/application.yaml] cds: auditlog.v2.enabled: false ``` ::: The default value of this parameter is `true` and the AuditLog v2 handler is automatically enabled, if all other requirements are fulfilled.
### Custom AuditLog Handler

CAP Java applications can also provide their own AuditLog handlers to implement custom processing of AuditLog events. The custom handler class has to implement the interface `EventHandler` and needs to be annotated with `@ServiceName(value = "*", type = AuditLogService.class)`. If the CAP Java project uses Spring Boot, the class can be annotated with `@Component` to register the handler at the CDS runtime.

For each of the four supported AuditLog events, a handler method can be registered. Depending on the event type, the method signature has to support the corresponding argument type:

| Event Type | Argument Type |
| --- | --- |
| [Personal Data Access](#data-access) | [com.sap.cds.services.auditlog.DataAccessLogContext](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/DataAccessLogContext.html) |
| [Personal Data Modification](#data-modification) | [com.sap.cds.services.auditlog.DataModificationLogContext](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/DataModificationLogContext.html) |
| [Configuration Change](#config-change) | [com.sap.cds.services.auditlog.ConfigChangeLogContext](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/ConfigChangeLogContext.html) |
| [Security Event](#security-event) | [com.sap.cds.services.auditlog.SecurityLogContext](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/auditlog/SecurityLogContext.html) |

The handler method needs to be annotated with one of the annotations `@Before`, `@On`, or `@After` to indicate in which phase of the event processing it gets called.
The following example defines an AuditLog event handler class with methods for all event types:

```java
import com.sap.cds.services.auditlog.*;
import com.sap.cds.services.handler.*;
import com.sap.cds.services.handler.annotations.*;
import org.springframework.stereotype.*;

@Component
@ServiceName(value = "*", type = AuditLogService.class)
class CustomAuditLogHandler implements EventHandler {

  @On
  public void handleDataAccessEvent(DataAccessLogContext context) {
    // custom handler code
  }

  @On
  public void handleDataModificationEvent(DataModificationLogContext context) {
    // custom handler code
  }

  @On
  public void handleConfigChangeEvent(ConfigChangeLogContext context) {
    // custom handler code
  }

  @On
  public void handleSecurityEvent(SecurityLogContext context) {
    // custom handler code
  }
}
```

[Learn more about implementing an event handler in **Event Handler Classes**.](./event-handlers/#handlerclasses){.learn-more}

# Change Tracking

The feature tracks the changes of all modifying operations executed via CQN statements, whether triggered indirectly by the protocol adapters or directly by custom code. Changes made through native SQL, JDBC, or other means that bypass the CAP Java runtime, or that are forwarded to remote services, aren't tracked.

## Enabling Change Tracking

To use the change tracking feature, you need to add a dependency to [cds-feature-change-tracking](https://central.sonatype.com/artifact/com.sap.cds/cds-feature-change-tracking) in the `srv/pom.xml` file of your service:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-change-tracking</artifactId>
  <scope>runtime</scope>
</dependency>
```

- Your POM must also include the goal to resolve the CDS model delivered with the feature. See [Reference the New CDS Model in an Existing CAP Java Project](/java/building-plugins#reference-the-new-cds-model-in-an-existing-cap-java-project).
- If you use SAP Fiori elements as your UI framework and intend to use the built-in UI, update your SAP UI5 version to 1.121.2 or higher.
### Annotating Entities To capture changes for an entity, you need to extend it with a technical aspect and annotate it with the annotation `@changelog` that declares the elements whose changes are to be logged. Given the following entity that represents a book on the domain level: ```cds namespace model; entity Books { key ID: UUID; title: String; stock: Integer; } ``` And the corresponding service definition with the projection of the entity: ```cds namespace srv; using {model} from '../db/schema'; // Our domain model service Bookshop { entity Books as projection on model.Books; } ``` Include the change log model that is provided by this feature: ```cds using {sap.changelog as changelog} from 'com.sap.cds/change-tracking'; ``` Extend **the domain entity** with the aspect `changelog.changeTracked` like this: ```cds extend model.Books with changelog.changeTracked; ``` This aspect adds the association `changes` that lets you consume the change log both programmatically via CQN statements and in the UI. This implies that every projection of the entity `Books` has this association and the changes will be visible in all of them. Annotate elements of the entity that you want to track with the `@changelog` annotation: ```cds annotate Bookshop.Books { title @changelog; stock @changelog; }; ``` Your complete service definition should look like this: ```cds namespace srv; using {sap.changelog as changelog} from 'com.sap.cds/change-tracking'; using {model} from '../db/schema'; // The domain entity extended with change tracking aspect. extend model.Books with changelog.changeTracked; service Bookshop { entity Books as projection on model.Books; } // Projection is annotated to indicate which elements are change tracked. 
annotate Bookshop.Books {
  title @changelog;
  stock @changelog;
};
```

:::warning Personal data is ignored
Elements with [personal data](../guides/data-privacy/annotations#personaldata), that is, elements that are annotated with `@PersonalData` and hence subject to audit logging, are ignored by change tracking.
:::

The level at which you annotate your elements with the `@changelog` annotation is very important. If you annotate the elements on the _domain_ level, every change made through every projection of the entity is tracked. If you annotate the elements on the _service_ level, only the changes made through that projection are tracked. In the books example above, changes made through the service entity `Bookshop.Books` are tracked, but changes made directly on the domain entity are omitted. That can be beneficial if you have a service that is used for data replication or mass changes, where change tracking can be a very expensive operation and you don't want to generate changes from such operations.

Change tracking also works with entities that have compositions and tracks the changes made to the items of the compositions. For example, if you have an entity that represents an order with a composition that represents the items of the order, you can annotate the elements of both and track the changes made through the order and the items in a deep update.

```cds
entity OrderItems {
  key ID: UUID;
  [...]
  quantity: Integer @changelog;
}

entity Orders {
  key ID: UUID;
  customerName: String @changelog;
  [...]
  items: Composition of many OrderItems;
}
```

### Identifiers for Changes

You can store some elements of the entity together with the changes in the change log to produce a user-friendly identifier.
You define this identifier by annotating the entity with the `@changelog` annotation and including the elements that you want to store together with the changed value:

```cds
annotate Bookshop.Books with @changelog: [ title ];
```

This identifier can contain elements of the entity or values of to-one associations that are reachable via a path. For example, for a book you can store the author's name if you have an association from the book to the author.

When you define the identifier for an entity, keep in mind that the projections of the annotated entity inherit the `@changelog` annotation. If you change the structure of the projection, for example, exclude or rename the elements that are used in the identifier, you must annotate the projection again to provide updated element names in the identifier. The best candidates for identifier elements are elements that are insert-only or that don't change often.

:::warning Stored as-is
The values of the identifier are stored together with the change log as-is. They are not translated, and some data types might not be formatted according to the user locale or other requirements, for example, different units of measurement or currencies. Consider this when you decide what to include in the identifier.
:::

### Identifiers for Associated Entities

When your entity has an association to another entity, you might want to log the changes in their relationship. Given the `Orders` entity with an association to a `Customer` instead of the element with the customer name:

```cds
entity Orders {
  key ID: UUID;
  customer: Association to Customer;
  [...]
}
```

If you annotate such an association with `@changelog`, by default, the change log stores the value of the associated entity's key. If you want, you can store a human-readable identifier instead.
You define this by annotating the association with its own identifier:

```cds
annotate Orders {
  customer @changelog: [ customer.name ]
}
```

Elements in the `@changelog` annotation value must always be prefixed with the association name. The same caveats as for the entity identifiers apply here. If you annotate a composition with an identifier, the change log contains an entry with the identifier's value. Additionally, it includes change log entries for all annotated elements of the composition's target entity.

:::warning Validation required
If the target of the association is missing, for example, when an entity is updated with the ID of a customer that does not exist, the change log entry is not created. You need to validate such cases in custom code or use annotations, for example, [`@assert.target`](/guides/providing-services#assert-target).
:::

This feature can also be used for to-many compositions when you don't need to track the deep changes but still want to track additions and removals in the composition.

With association identifiers, you must also consider changes to your entity structure along the projections. If your target entity is exposed through different projections with removed or renamed elements, you need to adjust the identifier in the source entity accordingly.

### Displaying Changes

The changes of the entity are exposed as the association `changes` that you can use to display the change log in the UI. By default, the entity `Changes` is auto-exposed, but it isn't writable via OData requests.

If you want to display the change log together with the overview of your entity, you need to add a facet to the object page that displays the changes:

```cds
annotate Bookshop.Books with @(
  UI : {
    ...
    Facets : [
      ...
      {
        $Type : 'UI.ReferenceFacet',
        ID : 'ChangeHistoryFacet',
        Label : '{i18n>ChangeHistory}',
        Target : 'changes/@UI.PresentationVariant',
        ![@UI.PartOfPreview]: false
      }
      ...
    ]
    ...
  }
  ...
);
```

If you want to have a common UI for all changes, you need to expose the change log as a projection and define your own presentation for it, as the changes are only exposed as part of the change-tracked entity. This projection must be read-only and shouldn't be writable via OData requests.

The change log is enriched with the texts for your entities and elements taken from the `@title` annotation; otherwise, the change log contains only the technical names of the entities and elements. Titles are translated if they're annotated as translatable. See [Externalizing Texts Bundles](../guides/i18n#localization-i18n) for more information.

## How Changes are Stored

The namespace `sap.changelog` defines an entity `Changes` that reflects each change, so the changes for all entities are stored together in one flat table. Each entry in the `Changes` entity contains the following information:

- A marker that represents the nature of the change: addition, modification, or deletion.
- The qualified name of the entity that was changed and the qualified name of the root entity. They depend on the projection that was used to change the entity and reflect the root and the target of the modifying operation. For flat entities, they're the same.
- The attribute of the target projection that was changed.
- The new and old values as strings.
- The user who made the change and the timestamp of the change.
- The data type of the changed attribute.
- The technical path from the root entity to the tracked target entity.

## Detection of Changes

Change tracking intercepts the modifying CQL statements (`Insert`, `Upsert`, `Update`, and `Delete`) and requires additional READ events to retrieve the old and the new image of the entity. These two images are compared, and the differences are stored in the change log.
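Conceptually, this comparison is a plain diff over the tracked elements of the two images. The following self-contained sketch illustrates the idea in plain Java; the class and method names are invented for illustration, and this is not the actual CAP implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative sketch of diffing an old and a new entity image
// (invented names, NOT the actual CAP implementation).
public class ImageDiff {

    public enum ChangeType { ADDITION, MODIFICATION, DELETION }

    public static Map<String, ChangeType> diff(Map<String, Object> oldImage,
                                               Map<String, Object> newImage) {
        Map<String, ChangeType> changes = new HashMap<>();
        for (Map.Entry<String, Object> e : oldImage.entrySet()) {
            if (!newImage.containsKey(e.getKey())) {
                changes.put(e.getKey(), ChangeType.DELETION); // only in old image
            } else if (!Objects.equals(e.getValue(), newImage.get(e.getKey()))) {
                changes.put(e.getKey(), ChangeType.MODIFICATION); // value differs
            }
        }
        for (String key : newImage.keySet()) {
            if (!oldImage.containsKey(key)) {
                changes.put(key, ChangeType.ADDITION); // only in new image
            }
        }
        return changes;
    }
}
```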
The nature of the change is determined by comparing the old and new values of the entity: data that isn't present in the old values is considered added, whereas data that isn't present in the new values is considered deleted. Elements that are present in both the old and new values but with different values are considered modified. Each change detected by the change tracking feature is stored in the change log as a separate entry.

In the case of deeply structured documents, for example, entities with compositions, the change tracking feature detects the changes across the complete document and stores them in the change log with metadata reflecting the structure of the change. For example, given the order and item model from above, if you change values of the tracked elements with a deep update, for example, the customer name in the order and the quantity of an item, the change log contains two entries: one for the order and one for the item. The change log entry for the item also reflects that the root of the change is an order.

:::warning Prefer deep updates for change tracked entities
If you change the values of the `OrderItems` entity directly via an OData request or a CQL statement, the change log contains only one entry for the item, and it won't be associated with an order.
:::

## Reacting on Changes

You can write an event handler to observe the change log entries. Keep in mind that the change log entries are created for each statement, and this event isn't bound to any kind of transaction or batch operation.
```java
import org.springframework.stereotype.Component;

import com.sap.cds.Result;
import com.sap.cds.services.EventContext;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.After;
import com.sap.cds.services.handler.annotations.ServiceName;

import cds.gen.sap.changelog.Changes;

@Component
@ServiceName("ChangeTrackingService$Default")
public class ChangeTrackingHandler implements EventHandler {

    @After(event = "createChanges")
    void afterCreate(EventContext context) {
        Result result = (Result) context.get("result");
        result.listOf(Changes.class).forEach(c -> {
            // Do something with the change log entry
        });
    }
}
```

You can query the change log entries via CQN statements, as usual.

## Things to Consider when Using Change Tracking

- Consider the storage costs of the change log. The change log can grow very fast and can consume a lot of space in case of frequent changes. Consider a retention policy for the change log, as it isn't deleted when you delete the tracked entities.
- Consider the performance impact. Change tracking needs to execute additional reads during updates to retrieve and compare the updated values. This can slow down update operations and can be very expensive for updates that affect many entities.
- Consider the ways your entities are changed. You might want to track changes only on the service projections that are used for user interaction and not on the domain level (for instance, during data replication).
- If you want to expose the complete change log to the user, consider the security implications. If your entities have complex access rules, consider how to extend these rules to the change log.

# Transactional Outbox

Find here information about the Outbox service in CAP Java.

## Concepts

Usually, the emitting of messages should be delayed until the main transaction has succeeded; otherwise, recipients also receive messages in case of a rollback. To solve this problem, a transactional outbox can be used to defer the emit of messages until the success of the current transaction.
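The deferred-emit idea can be pictured with a few lines of plain Java. This is an illustrative sketch only, with invented names and no persistence; it is not the CAP API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of a transactional outbox (NOT the CAP API):
// messages "emitted" while a transaction is open are only buffered and are
// handed to the real emitter after a successful commit.
public class OutboxSketch {

    private final List<String> buffered = new ArrayList<>();
    private final Consumer<String> emitter; // the actual emit to recipients

    public OutboxSketch(Consumer<String> emitter) {
        this.emitter = emitter;
    }

    // Called by business logic inside the transaction: defer, don't send.
    public void emit(String message) {
        buffered.add(message);
    }

    // Transaction succeeded: now really emit the deferred messages.
    public void commit() {
        buffered.forEach(emitter);
        buffered.clear();
    }

    // Transaction failed: recipients never see the buffered messages.
    public void rollback() {
        buffered.clear();
    }
}
```

On rollback the buffered messages are simply dropped, so recipients never observe the effects of a rolled-back transaction. The in-memory and persistent variants described next differ mainly in where this buffer lives.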
## In-Memory Outbox (Default) { #in-memory}

The in-memory outbox is used per default, and the messages are emitted once the current transaction is successful. Until then, messages are kept in memory.

## Persistent Outbox { #persistent}

The persistent outbox requires a persistence layer to persist the messages before emitting them. Here, the to-be-emitted message is stored in a database table first. The same database transaction is used as for other operations, therefore transactional consistency is guaranteed.

Once the transaction succeeds, the messages are read from the database table and are emitted.

- If an emit was successful, the respective message is deleted from the database table.
- If an emit wasn't successful, there will be a retry after some (exponentially growing) waiting time. After a maximum number of attempts, the message is ignored for processing and remains in the database table. Even if the app crashes, the messages can be redelivered after successful application startup.

To enable persistence for the outbox, you need to add the service `outbox` of kind `persistent-outbox` to the `cds.requires` section in the _package.json_ or _.cdsrc.json_, which automatically enhances your CDS model to support the persistent outbox.

```jsonc
{
  // ...
  "cds": {
    "requires": {
      "outbox": {
        "kind": "persistent-outbox"
      }
    }
  }
}
```

::: warning _❗ Warning_
Be aware that you need to migrate the database schemas of all tenants after you've enhanced your model with an outbox version from `@sap/cds` version 6.0.0 or later.
:::

For a multitenancy scenario, make sure that the required configuration is also done in the MTX sidecar service. Make sure that the base model in all tenants is updated to activate the outbox.

::: info Option: Add outbox to your base model
Alternatively, you can add `using from '@sap/cds/srv/outbox';` to your base model. In this case, you need to update the tenant models after deployment but you don't need to update MTX Sidecar.
:::

If enabled, CAP Java provides two persistent outbox services by default:

- `DefaultOutboxOrdered` - is used by default by messaging services
- `DefaultOutboxUnordered` - is used by default by the AuditLog service

The default configuration for both outboxes can be overridden using the `cds.outbox.services` section, for example in the _application.yaml_:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  outbox:
    services:
      DefaultOutboxOrdered:
        maxAttempts: 10
        storeLastError: true
        # ordered: true
      DefaultOutboxUnordered:
        maxAttempts: 10
        storeLastError: true
        # ordered: false
```
:::

You have the following configuration options:

- `maxAttempts` (default `10`): The number of unsuccessful emits until the message is ignored. It still remains in the database table.
- `storeLastError` (default `true`): If this flag is enabled, the last error that occurred when trying to emit the message of an entry is stored. The error is stored in the element `lastError` of the entity `cds.outbox.Messages`.
- `ordered` (default `true`): If this flag is enabled, the outbox instance processes the entries in the order in which they have been submitted to it. Otherwise, the outbox may process entries randomly and in parallel, by leveraging outbox processors running in multiple application instances. This option can't be changed for the default persistent outboxes.
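The retry semantics behind `maxAttempts` can be pictured with a small self-contained sketch. The names are invented for illustration; the real implementation additionally persists entries and waits with exponentially growing delays between attempts:

```java
import java.util.function.Supplier;

// Sketch of the retry semantics behind maxAttempts (NOT the CAP implementation).
public class RetrySketch {

    // Tries to emit until success or until maxAttempts is exhausted.
    // Returns true if the emit succeeded (the entry would be deleted from the
    // table), false if the entry is given up and remains in cds.outbox.Messages.
    public static boolean process(Supplier<Boolean> emit, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (emit.get()) {
                return true;
            }
            // the real implementation waits here, with exponentially growing delays
        }
        return false;
    }
}
```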
### Configuring Custom Outboxes { #custom-outboxes}

Custom persistent outboxes can be configured using the `cds.outbox.services` section, for example in the _application.yaml_:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
cds:
  outbox:
    services:
      MyCustomOutbox:
        maxAttempts: 5
        storeLastError: false
      MyOtherCustomOutbox:
        maxAttempts: 10
        storeLastError: true
```
:::

Afterward, you can access the outbox instances from the service catalog:

```java
OutboxService myCustomOutbox = cdsRuntime.getServiceCatalog()
    .getService(OutboxService.class, "MyCustomOutbox");
OutboxService myOtherCustomOutbox = cdsRuntime.getServiceCatalog()
    .getService(OutboxService.class, "MyOtherCustomOutbox");
```

Alternatively, it's possible to inject them into a Spring component:

```java
@Component
public class MySpringComponent {

    private final OutboxService myCustomOutbox;

    public MySpringComponent(@Qualifier("MyCustomOutbox") OutboxService myCustomOutbox) {
        this.myCustomOutbox = myCustomOutbox;
    }
}
```

::: warning When removing a custom outbox ...
... it must be ensured that there are no unprocessed entries left. Removing a custom outbox from the `cds.outbox.services` section doesn't remove its entries from the `cds.outbox.Messages` table. The entries remain in the `cds.outbox.Messages` table and aren't processed anymore.
:::

### Outbox Event Versions

In scenarios with multiple deployment versions (blue/green), situations may arise in which the outbox collectors of the older deployment cannot process the events generated by a newer deployment. In this case, the events can get stuck in the outbox, with all the resulting problems. To avoid this problem, you can configure the outbox to use an event version that prevents the outbox collectors from processing newer events. For this purpose, you can set the parameter [cds.environment.deployment.version: 2](../java/developing-applications/properties#cds-environment-deployment-version).
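The version check itself boils down to a simple comparison between the version stored with an event and the deployment version of the processing collector; a plain-Java sketch with invented names (not CAP API):

```java
import java.util.List;

// Sketch of the blue/green version gate (invented names, NOT the CAP API):
// a collector running with deployment version V only processes outbox
// events that were written with an event version <= V.
public class VersionGate {

    public static boolean shouldProcess(int eventVersion, int deploymentVersion) {
        return eventVersion <= deploymentVersion;
    }

    // Events a collector with the given deployment version would pick up.
    public static List<Integer> processable(List<Integer> eventVersions, int deploymentVersion) {
        return eventVersions.stream()
                .filter(v -> v <= deploymentVersion)
                .toList();
    }
}
```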
::: warning Ascending Versions
The configured deployment versions must be in ascending order. The messages are only processed by the outbox collector if the event version is less than or equal to the deployment version.
:::

To make things easier, you can automate versioning by using the Maven app version. This requires you to increment the version for each new deployment. To do this, the Maven resource filtering configuration in the `srv/pom.xml` must be activated as follows, so that the app version placeholder `${project.version}` can be used in [cds.environment.deployment.version: ${project.version}](../java/developing-applications/properties#cds-environment-deployment-version).

::: code-group
```xml [srv/pom.xml]
...
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
...
```
:::

To be sure that the deployment version has been set correctly, you can find a log entry at startup that shows the configured version:

```bash
2024-12-19T11:21:33.253+01:00  INFO 3420 --- [main] cds.services.impl.utils.BuildInfo : application.deployment.version: 1.0.0-SNAPSHOT
```

And finally, if for some reason you don't want to use a version check for a particular outbox collector, you can switch it off via the outbox configuration [cds.outbox.services.MyCustomOutbox.checkVersion: false](../java/developing-applications/properties#cds-outbox-services--checkVersion).

## Outboxing CAP Service Events

Outbox services support outboxing of arbitrary CAP services. A typical use case is to outbox remote OData service calls, but calls to other CAP services can also be decoupled from the business logic flow.

The API `OutboxService.outboxed(Service)` is used to wrap services with outbox handling. Events triggered on the returned wrapper are stored in the outbox first and executed asynchronously. Relevant information from the `RequestContext` is stored with the event data; however, the user context is downgraded to a system user context.
The following example shows you how to outbox a service:

```java
OutboxService myCustomOutbox = ...;
CqnService remoteS4 = ...;

CqnService outboxedS4 = myCustomOutbox.outboxed(remoteS4);
```

If a method on the outboxed service has a return value, it always returns `null`, since it's executed asynchronously. Common examples are the `CqnService.run(...)` methods. To improve this, the API `OutboxService.outboxed(Service, Class)` can be used, which wraps a service with an API suited for asynchronous calls while outboxing it. This can be used together with the interface `AsyncCqnService` to outbox remote OData services:

```java
OutboxService myCustomOutbox = ...;
CqnService remoteS4 = ...;

AsyncCqnService outboxedS4 = myCustomOutbox.outboxed(remoteS4, AsyncCqnService.class);
```

The method `AsyncCqnService.of()` can be used alternatively to achieve the same for `CqnService` instances:

```java
OutboxService myCustomOutbox = ...;
CqnService remoteS4 = ...;

AsyncCqnService outboxedS4 = AsyncCqnService.of(remoteS4, myCustomOutbox);
```

::: tip Custom asynchronous-suited API
When defining your own custom asynchronous-suited API, the interface must provide the same method signatures as the interface of the outboxed service, except for the return types, which should be `void`.
:::

The outboxed service is thread-safe and can be cached. Any service that implements the `Service` interface can be outboxed. Each call to the outboxed service is executed asynchronously, if the API method internally calls the method `Service.emit(EventContext)`.

A service wrapped by an outbox can be unboxed by calling the API `OutboxService.unboxed(Service)`. Method calls to the unboxed service are executed synchronously without storing the event in an outbox.

::: warning Java Proxy
A service wrapped by an outbox is a [Java Proxy](https://docs.oracle.com/javase/8/docs/technotes/guides/reflection/proxy.html). Such a proxy only implements the _interfaces_ of the object it is wrapping.
This means an outboxed service proxy can't be cast to the class implementing the underlying service object.
:::

::: tip Custom outbox for scaling
The default outbox services can be used for outboxing arbitrary CAP services. If you detect a scaling issue, you can define custom outboxes to be used for outboxing.
:::

## Technical Outbox API { #technical-outbox-api }

Outbox services provide the technical API `OutboxService.submit(String, OutboxMessage)` that can be used to outbox custom messages for an arbitrary event or processing logic. When submitting a custom message, an `OutboxMessage` that can optionally contain parameters for the event needs to be provided. As the `OutboxMessage` instance is serialized and stored in the database, all data provided in that message must be serializable to and deserializable from JSON.

The following example shows the submission of a custom message to an outbox:

```java
OutboxService outboxService = runtime.getServiceCatalog()
    .getService(OutboxService.class, "<outbox-name>");

OutboxMessage message = OutboxMessage.create();
message.setParams(Map.of("name", "John", "lastname", "Doe"));
outboxService.submit("myEvent", message);
```

A handler for the custom message must be registered on the outbox service. This handler performs the processing logic when the message is published by the outbox:

```java
@On(service = "<outbox-name>", event = "myEvent")
void processMyEvent(OutboxMessageEventContext context) {
    OutboxMessage message = context.getMessage();
    Map<String, Object> params = message.getParams();
    String name = (String) params.get("name");
    String lastname = (String) params.get("lastname");

    // Perform processing logic for myEvent

    context.setCompleted();
}
```

You must ensure that the handler completes the context after executing the processing logic.
[Learn more about event handlers.](./event-handlers/){.learn-more}

## Handling Outbox Errors { #handling-outbox-errors }

By default, the outbox retries publishing a message if an error occurs during processing, until the message has reached the maximum number of attempts. This behavior makes applications resilient against unavailability of external systems, which is a typical use case for outbox message processing.

However, there might also be situations in which it isn't reasonable to retry publishing a message, for example, when the processed message causes a semantic error - typically a 400 Bad Request - on the external system. Outbox messages causing such errors should be removed from the outbox message table before reaching the maximum number of retry attempts, and instead application-specific counter-measures should be taken to correct the semantic error or ignore the message altogether.

A simple try-catch block around the message processing can be used to handle errors:

- If an error should cause a retry, the original exception should be (re)thrown (default behavior).
- If an error should not cause a retry, the exception should be suppressed and additional steps can be performed.

```java
@On(service = "<outbox-name>", event = "myEvent")
void processMyEvent(OutboxMessageEventContext context) {
    try {
        // Perform processing logic for myEvent
    } catch (Exception e) {
        if (isUnrecoverableSemanticError(e)) {
            // Perform application-specific counter-measures
            context.setCompleted(); // indicate message deletion to outbox
        } else {
            throw e; // indicate error to outbox
        }
    }
}
```

In some situations, the original outbox processing logic is not implemented by you, but the processing needs to be extended with additional error handling.
In that case, wrap the `EventContext.proceed()` method, which executes the underlying processing logic:

```java
@On(service = OutboxService.PERSISTENT_ORDERED_NAME, event = AuditLogService.DEFAULT_NAME)
void handleAuditLogProcessingErrors(OutboxMessageEventContext context) {
    try {
        context.proceed(); // wrap default logic
    } catch (Exception e) {
        if (isUnrecoverableSemanticError(e)) {
            // Perform application-specific counter-measures
            context.setCompleted(); // indicate message deletion to outbox
        } else {
            throw e; // indicate error to outbox
        }
    }
}
```

[Learn more about `EventContext.proceed()`.](./event-handlers/#proceed-on){.learn-more}

## Troubleshooting

To manually delete entries in the `cds.outbox.Messages` table, you can either expose it in a service or programmatically modify it using the `cds.outbox.Messages` database entity.

::: tip Use paging logic
Avoid reading all entries of the `cds.outbox.Messages` table at once, as the size of an entry is unpredictable and depends on the size of the payload. Prefer paging logic instead.
:::

# Multitenancy { #multitenancy}

CAP applications can be run as software as a service (SaaS). That means multiple customers (subscriber tenants) can use the application at the same time in an isolated manner. Optionally, subscriber tenants may also extend the CDS models being served to them.

## Setup Overview

This chapter describes how CAP Java applications can deal with multiple business tenants. To add the multitenancy flavor seamlessly, the CAP Java backend can be enriched with CAP multitenancy services as described in detail in the general [Multitenancy](../guides/multitenancy/?impl-variant=java) guide. The overall setup is sketched in the following figure:

![This is a technical architecture modeling diagram, which shows all involved components and how they interact.
The involved components are: SaaS Provisioning Service, CAP Java backend, MTX sidecar, SAP BTP Service Manager, and a database.](./assets/architecture-mt.drawio.svg)

The **MTX sidecar** services provide basic functionality such as:

- Deploying or undeploying database containers during subscription or unsubscription of business tenants.
- Managing CDS model [extensions](../guides/extensibility/customization#extending-saas-applications) for tenants.
- Managing CDS models for [feature toggles](../guides/extensibility/feature-toggles#feature-toggles).

There are different web adapters available in the Java backend to integrate with **platform services for tenant lifecycle**, such as:

- SAP BTP [SaaS Provisioning service](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ed08c7dcb35d4082936c045e7d7b3ecd.html) for XSUAA tenants.
- SAP BTP Subscription Manager Service for IAS tenants.

This chapter describes the APIs available in the **Java backend**, most notably the technical [DeploymentService](#custom-logic), which can be used to add custom handlers that influence or react to subscription or unsubscription events.

## React on Tenant Events { #custom-logic }

CAP Java can automatically react to tenant lifecycle events sent by platform services such as the [SaaS Provisioning service](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ed08c7dcb35d4082936c045e7d7b3ecd.html). For these requests, CAP Java internally generates CAP events on the technical service [`DeploymentService`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/DeploymentService.html).

[For a general introduction to CAP events, see Event Handlers.](../java/event-handlers/){.learn-more}

Register event handlers for the following CAP events to add custom logic for requests sent by the platform service.
Each event passes a special type of `EventContext` object to the event handler method and provides event-specific information:

| Event Name | Event Context | Use Case |
| --- | --- | --- |
| `SUBSCRIBE` | [SubscribeEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/SubscribeEventContext.html) | Add a tenant |
| `UNSUBSCRIBE` | [UnsubscribeEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/UnsubscribeEventContext.html) | Remove a tenant |
| `DEPENDENCIES` | [DependenciesEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/DependenciesEventContext.html) | Dependencies |

You only need to register event handlers to override the default behavior. Default behaviors:

- A new tenant-specific database container is created through the [Service Manager](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/3a27b85a47fc4dff99184dd5bf181e14.html) during subscription.
- The tenant-specific database container _is deleted_ during unsubscription.

The following sections describe how to register to these events in more detail.

### Subscribe Tenant

Subscription events are generated when a new tenant is added. By default, subscription creates a new database container for a newly subscribed tenant. This happens during the `@On` phase of the `SUBSCRIBE` event. You can add additional `@On` handlers to perform additional subscription steps. Note that these `@On` handlers should not call `setCompleted()`, as the event processing is auto-completed.
The following examples show how to register custom handlers for the `SUBSCRIBE` event:

```java
@Before
public void beforeSubscription(SubscribeEventContext context) {
    // Activities before tenant database container is created
}

@After
public void afterSubscribe(SubscribeEventContext context) {
    // For example, send notification, ...
}
```

#### Defining a Database ID

When you've registered exactly one SAP HANA instance in your SAP BTP space, a new tenant-specific database container is created automatically. However, if you've registered more than one SAP HANA instance in your SAP BTP space, you have to pass the target database ID for the new database container in a custom handler, as illustrated in the following example:

```java
@Before
public void beforeSubscription(SubscribeEventContext context) {
    context.getOptions().put("provisioningParameters",
        Collections.singletonMap("database_id", "<database ID>"));
}
```

### Unsubscribe Tenant

By default, the tenant-specific database container _is deleted_ during offboarding. This happens during the `@On` phase of the `UNSUBSCRIBE` event. You can add additional `@On` handlers to perform additional unsubscription steps. Note that these `@On` handlers should not call `setCompleted()`, as the event processing is auto-completed.

The following example shows how to add custom logic for the `UNSUBSCRIBE` event:

```java
@Before
public void beforeUnsubscribe(UnsubscribeEventContext context) {
    // Activities before offboarding
}

@After
public void afterUnsubscribe(UnsubscribeEventContext context) {
    // Notify offboarding finished
}
```

::: warning
If you access the tenant database container during unsubscription, you need to wrap the access into a dedicated `ChangeSetContext` or transaction. This ensures that the transaction to the tenant database container is committed before the container is deleted.
:::

#### Skipping Deletion of Tenant Data

By default, tenant-specific resources (for example, database containers) are deleted during removal.
However, you can register a custom handler to change this behavior. This is required, for example, in case a tenant is subscribed to your application multiple times and only the last unsubscription should remove its resources.

```java
@Before
public void beforeUnsubscribe(UnsubscribeEventContext context) {
    if (keepResources(context.getTenant())) {
        context.setCompleted(); // avoid @On handler phase
    }
}
```

### Define Dependent Services

The event `DEPENDENCIES` fires when the platform service calls the [`getDependencies` callback](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ff540477f5404e3da2a8ce23dcee602a.html). Hence, if your application consumes any reuse services provided by SAP, you must implement the `DEPENDENCIES` event to return the service dependencies of the application. The event must return a list of all of the dependent services' `xsappname` values.

CAP automatically adds dependencies of services, for which it provides dedicated integrations, to the list. This includes AuditLog and Event Mesh.

::: tip
The `xsappname` of an SAP reuse service that is bound to your application can be found as part of the `VCAP_SERVICES` JSON structure under the path `VCAP_SERVICES.<service>.credentials.xsappname`.
:::

The following example shows this in more detail:

```java
@Value("${vcap.services.<service name>.credentials.xsappname}")
private String xsappname;

@On
public void onDependencies(DependenciesEventContext context) {
    List<Map<String, Object>> dependencies = new ArrayList<>();
    dependencies.add(SaasRegistryDependency.create(xsappname));
    context.setResult(dependencies);
}
```

### Database Schema Update { #database-update }

When shipping a new application version with an updated CDS model, the database schema for each subscribed tenant needs an update. The database schema update needs to be triggered explicitly.
When the database schema update is triggered, the following CAP event is sent:

| Event Name | Event Context |
| ---------------- | ---------------------------------------------------------------------------|
| `UPGRADE` | [UpgradeEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/UpgradeEventContext.html) |

By registering custom handlers for this event, you can add custom logic to influence the deployment and upgrade process of a tenant. By default, the CAP Java SDK notifies the _MTX Sidecar_ to perform any schema upgrade if necessary.

> It's often desired to update the whole service in a zero-downtime manner. This section doesn't deal with the details of updating a service productively, but describes the tool support the CAP Java SDK offers to update tenants.

The following sections describe how to trigger the update for tenants, including the database schema upgrade.

#### Deploy Main Method { #deploy-main-method }

The CAP Java SDK offers a `main` method in the class `com.sap.cds.framework.spring.utils.Deploy` that can be called from the command line while the CAP Java application is still stopped. This way, you can run the update for all tenants before you start a new version of the Java application. This prevents new application code from accessing database artifacts that aren't yet deployed.

In order to register all handlers of the application properly during the execution of a tenant operation `main` method, the component scan package must be configured. To set the component scan, the property `cds.multitenancy.component-scan` must be set to the package name of your application. The handler registration provides additional information that is used for the tenant upgrade, for example, messaging subscriptions that are created.

::: warning
While the CAP Java backend might be stopped when you call this method, the _MTX Sidecar_ application must be running!
:::
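The component scan property mentioned above is set like any other application configuration property. A minimal sketch in Spring Boot's `application.yaml`, assuming a hypothetical application package `com.mycompany.myapp`:

```yaml
cds:
  multitenancy:
    # hypothetical package name — replace with your application's base package
    component-scan: com.mycompany.myapp
```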
This synchronization can also be automated, for example using [Cloud Foundry Tasks](https://docs.cloudfoundry.org/devguide/using-tasks.html) on SAP BTP and [Module Hooks](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/b9245ba90aa14681a416065df8e8c593.html) in your MTA.

The `main` method takes an optional list of tenant IDs as input arguments. If tenant IDs are specified, only these tenants are updated. If no input parameters are specified, all tenants are updated. The method waits until all deployments are finished and then prints the result.

The method returns the following exit codes:

| Exit Code | Result |
| --------- | ----------------------------------------------------------------------------------------------------- |
| 0 | All tenants updated successfully. |
| 1 | Failed to update at least one tenant. Re-run the procedure to make sure that all tenants are updated. |

To run this method locally, use the following command, where `<jar file>` is the JAR file of your application:

::: code-group

```sh [>= Spring Boot 3.2.0]
java -cp <jar file> -Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.launch.PropertiesLauncher [<tenant 1>] ... [<tenant n>]
```

```sh [< Spring Boot 3.2.0]
java -cp <jar file> -Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.PropertiesLauncher [<tenant 1>] ... [<tenant n>]
```

:::

For local development, you can create a launch configuration in your IDE. For example, in VS Code it looks like this:

```json
{
  "type": "java",
  "name": "MTX Update tenants",
  "request": "launch",
  "mainClass": "com.sap.cds.framework.spring.utils.Deploy",
  "args": "", // optional: specify the tenants to upgrade, defaults to all
  "projectName": "",
  "vmArgs": "-Dspring.profiles.active=local-mtxs" // or any other profile required for MTX
}
```

In the SAP BTP, Cloud Foundry environment it can be tricky to construct such a command.
The reason is that the JAR file is extracted by the Java Buildpack and the location of the Java executable isn't easy to determine. Also, the location differs for different Java versions. Therefore, we recommend adapting the start command that is generated by the buildpack and running the adapted command:

::: code-group

```sh [>= Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.launch.JarLauncher/org.springframework.boot.loader.launch.PropertiesLauncher/g' /home/vcap/staging_info.yml && \
sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Deploy/g' /home/vcap/staging_info.yml && \
jq -r .start_command /home/vcap/staging_info.yml | bash
```

```sh [< Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.JarLauncher/org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && \
sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Deploy/g' /home/vcap/staging_info.yml && \
jq -r .start_command /home/vcap/staging_info.yml | bash
```

```sh [Java 8]
sed -i 's/org.springframework.boot.loader.JarLauncher/-Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && \
jq -r .start_command /home/vcap/staging_info.yml | bash
```

:::

To run the command manually or automated, you can for example use [Cloud Foundry Tasks](https://docs.cloudfoundry.org/devguide/using-tasks.html) on SAP BTP and [Module Hooks](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/b9245ba90aa14681a416065df8e8c593.html) in your MTA. To trigger it as part of a Cloud Foundry Task, log in to the Cloud Foundry landscape using the Cloud Foundry command line client and execute:

```sh
cf run-task <app-name> "<command>"
```

`<app-name>` needs to be replaced with the name of a Cloud Foundry application, typically the srv module of your CAP project.
You can find the name, for example, in the section `modules` of your `mta.yaml`. `<command>` represents the adapted start command. The output of the command is logged to the application logs of the application you have specified in `<app-name>`.

## Development Aspects

### Working with Tenants

You can override the tenant ID that is set in the current `RequestContext`. This enables accessing data of arbitrary tenants programmatically. This might be useful, for example:

- To access configuration data stored by means of the [provider tenant](#switching-provider-tenant) while processing the request of a business tenant.
- To access [subscriber tenant](#switching-subscriber-tenant) data in asynchronously scheduled jobs, where no tenant information is present in the `RequestContext`.

#### Switching to Provider Tenant { #switching-provider-tenant }

The `RequestContextRunner` API provides convenient methods to switch to the underlying provider tenant:

```java
runtime.requestContext().systemUserProvider().run(context -> {
  // call technical service ...
});
```

[Learn more about how to switch to a technical tenant.](../java/event-handlers/request-contexts#switching-to-provider-tenant){.learn-more}

#### Switching to Subscriber Tenants { #switching-subscriber-tenant }

You can set a particular tenant and access it by running your code in a nested `RequestContext`, as explained [here](../java/event-handlers/request-contexts#switching-to-a-specific-technical-tenant) and demonstrated by the following example:

```java
runtime.requestContext().systemUser(tenant).run(context -> {
  // call technical service ...
});
```

Note that switching the tenant in the context is quite an expensive operation, as CDS model data might need to be fetched from the MTX sidecar in case of tenant extensions. Hence, avoid setting the context for all subscribed tenants iteratively, as this might overload the sidecar and could also flood the local CDS model cache.
::: warning _❗ Warning_
If an application deviates from the default behavior and switches the tenant context internally, it needs to ensure data privacy and proper isolation!
:::

#### Enumerating Subscriber Tenants

Dealing with multiple business tenants is usually done on behalf of the provider tenant. You can use the [`TenantProviderService`](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/TenantProviderService.html) to get a list of available tenants:

```java
@Autowired
TenantProviderService tenantProvider;
...
List<TenantInfo> tenantInfo = tenantProvider.readTenants();
```

::: warning _❗ Warning_
Retrieving the tenants is an expensive operation. It might be a good idea to cache the results if appropriate.
:::

### DB Connection Pooling

::: tip
Pretty much everything in this section depends on your modeling, the load, and also on the sizing (JVM, HTTP server, DB, etc.). As there's no one-size-fits-all recommendation, the mentioned configuration parameters are a good starting point.
:::

Data source pool configuration is a tradeoff between resources and latency:

#### Pool per tenant - less latency, more resources

The dedicated-pools-per-tenant approach creates a dedicated connection pool for each tenant. In its default configuration, this strategy uses a static sizing approach: the number of configured connections (defaults to 10) is opened on startup of the application and kept open. This has the lowest possible latency and the highest resource consumption. The application needs a static number of connections per subscribed client.

In case you need low latency but somewhat less resource consumption, you can [configure dynamic pool sizing](#configure-data-pools) for your tenants' connection pools. Then the application needs at least the minimum number of connections per subscribed client. Depending on the concurrent load, the number can increase per client until the configured maximum number of connections is reached.
#### Pool per database - less resources, more latency { #combine-data-pools}

The combined pool approach uses only one pool for all tenants, holding a fixed number of connections. This approach, however, needs to switch connections to the correct schema and user of the tenant before handing out the connection. This causes some additional latency compared to the pools per tenant, where a connection, once opened for a given schema and user, can be reused until it's retired.

For the combined pool you can, as for the dedicated pools, decide between static sizing (the default) and [dynamic sizing](#configure-data-pools). With the latter, resource consumption can be reduced even further, while adding a bit more latency because new database connections might be opened upon incoming requests.

To activate the combined pool approach, set the property `cds.multiTenancy.datasource.combinePools.enabled = true`.

::: warning _❗ Warning_
Since the pool is shared among all tenants, one tenant could eat up all available connections, either intentionally or by accident. Applications using combined pools need to take adequate measures to mitigate this risk, for example by introducing rate limiting.
:::

#### Dynamic Data Source Pooling { #configure-data-pools}

If not configured differently, both the dedicated pool and the combined pool approach use static sizing strategies by default. The connections are kept open regardless of whether the application is currently serving requests for the given tenant. If you expect a low number of tenants, this shouldn't be an issue. With a large number of active tenants, this might lead to resource problems, for example, too many database connections or out-of-memory situations.
Once you have an increased number of tenants, or run short of connections on the database side, you need to adjust the [configuration of the CDS datasource](./cqn-services/persistence-services#datasource-configuration) for HikariCP as described in the following section. Three parameters are used for the configuration:

- `cds.dataSource.<service-instance>.hikari.minimum-idle`
- `cds.dataSource.<service-instance>.hikari.maximum-pool-size`
- `cds.dataSource.<service-instance>.hikari.idle-timeout`

Keep in mind that `<service-instance>` is the placeholder for the Service Manager instance bound to your CAP Java application.

|Parameter |Description |
|--------------------|-------------|
|`minimum-idle` | The minimum number of connections kept in the pool after being considered idle. This helps to adjust the usage of resources to the actual load of a given tenant at runtime. In order to save resources (Java heap and database connections), this value should be kept rather small (for example, `1`). |
|`maximum-pool-size` | The maximum number of connections in the pool. Here, the value needs to be balanced. Counter-intuitively, a bigger value doesn't necessarily lead to higher response time or throughput. Closely monitor your application under load in order to find a good value. As a starting point, you can just leave the default value `10`. |
|`idle-timeout` | The time span after which a connection is considered _idle_. It controls how fast the size of the pool is adjusted to the load of the application, until `minimum-idle` is reached. Keep in mind that opening a connection comes at a latency cost, too. Don't retire connections too soon. |

See section [Multitenancy Configuration Properties](#mtx-properties) for more details.

### Logging Support { #app-log-support}

Logging service support gives you the capability to observe properly correlated requests between the different components of your CAP application in Kibana. This is especially useful for multi-tenant aware applications that use the MTX sidecar.
Just enable either the [`application-logs`](../java/operating-applications/observability#logging-service) service or the [`cloud-logging`](../java/operating-applications/observability#open-telemetry) service for both the Java service and the MTX sidecar to get correlated log messages from these components. The logs can be inspected in the corresponding front ends, such as Kibana, Cloud Logging Service, or Dynatrace.

### Configuration Properties { #mtx-properties }

A number of multitenancy settings can be configured through application configuration properties. See section [Application Configuration](./developing-applications/configuring#profiles-and-properties) for more details. All properties can be found in the [properties overview](./developing-applications/properties#cds-multiTenancy). The prefix for multitenancy-related settings is `cds.multitenancy`.

# Multitenancy (Classic) { #multitenancy-classic}

CAP applications can be run as software as a service (SaaS). That means multiple customers (subscriber tenants) can use the application at the same time in an isolated manner. This section explains how to configure multitenancy for CAP Java.

::: warning
The multitenancy services (`@sap/cds-mtx`) described in this chapter are in maintenance mode and only supported until CAP Java 2.x. If you start a new multitenancy project, it's highly recommended to make use of [Multitenancy](multitenancy) based on CAP Java 3.x and streamlined MTX (`@sap/cds-mtxs`).
:::

## Overview

For a general overview on this topic, see the [Multitenancy guide](../guides/multitenancy/?impl-variant=java).

In CAP Java, the Node.js based [*cds-mtx* module](../guides/multitenancy/?impl-variant=java) is reused to handle tenant provisioning. This reuse has the following implications:

- Java applications need to run and maintain the *cds-mtx* module as a sidecar application (called *MTX sidecar* in this documentation).
  The following sections describe the setup as a [Multitarget Application](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/d04fc0e2ad894545aebfd7126384307c.html) using a [Multitarget Application Development Descriptor](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/c2d31e70a86440a19e47ead0cb349fdb.html) (*mta.yaml*) file. It can be packaged by means of the [MTA Build Tool](https://sap.github.io/cloud-mta-build-tool) and deployed to SAP BTP by means of the Deploy Service.

- Multitenant CAP Java applications automatically expose the tenant provisioning API called by the SaaS Provisioning service so that [custom logic during tenant provisioning](#custom-logic) can be written in Java.

The following figure describes the basic setup:

![This is a technical architecture modeling diagram, which shows all involved components and how they interact. The involved components are: SaaS Provisioning Service, CAP Java backend, MTX sidecar, SAP BTP Service Manager, and a database.](./assets/architecture-mt.drawio.svg)

## Maven Dependencies

Multitenancy support is available as a so-called optional [application feature](developing-applications/building#starter-bundles#application-features) of the CAP Java SDK. It's already included when you use the `cds-starter-cloudfoundry` dependency. Otherwise, you can add the following Maven dependency to apply the feature:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-mt</artifactId>
</dependency>
```

::: tip
When you add this dependency to your project, it becomes active when certain conditions are fulfilled, for example, [when your application is deployed to SAP BTP](#required-services-mt). This condition check lets you test your application locally without multitenancy turned on.
:::

## Tenant Subscription Events { #custom-logic }

The [SaaS Provisioning service](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ed08c7dcb35d4082936c045e7d7b3ecd.html) (`saas-registry`) in SAP BTP sends [specific requests](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ff540477f5404e3da2a8ce23dcee602a.html) to applications when tenants are subscribed or unsubscribed. For these requests, the CAP Java SDK internally generates CAP events on the technical service [`MtSubscriptionService`](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtSubscriptionService.html).

[For a general introduction to CAP events, see Service Provisioning API.](event-handlers/){.learn-more}

Register event handlers for the following CAP events to add custom logic for requests sent by the SaaS Provisioning service. Each event passes a special type of `EventContext` object to the event handler method and provides event-specific information:

| Event Name | Event Context | Use Case |
| ------------------------ | ------------------------------------------------------------------------------------- | --------------- |
| `EVENT_SUBSCRIBE` | [MtSubscribeEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtSubscribeEventContext.html) | Add a tenant |
| `EVENT_UNSUBSCRIBE` | [MtUnsubscribeEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtUnsubscribeEventContext.html) | Remove a tenant |
| `EVENT_GET_DEPENDENCIES` | [MtGetDependenciesEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtGetDependenciesEventContext.html) | Dependencies |
You only need to register event handlers to override the default behavior. Default behaviors:

- A new tenant-specific database container is created through the [Service Manager](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/3a27b85a47fc4dff99184dd5bf181e14.html) during subscription.
- A tenant-specific database container *isn't* deleted during unsubscription.

The following sections describe how to register to these events in more detail.

## Subscribe Tenant

Subscription events are generated when a new tenant is added. By default, subscription creates a new database container for a newly subscribed tenant.

### Synchronous Tenant Subscription

By default, an `EVENT_SUBSCRIBE` event is sent when a tenant is added. The following example shows how to register to this event:

```java
package com.sap.cds.demo.spring.handler;

import org.springframework.stereotype.Component;

import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.Before;
import com.sap.cds.services.handler.annotations.ServiceName;
import com.sap.cds.services.mt.MtSubscribeEventContext;
import com.sap.cds.services.mt.MtSubscriptionService;

@Component
@ServiceName(MtSubscriptionService.DEFAULT_NAME)
public class SubscriptionHandler implements EventHandler {

  @Before(event = MtSubscriptionService.EVENT_SUBSCRIBE)
  public void beforeSubscription(MtSubscribeEventContext context) {
    // Activities before tenant database container is created
  }

}
```

To send notifications when a subscription was successful, you could register an `@After` handler:

```java
@After(event = MtSubscriptionService.EVENT_SUBSCRIBE)
public void afterSubscription(MtSubscribeEventContext context) {
  // For example, send notification, …
}
```

### Returning a Database ID

When you've registered exactly one SAP HANA instance in your SAP BTP space, a new tenant-specific database container is created automatically.
However, if you've registered more than one SAP HANA instance in your SAP BTP space, you have to pass the target database ID for the new database container in a custom handler, as illustrated in the following example:

```java
@Before(event = MtSubscriptionService.EVENT_SUBSCRIBE)
public void beforeSubscription(MtSubscribeEventContext context) {
  context.setInstanceCreationOptions(
    new InstanceCreationOptions().withProvisioningParameters(
      Collections.singletonMap("database_id", "")));
}
```

### Returning a Custom Application URL

The following example shows how to return a custom application URL that is shown in SAP BTP Cockpit:

```java
@After(event = MtSubscriptionService.EVENT_SUBSCRIBE)
public void afterSubscribe(MtSubscribeEventContext context) {
  if (context.getResult() == null) {
    context.setResult(
      "https://" + context.getSubscriptionPayload().subscribedSubdomain + ".myapp.com");
  }
}
```

By default, the application URL is constructed by configuration as described in [Wiring It Up](#binding-it-together).

### Returning Dependencies

The event `EVENT_GET_DEPENDENCIES` fires when the SaaS Provisioning service calls the [`getDependencies` callback](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/ff540477f5404e3da2a8ce23dcee602a.html). Hence, if your application consumes any reuse services provided by SAP, you must implement the `EVENT_GET_DEPENDENCIES` event to return the service dependencies of the application. The callback must return a `200` response code and a JSON file with the dependent services' `appName` and `appId`, or just the `xsappname`.

::: tip
The `xsappname` of an SAP reuse service that is bound to your application can be found as part of the `VCAP_SERVICES` JSON structure under the path `VCAP_SERVICES..credentials.xsappname`.
:::

The following example shows this in more detail:

```java
import com.sap.cloud.mt.subscription.json.ApplicationDependency;

@Value("${vcap.services..credentials.xsappname}")
private String xsappname;

@On(event = MtSubscriptionService.EVENT_GET_DEPENDENCIES)
public void onGetDependencies(MtGetDependenciesEventContext context) {
  ApplicationDependency dependency = new ApplicationDependency();
  dependency.xsappname = xsappname;
  List<ApplicationDependency> dependencies = new ArrayList<>();
  dependencies.add(dependency);
  context.setResult(dependencies);
}
```

## Unsubscribe Tenant

Unsubscription events are generated when a tenant is offboarded. By default, the tenant-specific database container is *not* deleted during offboarding. You can change this behavior by registering a custom event handler as illustrated in the following examples.

### Synchronous Tenant Unsubscription

By default, an `EVENT_UNSUBSCRIBE` event is sent when a tenant is removed. The following example shows how to add custom logic for this event:

```java
@Before(event = MtSubscriptionService.EVENT_UNSUBSCRIBE)
public void beforeUnsubscribe(MtUnsubscribeEventContext context) {
  // Activities before offboarding
}
```

You can also register an `@After` handler, for example to notify when removal is finished:

```java
@After(event = MtSubscriptionService.EVENT_UNSUBSCRIBE)
public void afterUnsubscribe(MtUnsubscribeEventContext context) {
  // Notify offboarding finished
}
```

### Deleting Tenant Containers During Tenant Unsubscription

By default, tenant-specific database containers aren't deleted during removal. However, you can register a custom handler to change this behavior.
For example:

```java
@Before(event = MtSubscriptionService.EVENT_UNSUBSCRIBE)
public void beforeUnsubscribe(MtUnsubscribeEventContext context) {
  // Trigger deletion of database container of offboarded tenant
  context.setDelete(true);
}
```

## Configuring the Required Services { #required-services-mt}

To enable multitenancy on SAP BTP, three services are involved:

- XSUAA
- Service Manager
- SaaS Provisioning service (`saas-registry`)

Only when these services are bound to your application is the multitenancy feature turned on. You can create and configure these services manually; see section [Developing Multitenant Applications in SAP BTP, Cloud Foundry Environment](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/5e8a2b74e4f2442b8257c850ed912f48.html) for more details. The following sections describe how to configure and bind these services by means of an *mta.yaml* file.

### XSUAA { #xsuaa-mt-configuration }

A special configuration of an XSUAA service instance is required to enable authorization between the SaaS Provisioning service, CAP Java application, and MTX sidecar. The service can be configured in the *mta.yaml* by adding an `xsuaa` resource as follows:

```yaml
resources:
  […]
  - name: xsuaa
    type: com.sap.xs.uaa
    parameters:
      service-plan: application
      path: ./xs-security.json
      config:
        xsappname:
```

Choose a value for property `xsappname` that is unique globally.

Also, you have to create an [Application Security Descriptor (*xs-security.json*)](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/517895a9612241259d6941dbf9ad81cb.html) file, which must include two scopes:

- `mtcallback`
- `mtdeployment`

> You can also use custom scope names by configuring them. Use the following application configuration properties:
>
> - mtcallback: `cds.multitenancy.security.subscription-scope`
> - mtdeployment: `cds.multitenancy.security.deployment-scope`

The `mtcallback` scope is required by the onboarding process.
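If you configure custom scope names as described in the note above, the corresponding properties could be set like this in Spring Boot's `application.yaml` (the scope names shown are hypothetical examples):

```yaml
cds:
  multitenancy:
    security:
      subscription-scope: mycallbackscope   # hypothetical custom scope name
      deployment-scope: mydeploymentscope   # hypothetical custom scope name
```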
The `mtdeployment` scope is required to redeploy database artifacts at runtime.

An example *xs-security.json* file looks like this:

```json
{
  "xsappname": "",
  "tenant-mode": "shared",
  "scopes": [
    {
      "name": "$XSAPPNAME.mtcallback",
      "description": "Multi Tenancy Callback Access",
      "grant-as-authority-to-apps": [
        "$XSAPPNAME(application, sap-provisioning, tenant-onboarding)"
      ]
    },
    {
      "name": "$XSAPPNAME.mtdeployment",
      "description": "Scope to trigger a re-deployment of the database artifacts"
    }
  ],
  "authorities": [
    "$XSAPPNAME.mtdeployment"
  ]
}
```

In this example, the `grant-as-authority-to-apps` section is used to grant the `mtcallback` scope to the applications *sap-provisioning* and *tenant-onboarding*. These are services provided by SAP BTP involved in the onboarding process.

It isn't necessary to have the security configuration in a separate file. It can also be added to the *mta.yaml* file directly.

::: warning *❗ Warning*
The `mtcallback` and `mtdeployment` scopes **must not be exposed** to any business user, for example, using a role template. Otherwise, a malicious user could update or even delete the artifacts of arbitrary tenants.

In addition, if you implement a service broker in order to expose your service API for (technical) users of SaaS tenants, you must ensure that both scopes **cannot be consumed as authorities** in cloned service instances created by clients. To achieve that, set `authorities-inheritance: false`. It is **strongly recommended** to explicitly enumerate all authorities that should be exposed in the broker configuration (allow-list).
:::

### Service Manager

A service instance of the [Service Manager](https://help.sap.com/docs/SERVICEMANAGEMENT/09cc82baadc542a688176dce601398de/3a27b85a47fc4dff99184dd5bf181e14.html) (`service-manager`) is required so that the CAP Java SDK can create database containers per tenant at application runtime.
It doesn't require special parameters and can be added as a resource in *mta.yaml* as follows:

```yaml
resources:
  […]
  - name: service-manager
    type: org.cloudfoundry.managed-service
    parameters:
      service: service-manager
      service-plan: container
```

### SaaS Provisioning Service (saas-registry) { #saas-registry }

A `saas-registry` service instance is required to make your application known to the SAP BTP Provisioning Service and to register the endpoints that should be called when tenants are added or removed. The service can be configured as a resource in *mta.yaml* as follows. See section [Register the Multitenant Application to the SaaS Provisioning Service](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/3971151ba22e4faa9b245943feecea54.html) for more details.

```yaml
resources:
  […]
  - name: saas-registry
    type: org.cloudfoundry.managed-service
    parameters:
      service: saas-registry
      service-plan: application
      config:
        appName:
        xsappname:
        appUrls:
          getDependencies: ~{srv/url}/mt/v1.0/subscriptions/dependencies
          onSubscription: ~{srv/url}/mt/v1.0/subscriptions/tenants/{tenantId}
    requires:
      - name: srv
```

It's required to configure the parameters:

- `appName`: Choose an appropriate application display name.
- `xsappname`: Use the value for `xsappname` you configured at your [UAA service instance](#xsuaa-mt-configuration).
- `appUrls`: Configure the callback URLs used by the SaaS Provisioning service to get the dependencies of the application and to trigger a subscription. In the above example, the property `~{srv/url}` that is provided by the `srv` module is used. See section [Wiring It Up](#binding-it-together) for more details. If you use different module and property names for your CAP Java backend module, you have to adapt these properties here accordingly.

## Adding the MTX Sidecar Application { #mtx-sidecar-server }

This section describes how to use the `cds-mtx` Node.js module and add the MTX sidecar microservice to the *mta.yaml* file.
In a dedicated project subfolder named *mtx-sidecar*, create a Node.js start script in a file named *server.js* to bootstrap the `cds-mtx` library:

```js
const app = require('express')();
const cds = require('@sap/cds');

const main = async () => {
  await cds.connect.to('db');
  const PORT = process.env.PORT || 4004;
  await cds.mtx.in(app);
  app.listen(PORT);
}

main();
```

::: tip
By default, this script implements authorization and checks for the scope `mtcallback`. If you use a custom scope name for requests issued by the SaaS Provisioning service in your application security descriptor (*xs-security.json*), you have to configure the custom scope name at the MTX sidecar as well. Use the environment variable `CDS_MULTITENANCY_SECURITY_SUBSCRIPTIONSCOPE`, for example, by specifying it in the *mta.yaml* file.
:::

To define the dependencies and start command, also create a file *package.json* like this:

```json
{
  "name": "deploy",
  "engines": {
    "node": ">=12"
  },
  "scripts": {
    "start": "node server.js"
  }
}
```

Next, add the required dependencies:

```sh
npm add @sap/cds @sap/cds-mtx @sap/xssec hdb express
```

Because the MTX sidecar will build the CDS model, you need to configure the build by means of two *.cdsrc.json* files. The first *.cdsrc.json* file goes into the root folder of your project and specifies from which location the CDS files should be collected. The following example demonstrates this:

```json
{
  "build": {
    "target": ".",
    "tasks": [
      { "for": "java-cf" },
      { "for": "mtx", "src": ".", "dest": "mtx-sidecar" },
      { "for": "hana" }
    ]
  },
  "requires": {
    "db": {
      "kind": "hana-mt"
    }
  },
  "odata": {
    "version": "v4"
  }
}
```

::: tip
You only need to change this configuration if you named your project folders `app`, `db`, `srv`, and `mtx-sidecar` differently.
:::

A detailed description of this configuration file can be found in section [Build Configuration](../guides/deployment/custom-builds#build-config).
In the following, you find a short summary of this example: The `build` section defines the build `tasks` that should be executed. Three build tasks are defined in this example:

| Task | Description |
| --------- | ------------------------------------------------------------------------------- |
| `java-cf` | Generates *csn.json* and EDMX files |
| `mtx` | Collects *.cds* files to copy to *mtx-sidecar* directory, generates *i18n.json* |
| `hana` | Generates SAP HANA artifacts |

In the previous example, the task options (such as `src` and `dest`) specify the source and destination directories for each build task.

::: tip
The `hana` build task is optional because the SAP HANA artifacts are also generated by the *mtx-sidecar* directly. However, the generated SAP HANA artifacts enable you to test your application in a single-tenant scenario.
:::

The second *.cdsrc.json* file goes into the *mtx-sidecar* directory. The following example demonstrates this:

```json
{
  "hana": {
    "deploy-format": "hdbtable"
  },
  "build": {
    "tasks": [
      { "for": "hana" },
      { "for": "java-cf" }
    ]
  },
  "odata": {
    "version": "v4"
  },
  "requires": {
    "db": {
      "kind": "hana-mt"
    },
    "auth": {
      "kind": "xsuaa"
    },
    "multitenancy": true
  }
}
```

::: tip
You only need to change this configuration if you named your project folders `app`, `db`, `srv`, and `mtx-sidecar` differently.
:::

::: warning
If you have configured a location for your i18n files as described in the [Localization Section](../guides/i18n#where-to-place-text-bundles), make sure to add the same CDS configuration in both the *.cdsrc.json* of the SaaS application and the *.cdsrc.json* of the `mtx-sidecar`.
:::

In this file, the `requires` section configures the service instances that should be used by the *mtx-sidecar*. In this case, it's an instance of the UAA service, to enable authentication and authorization, as well as the Service Manager, which enables multitenancy.
Now, add the `mtx-sidecar` module to your *mta.yaml* file:

```yaml
modules:
  […]
  - name: mtx-sidecar
    type: nodejs
    path: mtx-sidecar
    parameters:
      memory: 256M
      disk-quota: 512M
    requires:
      - name: xsuaa
      - name: service-manager
    provides:
      - name: mtx-sidecar
        properties:
          url: ${default-url}
```

The `mtx-sidecar` module requires the XSUAA and Service Manager services. It also needs to provide its URL, so that the URL can be configured in the service module as shown in the following *mta.yaml* snippet. The authentication works through token validation.

## Wiring It Up { #binding-it-together }

To bind the previously mentioned services and the MTX sidecar to your CAP Java application, you could use the following example of the `srv` module in the *mta.yaml* file:

```yaml
modules:
  […]
  - name: srv
    type: java
    path: srv
    parameters:
      […]
    requires:
      - name: service-manager
      - name: xsuaa
      - name: mtx-sidecar
        properties:
          CDS_MULTITENANCY_SIDECAR_URL: ~{url}
      - name: app
        properties:
          CDS_MULTITENANCY_APPUI_URL: ~{url}
          CDS_MULTITENANCY_APPUI_TENANTSEPARATOR: "."
    provides:
      - name: srv
        properties:
          url: '${default-url}'
```

The environment variable `CDS_MULTITENANCY_SIDECAR_URL` of the `srv` module is internally mapped to the property `cds.multitenancy.sidecar.url`. This URL is required by the runtime to connect to the [MTX Sidecar application](#mtx-sidecar-server) and is derived from the property `url` of the mtx-sidecar [module](#mtx-sidecar-server) (note that `${default-url}` is a placeholder for the module's own default URL). Similarly, `CDS_MULTITENANCY_APPUI_URL` configures the URL that is shown in the SAP BTP Cockpit. Usually it points to the app providing the UI, which is the module `app` in this example. As value for `CDS_MULTITENANCY_APPUI_TENANTSEPARATOR`, only `"."` is supported at the moment. The actual URL shown in the SAP BTP Cockpit is then composed of:

```txt
https://<tenant subdomain><tenant separator><app URL>
```
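To make the composition concrete, here is a small, self-contained sketch (not part of the CAP runtime; all values are made up) of how subdomain, tenant separator, and app URL are combined:

```java
// Illustrative sketch only: shows how the cockpit URL is composed from the
// tenant subdomain, the tenant separator ("." is the only supported value),
// and the URL of the app module. All values are hypothetical.
public class TenantUiUrl {

    static String compose(String subdomain, String separator, String appUiUrl) {
        return "https://" + subdomain + separator + appUiUrl;
    }

    public static void main(String[] args) {
        // hypothetical subaccount subdomain and app route
        System.out.println(compose("customer-a", ".", "bookshop-app.cfapps.eu10.hana.ondemand.com"));
    }
}
```

Running it prints `https://customer-a.bookshop-app.cfapps.eu10.hana.ondemand.com`.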
## Adding Logging Service Support { #app-log-support }

Logging service support gives you the capability to observe properly correlated requests between the different components of your CAP application in Kibana. This is especially useful for multitenant-aware applications that use the MTX sidecar. As described in the section [Observability > Logging](./operating-applications/observability#logging-service), to enable the Cloud Foundry `application-logs` service support with the CAP Java SDK, it's recommended to use the [cf-java-logging-support](https://github.com/SAP/cf-java-logging-support) library. Information about configuration options is provided there, as well.

### Adding the Service Bindings

Aside from that, a service binding to the `application-logs` service in the Cloud Foundry environment is required. This can be set up manually with the SAP BTP cockpit for both the CAP application and the MTX sidecar, or more easily with an `mta` deployment. The *mta.yaml* file needs a new resource definition for the `application-logs` service, which is required by both the `srv` module of the CAP Java application and the `mtx-sidecar` module. Building and deploying from this manifest then creates the necessary `application-logs` service instance, if it doesn't exist yet, as well as the service bindings:

```yaml
modules:
  […]
  - name: srv
    type: java
    path: srv
    parameters:
      […]
    requires:
      […]
      - name: cf-logging
      […]
    provides:
      - name: srv
        properties:
          url: '${default-url}'
  […]
  - name: sidecar
    type: nodejs
    path: mtx-sidecar
    parameters:
      […]
    requires:
      […]
      - name: cf-logging
      […]
    provides:
      - name: sidecar
        properties:
          url: ${default-url}
  […]
resources:
  […]
  - name: cf-logging
    type: org.cloudfoundry.managed-service
    parameters:
      service: application-logs
      service-plan: lite
  […]
```

::: tip
In our example, we use the service-plan `lite` of the `application-logs` service, but you might require one with larger quota limits.
:::

::: tip
Complete examples for *mta.yaml* files can be found in the [CAP Java bookshop samples](https://github.com/SAP-samples/cloud-cap-samples-java/).
:::

### Configuring the MTX Sidecar Application

To properly correlate requests in the `mtx-sidecar`, a `correlate()` middleware function needs to be added to the Express app. It either reads the correlation ID from the request headers, if provided, or generates a new one. One way to do so is to modify the Node.js start script *server.js*, which was introduced in [Adding the MTX Sidecar Application](#mtx-sidecar-server), as follows:

```js
const app = require('express')();
const cds = require('@sap/cds');

const main = async () => {
    app.use(defaults.correlate);
    await cds.connect.to('db');
    const PORT = process.env.PORT || 4004;
    await cds.mtx.in(app);
    app.listen(PORT);
}

const defaults = {
    get correlate() {
        return (req, res, next) => {
            const id = req.headers['x-correlation-id'] || req.headers['x-correlationid']
                || req.headers['x-request-id'] || req.headers['x-vcap-request-id']
                || cds.utils.uuid()
            // new intermediate cds.context, if necessary
            cds.context = { id }
            // guarantee x-correlation-id going forward and set on res
            req.headers['x-correlation-id'] = id
            res.set('x-correlation-id', id)
            // guaranteed access to cds.context._.req -> REVISIT
            if (!cds.context._) cds.context._ = {}
            if (!cds.context._.req) cds.context._.req = req
            next()
        }
    }
}

main();
```

The final piece of configuration required for the MTX sidecar is to enable the Kibana formatter feature. The following object literal needs to be added to the JSON object within the *package.json* file in the *mtx-sidecar* subfolder of your CAP Java application:

```json
"cds": {
  "features": {
    "kibana_formatter": true
  }
}
```

::: tip
For the Kibana formatter feature, it's recommended to use *@sap/cds* in version *5.4.3 or higher*, *@sap/cds-mtx* in version *2.2.0 or higher*, and *node* in version *16.2.0 or higher*.
:::

### Correlated Application Logs

With a successful deployment of the CAP application and all the previously mentioned configuration in place, application logs from both the `srv` and `mtx-sidecar` modules are properly correlated by their correlation ID. This can easily be seen in Kibana, which is part of the ELK (*Elasticsearch/Logstash/Kibana*) stack on Cloud Foundry and available by default with the `application-logs` service:

![Kibana screenshot](./assets/kibana.png)

## Database Schema Update { #database-update }

When shipping a new application version with an updated CDS model, the database schema for each subscribed tenant needs an update. The database schema update needs to be triggered explicitly, as described in the following sections.

When the database schema update is triggered, the following CAP events are sent. By registering custom handlers for these events, you can add custom logic to influence the deployment process. By default, the CAP Java SDK notifies the *MTX Sidecar* to perform any schema upgrade if necessary.

| Event Name | Event Context |
| --------------------------- | ----------------------------------------------------------------------------------------- |
| `EVENT_ASYNC_DEPLOY` | [MtAsyncDeployEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtAsyncDeployEventContext.html) |
| `EVENT_ASYNC_DEPLOY_STATUS` | [MtAsyncDeployStatusEventContext](https://www.javadoc.io/doc/com.sap.cds/cds-feature-mt/latest/com/sap/cds/services/mt/MtAsyncDeployStatusEventContext.html) |

It's often desired to update the whole service in a zero-downtime manner. This section doesn't deal with the details of updating a service productively, but describes the tool support the CAP Java SDK offers to update database schemas. The following sections describe how to trigger the database schema upgrade for tenants.
### Deploy Endpoint { #deploy-endpoint }

When multitenancy is configured, the CAP Java SDK exposes a REST endpoint to update database schemata.

::: warning *❗ Warning*
You must use the scope `mtdeployment` for the following requests!
:::

#### Deployment Request

Send this request when a new version of your application with an updated database schema was deployed. This call triggers updating the persistence of each tenant.

##### Route

```http
POST /mt/v1.0/subscriptions/deploy/async
```

::: tip
This is the default endpoint. One or more endpoints might differ if you configure different endpoints through properties.
:::

##### Body

The `POST` request must contain the following body:

```json
{
  "tenants": [ "all" ]
}
```

Alternatively, you can also update single tenants:

```json
{
  "tenants": ["<tenant-id-1>", "<tenant-id-2>", …]
}
```

##### Response

The deploy endpoint is asynchronous, so it returns immediately with status code `202` and a JSON structure containing a `jobID` value:

```json
{
  "jobID": "<jobID>"
}
```

#### Job Status Request

You can use this `jobID` to check the progress of the operation by means of the following REST endpoint:

##### Route

```http
GET /mt/v1.0/subscriptions/deploy/async/status/<jobID> HTTP/1.1
```

##### Response

The server responds with status code `200`. During processing, the response looks like:

```json
{
  "error": null,
  "status": "RUNNING",
  "result": null
}
```

Once a job is finished, the collective status is reported like this:

```json
{
  "error": null,
  "status": "FINISHED",
  "result": {
    "tenants": {
      "<tenant-id-1>": {
        "status": "SUCCESS",
        "message": "",
        "buildLogs": ""
      },
      "<tenant-id-2>": {
        "status": "FAILURE",
        "message": "",
        "buildLogs": ""
      }
    }
  }
}
```

::: tip
Logs are persisted for a period of 30 minutes before they get deleted automatically. If you're requesting the job status after the 30-minute period expired, you get a *404 Not Found* response.
:::

### Deploy Main Method

As an alternative to calling the [deploy REST endpoints](#deploy-endpoint), the CAP Java SDK also offers a `main` method in the class `com.sap.cds.framework.spring.utils.Deploy` that can be called from the command line while the CAP Java application is still stopped. This way, you can run the database deployment for all tenants before you start a new version of the Java application. This prevents new application code from accessing database artifacts that aren't yet deployed.

::: warning
While the CAP Java backend might be stopped when you call this method, the *MTX Sidecar* application must be running!
:::
This synchronization can also be automated, for example using [Cloud Foundry Tasks](https://docs.cloudfoundry.org/devguide/using-tasks.html) on SAP BTP and [Module Hooks](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/b9245ba90aa14681a416065df8e8c593.html) in your MTA.

The `main` method takes an optional list of tenant IDs as input arguments. If tenant IDs are specified, only these tenants are updated. If no input parameters are specified, all tenants are updated. The method waits until all deployments are finished and then prints the result. The method returns the following exit codes:

| Exit Code | Result |
| --------- | ------------------------------------------------------------------------------------------------ |
| 0 | All tenants updated successfully. |
| 1 | Failed to update at least one tenant. Rerun the procedure to make sure that all tenants are updated. |

To run this method locally, use the following command, where `<jar-file>` is the one of your application:

::: code-group

```sh [>= Spring Boot 3.2.0]
java -cp <jar-file> -Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.launch.PropertiesLauncher [<tenant-id-1>] … [<tenant-id-n>]
```

```sh [< Spring Boot 3.2.0]
java -cp <jar-file> -Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.PropertiesLauncher [<tenant-id-1>] … [<tenant-id-n>]
```

:::

In the SAP BTP, Cloud Foundry environment it can be tricky to construct such a command. The reason is that the JAR file is extracted by the Java Buildpack and the location of the Java executable isn't easy to determine. The location also differs for different Java versions.
Therefore, we recommend adapting the start command that is generated by the buildpack and running the adapted command:

::: code-group

```sh [>= Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.launch.JarLauncher/org.springframework.boot.loader.launch.PropertiesLauncher/g' /home/vcap/staging_info.yml && sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Deploy/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | bash
```

```sh [< Spring Boot 3.2.0]
sed -i 's/org.springframework.boot.loader.JarLauncher/org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && sed -i 's/-Dsun.net.inetaddr.negative.ttl=0/-Dsun.net.inetaddr.negative.ttl=0 -Dloader.main=com.sap.cds.framework.spring.utils.Deploy/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | bash
```

```sh [Java 8]
sed -i 's/org.springframework.boot.loader.JarLauncher/-Dloader.main=com.sap.cds.framework.spring.utils.Deploy org.springframework.boot.loader.PropertiesLauncher/g' /home/vcap/staging_info.yml && jq -r .start_command /home/vcap/staging_info.yml | bash
```

:::

## Developing Multitenant CAP Applications

### Local Development

A multitenant CAP application can still be started and tested locally, for example with SQLite. In this case, the CAP Java SDK simply disables the multitenancy feature (as there is no Service Manager service binding present) to enable local testing of the general business logic. Another option is to access cloud services from the local development machine (hybrid scenario). You can decide whether you want to access just one fixed SAP HANA service binding or all available SAP HANA service bindings that were created through the Service Manager binding, as described in the following sections.
#### Static SAP HANA Binding

For the static case, just copy the credentials of the SAP HANA service binding you want to use into the *default-env.json*. You can, for example, see all application-managed service instances in the SAP BTP Cockpit. The app behaves like in the single tenant case.

#### Service Manager Binding

If you want to test multitenancy locally, just copy the complete Service Manager binding into the *default-env.json*. If you have extensibility enabled, you also need to set the property `cds.multitenancy.sidecar.url` to the URL of the deployed MTX sidecar app. Now you can access the data of different tenants locally, if user information is set for the requests to your locally running server.

You can authenticate to your app locally either through mock users or the UAA. The configuration of mock users is described in the section [Security](./security). For a mock user, you can also set the `tenant` property. The value needs to be the subaccount ID, which can be found in the SAP BTP Cockpit in the *Subaccount* details. You can then authenticate to your app using basic authentication. If you already secured your services, the browser asks you automatically for credentials. Otherwise, you can also set username and password explicitly, for example, in Postman.

If you want to authenticate using the XSUAA, just copy the XSUAA service binding into the *default-env.json*. You then need a valid token for the tenant to authenticate. This can be obtained through the client credentials flow, for example, using Postman.

::: warning *❗ Warning*
Requests without user information fail!
:::

::: tip
Currently you need to push the changes to Cloud Foundry to update the database artifacts. If you're working on the data model, it's recommended to use a static SAP HANA binding.
:::

### Accessing Arbitrary Tenants

You can override the tenant ID that is set in the current `RequestContext`. This enables accessing data of arbitrary tenants programmatically.
This might be useful, for example:

- To access configuration data stored by means of a "technical" tenant while processing the request of a business tenant.
- To access tenant data in asynchronously scheduled jobs, where no tenant information is present in the `RequestContext`, yet (for example, a startup task prefilling tables of tenants with certain data).

You can use the [`TenantProviderService`](https://javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/mt/TenantProviderService.html) to get a list of available tenants. You can set a particular tenant and access it by running your code in a [nested `RequestContext`](event-handlers/request-contexts#defining-requestcontext), as demonstrated by the following example:

```java
TenantProviderService tenantProvider = runtime.getServiceCatalog()
    .getService(TenantProviderService.class, TenantProviderService.DEFAULT_NAME);
List<String> tenants = tenantProvider.readTenants();
tenants.forEach(tenant -> {
    runtime.requestContext().privilegedUser().modifyUser(user -> user.setTenant(tenant)).run(context -> {
        // ... your code
    });
});
```

::: warning *❗ Warning*
If an application overrides the default behavior of the CAP Java SDK this way, it's responsible for ensuring data privacy and isolation!
:::

### Data Source Pooling Configuration

> Pretty much everything in this section depends on your modeling, the load, and also on the sizing (JVM/DB). As there's no one-size-fits-all recommendation, the mentioned configuration parameters are a good starting point.

Data source pool configuration is a tradeoff between resources and latency:

##### Pool per tenant - less latency, more resources

The dedicated pools per tenant approach creates a dedicated connection pool for each tenant. In its default configuration, this strategy uses a static sizing approach: the number of configured connections (defaults to 10) is opened on startup of the application and kept open.
This has the lowest possible latency and the highest resource consumption. The application will need a static number of connections per subscribed client. In case you need low latency but somewhat less resource consumption, you can [configure dynamic pool sizing](#configure-data-pools) for your tenants' connection pools. The application will then need at least the minimum number of connections per subscribed client. Depending on the concurrent load, the number can increase per client until the configured maximum number of connections is reached.

##### Pool per database - less resources, more latency { #combine-data-pools}

The combined pool approach uses only one pool for all tenants, holding a fixed number of connections. This approach, however, needs to switch connections to the correct schema and user of the tenant before handing out the connection. This causes some additional latency compared to the pools per tenant, where a connection, once opened for a given schema and user, can be reused until it's retired. For the combined pool you can, as for the dedicated pools, decide between static sizing (the default) and [dynamic sizing](#configure-data-pools). With the latter, the resource consumption can be reduced even further, at the cost of some more latency, because new database connections might be opened upon incoming requests. To activate the combined pool approach, set the property `cds.multiTenancy.datasource.combinePools.enabled = true`.

::: warning *❗ Warning*
Since the pool is shared among all tenants, one tenant could eat up all available connections, either intentionally or by accident. Applications using combined pools need to take adequate measures to mitigate this risk, for example by introducing rate-limiting.
:::

#### Dynamic Data Source Pooling { #configure-data-pools}

If not configured differently, both the dedicated pool and the combined pool approach use static sizing strategies by default.
The connections are kept open regardless of whether the application is currently serving requests for the given tenant. If you expect a low number of tenants, this shouldn't be an issue. With a large number of active tenants, this might lead to resource problems, for example, too many database connections or out-of-memory situations. Once you have an increased number of tenants, or run short of connections on the database side, you need to adjust the [configuration of the CDS datasource](./cqn-services/persistence-services#datasource-configuration) for HikariCP as described in the following section. We're using three parameters for the configuration:

- `cds.dataSource.<service-binding>.hikari.minimum-idle`
- `cds.dataSource.<service-binding>.hikari.maximum-pool-size`
- `cds.dataSource.<service-binding>.hikari.idle-timeout`

Keep in mind that `<service-binding>` is the placeholder for the name of the Service Manager instance bound to your CAP Java application.

|Parameter |Description |
|--------------------|-------------|
|`minimum-idle` | The minimum number of connections kept in the pool after being considered as idle. This helps to adjust the usage of resources to the actual load of a given tenant at runtime. In order to save resources (Java heap and DB connections), this value should be kept rather small (for example `1`). |
|`maximum-pool-size` | The maximum number of connections in the pool. Here, the value needs to be balanced. Counter-intuitively, a bigger value doesn't necessarily lead to better response times or higher throughput. Closely monitor your application under load in order to find a good value. As a starting point, you can just leave the default value `10`. |
|`idle-timeout` | The time span after which a connection is considered as *idle*. It controls how fast the size of the pool is adjusted to the load of the application, until `minimum-idle` is reached. Keep in mind that opening a connection comes at a latency cost, too. Don't retire connections too soon.
See section [Multitenancy Configuration Properties](#mtx-properties) for more details.

## Multitenancy Configuration Properties { #mtx-properties }

A number of multitenancy settings can be configured through application configuration properties. See section [Application Configuration](./developing-applications/configuring#profiles-and-properties) for more details. All properties can be found in the [properties overview](./developing-applications/properties#cds-multiTenancy). The prefix for multitenancy-related settings is `cds.multitenancy`.

Describes authentication and authorization in CAP Java. { #security}

## Overview

With respect to web services, authentication is the act of proving the validity of user claims passed with the request. This typically comprises verifying the user's identity, tenant, and additional claims like granted roles. Briefly, authentication controls _who_ is using the service. In contrast, authorization makes sure that the user has the required privileges to access the requested resources. Hence, authorization is about controlling _which_ resources the user is allowed to handle. Both authentication and authorization are therefore essential for application security:

* [Authentication](#authentication) describes how to configure authentication.
* [Authorization](#auth) describes how to configure access control.

::: warning
Without security configured, CDS services are exposed to the public. Proper configuration of authentication __and__ authorization is required to secure your CAP application.
:::

## Authentication { #authentication}

User requests with invalid authentication need to be rejected as soon as possible, to limit the resource impact to a minimum. Ideally, authentication is one of the first steps when processing a request. This is one reason why it's not an integral part of the CAP runtime and needs to be configured on the application framework level.
In addition, CAP Java is based on a [modular architecture](./developing-applications/building#modular_architecture) and allows flexible configuration of the authentication method. For productive scenarios, [XSUAA and IAS](#xsuaa-ias) authentication is supported out of the box, but a [custom authentication](#custom-authentication) can be configured as well. For the local development and test scenario, there's a built-in [mock user](#mock-users) support.

### Configure XSUAA and IAS Authentication { #xsuaa-ias}

To enable your application for XSUAA or IAS authentication, we recommend using the `cds-starter-cloudfoundry` or the `cds-starter-k8s` starter bundle, which covers all required dependencies.

:::details Individual Dependencies
These are the individual dependencies that can be explicitly added in the `pom.xml` file of your service:

* `com.sap.cloud.security:resourceserver-security-spring-boot-starter` that brings the [spring-security library](https://github.com/SAP/cloud-security-services-integration-library/tree/main/spring-security)
* `org.springframework.boot:spring-boot-starter-security`
* `cds-feature-identity`

:::

In addition, your application needs to be bound to corresponding service instances depending on your scenario. The following list describes which service needs to be bound depending on the tokens your application should accept:

* only accept tokens issued by XSUAA --> bind your application to an [XSUAA service instance](../guides/security/authorization#xsuaa-configuration)
* only accept tokens issued by IAS --> bind your application to an [IAS service instance](https://help.sap.com/docs/IDENTITY_AUTHENTICATION)
* accept tokens issued by XSUAA and IAS --> bind your application to service instances of both types

::: tip Specify Binding
CAP Java picks only a single binding of each type. If you have multiple XSUAA or IAS bindings, choose a specific binding with the property `cds.security.xsuaa.binding` or `cds.security.identity.binding`, respectively.
Choose an appropriate XSUAA service plan to fit the requirements. For instance, if your service should be exposed as a technical reuse service, make use of plan `broker`.
:::

#### Proof-Of-Possession for IAS { #proof-of-possession}

Proof-Of-Possession is a technique for additional security where a JWT token is **bound** to a particular OAuth client for which the token was issued. On SAP BTP, Proof-Of-Possession is supported by IAS and can be used by a CAP Java application.

Typically, a caller of a CAP application provides a JWT token issued by IAS to authenticate a request. With Proof-Of-Possession in place, a mutual TLS (mTLS) tunnel is established between the caller and your CAP application in addition to the JWT token. Clients calling your CAP application need to send the certificate provided by their `identity` service instance in addition to the IAS token. On Cloud Foundry, the CAP application needs to be exposed under an additional route that accepts client certificates and forwards them to the application as `X-Forwarded-Client-Cert` header (for example, the `.cert.cfapps.<landscape>` domain).
The Proof-Of-Possession also affects approuter calls to a CAP Java application. The approuter needs to be configured to forward the certificate to the CAP application. This can be achieved by setting `forwardAuthCertificates: true` on the destination pointing to your CAP backend (for more details, see [the `environment destinations` section on npmjs.org](https://www.npmjs.com/package/@sap/approuter#environment-destinations)).

When authenticating incoming requests with IAS, the Proof-Of-Possession is activated by default. This requires using at least version `3.5.1` of the [SAP BTP Spring Security Client](https://github.com/SAP/cloud-security-services-integration-library/tree/main/spring-security) library. You can disable the Proof-Of-Possession enforcement in your CAP Java application by setting the property `sap.spring.security.identity.prooftoken` to `false` in the `application.yaml` file.

### Automatic Spring Boot Security Configuration { #spring-boot}

Only if **both the library dependencies and an XSUAA/IAS service binding are in place**, the CAP Java SDK activates a Spring security configuration that enforces authentication for all endpoints **automatically**:

* Protocol adapter endpoints (managed by CAP, such as OData V4/V2 or custom protocol adapters)
* Remaining custom endpoints (not managed by CAP, such as custom REST controllers or Spring Actuators)

The security auto configuration authenticates all endpoints by default, unless the corresponding CDS model explicitly opens them to the public with [pseudo-role](../guides/security/authorization#pseudo-roles) `any` (configurable behavior). Here's an example of a CDS model and the corresponding authentication configuration:

```cds
service BooksService @(requires: 'any') {
  @readonly entity Books @(requires: 'any') {...}
  entity Reviews {...}
  entity Orders @(requires: 'Customer') {...}
}
```

| Path | Authenticated ?
|
|:--------------------------|:----------------:|
| `/BooksService` | |
| `/BooksService/$metadata` | |
| `/BooksService/Books` | |
| `/BooksService/Reviews` | 1 |
| `/BooksService/Orders` | |

> 1 Since version 1.25.0

::: tip
For multitenant applications, it's required to authenticate all endpoints as the tenant information is essential for processing the request.
:::

There are several application parameters in section `cds.security.authentication` that influence the behavior of the auto-configuration:

| Configuration Property | Description | Default |
| :---------------------------------------------------- | :----------------------------------------------------- | ------------ |
| `mode` | Determines the [authentication mode](#auth-mode): `never`, `model-relaxed`, `model-strict` or `always` | `model-strict` |
| `authenticateUnknownEndpoints` | Determines whether security configurations enforce authentication for endpoints not managed by protocol adapters. | `true` |
| `authenticateMetadataEndpoints` | Determines whether OData $metadata endpoints enforce authentication. | `true` |

The following properties can be used to switch off the automatic security configuration entirely:

| Configuration Property | Description | Default |
| :---------------------------------------------------- | :----------------------------------------------------- | ------------ |
| `cds.security.xsuaa.enabled` | Whether automatic XSUAA security configuration is enabled. | `true` |
| `cds.security.identity.enabled` | Whether automatic IAS security configuration is enabled. | `true` |

#### Setting the Authentication Mode { #auth-mode}

The property `cds.security.authentication.mode` controls the strategy used for authentication of protocol-adapter endpoints. There are four possible values:

- `never`: No endpoint requires authentication. All protocol-adapter endpoints are considered public.
- `model-relaxed`: Authentication is derived from the authorization annotations `@requires` and `@restrict`.
If no such annotation is available, the endpoint is considered public.
- `model-strict`: Authentication is derived from the authorization annotations `@requires` and `@restrict`. If no such annotation is available, the endpoint is authenticated. An explicit `@requires: 'any'` makes the endpoint public.
- `always`: All endpoints require authentication.

By default, the authentication mode is set to `model-strict` to comply with secure-by-default. In that case, you can use the annotation `@requires: 'any'` on service level to make the service and its entities public again. Note that an endpoint can only be made public if the full endpoint path is public as well. For example, you can only make an entity public if the service that contains it is also public.

::: tip
Please note that the authentication mode has no impact on the *authorization* behavior.
:::

#### Customizing Spring Boot Security Configuration { #custom-spring-security-config}

If you want to explicitly change the automatic security configuration, you can add an _additional_ Spring security configuration on top that overrides the default configuration by CAP. This can be useful, for instance, if an alternative authentication method is required for *specific endpoints* of your application.

```java
@Configuration
@EnableWebSecurity
@Order(1) // needs to have higher priority than CAP security config
public class AppSecurityConfig {

    @Bean
    public SecurityFilterChain appFilterChain(HttpSecurity http) throws Exception {
        return http
            .securityMatcher(AntPathRequestMatcher.antMatcher("/public/**"))
            .csrf(c -> c.disable()) // don't insist on csrf tokens in put, post etc.
            .authorizeHttpRequests(r -> r.anyRequest().permitAll())
            .build();
    }

}
```

Due to the custom configuration, all URLs matching `/public/**` are opened for public access.
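If all you need is to open modeled endpoints rather than arbitrary URLs, adjusting the [authentication mode](#auth-mode) can be a lighter-weight alternative to a custom filter chain. A sketch in *application.yaml*, using the properties introduced above:

```yaml
cds:
  security:
    authentication:
      # derive authentication from @requires/@restrict annotations;
      # endpoints without annotations stay authenticated (secure by default)
      mode: model-strict
      # also authenticate endpoints not managed by protocol adapters
      authenticateUnknownEndpoints: true
```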
::: tip
The Spring `SecurityFilterChain` requires CAP Java SDK [1.27.x](../releases/archive/2022/aug22#minimum-spring-boot-version-2-7-x) or later. Older versions need to use the deprecated `WebSecurityConfigurerAdapter`.
:::

::: warning _❗ Warning_
Be cautious with the configuration of the `HttpSecurity` instance in your custom configuration. Make sure that only the intended endpoints are affected.
:::

Another typical example is the configuration of [Spring Actuators](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.enabling). For example, a custom configuration can apply basic authentication to actuator endpoints `/actuator/**`:

```java
@Configuration
@EnableWebSecurity
@Order(1)
public class ActuatorSecurityConfig {

  @Bean
  public SecurityFilterChain actuatorFilterChain(HttpSecurity http) throws Exception {
    return http
      .securityMatcher(AntPathRequestMatcher.antMatcher("/actuator/**"))
      .httpBasic(Customizer.withDefaults())
      .authenticationProvider(/* configure basic authentication users here with PasswordEncoder etc. */)
      .authorizeHttpRequests(r -> r.anyRequest().authenticated())
      .build();
  }
}
```

### Custom Authentication { #custom-authentication}

You're free to configure any authentication method according to your needs. CAP isn't bound to any specific authentication method or user representation such as introduced with XSUAA; it rather processes requests based on a [user abstraction](../guides/security/authorization#user-claims). The CAP user of a request is represented by a [UserInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/UserInfo.html) object that can be retrieved from the [RequestContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/RequestContext.html) as explained in [Enforcement API & Custom Handlers](#enforcement-api).
Hence, if you bring your own authentication, you have to transform the authenticated user and inject it as `UserInfo` into the current request. This is done by means of the [UserInfoProvider](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/UserInfoProvider.html) interface, which can be implemented as a Spring bean as demonstrated in [Registering Global Parameter Providers](../java/event-handlers/request-contexts#global-providers). More frequently, you might only need to adapt the request's `UserInfo`, which is possible with the same interface:

```java
@Component
public class CustomUserInfoProvider implements UserInfoProvider {

  private UserInfoProvider defaultProvider;

  @Override
  public UserInfo get() {
    ModifiableUserInfo userInfo = UserInfo.create();
    if (defaultProvider != null) {
      UserInfo prevUserInfo = defaultProvider.get();
      if (prevUserInfo != null) {
        userInfo = prevUserInfo.copy();
      }
    }
    if (userInfo != null) {
      XsuaaUserInfo xsuaaUserInfo = userInfo.as(XsuaaUserInfo.class);
      userInfo.setName(xsuaaUserInfo.getEmail() + "/" +
        xsuaaUserInfo.getOrigin()); // adapt name
    }
    return userInfo;
  }

  @Override
  public void setPrevious(UserInfoProvider prev) {
    this.defaultProvider = prev;
  }
}
```

In the example, the `CustomUserInfoProvider` defines an overlay on the default XSUAA-based provider (`defaultProvider`). The overlay redefines the user's name as a combination of email and origin.

### Mock User Authentication with Spring Boot { #mock-users}

By default, CAP Java creates a security configuration, which accepts _mock users_ for test purposes.

#### Preconfigured Mock Users

For convenience, the runtime creates default mock users reflecting the [pseudo roles](../guides/security/authorization#pseudo-roles). They are named `authenticated`, `system`, and `privileged` and can be used with an empty password.
For instance, requests sent during a Spring MVC unit test with annotation `@WithMockUser("authenticated")` pass authorization checks that require `authenticated-user`. The privileged user passes any authorization check. Setting `cds.security.mock.defaultUsers = false` prevents the creation of default mock users at startup.

#### Explicitly Defined Mock Users

You can also define mock users explicitly. This mock user configuration only applies if:

* The service runs without an XSUAA service binding (non-productive mode)
* Mock users are defined in the active application configuration

Define the mock users in a Spring profile, which should only be active during testing, as in the following example:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: test
cds:
  security:
    mock:
      users:
        - name: Viewer-User
          password: viewer-pass
          tenant: CrazyCars
          roles:
            - Viewer
          attributes:
            Country: [GER, FR]
          additional:
            email: myviewer@crazycars.com
          features:
            - cruise
            - park

        - name: Privileged-User
          password: privileged-pass
          privileged: true
          features:
            - "*"
```
:::

- Mock user `Viewer-User` is a typical business user with SaaS tenant `CrazyCars` who has the role `Viewer` and the user attribute `Country` (`$user.Country` evaluates to the value list `[GER, FR]`). This user also has the additional attribute `email`, which can be retrieved with `UserInfo.getAdditionalAttribute("email")`. The [features](../java/reflection-api#feature-toggles) `cruise` and `park` are enabled for this mock user.
- `Privileged-User` is a user running in privileged mode. Such a user is helpful in tests that bypass all authorization handlers.

Property `cds.security.mock.enabled = false` disables any mock user configuration.
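Both switches can also be set in the application configuration. The following sketch (property paths taken from the descriptions above) disables the preconfigured default users while keeping explicitly defined mock users active:

```yaml
cds:
  security:
    mock:
      defaultUsers: false  # no authenticated/system/privileged default users
      # enabled: false     # would switch off mock user authentication entirely
```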
A setup for Spring MVC-based tests based on the given mock users and the CDS model from [above](#spring-boot) could look like this:

```java
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class BookServiceOrdersTest {
  String ORDERS_URL = "/odata/v4/BooksService/Orders";

  @Autowired
  private MockMvc mockMvc;

  @Test
  @WithMockUser(username = "Viewer-User")
  public void testViewer() throws Exception {
    mockMvc.perform(get(ORDERS_URL)).andExpect(status().isOk());
  }

  @Test
  public void testUnauthorized() throws Exception {
    mockMvc.perform(get(ORDERS_URL)).andExpect(status().isUnauthorized());
  }
}
```

#### Mock Tenants

A `tenants` section allows specifying additional configuration for the _mock tenants_. In particular, it's possible to assign features to tenants:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
---
spring:
  config.activate.on-profile: test
cds:
  security:
    mock:
      users:
        - name: Alice
          tenant: CrazyCars
      tenants:
        - name: CrazyCars
          features:
            - cruise
            - park
```
:::

The mock user `Alice` is assigned to the mock tenant `CrazyCars`, for which the features `cruise` and `park` are enabled.

## Authorization { #auth}

CAP Java SDK provides a comprehensive authorization service. By defining authorization rules declaratively via annotations in your CDS model, the runtime enforces authorization of the requests in a generic manner. Two different levels of authorization can be distinguished:

- [Role-based authorization](#role-based-auth) allows restricting resource access depending on user roles.
- [Instance-based authorization](#instance-based-auth) allows defining user privileges even on entity instance level, that is, a user can be restricted to instances that fulfill a certain condition.

It's recommended to configure authorization declaratively in the CDS model. If necessary, custom implementations can be built on the [Authorization API](#enforcement-api).
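As a minimal sketch of such declarative rules, a hypothetical service (the service and entity names are illustrative, not from this guide) could combine both levels of annotations like this:

```cds
// hypothetical model for illustration only
service ReviewsService @(requires: 'authenticated-user') {
  entity Reviews @(restrict: [
    { grant: 'READ' },                 // any authenticated user may read
    { grant: 'WRITE', to: 'Reviewer' } // only role Reviewer may modify
  ]) as projection on db.Reviews;
}
```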
A precise description of the general authorization capabilities in CAP can be found in the [Authorization](../guides/security/authorization) guide.

### Role-Based Authorization { #role-based-auth}

Use the CDS annotation `@requires` to specify in the CDS model which role a user requires to access the annotated CDS resources such as services, entities, actions, and functions (see [Restricting Roles with @requires](../guides/security/authorization#requires)). The generic authorization handler of the runtime rejects all requests that don't match the accepted roles with response code 403.

More specific access control is provided by the `@restrict` annotation, which allows combining roles with the allowed set of events. For instance, this helps to distinguish between users that may only read an entity and those who are allowed to edit. See section [Control Access with @restrict](../guides/security/authorization#restrict-annotation) for details about the possibilities.

### Instance-Based Authorization { #instance-based-auth}

Whereas role-based authorization applies to whole entities only, [Instance-Based Authorization](../guides/security/authorization#instance-based-auth) allows adding more specific conditions that apply on entity instance level and depend on the attributes that are assigned to the request user. A typical use case is to narrow down the set of visible entity instances depending on user properties (for example, `CountryCode` or `Department`). Instance-based authorization is also the basis for [domain-driven authorizations](../guides/security/authorization#domain-driven-authorization) built on more complex model constraints.

#### Current Limitations

The CAP Java SDK translates the `where`-condition in the `@restrict` annotation to a predicate, which is appended to the `CQN` statement of the request. This applies only to `READ`, `UPDATE`, and `DELETE` events.
In the current version, the following limitations apply:

* For `UPDATE` and `DELETE` events, no paths in the `where`-condition are supported.
* Paths in `where`-conditions with `to-many` associations or compositions can only be used with an [`exists` predicate](../guides/security/authorization#exists-predicate).
* `UPDATE` and `DELETE` requests that address instances that aren't covered by the condition (for example, which aren't visible) aren't rejected, but work on the limited set of instances as expected.

As a workaround for the limitations with paths in `where`-conditions, you may consider using the `exists` predicate instead.

CAP Java SDK supports [User Attribute Values](../guides/security/authorization#user-attrs) that can be referred to by `$user.` in the `where`-clause of the `@restrict` annotation. Currently, only comparison predicates with user attribute values are supported (`<`, `<=`, `=`, `>=`, `>`). Note that generally a user attribute represents an *array of strings* and *not* a single value. A given value list `[code1, code2]` for `$user.code` in predicate `$user.code = Code` evaluates to `(code1 = Code) or (code2 = Code)` in the resulting statement.

### Enforcement API & Custom Handlers { #enforcement-api}

The generic authorization handler performs authorization checks driven by the annotations in an early Before handler, registered to all application services by default. You may override or add to the generic authorization logic by providing custom handlers. The most important piece of information is the [UserInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/UserInfo.html) that reflects the authenticated user of the current request.
You can retrieve it:

a) from the [EventContext](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/EventContext.html):

```java
EventContext context;
UserInfo user = context.getUserInfo();
```

b) through dependency injection within a handler bean:

```java
@Autowired
UserInfo user;
```

The most helpful getters in `UserInfo` are listed in the following table:

| UserInfo method | Description |
| :--- | :--- |
| `getName()` | Returns the unique (logon) name of the user as configured in the IdP. Referred to by `$user` and `$user.name`. |
| `getTenant()` | Returns the tenant of the user. |
| `isSystemUser()` | Indicates whether the request has been initiated by a technical service. Refers to [pseudo-role](../guides/security/authorization#pseudo-roles) `system-user`. |
| `isAuthenticated()` | True if the current user has been authenticated. Refers to [pseudo-role](../guides/security/authorization#pseudo-roles) `authenticated-user`. |
| `isPrivileged()` | Returns `true` if the current user runs in privileged (that is, unrestricted) mode. |
| `hasRole(String role)` | Checks if the current user has the given role. |
| `getRoles()` | Returns the roles of the current user. |
| `getAttributeValues(String attribute)` | Returns the value list of the given user attribute. Referred to by `$user.`. |

It's also possible to modify the `UserInfo` object for internal calls. See section [Request Contexts](./event-handlers/request-contexts) for more details. For instance, you might want to run internal service calls in privileged mode that bypasses authorization checks:

```java
cdsRuntime.requestContext().privilegedUser().run(privilegedContext -> {
  assert privilegedContext.getUserInfo().isPrivileged();
  // ...
  // ... service calls in this scope pass the generic authorization handler
});
```

# Spring Boot Integration

This section describes the [Spring Boot](https://spring.io/projects/spring-boot) integration of the CAP Java SDK. Classic Spring isn't supported.

Running your application with the Spring Boot framework offers a number of benefits that simplify the development and maintenance of the application to a great extent. Spring not only provides a rich set of libraries and tools for the most common challenges in development, you also profit from a huge community, which constantly contributes optimizations, bug fixes, and new features. As Spring Boot is not only widely accepted but also the most popular application framework, CAP Java SDK comes with a seamless integration of Spring Boot as described in the following sections.

## Integration Configuration

To make your web application ready for Spring Boot, you need to make sure that the following Spring dependencies are referenced in your `pom.xml` (group ID `org.springframework.boot`):

* `spring-boot-starter-web`
* `spring-boot-starter-jdbc`
* `spring-boot-starter-security` (optional)

In addition, for activating the Spring integration of CAP Java, the following runtime dependency is required:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-framework-spring-boot</artifactId>
  <version>${cds.services.version}</version>
  <scope>runtime</scope>
</dependency>
```

It might be easier to use the CDS starter bundle `cds-starter-spring-boot-odata`, which not only comprises the necessary Spring dependencies, but also configures the OData V4 protocol adapter:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot-odata</artifactId>
  <version>${cds.services.version}</version>
</dependency>
```

::: tip
If you refrain from adding explicit Spring or Spring Boot dependencies in your service configuration, the CDS integration libraries transitively retrieve the recommended Spring Boot version for the current CAP Java version.
:::

## Integration Features

Besides common Spring features such as dependency injection and a sophisticated [test framework](./developing-applications/testing), the following features are available in Spring CAP applications:

* CDS event handlers within custom Spring beans are automatically registered at startup.
* Full integration into Spring transaction management (`@Transactional` is supported).
* A number of CAP Java SDK interfaces are exposed as [Spring beans](#exposed-beans) and are available in the Spring application context, such as technical services, the `CdsModel`, or the `UserInfo` in current request scope.
* *Automatic* configuration of XSUAA, IAS, and [mock user authentication](./security#mock-users) by means of Spring security configuration.
* Integration of the `cds` property section into Spring properties. See section [Externalized Configuration](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config) in the Spring Boot documentation for more details.
* [The cds actuator](./operating-applications/observability#spring-boot-actuators) exposing monitoring information about CDS runtime and security.
* [The DB health check indicator](./operating-applications/observability#spring-health-checks), which also applies to tenant-aware DB connections.

::: tip
None of the listed features is available out of the box in case you choose to pack and deploy your web application as a plain Java Servlet in a *war* file.
:::

## CDS Spring Beans { #exposed-beans}

| Bean | Description | Example |
| :--- | :--- | :--- |
| [CdsRuntime](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/CdsRuntime.html) | Runtime instance (singleton) | `@Autowired`<br>`CdsRuntime runtime;` |
| [CdsRuntimeConfigurer](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/runtime/CdsRuntimeConfigurer.html) | Runtime configuration instance (singleton) | `@Autowired`<br>`CdsRuntimeConfigurer configurer;` |
| [Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/Service.html) | All kinds of CDS services, application services, and technical services | `@Autowired`<br>`@Qualifier(CatalogService_.CDS_NAME)`<br>`private ApplicationService cs;`<br><br>`@Autowired`<br>`private PersistenceService ps;` |
| [ServiceCatalog](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/ServiceCatalog.html) | The catalog of all available services | `@Autowired`<br>`ServiceCatalog catalog;` |
| [CdsModel](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/reflect/CdsModel.html) | The current model | `@Autowired`<br>`CdsModel model;` |
| [UserInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/UserInfo.html) | Information about the authenticated user | `@Autowired`<br>`UserInfo userInfo;` |
| [AuthenticationInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/authentication/AuthenticationInfo.html) | Authentication claims | `@Autowired`<br>`AuthenticationInfo authInfo;` |
| [ParameterInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/ParameterInfo.html) | Information about request parameters | `@Autowired`<br>`ParameterInfo paramInfo;` |
| [Messages](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/messages/Messages.html) | Interface to write messages | `@Autowired`<br>`Messages messages;` |
| [FeatureTogglesInfo](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/request/FeatureTogglesInfo.html) | Information about feature toggles | `@Autowired`<br>`FeatureTogglesInfo ftsInfo;` |
| [CdsDataStore](https://javadoc.io/doc/com.sap.cds/cds4j-api/latest/com/sap/cds/CdsDataStore.html) | Direct access to the default data store | `@Autowired`<br>`CdsDataStore ds;` |

# Developing CAP Java Applications

Learn here about developing a CAP Java application.

# Building Applications

One of the key [CAP design principles](../../about/#open-and-opinionated) is to be an opinionated but yet open framework. Giving clear guidance for cutting-edge technologies on the one hand and still keeping the door wide open for custom choice on the other hand demands a highly flexible CAP Java runtime stack. The [modular architecture](#modular_architecture) reflects this requirement, allowing a fine-grained and flexible [configuration](#stack_configuration) based on standard or custom modules.

## Modular Stack Architecture { #modular_architecture}

### Overview

One of the basic design principles of CAP Java is to keep orthogonal functionality separated in independent components. The obvious advantage of this decoupling is that it makes concrete components exchangeable independently. Hence, it reduces the risk of expensive adaptions in custom code, which can become necessary due to new requirements with regard to the platform environment or the used version of platform services. Hence, the application is [platform **and** service agnostic](../../about/best-practices#agnostic-by-design).

For instance, custom code doesn't need to be written against the chosen type of persistence service, but can use the generic persistence service based on [CQL](../working-with-cqn/../working-with-cql/query-api). Likewise, the application isn't aware of the concrete (cloud) platform environment in which it gets embedded. Consequently, preparing an application to be deployable in different platform contexts is rather a matter of configuration than of code adaption.

Consequently, CAP Java doesn't determine the technology the application is built on. But it comes with a chosen set of industry-proven frameworks that can be consumed easily. Nevertheless, you can override the defaults separately depending on the demands in your scenario.
Moreover, the fine-grained modularization allows you to assemble a minimum set of components, which is necessary to fulfill the application-specific requirements. This reduces resource consumption at runtime as well as maintenance costs significantly.

Another helpful result of the described architecture is that it simplifies local testing massively. Firstly, as components are coupled weakly, you can define the actual test scope precisely and concentrate on the parts that need a high test coverage. Components outside of the test scope are replaceable with mocks, which ideally simulate all the possible corner cases. Alternatively, you can even configure tests on integration level to be executed locally if you replace all dependencies to remote services by local service providers. A common example for this is to run the application locally on H2 instead of SAP HANA.

The following diagram illustrates the modular stack architecture and highlights the generic components:

![This screenshot is explained in the accompanying text.](./assets/modularized-architecture.png){}

You can recognize five different areas of the stack, which comprise components according to different tasks:

* The mandatory [application framework](#application-framework) defines the runtime basis of your application, typically comprising a web server.
* [Protocol adapters](#protocol-adapters) map protocol-specific web events into [CQN](../../cds/cqn) events for further processing.
* The resulting CQN events are passed to [service providers](#service-providers) or the mandatory core runtime, which drives the processing of the event.
* The [CQN execution engine](#cqn-execution-engine) is capable of translating [CQN](../../cds/cqn) statements into native statements of a data sink such as a persistence service or remote service.
* [Application features](#application-features) are optional application extensions, for instance, to add multitenancy capabilities or a platform service integration.
### Application Framework

Before starting the development of a new CAP-based application, an appropriate application framework to build on needs to be chosen. The architecture of the chosen framework not only has a strong impact on the structure of your project, but it also affects efforts for maintenance as well as support capabilities. The framework provides the basis of your web application in terms of a runtime container in which your business code can be embedded and executed. This helps to separate your business logic from common tasks like processing HTTP/REST endpoints, including basic web request handling. Typically, a framework also provides you with a rich set of generic tools for recurring tasks like configuration, localization, or logging. In addition, some frameworks come with higher-level concepts like dependency injection or sophisticated testing infrastructure.

CAP Java positions [Spring](https://spring.io), or more precisely [Spring Boot](https://spring.io/projects/spring-boot), as the first-choice application framework, which is seamlessly integrated. Spring comes as a rich set of industry-proven frameworks, libraries, and tools that greatly simplify custom development. Spring Boot also allows the creation of self-contained applications that are easy to configure and run.

As all other components in the different layers of the CAP Java stack are decoupled from the concrete application framework, you aren't obligated to build on Spring. In some scenarios, it might even be preferable to run the (web) service with minimal resource consumption or with the smallest possible usage of open source dependencies. In this case, a solution based on plain Java Servlets could be favorable. Lastly, in case you want to run your application on a third-party application framework, you're free to bundle it with CAP modules and provide the glue code, which is necessary for integration.
### Protocol Adapters

The CAP runtime is based on an [event](../../about/best-practices#events)-driven approach. Generally, [service](../../about/best-practices#services) providers are the consumers of events, that means, they do the actual processing of events in [handlers](../../guides/providing-services#event-handlers). During execution, services can send events to other service providers and consume the results. The native query language in CAP is [CQN](../../cds/cqn), which is accepted by all services that deal with data query and manipulation. Inbound requests therefore need to be mapped to corresponding CQN events, which are sent to an accepting application service (see concept [details](../../about/best-practices#querying)) afterwards. Mapping the ingress protocol to CQN essentially summarizes the task of the protocol adapters depicted in the diagram.

The most prominent example is the [OData V4](https://www.odata.org/documentation/) protocol adapter, which is fully supported by CAP Java. Further HTTP-based protocols can be added in the future, but often applications require specific protocols, most notably [RESTful](https://en.wikipedia.org/wiki/Representational_state_transfer) ones. Such application-specific protocols can easily be implemented by means of Spring RestControllers.

The modular architecture allows adding custom protocol adapters in a convenient manner, which can be plugged into the stack at runtime. Note that different endpoints can be served by different protocol adapters at the same time.

### Service Providers { #service-providers}

Services have different purposes. For instance, CDS model services provide an interface to work with persisted data of your [domain model](../../guides/domain-modeling). Other services are rather technical, for example, hiding the consumption API of external services behind a generic interface.
As described in CAP's [core concepts](../../about/best-practices#services), services share the same generic provider interface and are implemented by event handlers. The service provider layer contains all generic services, which are auto-exposed by CAP Java according to the appropriate CDS model. In addition, technical services are offered, such as the [Persistence Service](../cqn-services/#persistenceservice) or [Auditlog Service](../auditlog#auditlog-service), which can be consumed in custom service handlers.

In case the generic handler implementation of a specific service doesn't match the requirements, you can extend or replace it with custom handler logic that fits your business needs. See section [Event Handlers](../event-handlers/) for more details.

### CQN Execution Engine { #cqn-execution-engine}

The CQN execution engine is responsible for processing the passed CQN events and translating them to native statements that get executed in a target persistence service like SAP HANA, PostgreSQL, or H2. CQN statements can be built conveniently in a [fluent API](../working-with-cqn/../working-with-cql/query-api). In the future, additional targets can be added to the list of supported outbound sources.

### Application Features { #application-features}

The CAP Java architecture allows **additional modules to be plugged in at runtime**. This plugin mechanism makes the architecture open for future extensions and allows context-based configuration. It also enables you to override standard behavior with custom-defined logic in all different layers. Custom [plugins](../building-plugins) are automatically loaded by the runtime and can bring CDS models, CDS services, adapters, or just handlers for existing services.

::: info
Plugins are optional modules that adapt runtime behaviour.
:::

CAP Java makes use of the plugin technique itself to offer optional functionality. Examples are [SAP Event Mesh](../messaging) and [Audit logging](../auditlog) integration.
Find a full list of standard plugins in [Standard Modules](#standard-modules).

## Stack Configuration { #stack_configuration}

As outlined in section [Modular Stack Architecture](#modular_architecture), the CAP Java runtime is highly flexible. You can choose among modules prepared for different environments and, in addition, also include plugins, which are optional extensions. Which set of modules and plugins is active at runtime is a matter of compile-time and runtime configuration.

At compile time, you can assemble modules from the different layers:

* The [application framework](#application-framework)
* One or more [protocol adapters](#protocol-adapters)
* The core [service providers](#service-providers)
* [Application features](#application-features) to optionally extend the application or adapt to a specific environment

### Module Dependencies

All CAP Java modules are built as [Maven](https://maven.apache.org/) artifacts and are available on [Apache Maven Central Repository](https://search.maven.org/search?q=com.sap.cds). They have `groupId` `com.sap.cds`. Beside the Java libraries (JARs) reflecting the modularized functionality, the group also contains a "bill of materials" (BOM) pom named `cds-services-bom`, which is recommended especially for multi-project builds. It basically helps to control the dependency versions of the artifacts and should be declared in the dependency management of the parent `pom`:

```xml
<properties>
  <cds.services.version>2.6.0</cds.services.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.sap.cds</groupId>
      <artifactId>cds-services-bom</artifactId>
      <version>${cds.services.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

::: tip Keep Versions in Sync
Importing `cds-services-bom` into the `dependencyManagement` of your project ensures that the versions of all CAP modules are in sync.
:::

The actual Maven dependencies specified in your `pom` need to cover all modules that are required to run the web application:

- The application framework.
- At least one protocol adapter (in case of inbound requests).
- The CAP Java runtime.
The dependencies of a Spring Boot application with OData V4 endpoints could look like the following example:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-framework-spring-boot</artifactId>
  <scope>runtime</scope>
</dependency>

<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-adapter-odata-v4</artifactId>
  <scope>runtime</scope>
</dependency>

<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-services-api</artifactId>
</dependency>

<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-services-impl</artifactId>
  <scope>runtime</scope>
</dependency>
```

::: tip API Modules w/o Dependency Scope
Only API modules such as `cds-services-api` or `cds4j-api` should be added without a dependency scope (they gain `compile` scope by default). All other dependencies should have a dedicated scope, like `runtime` or `test`, to prevent misuse.
:::

You aren't obliged to choose one of the prepared application frameworks (identifiable by the `artifactId` prefix `cds-framework`); instead you can define your own application context if required. Similarly, you're free to configure multiple adapters, including custom implementations that map any specific web service protocol.

::: tip Recommended Application Framework
We highly recommend configuring `cds-framework-spring-boot` as the application framework. It provides you with a lot of [integration with CAP](../spring-boot-integration#spring-boot-integration) out of the box, as well as enhanced features, such as dependency injection and auto-configuration.
:::

Additional application features (plugins) you want to use can be added as additional dependencies. For example, the following is required to make your application multitenancy aware:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-mt</artifactId>
  <scope>runtime</scope>
</dependency>
```

Choosing a feature by adding the Maven dependency *at compile time* enables the application to make use of the feature *at runtime*. If a chosen feature misses the required environment at runtime, the feature won't be activated. Together with the fact that all features have a built-in default implementation ready for local usage, you can run the application locally with the same set of dependencies as for productive mode. For instance, the persistence feature `cds-feature-hana` requires a valid `hana` binding in the environment.
Hence, during local development without this binding, the feature gets deactivated and the stack falls back to the default feature adapted for H2.

#### Standard Modules { #standard-modules }

CAP Java comes with a rich set of prepared modules for all different layers of the stack:

**Application frameworks**:

* `cds-framework-spring-boot`: Makes your application a Spring Boot application.
* `cds-framework-plain`: Adds support to run as a plain Java Servlet-based application.

**Protocol adapters**:

* `cds-adapter-odata-v4`: Auto-exposes Application Services as OData V4 endpoints.
* `cds-adapter-odata-v2`: Auto-exposes Application Services as OData V2 endpoints.

**Core runtime**:

* `cds-adapter-api`: Generic protocol adapter interface to be implemented by custom adapters.
* `cds-services-api`: Interface of the CAP Java SDK that custom handler or adapter code compiles against.
* `cds-services-impl`: Implementation of the core CAP Java runtime (**mandatory**).

**Application plugins**:

* `cds-feature-cloudfoundry`: Makes your application aware of the SAP BTP, Cloud Foundry environment.
* `cds-feature-k8s`: Service binding support for SAP BTP, Kyma Runtime.
* `cds-feature-jdbc`: Consuming JDBC persistences using the CDS4j JDBC runtime.
* `cds-feature-hana`: Makes your application aware of SAP HANA data sources.
* `cds-feature-postgresql`: Makes your application aware of PostgreSQL data sources.
* `cds-feature-xsuaa`: Adds [XSUAA](https://github.com/SAP/cloud-security-xsuaa-integration)-based authentication to your application.
* `cds-feature-identity`: Adds [Identity Services](https://github.com/SAP/cloud-security-xsuaa-integration) integration covering IAS to your application.
* `cds-feature-mt`: Makes your application multitenancy aware.
* `cds-feature-enterprise-messaging`: Connects your application to SAP Event Mesh.
* `cds-feature-kafka`: Enables intra-application messaging with Apache Kafka.
* `cds-feature-remote-odata`: Adds [Remote Service](../cqn-services/remote-services#remote-services) support.
* `cds-feature-auditlog-v2`: Provides out-of-the-box integration with SAP BTP Audit Log Service V2.
* `cds-integration-cloud-sdk`: Allows smooth integration with the SAP Cloud SDK to connect with remote REST-based services.

::: tip
`cds-feature-cloudfoundry` and `cds-feature-k8s` can be combined to create binaries that support both environments.
:::

### Starter Bundles

To simplify the configuration on the basis of Maven dependencies, CAP Java comes with several starter bundles that help you set up your configuration for the most common use cases quickly:

* `cds-starter-cloudfoundry`: Bundles features to make your application production-ready for the SAP BTP, Cloud Foundry environment. It comprises XSUAA authentication, SAP HANA persistence, the Cloud Foundry environment for SAP BTP, and multitenancy support.
* `cds-starter-k8s`: Bundles features to make your application production-ready for the SAP BTP, Kyma/K8s environment. It comprises XSUAA authentication, SAP HANA persistence, the Kyma/K8s environment for SAP BTP, and multitenancy support.
* `cds-starter-spring-boot`: Bundles all dependencies necessary to set up a web application based on Spring Boot. No protocol adapter is chosen.

The starter bundle `cds-starter-spring-boot` can be combined with any of the other bundles.
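For instance, the two environment features mentioned in the tip can simply be declared side by side; a minimal sketch, assuming that versions are managed via `cds-services-bom`:

```xml
<!-- Sketch: one deployable supporting both Cloud Foundry and Kyma/K8s -->
<dependency>
    <groupId>com.sap.cds</groupId>
    <artifactId>cds-feature-cloudfoundry</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>com.sap.cds</groupId>
    <artifactId>cds-feature-k8s</artifactId>
    <scope>runtime</scope>
</dependency>
```

At runtime, only the feature matching the detected environment is activated.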
An example of a CAP application with OData V4 on the Cloud Foundry environment:

```xml
<dependencies>
    <dependency>
        <groupId>com.sap.cds</groupId>
        <artifactId>cds-starter-spring-boot</artifactId>
    </dependency>

    <dependency>
        <groupId>com.sap.cds</groupId>
        <artifactId>cds-adapter-odata-v4</artifactId>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>com.sap.cds</groupId>
        <artifactId>cds-starter-cloudfoundry</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
```

## Generating Projects with Maven { #the-maven-archetype }

Use the following command line to create a project from scratch with the CDS Maven archetype:

::: code-group

```sh [Mac/Linux]
mvn archetype:generate -DarchetypeArtifactId=cds-services-archetype -DarchetypeGroupId=com.sap.cds -DarchetypeVersion=RELEASE
```

```cmd [Windows]
mvn archetype:generate -DarchetypeArtifactId=cds-services-archetype -DarchetypeGroupId=com.sap.cds -DarchetypeVersion=RELEASE
```

```powershell [Powershell]
mvn archetype:generate `
  -DarchetypeArtifactId=cds-services-archetype `
  -DarchetypeGroupId=com.sap.cds `
  -DarchetypeVersion=RELEASE
```

:::
It supports the following command-line options:

| Option | Description |
| -- | -- |
| `-DgroupId=<groupId>` | The `groupId` of the Maven artifact for the new project. If not specified, Maven prompts for user input. |
| `-DartifactId=<artifactId>` | The `artifactId` of the Maven artifact for the new project. If not specified, Maven prompts for user input. |
| `-Dversion=<version>` | The `version` of the Maven artifact for the new project. Defaults to `1.0.0-SNAPSHOT`. |
| `-Dpackage=<package>` | The Java package for your project's classes. Defaults to `${groupId}.${artifactId}`. |
| `-DincludeModel=true` | Adds a minimalistic sample CDS model to the project. |
| `-DincludeIntegrationTest=true` | Adds an integration test module to the project. |
| `-DodataVersion=[v2\|v4]` | Specifies which protocol adapter is activated by default. |
| `-DtargetPlatform=cloudfoundry` | Adds Cloud Foundry target platform support to the project. |
| `-DinMemoryDatabase=[h2\|sqlite]` | Specifies which in-memory database is used for local testing. Defaults to `h2`. |
| `-DjdkVersion=[17\|21]` | Specifies the target JDK version. Defaults to `21`. |
| `-Dpersistence=[true\|false]` | Specifies whether persistence is enabled (`true`) or disabled (`false`). Defaults to `true`. |
| `-DcdsdkVersion=<version>` | Sets the provided cds-dk version in the project. If not specified, the default of CAP Java is used. |

## Building Projects with Maven { #maven-build-options }

You can build and run your application by means of the following Maven command:

```sh
mvn spring-boot:run
```

### CDS Maven Plugin

The CDS Maven plugin provides several goals to perform CDS-related build steps. For instance, the CDS model needs to be compiled to a CSN file, which requires a Node.js runtime with the module `@sap/cds-dk`.
It can be used in CAP Java projects to perform the following build tasks:

- Install Node.js in the specified version
- Install the CDS Development Kit `@sap/cds-dk` in a specified version
- Perform arbitrary CDS commands on a CAP Java project
- Generate Java classes for type-safe access
- Clean a CAP Java project from artifacts of the previous build

Since CAP Java 1.7.0, the CDS Maven archetype sets up projects to leverage the CDS Maven plugin for the previously mentioned build tasks. For an example of how to modify a project generated with an earlier version of the archetype, see [this commit](https://github.com/SAP-samples/cloud-cap-samples-java/commit/ceb47b52b1e30c9a3f6e0ea29e207a3dad3c0190).

See the [CDS Maven Plugin documentation](../assets/cds-maven-plugin-site/plugin-info.html){target="_blank"} for more details.

::: tip
Use the _.cdsrc.json_ file to add project-specific configuration of `@sap/cds-dk` in case the defaults aren't appropriate.
:::

[Learn more about configuration and `cds.env`](../../node.js/cds-env){.learn-more}

### Using a Local cds-dk

Starting with version 3.6.0 of the `cds-services-archetype`, the default setup of a newly created CAP Java project has changed. `@sap/cds-dk` is maintained as a `devDependency` in `package.json` and installed with `npm ci` during the Maven build. The `install-cdsdk` goal is no longer used to install `@sap/cds-dk` locally, and it's also marked as deprecated. The version of `@sap/cds-dk` is no longer maintained in _pom.xml_; it's configured in _package.json_:

```json
{
  "devDependencies": {
    "@sap/cds-dk": "^8.5.1"
  }
}
```

A `package-lock.json` is also created during project creation with the `cds-services-archetype`. The lock file is needed for `npm ci` to run successfully and pins the transitive dependencies of `@sap/cds-dk` to fixed versions. Fixing the versions ensures that the CDS build is fully reproducible.
::: warning
For multitenant applications, ensure that the `@sap/cds-dk` version in the sidecar is in sync.
:::

#### Migrate From Goal `install-cdsdk` to `npm ci` { #migration-install-cdsdk }

To migrate from the deprecated goal `install-cdsdk` to the new `npm ci` approach, the following steps are required:

1. Remove the execution of goal `install-cdsdk` from the `cds-maven-plugin` in _srv/pom.xml_:

   ```xml
   <plugin>
       <groupId>com.sap.cds</groupId>
       <artifactId>cds-maven-plugin</artifactId>
       <version>${cds.services.version}</version>
       <executions>
           <execution>
               <id>cds.install-cdsdk</id>
               <goals>
                   <goal>install-cdsdk</goal>
               </goals>
           </execution>
       </executions>
   </plugin>
   ```

2. Then add an execution of goal `npm` with arguments `ci` instead to the `cds-maven-plugin` in _srv/pom.xml_:

   ```xml
   <execution>
       <id>cds.npm-ci</id>
       <goals>
           <goal>npm</goal>
       </goals>
       <configuration>
           <arguments>ci</arguments>
       </configuration>
   </execution>
   ```

3. Remove the cds-dk version property `cds.install-cdsdk.version` from _pom.xml_:

   ```xml
   <cds.install-cdsdk.version>8.4.2</cds.install-cdsdk.version>
   ```

4. Add `@sap/cds-dk` as a devDependency to _package.json_:

   ```json
   {
     "devDependencies": {
       "@sap/cds-dk": "^8.5.0"
     }
   }
   ```

5. Run `npm install` on the command line to get the _package-lock.json_ created or updated.

6. Finally, do a `mvn clean install` and verify that the installation of `@sap/cds-dk` is done with the new approach.

#### Maintaining cds-dk

1. _package.json_ and `npm ci`
Newly created CAP Java projects maintain `@sap/cds-dk` with a specific version as a devDependency in `package.json`. When you update the version, run `npm install` from the command line to update the `package-lock.json`. `npm ci` then installs the updated version of `@sap/cds-dk`.

2. Goal `install-cdsdk`
Older CAP Java projects that use the `install-cdsdk` goal of the `cds-maven-plugin` don't update `@sap/cds-dk` automatically. By default, the goal skips the installation if it's already installed. To update the `@sap/cds-dk` version:

1. Specify a newer version of `@sap/cds-dk` in your *pom.xml* file.
2. Execute `mvn spring-boot:run` with the additional property `-Dcds.install-cdsdk.force=true` to force the installation of `@sap/cds-dk` in the configured version:

   ```sh
   mvn spring-boot:run -Dcds.install-cdsdk.force=true
   ```

::: tip _Recommendation_
Do this regularly to get the latest bugfixes, but at least with every **major update** of `@sap/cds-dk`.
:::
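With the `package.json`-based setup described earlier, updating is done with standard npm commands; a sketch (the version range shown is an example):

```sh
# Raise the devDependency and refresh package-lock.json in one step
npm install --save-dev @sap/cds-dk@^8

# Show the version that the next `npm ci` in the Maven build will install
npm ls @sap/cds-dk
```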
### Using a Global cds-dk

By default, the build is configured to download a Node.js runtime and the `@sap/cds-dk` tools and install them locally within the project. This step makes the build self-contained, but it also takes more time. You can omit these steps and speed up the Maven build by using the Maven profile `cdsdk-global`.

Prerequisites:

* `@sap/cds-dk` is [globally installed](../../get-started/#setup).
* A Node.js installation is available in the current *PATH* environment.

If these prerequisites are met, you can use the profile `cdsdk-global` by executing:

```sh
mvn spring-boot:run -P cdsdk-global
```

# Running Applications

## Spring Boot Devtools

You can speed up your development turnaround by adding the [Spring Boot Devtools](https://docs.spring.io/spring-boot/docs/current/reference/html/using.html#using.devtools) dependency to your CAP Java application. Just add this dependency to the `pom.xml` of your `srv` module:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
```

Once this is added, you can use the restart capabilities of the Spring Boot Devtools while developing your application in your favorite Java IDE. Any change triggers an automatic application context reload without the need to manually restart the complete application. Besides being a lot faster than a complete restart, this also eliminates manual steps. The application context reload is triggered by any file change on the application's classpath:

* Java classes (for example, custom handlers)
* Anything inside _src/main/resources_
* Configuration files (for example, _application.yaml_)
* Artifacts generated from CDS (_schema.sql_, CSN, EDMX)
* Any other static resource

::: warning Restart for changed Java classes
Spring Boot Devtools only detects changes to _.class_ files. You need to enable the *automatic build* feature in your IDE, which detects source file changes and rebuilds the _.class_ file. Otherwise, you have to manually rebuild your project to restart your CAP Java application.
:::

### CDS Build

The Spring Boot Devtools have no knowledge of any CDS tooling or the CAP Java runtime. Thus, they can't trigger a CDS build if there are changes in the CDS source files. For more information, check the [Local Development Support](#local-development-support) section.

::: tip
CDS builds in particular change numerous resources in your project. To have a smooth experience, define a [trigger file](https://docs.spring.io/spring-boot/docs/current/reference/html/using.html#using.devtools.restart.triggerfile) and [use the `auto-build` goal](#cds-auto-build) of the CDS Maven plugin started from the command line.
:::

## Local Development Support { #local-development-support}

### CDS Watch

In addition to the previously mentioned build tasks, the CDS Maven plugin can also support the local development of your CAP Java application. During development, you often have to perform the same steps to test changes in the CDS model:

1. Modify your CDS model.
1. Build and run your application.
1. Test your changes.

To automate and accelerate these steps, the `cds-maven-plugin` offers the goal `watch`, which can be executed from the command line by using Maven:

```sh
# from your project's root directory
mvn com.sap.cds:cds-maven-plugin:watch

# or from your srv/ folder
cd srv
mvn cds:watch
```

It builds and starts the application and looks for changes in the CDS model. If you change the CDS model, the changes are recognized and a restart of the application is initiated to make them effective.

The `watch` goal uses the `spring-boot-maven-plugin` internally to start the application with the goal `run` (this also includes a CDS build). Therefore, it's required that the application is a Spring Boot application and that you execute the `watch` goal within your service module folder.
When you add the [Spring Boot Devtools](https://docs.spring.io/spring-boot/docs/current/reference/html/using.html#using.devtools) to your project, the `watch` goal can take advantage of the reload mechanism. If your application doesn't use the Spring Boot Devtools, the `watch` goal performs a complete restart of the Spring Boot application after CDS model changes. As an application context reload is always faster than a complete restart, using the Spring Boot Devtools is the preferred approach.

::: warning
On Windows, the `watch` goal only works if the Spring Boot Devtools are enabled.
:::

### CDS Auto-Build

If you want the comfort of an automated CDS build like with the `watch` goal, but want to control your CAP Java application from within the IDE, you can use the `auto-build` goal. This goal reacts to any CDS file change and performs a rebuild of your application's CDS model. However, no CAP Java application is started by the goal. This doesn't depend on Spring Boot Devtools support.

::: tip
If the Spring Boot Devtools configuration of your CAP Java application defines a [trigger file](https://docs.spring.io/spring-boot/docs/current/reference/html/using.html#using.devtools.restart.triggerfile), the `auto-build` goal can detect this and touch the trigger file in case of any file change. The same applies to the `watch` goal.
:::

### Multitenant Applications

With the streamlined MTX, you can run your multitenant application locally along with the MTX sidecar and use SQLite as the database. See [the _Multitenancy_ guide](../../guides/multitenancy/#test-locally) for more information.

## Debugging

You can debug both local and remote Java applications:

- For local applications, it's best to start the application using the integrated debugger of your [preferred IDE](../../tools/cds-editors).
- Especially for remote applications, we recommend [`cds debug`](../../tools/cds-cli#java-applications) to turn on debugging.
# Testing Applications { #testing-cap-java-applications }

This section describes some best practices and recommendations for testing CAP Java applications.

As described in [Modular Architecture](building#starter-bundles#modular_architecture), a CAP Java application consists of weakly coupled components, which enables you to define your test scope precisely and focus on parts that need a high test coverage.

Typical areas that require testing are the [services](../cqn-services/#cdsservices) that dispatch events to [event handlers](../event-handlers/), the event handlers themselves that implement the behavior of the services, and finally the APIs that the application services define and that are exposed to clients through [OData](../cqn-services/application-services#odata-requests).

::: tip
Aside from [JUnit](https://junit.org/junit5/), the [Spring framework](https://docs.spring.io/spring-framework/docs/current/reference/html/index.html) provides much convenience for both unit and integration testing, like dependency injection via [*autowiring*](https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#beans-factory-autowire) or the usage of [MockMvc](https://docs.spring.io/spring-framework/docs/current/reference/html/testing.html#spring-mvc-test-framework) and [*mocked users*](https://docs.spring.io/spring-security/reference/servlet/test/method.html#test-method-withmockuser). So whenever possible, it's recommended to use it for writing tests.
:::

## Sample Tests

To illustrate this, the following examples demonstrate some of the recommended ways of testing. All the examples are taken from the [CAP Java bookshop sample project](https://github.com/SAP-samples/cloud-cap-samples-java/) in a simplified form, so definitely have a look at this as well.
Let's assume you want to test the following custom event handler:

```java
@Component
@ServiceName(CatalogService_.CDS_NAME)
public class CatalogServiceHandler implements EventHandler {

    private final PersistenceService db;

    public CatalogServiceHandler(PersistenceService db) {
        this.db = db;
    }

    @On
    public void onSubmitOrder(SubmitOrderContext context) {
        Integer quantity = context.getQuantity();
        String bookId = context.getBook();

        Optional<Books> book = db.run(Select.from(BOOKS).columns(Books_::stock).byId(bookId)).first(Books.class);

        book.orElseThrow(() -> new ServiceException(ErrorStatuses.NOT_FOUND, MessageKeys.BOOK_MISSING)
            .messageTarget(Books_.class, b -> b.ID()));

        int stock = book.map(Books::getStock).get();

        if (stock >= quantity) {
            db.run(Update.entity(BOOKS).byId(bookId).data(Books.STOCK, stock -= quantity));
            SubmitOrderContext.ReturnType result = SubmitOrderContext.ReturnType.create();
            result.setStock(stock);
            context.setResult(result);
        } else {
            throw new ServiceException(ErrorStatuses.CONFLICT, MessageKeys.ORDER_EXCEEDS_STOCK, quantity);
        }
    }

    @After(event = CqnService.EVENT_READ)
    public void discountBooks(Stream<Books> books) {
        books.filter(b -> b.getTitle() != null).forEach(b -> {
            loadStockIfNotSet(b);
            discountBooksWithMoreThan111Stock(b);
        });
    }

    private void discountBooksWithMoreThan111Stock(Books b) {
        if (b.getStock() != null && b.getStock() > 111) {
            b.setTitle(String.format("%s -- 11%% discount", b.getTitle()));
        }
    }

    private void loadStockIfNotSet(Books b) {
        if (b.getId() != null && b.getStock() == null) {
            b.setStock(db.run(Select.from(BOOKS).byId(b.getId()).columns(Books_::stock)).single(Books.class).getStock());
        }
    }
}
```

::: tip
You can find a more complete sample of the previous snippet in our [CAP Java bookshop sample project](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/main/java/my/bookshop/handlers/CatalogServiceHandler.java).
:::

The `CatalogServiceHandler` here implements two handler methods -- `onSubmitOrder` and `discountBooks` -- that should be covered by tests.

The method `onSubmitOrder` is registered to the `On` phase of a `SubmitOrder` event and makes sure to reduce the stock quantity of the ordered book by the order quantity, or, in case the order quantity exceeds the stock, throws a `ServiceException`.

Whereas `discountBooks` is registered to the `After` phase of a `read` event on the `Books` entity and adds discount information to a book's title if the stock quantity is larger than 111.

## Event Handler Layer Testing

Of these two handler methods, `discountBooks` doesn't actually depend on the `PersistenceService`. That allows us to verify its behavior in a unit test by creating a `CatalogServiceHandler` instance with the help of a `PersistenceService` mock to invoke the handler method on, as demonstrated below:

::: tip
For mocking, you can use [Mockito](https://site.mockito.org/), which is already included with the `spring-boot-starter-test` starter bundle.
:::

```java
@ExtendWith(MockitoExtension.class)
public class CatalogServiceHandlerTest {

    @Mock
    private PersistenceService db;

    @Test
    public void discountBooks() {
        Books book1 = Books.create();
        book1.setTitle("Book 1");
        book1.setStock(10);

        Books book2 = Books.create();
        book2.setTitle("Book 2");
        book2.setStock(200);

        CatalogServiceHandler handler = new CatalogServiceHandler(db);
        handler.discountBooks(Stream.of(book1, book2));

        assertEquals("Book 1", book1.getTitle(), "Book 1 was discounted");
        assertEquals("Book 2 -- 11% discount", book2.getTitle(), "Book 2 was not discounted");
    }
}
```

::: tip
You can find a variant of this sample code also in our [CAP Java bookshop sample project](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/test/java/my/bookshop/handlers/CatalogServiceHandlerTest.java).
:::

Whenever possible, mocking dependencies and just testing the pure processing logic of an implementation allows you to ignore the integration bits and parts of an event handler, which is a solid first layer of your testing efforts.

## Service Layer Testing

[Application Services](../cqn-services/application-services) that are backed by an actual service definition within the CDS model implement an interface, which extends the `Service` interface and offers a common CQN execution API for CRUD events. This API can be used to run CQN statements directly against the service layer, which can be used for testing, too.

To verify the proper discount application in our example, we can run a `Select` statement against the `CatalogService` and assert the result as follows, using a well-known dataset:

```java
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class CatalogServiceTest {

    @Autowired
    @Qualifier(CatalogService_.CDS_NAME)
    private CqnService catalogService;

    @Test
    public void discountApplied() {
        // book with title "The Raven" and a stock quantity of > 111
        Result result = catalogService.run(Select.from(Books_.class).byId("51061ce3-ddde-4d70-a2dc-6314afbcc73e"));

        Books book = result.single(Books.class);
        assertEquals("The Raven -- 11% discount", book.getTitle(), "Book was not discounted");
    }
}
```

As every service in CAP implements the [Service](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/Service.html) interface with its [emit(EventContext)](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/Service.html#emit-com.sap.cds.services.EventContext-) method, another way of testing an event handler is to dispatch an event context via the `emit()` method to trigger the execution of a specific handler method.

Looking at the `onSubmitOrder` method from our example above, we see that it uses an event context called `SubmitOrderContext`.
Therefore, using an instance of that event context, we can trigger the method execution and assert the proper stock reduction, as demonstrated:

```java
@SpringBootTest
public class CatalogServiceTest {

    @Autowired
    @Qualifier(CatalogService_.CDS_NAME)
    private CqnService catalogService;

    @Test
    public void submitOrder() {
        SubmitOrderContext context = SubmitOrderContext.create();

        // ID of a book known to have a stock quantity of 22
        context.setBook("4a519e61-3c3a-4bd9-ab12-d7e0c5329933");
        context.setQuantity(2);
        catalogService.emit(context);

        assertEquals(22 - context.getQuantity(), context.getResult().getStock());
    }
}
```

In the same way, you can verify that the `ServiceException` is thrown when the order quantity exceeds the stock value:

```java
@SpringBootTest
public class CatalogServiceTest {

    @Autowired
    @Qualifier(CatalogService_.CDS_NAME)
    private CqnService catalogService;

    @Test
    public void submitOrderExceedingStock() {
        SubmitOrderContext context = SubmitOrderContext.create();

        // ID of a book known to have a stock quantity of 22
        context.setBook("4a519e61-3c3a-4bd9-ab12-d7e0c5329933");
        context.setQuantity(30);

        assertThrows(ServiceException.class, () -> catalogService.emit(context),
            context.getQuantity() + " exceeds stock for book");
    }
}
```

::: tip
For a more extensive version of the previous `CatalogServiceTest` snippets, have a look at our [CAP Java bookshop sample project](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/test/java/my/bookshop/CatalogServiceTest.java).
:::

## Integration Testing

Integration tests enable us to verify the behavior of a custom event handler execution doing a roundtrip: starting at the protocol adapter layer, going through the whole CAP architecture until it reaches the service and event handler layer, and then back again through the protocol adapter.
As the services defined in our CDS model are exposed as OData endpoints, by using [MockMvc](https://docs.spring.io/spring-framework/docs/current/reference/html/testing.html#spring-mvc-test-framework) we can simply invoke a specific OData request and assert the response from the addressed service.

The following demonstrates this by invoking a `GET` request to the OData endpoint of our `Books` entity, which triggers the execution of the `discountBooks` method of the `CatalogServiceHandler` in our example:

```java
@SpringBootTest
@AutoConfigureMockMvc
public class CatalogServiceITest {

    private static final String booksURI = "/api/browse/Books";

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void discountApplied() throws Exception {
        mockMvc.perform(get(booksURI + "?$filter=stock gt 200&$top=1"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.value[0].title").value(containsString("11% discount")));
    }

    @Test
    public void discountNotApplied() throws Exception {
        mockMvc.perform(get(booksURI + "?$filter=stock lt 100&$top=1"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.value[0].title").value(not(containsString("11% discount"))));
    }
}
```

::: tip
Check out the version in our [CAP Java bookshop sample project](https://github.com/SAP-samples/cloud-cap-samples-java/blob/main/srv/src/test/java/my/bookshop/CatalogServiceITest.java) for additional examples of integration testing.
:::

# Configuring Applications

## Profiles and Properties

This section describes how to configure applications. CAP Java applications can fully leverage [Spring Boot's](../spring-boot-integration) capabilities for [Externalized Configuration](https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config). This enables you to define multiple **configuration profiles** for different scenarios, like local development and cloud deployment.
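Profile-specific settings can, for instance, be kept in one `application.yaml` using Spring Boot's multi-document syntax; a minimal sketch (the property values shown are examples, not recommendations):

```yaml
# Default settings, convenient for local development
cds:
  security:
    mock:
      enabled: true
---
# Settings activated only when the `cloud` profile is active
spring:
  config:
    activate:
      on-profile: cloud
cds:
  security:
    mock:
      enabled: false
```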
For a first introduction, have a look at our [sample application](https://github.com/sap-samples/cloud-cap-samples-java) and the [configuration profiles](https://github.com/SAP-samples/cloud-cap-samples-java/blob/master/srv/src/main/resources/application.yaml) we added there.

Once you're familiar with how to configure your application, start to create your own application configuration. See the full list of [CDS properties](properties) as a reference.

### Production Profile { #production-profile }

When running your application in production, it makes sense to strictly disable some development-oriented features. The production profile configures a set of selected property defaults, recommended for production deployments, at once. By default, the production profile is set to `cloud`. To specify a custom production profile, set `cds.environment.production.profile` to a Spring profile used in your production deployments.

::: tip Production profile = `cloud`
The Java Buildpacks set the `cloud` profile for applications by default. Other active profiles for production deployments are typically set using the environment variable `SPRING_PROFILES_ACTIVE` on your application in your deployment descriptors (`mta.yaml`, Helm charts, etc.).
:::

Property defaults adjusted with the production profile are the following:

- Index page is disabled: `cds.index-page.enabled` is set to `false`
- Mock users are strictly disabled: `cds.security.mock.enabled` is set to `false`

Note that explicit configuration in the application takes precedence over property defaults from the production profile.

## Using SAP Java Buildpack { #buildpack }

In the SAP BTP Cloud Foundry environment, the Java runtime that is used to run your application is defined by the so-called [buildpack](https://docs.cloudfoundry.org/buildpacks/). For CAP applications, we advise you to use the [SAP Java Buildpack 2](https://help.sap.com/docs/btp/sap-business-technology-platform/sap-jakarta-buildpack).
CAP applications built with Spring Boot don't require any specific configuration for the buildpack and run using the [Java Main](https://help.sap.com/docs/btp/sap-business-technology-platform/java-main) runtime by default.

To configure the buildpack for Java 21 with the SapMachine JRE, add the following lines to your `mta.yaml` right under your Java service definition:

::: code-group

```yaml [mta.yaml]
parameters:
  buildpack: sap_java_buildpack_jakarta
properties:
  JBP_CONFIG_COMPONENTS: "jres: ['com.sap.xs.java.buildpack.jre.SAPMachineJRE']"
  JBP_CONFIG_SAP_MACHINE_JRE: '{ version: 21.+ }'
```

:::

::: warning SAP Business Application Studio
If you develop your application in SAP Business Application Studio and Java 21 isn't available there, use Java 17 instead.
:::

# CDS Properties

The following table lists all configuration properties that can be used to configure CAP Java {{ version }}. You can set them in your project's `application.yml`.

::: tip
In property files, `<index>` should be replaced with a number and `<key>` with an arbitrary string. In YAML files, you can use standard YAML list and map structures.
:::

[Learn more about Spring Properties.](https://docs.spring.io/spring-boot/how-to/properties-and-configuration.html){.learn-more}
| Property | Type | Default Value | Description |
| -- | -- | -- | -- |
# Operating CAP Java Applications

Learn here about operating a CAP Java application.

# Optimizing Applications

## Profiling { #profiling}

To minimize overhead at runtime, [monitoring](observability#monitoring) information is gathered rather on a global application level and hence might not be sufficient to troubleshoot specific issues. In such a situation, the use of more focused profiling tools can be an option. Typically, such tools are capable of focusing on a specific aspect of an application (for instance, CPU or memory management), but they come with an additional overhead and should only be enabled when needed. Hence, they need to meet the following requirements:

* Switchable at runtime
* Use a communication channel not exposed to unauthorized users
* Not interfering with or even blocking business requests

How can dedicated Java tools access the running services in a secure manner? The following diagram shows recommended options that **do not require exposed HTTP endpoints**:

![This screenshot is explained in the accompanying text.](./assets/remote-tracing.png){}

As an authorized operator, you can access the container and start tools [locally](#profiling-local) in a CLI session running with the same user as the target process. Depending on the protocol, the JVM supports on-demand connections, for example, JVM diagnostic tools such as `jcmd`. Alternatively, additional JVM configuration is required as a prerequisite (JMX).

A number of tools also support [remote](#profiling-remote) connections in a secure way. Instead of running the tool locally, a remote daemon is started as a proxy in the container, which connects the JVM with a remote profiling tool via an SSH tunnel.

### Local Tools { #profiling-local}

Various CLI-based tools for JVMs are delivered with the JDK.
Popular examples are [diagnostic tools](https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/toc.html) such as `jcmd`, `jinfo`, `jstack`, and `jmap`, which help to fetch basic information about the JVM process regarding all relevant aspects. You can take stack traces and heap dumps, fetch garbage collection events, read Java properties, and so on.

The SAP JVM comes with additional handy profiling tools: `jvmmon` and `jvmprof`. The latter, for instance, provides a helpful set of traces that allow a deep insight into JVM resource consumption. The collected data is stored within a `prf`-file and can be analyzed offline in the [SAP JVM Profiler frontend](https://wiki.scn.sap.com/wiki/display/ASJAVA/Features+and+Benefits).

### Remote Tools { #profiling-remote}

It's even more convenient to interact with the JVM through a frontend client running on a local machine. As already mentioned, a remote daemon as the endpoint of an ssh tunnel is required. Some representative tools are:

- [SAP JVM Profiler](https://wiki.scn.sap.com/wiki/display/ASJAVA/Features+and+Benefits) for SAP JVM with [Memory Analyzer](https://www.eclipse.org/mat/) integration. Find detailed documentation on how to set up a secure remote connection in [Profiling an Application Running on SAP JVM](https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/e7097737709842b7bb1c3b9bf3d688b6.html).
- [JProfiler](https://www.ej-technologies.com/products/jprofiler/overview.html) is a popular Java profiler available for different platforms and IDEs.

### Remote JMX-Based Tools { #profiling-jmx}

Java's standardized framework [Java Management Extensions](https://www.oracle.com/java/technologies/javase/javamanagement.html) (JMX) allows introspection and monitoring of the JVM's internal state via exposed Management Beans (MBeans). MBeans also allow triggering operations at runtime, for instance setting a logger level.
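To get a feel for what MBeans expose, the following self-contained sketch queries the JVM's own platform MBean server — the same server a remote JMX client connects to — reads an attribute, and triggers an operation on the standard `java.util.logging` MBean. All MBean names used here are standard JDK ones, not CAP-specific:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanIntrospection {
    public static void main(String[] args) throws Exception {
        // The platform MBean server is the same server remote JMX clients talk to.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Introspection: read an attribute of the standard Runtime MXBean.
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        Object uptime = server.getAttribute(runtime, "Uptime");
        System.out.println("JVM uptime (ms): " + uptime);

        // Operation: change the root logger's level via the java.util.logging MBean.
        ObjectName logging = new ObjectName("java.util.logging:type=Logging");
        server.invoke(logging, "setLoggerLevel",
            new Object[] { "", "FINE" },
            new String[] { "java.lang.String", "java.lang.String" });
        System.out.println("Root logger level: "
            + server.invoke(logging, "getLoggerLevel",
                new Object[] { "" }, new String[] { "java.lang.String" }));
    }
}
```

A remote JMX client such as JConsole performs exactly these `getAttribute`/`invoke` calls over RMI once the tunnel described below is in place.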
Spring Boot automatically creates a bunch of MBeans reflecting the current [Spring configuration and metrics](observability#spring-boot-actuators) and offers convenient ways for customization. To activate JMX in Spring, add the following property to your application configuration:

```yaml
spring.jmx.enabled: true
```

In addition, to enable remote access, add the following JVM parameters to open JMX on a specific port (for example, 5000) in the local container:

```sh
-Djava.rmi.server.hostname=localhost
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=
-Dcom.sun.management.jmxremote.rmi.port=
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```

::: warning Don't use public endpoints with JMX/MBeans
Exposing JMX/MBeans via a public endpoint can pose a serious security risk.
:::

To establish a connection with a remote JMX client, first open an ssh tunnel to the application via `cf` CLI as operator user:

```sh
cf ssh -N -T -L :localhost:
```

Afterwards, connect to `localhost:` in the JMX client. Common JMX clients are:

- [JConsole](https://openjdk.java.net/tools/svc/jconsole/), which is part of the JDK delivery.
- [OpenJDK Mission Control](https://github.com/openjdk/jmc), which can be installed separately.

## GraalVM Native Image Support { #graalvm-native-image-support-beta }

Since Spring Boot 3, it's possible to compile Spring Boot applications to stand-alone native executables leveraging GraalVM Native Images. Native Image applications have faster startup times and require less memory. CAP Java provides compatibility with the Native Image technology.

[Learn more about Native Image support in Spring Boot.](https://docs.spring.io/spring-boot/how-to/native-image/index.html){.learn-more}

If you want to compile your application as a native executable, the following boundary conditions need to be considered:

1. The GraalVM Native Image build analyzes your application from the `main` entry point.
Only the code that is reachable through static analysis is included in the native image. This means that the full classpath needs to be known and available already at build time.

2. Dynamic elements of your code, such as usage of reflection, JDK proxies, or resources, need to be registered with the GraalVM Native Image build. You can learn more about this in the [GraalVM Native Image documentation](https://www.graalvm.org/latest/reference-manual/native-image/metadata/).

   ::: tip
   Many runtime hints for reflection, JDK proxy usage, and resources are contributed automatically to the Native Image build. This includes:
   - Required reflection for event handler classes defined in application code.
   - JDK proxies for interfaces generated from the application's CDS model by the CDS Maven Plugin.
   :::

3. Spring Boot automatically defines and fixes all bean definitions of your application at build time. If you have bean definitions that are created based on conditions on externalized configuration or profiles, you need to supply these triggers to the Native Image build.

   CAP Java also creates various bean definitions based on service bindings. Therefore, the metadata of service bindings expected at runtime needs to be provided already at build time. This is similar to the information you define in deployment descriptors (for example `mta.yaml` or Helm charts).

   The Spring Boot Maven Plugin allows you to [configure the Spring profiles](https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto.aot.conditions) that are used during the Native Image build. You can supply information to the Native Image build in a `native-build-env.json`, which you can configure together with the Spring profile.
For example, in the `srv/pom.xml`:

::: code-group
```json [native-build-env.json]
{
  "hana": [ { "name": "" } ],
  "xsuaa": [ { "name": "" } ]
}
```
```xml [srv/pom.xml]
<profile>
  <id>native</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>process-aot</id>
            <configuration>
              <profiles>
                <profile>cloud</profile>
              </profiles>
              <jvmArguments>-Dcds.environment.local.defaultEnvPath=../native-build-env.json</jvmArguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```
:::

When using Spring Boot's parent POM, you can easily trigger the Native Image build by executing `mvn spring-boot:build-image -Pnative`. This builds a Docker image using Cloud Native Buildpacks including a minimized OS and your application. You can launch the Docker image by running `docker run --rm -p 8080:8080 :`.

::: tip
If you want to try out CAP's Native Image support, you can use the [SFlight sample application](https://github.com/SAP-samples/cap-sflight), which is prepared for GraalVM Native Images. Note that SFlight's native executable is built and configured to use SAP HANA and XSUAA by default. You therefore need to run it with the `cloud` profile and supply an SAP HANA and XSUAA service binding. Alternatively, you can make corresponding adaptations in `native-build-env.json` and `srv/pom.xml` to build the native executable for a different set of service bindings and profile.
:::

# Observability

Presents a set of recommended tools that help to understand the current status of running CAP services.

## Logging { #logging}

When tracking down erroneous behavior, *application logs* often provide useful hints to reconstruct the executed program flow and isolate functional flaws. In addition, they help operators and supporters to keep an overview of the status of a deployed application. In contrast, messages created using the [Messages API](../event-handlers/indicating-errors#messages) in custom handlers are reflected to the business user who has triggered the request.
### Logging Façade { #logging-facade}

Various logging frameworks for Java have evolved and are widely used in open source software. Most prominent are `logback`, `log4j`, and `JDK logging` (`java.util.logging` or briefly `jul`). These well-established frameworks more or less deal with the same problem domain, that is:

- Logging API for (parameterized) messages with different log levels.
- Hierarchical logger components that can be configured independently.
- Separation of log input (messages, parameters, context) and log output (format, destination).

CAP Java SDK seamlessly integrates with the Simple Logging Façade for Java ([SLF4J](https://www.slf4j.org)), which provides an abstraction layer for logging APIs. Applications compiled against SLF4J are free to choose a logging framework implementation at deployment time. Most popular libraries integrate natively with SLF4J, but it can also bridge legacy logging API calls:

![](./assets/slf4j.png){}

### Logger API { #logging-api}

The SLF4J API is simple to use. Retrieve a logger object, choose the log method of the corresponding log level, and compose a message with optional parameters via the Java API:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

Logger logger = LoggerFactory.getLogger("my.loggers.order.consolidation");

@After(event = CqnService.EVENT_READ)
public void readAuthors(List<Orders> orders) {
  orders.forEach(order -> {
    logger.debug("Consolidating order {}", order);
    consolidate(order);
  });
  logger.info("Consolidated {} orders", orders.size());
}
```

Some remarks:

* [Spring Boot Logging](#logging-configuration) shows how to configure loggers individually to control the emitted log messages.
* The API is robust with regard to the passed parameters: no exception is thrown on parameter mismatch or invalid parameters.

::: tip
Prefer *passing parameters* over *concatenating* the message.
`logger.info("Consolidating order " + order)` creates the message `String` regardless of the configured log level. This can have a negative impact on performance.
:::

::: tip
A `ServiceException` thrown in handler code and indicating a server error (that is, HTTP response code `5xx`) is *automatically* logged as error along with a stacktrace.
:::

### Spring Boot Logging { #logging-configuration}

To set up a logging system, a concrete logging framework has to be chosen and, if necessary, corresponding SLF4J adapters. In case your application runs on Spring Boot and you use the Spring starter packages, **you most likely don't have to add any explicit dependency**, as the bundle `spring-boot-starter-logging` is part of all Spring Boot starters. It provides `logback` as the default logging framework and, in addition, adapters for the most common logging frameworks (`log4j` and `jul`).

Similarly, no specific log output configuration is required for local development, as per default, log messages are written to the console in human-readable form, containing timestamp, thread, and logger component information. To customize the log output, for instance to add some application-specific information, you can create corresponding configuration files (such as `logback-spring.xml` for logback). Add them to the classpath and Spring picks them up automatically. Consult the documentation of the dedicated logging framework to learn about the configuration file format.

All log messages with a level greater than or equal to the configured level of the corresponding logger object are written.
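The tip above about parameterized messages can be illustrated with a small, dependency-free sketch. It uses plain JDK logging (`jul`) instead of SLF4J so it runs without extra libraries — with SLF4J, the `{}` placeholders defer message construction in the same way the `Supplier` does here. The logger name and order id are just examples:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogging {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("my.loggers.order");
        logger.setLevel(Level.INFO); // FINE (≈ DEBUG) messages are filtered out

        String order = "4711"; // hypothetical order id

        // Eager concatenation: the message String is built although it's discarded.
        logger.fine("Consolidating order " + order);

        // Lazy variant: the Supplier is only evaluated if FINE is enabled.
        final boolean[] evaluated = { false };
        logger.fine(() -> {
            evaluated[0] = true;
            return "Consolidating order " + order;
        });
        System.out.println("Supplier evaluated: " + evaluated[0]); // prints "Supplier evaluated: false"
    }
}
```

Since the logger is set to `INFO`, the `FINE` message is suppressed and the supplier is never evaluated — while the concatenated variant still pays the cost of building the `String`.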
The following log levels are available:

| Level | Use case |
| :------ | :-------- |
| `OFF` | Turns off the logger |
| `TRACE` | Tracks the application flow only |
| `DEBUG` | Shows diagnostic messages |
| `INFO` | Shows important flows of the application (default level) |
| `WARN` | Indicates potential error scenarios |
| `ERROR` | Shows errors and exceptions |

With Spring Boot, there are different convenient ways to configure log levels in a development scenario, which are explained in the following sections.

#### At Compile Time { #logging-configuration-compiletime}

Log levels can be configured in the application configuration:

::: code-group
```yaml [srv/src/main/resources/application.yaml]
# Set new default level
logging.level.root: WARN
# Adjust custom logger
logging.level.my.loggers.order.Consolidation: INFO
# Turn off all loggers matching org.springframework.*:
logging.level.org.springframework: OFF
```
:::

Note that loggers are organized in packages, for instance `org.springframework` controls all loggers that match the name pattern `org.springframework.*`.

#### At Runtime with Restart { #logging-configuration-restart}

You can overrule the given logging configuration with a corresponding environment variable. For instance, to set loggers in package `my.loggers.order` to `DEBUG` level, set the following environment variable:

```sh
LOGGING_LEVEL_MY_LOGGERS_ORDER=DEBUG
```

and restart the application.

::: tip
Note that Spring normalizes the variable's suffix to lower case, for example, `MY_LOGGERS_ORDER` to `my.loggers.order`, which actually matches the package name. However, configuring a dedicated logger (such as `my.loggers.order.Consolidation`) can't work in general, as class names are typically in camel case.
:::

::: tip
On SAP BTP, Cloud Foundry environment, you can add the environment variable with `cf set-env LOGGING_LEVEL_MY_LOGGERS_ORDER DEBUG`. Don't forget to restart the application with `cf restart ` afterwards.
The additional configuration survives an application restart but might be lost on redeployment.
:::

#### At Runtime Without Restart { #logging-configuration-runtime}

If configured, you can use [Spring actuators](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html) to view and adjust logging configuration. Disregarding security aspects and provided that the `loggers` actuator is configured as HTTP endpoint on path `/actuator/loggers`, the following example HTTP requests show how to accomplish this:

```sh
# retrieve state of all loggers:
curl https:///actuator/loggers

# retrieve state of single logger:
curl https:///actuator/loggers/my.loggers.order.consolidation
#> {"configuredLevel":null,"effectiveLevel":"INFO"}

# Change logging level:
curl -X POST -H 'Content-Type: application/json' -d '{"configuredLevel": "DEBUG"}' \
  https:///actuator/loggers/my.loggers.order.consolidation
```

[Learn more about Spring actuators and security aspects in the section **Metrics**.](#spring-boot-actuators){ .learn-more}

#### Predefined Loggers { #predefined-loggers}

CAP Java SDK has useful built-in loggers that help to track runtime behavior:

| Logger | Use case |
| :------------------------------| :-------- |
| `com.sap.cds.security.authentication` | Logs authentication and user information |
| `com.sap.cds.security.authorization` | Logs authorization decisions |
| `com.sap.cds.odata.v2` | Logs OData V2 request handling in the adapter |
| `com.sap.cds.odata.v4` | Logs OData V4 request handling in the adapter |
| `com.sap.cds.handlers` | Logs sequence of executed handlers as well as the lifecycle of RequestContexts and ChangeSetContexts |
| `com.sap.cds.persistence.sql` | Logs executed queries such as CQN and SQL statements (w/o parameters) |
| `com.sap.cds.persistence.sql-tx` | Logs transactions, ChangeSetContexts, and connection pool |
| `com.sap.cds.multitenancy` | Logs tenant-related events and sidecar communication |
| `com.sap.cds.messaging` | Logs messaging configuration and messaging events |
| `com.sap.cds.remote.odata` | Logs request handling for remote OData calls |
| `com.sap.cds.remote.wire` | Logs communication of remote OData calls |
| `com.sap.cds.auditlog` | Logs audit log events |

Most of these loggers log on DEBUG level by default, as they produce quite some output. It's convenient to control loggers on package level, for example, `com.sap.cds.security` covers all loggers that belong to this package (namely `com.sap.cds.security.authentication` and `com.sap.cds.security.authorization`).

::: tip
Spring comes with its own [standard logger groups](https://docs.spring.io/spring-boot/docs/2.1.1.RELEASE/reference/html/boot-features-logging.html#boot-features-custom-log-groups). For instance, `web` is useful to track HTTP requests. However, HTTP access logs gathered by the Cloud Foundry platform router are also available in the application log.
:::

### Logging Service { #logging-service}

The SAP BTP platform offers the [SAP Application Logging service for SAP BTP](https://help.sap.com/docs/r/product/APPLICATION_LOGGING) and its recommended successor, the [SAP Cloud Logging](https://help.sap.com/docs/cloud-logging) service, to which bound Cloud Foundry applications can stream logs. Establishing a connection is the same for both services: the application needs to be [bound to the service](https://help.sap.com/docs/application-logging-service/sap-application-logging-service/produce-logs-container-metrics-and-custom-metrics). To match the log output format and structure expected by the logging service, it's recommended to use a prepared encoder from [cf-java-logging-support](https://github.com/SAP/cf-java-logging-support) that matches the configured logger framework.
`logback` is used by default as outlined in [Logging Frameworks](#logging-configuration):

```xml [srv/pom.xml]
<dependency>
  <groupId>com.sap.hcp.cf.logging</groupId>
  <artifactId>cf-java-logging-support-logback</artifactId>
  <version>${logging.support.version}</version>
</dependency>
```

By default, the library appends additional fields to the log output, such as correlation id or Cloud Foundry space. To instrument incoming HTTP requests, a servlet filter needs to be created. See [Instrumenting Servlets](https://github.com/SAP/cf-java-logging-support/wiki/Instrumenting-Servlets) for more details.

During local development, you might want to stick to the (human-readable) standard log line format. This boils down to having different logger configurations for different Spring profiles. The following sample configuration outlines how you can achieve this. `cf-java-logging-support` is only active for profile `cloud`, since all other profiles are configured with the standard logback output format:

::: code-group
```xml [srv/src/main/resources/logback-spring.xml]
...
```
:::

::: tip
For an example of how to set up a multitenant aware CAP Java application with enabled logging service support, have a look at the section [Multitenancy > Adding Logging Service Support](../multitenancy#app-log-support).
:::

### Correlation IDs

In general, a request can be handled by unrelated execution units such as internal threads or remote services. This makes it hard to correlate the emitted log lines of the different contributors in an aggregated view. The problem can be solved by enhancing the log lines with unique correlation IDs, which are assigned to the initial request and propagated throughout the call tree. In case you've configured `cf-java-logging-support` as described in [Logging Service](#logging-service) before, *correlation IDs are handled out of the box by the CAP Java SDK*.
In particular, this includes:

- Generation of IDs in non-HTTP contexts
- Thread propagation through [Request Contexts](../event-handlers/request-contexts#threading-requestcontext)
- Propagation to remote services when called via CloudSDK (for instance [Remote Services](../cqn-services/remote-services) or [MTX sidecar](../multitenancy-classic#mtx-sidecar-server))

By default, the ID is accepted and forwarded via HTTP header `X-CorrelationID`. If you want to accept the `X-Correlation-Id` header in incoming requests alternatively, follow the instructions given in the guide [Instrumenting Servlets](https://github.com/SAP/cf-java-logging-support/wiki/Instrumenting-Servlets#correlation-id).

### JDBC Tracing in SAP HANA

To activate JDBC tracing in the SAP HANA JDBC driver, you have to use the driver [Trace Options](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4033f8e603504c0faf305ab77627af03.html). You can activate it either by setting datasource properties in the `application.yaml` and restarting the application, or, while the application is running, by using the [command line](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/e411647b03f1425fab1e33bb495c9c42.html).

#### Using datasource properties

In the `application.yaml` under `cds.dataSource.:` specify `hikari.data-source-properties.traceFile` and `hikari.data-source-properties.traceOptions`:

```yaml [srv/src/main/resources/application.yaml]
cds:
  dataSource:
    service-manager: # name of service binding
      hikari:
        data-source-properties:
          traceFile: "/home/user/jdbctraces/trace_.log" # use a path that is write accessible
          traceOptions: "CONNECTIONS,API,PACKET"
```

::: tip
Add an underscore at the end of the trace file's name. It helps readability by separating the name from the string of numbers that the JDBC tracing process appends as an epoch timestamp.

```sh
~/jdbctraces/ $ ls
trace_10324282997834295561.log
trace_107295864860396783.log
trace_10832681394984179734.log
...
```
:::

[Trace Options](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4033f8e603504c0faf305ab77627af03.html) lists the available command line options. For the datasource property, you only need the option's name, such as `CONNECTIONS`, `API`, or `PACKET`. You can specify more than one option, separated by commas.

This method of activating JDBC tracing requires restarting the application. For cloud deployments on Cloud Foundry, this typically means redeploying via MTA; on Kyma, it means rebuilding the application, re-creating and publishing the container image to the container image registry, and redeploying the application via Helm.

Once the `application.yaml` of the deployed application contains both `hikari.data-source-properties.traceFile` and `hikari.data-source-properties.traceOptions`, their values can also be overwritten by setting the corresponding environment variables in the container. For example, to overwrite the trace file path of the `application.yaml`, set an environment variable like this, with `SERVICE_MANAGER` being the name of the service binding:

```yaml
CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEFILE: "/home/cnb/jdbctraces/sm/trace_.log"
```

To overwrite the tracing options respectively:

```yaml
CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEOPTIONS: "DISTRIBUTIONS"
```

#### Using the command line

Using the command line to activate JDBC tracing doesn't require an application restart. However, when running in the cloud, the exact location of the SAP HANA JDBC driver and the `java` executable depends on the buildpacks used for the CAP Java application. The following assumes the usage of the [Cloud Native Buildpacks](https://pages.github.tools.sap/unified-runtime/docs/building-blocks/unified-build-and-deploy/buildpacks) as recommended by the [Unified Runtime](https://pages.github.tools.sap/unified-runtime/).
##### On Kyma

Step-by-step description on how to acquire a bash session in the application's container to use trace options in the SAP HANA JDBC driver:

1. Run bash in the pod that runs the CAP Java application:

   First, identify the pod name:

   ```sh
   kubectl get pods
   ```

   Then, in the right namespace, run:

   ```sh
   kubectl exec -it pod/ -- bash
   ```

   to acquire a bash session in the container.

2. Locate the `java` executable and the JDBC driver:

   By default, `JAVA_HOME` isn't set, and the buildpack contains only minimal tooling, as it tries to minimize the container size. However, the default location of the `java` executable is `/layers/paketo-buildpacks_sap-machine/jre/bin`. For convenience, store the path into a variable, for example, `JAVA_HOME`:

   ```sh
   export JAVA_HOME=/layers/paketo-buildpacks_sap-machine/jre/bin/
   ```

   The JDBC driver is usually located in `/workspace/BOOT-INF/lib`. Store it into another variable, for example, `JDBC_DRIVER_PATH`:

   ```sh
   export JDBC_DRIVER_PATH=/workspace/BOOT-INF/lib
   ```

3. Use JDBC trace options in the driver, using the correct (versioned) name of the `ngdbc.jar`:

   ```sh
   $JAVA_HOME/java -jar $JDBC_DRIVER_PATH/ngdbc-.jar
   ```
2) OpenTelemetry support in OneAgent needs to be enabled once in your Dynatrace environment using the Dynatrace UI. Navigate to **Settings > Preferences > OneAgent features** and turn on the switch for **OpenTelemetry (Java)** as well as for **OpenTelemetry Java Instrumentation agent support**.

3) In addition, enable W3C Trace Context for proper context propagation between remote services. Navigate to **Settings > Server-side service monitoring > Deep monitoring > Distributed tracing** and turn on **Send W3C Trace Context HTTP headers**.

4) Define an additional environment variable to tell the [agent extension](#agent-extension) to export metrics to Dynatrace via OpenTelemetry.

   ::: code-group
   ```yaml [mta.yaml]
   - name: 
     # ...
     properties:
       # ...
       OTEL_METRICS_EXPORTER: dynatrace
       OTEL_TRACES_EXPORTER: none
       OTEL_LOGS_EXPORTER: none
   ```
   :::

5) Check your Dynatrace binding. You are looking for two tokens generated for you: the default one is called `apitoken`, and the second one should correspond to the token you have requested in your `mta.yaml` or generated from the Dynatrace instance manually.

6) Add the option `-Dotel.javaagent.extension.sap.cf.binding.dynatrace.metrics.token-name=` to the environment variable `JBP_CONFIG_JAVA_OPTS`. Replace the name `` with the name of the token you have found previously.

Traces are handled by the Dynatrace OneAgent; OpenTelemetry export for them is disabled to prevent the OpenTelemetry agent from interfering with that.

#### CAP Instrumentation

By default, instrumentation for CAP-specific components is disabled, so that no traces and spans are created even if the OpenTelemetry Java Agent has been configured. It's possible to selectively activate specific spans by changing the log level for a component.
| Logger | Required Level | Description |
|------------------------------------------------|----------------|------------------------------------------------------------|
| `com.sap.cds.otel.span.ODataBatch` | `INFO` | Spans for individual requests of an OData $batch request. |
| `com.sap.cds.otel.span.CQN` | `INFO` | Spans for executed CQN statements. |
| `com.sap.cds.otel.span.OutboxCollector` | `INFO` | Spans for execution of the transactional outbox collector. |
| `com.sap.cds.otel.span.DraftGarbageCollection` | `INFO` | Spans for execution of the draft garbage collection. |
| `com.sap.cds.otel.span.RequestContext` | `DEBUG` | Spans for each Request Context. |
| `com.sap.cds.otel.span.ChangeSetContext` | `DEBUG` | Spans for each ChangeSet Context. |
| `com.sap.cds.otel.span.Emit` | `DEBUG` | Spans for dispatching events in the CAP runtime. |

For specific steps to change the log level, refer to the section about [configuring logging](#logging-configuration).

#### Custom Instrumentation

Using the OpenTelemetry Java API, it's possible to provide additional observability signals from within a CAP Java application. This can include additional spans as well as metrics. You may use annotation-based instrumentation with the OpenTelemetry annotations, or you can define your custom spans for places where you need a lot of context or require the advanced features of the OpenTelemetry API.

To enable annotation-based tracing, include the following dependency in your `pom.xml`:

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-instrumentation-annotations</artifactId>
  <version>2.3.0</version>
</dependency>
```
:::

Then, you can create additional spans around your event handlers just by annotating their methods with the `@WithSpan` annotation. Such spans react to exceptions and, by default, have class and method name as the description.
```java
@Component
@ServiceName(CatalogService_.CDS_NAME)
class CatalogServiceHandler implements EventHandler {

  @Before(entity = Books_.CDS_NAME)
  @WithSpan
  public void beforeAddReview(AddReviewContext context) {
    // ...
  }
}
```

[Learn more about the features of annotation-based spans.](https://opentelemetry.io/docs/languages/java/automatic/annotations/) {.learn-more}

To use the OpenTelemetry API for more complex spans, add a dependency to the OpenTelemetry Java API in the `pom.xml` of the CAP Java application:

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-api</artifactId>
</dependency>
```
:::

The instance of the OpenTelemetry API is preconfigured for you by the agent that was injected into your application. You don't need to configure it again.

The following example produces an additional span when the `@After` handler is executed. The OpenTelemetry API automatically ensures that the span is correctly added to the current span hierarchy. Span attributes allow an application to associate additional data with the span, which helps to identify and analyze the span. Exceptions thrown within the span should be associated with it using the `recordException` method. This marks the span as erroneous and helps to analyze failures. It's important to end the span in any case; otherwise, the span isn't recorded and is lost.
```java
@Component
@ServiceName(CatalogService_.CDS_NAME)
class CatalogServiceHandler implements EventHandler {

  Tracer tracer = GlobalOpenTelemetry.getTracerProvider()
      .tracerBuilder("RatingCalculator").build();

  @After(entity = Books_.CDS_NAME)
  public void afterAddReview(AddReviewContext context) {
    Span childSpan = tracer.spanBuilder("setBookRating").startSpan();
    childSpan.setAttribute("book.title", context.getResult().getTitle());
    childSpan.setAttribute("book.id", context.getResult().getBookId());
    childSpan.setAttribute("book.rating", context.getResult().getRating());
    try (Scope scope = childSpan.makeCurrent()) {
      ratingCalculator.setBookRating(context.getResult().getBookId());
    } catch (Throwable t) {
      childSpan.recordException(t);
      throw t;
    } finally {
      childSpan.end();
    }
  }
}
```

[Learn more about the features of the instrumentation API](https://opentelemetry.io/docs/languages/java/instrumentation/) {.learn-more}

You can record metrics during the execution of, for example, a custom event handler. The following example manages a metric `reviewCounter`, which counts the number of book reviews posted by users. Adding the `bookId` as an additional attribute improves the value of the data, as the OpenTelemetry frontend can use it as a dimension for aggregating values of this metric.
```java
@Component
@ServiceName(CatalogService_.CDS_NAME)
class CatalogServiceHandler implements EventHandler {

  Meter meter = GlobalOpenTelemetry.getMeterProvider()
      .meterBuilder("RatingCalculator").build();

  @After(entity = Books_.CDS_NAME)
  public void afterAddReview(AddReviewContext context) {
    ratingCalculator.setBookRating(context.getResult().getBookId());

    LongCounter counter = meter.counterBuilder("reviewCounter")
        .setDescription("Counts the number of reviews created per book")
        .build();
    counter.add(1, Attributes.of(AttributeKey.stringKey("bookId"),
        context.getResult().getBookId()));
  }
}
```

### Dynatrace { #dynatrace }

[Dynatrace](https://www.dynatrace.com/support/help) is a comprehensive platform that delivers analytics and automation based on monitoring events sent by the backend services. It requires OneAgent, which runs in the backend, capturing monitoring data and sending it to the Dynatrace service. How to configure a Dynatrace connection to your CAP Java application is described in [Dynatrace Integration](https://help.sap.com/docs/BTP/65de2977205c403bbc107264b8eccf4b/1610eac123c04d07babaf89c47d82c91.html).
### Spring Boot Actuators { #spring-boot-actuators }

Metrics mainly refer to operational information about various resources of the running application, such as HTTP sessions and worker threads, JDBC connections, JVM memory including garbage collector statistics, and so on. Similar to [health checks](#spring-health-checks), Spring Boot comes with a set of built-in metrics based on the [Spring Actuator](#spring-boot-actuators) framework. Actuators form an open framework, which can be enhanced with additional information by libraries (see [CDS Actuator](#cds-actuator)) as well as the application (see [Custom Actuators](#custom-actuators)).

[Spring Boot Actuators](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html) are designed to provide a set of out-of-the-box supportability features that help to make your application observable in production. To add actuator support in your application, add the following dependency:

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
:::

The following table lists some of the available actuators that might be helpful to understand the internal status of the application:

| Actuator | Description |
| :--------| :-------- |
| `metrics` | Thread pools, connection pools, CPU, and memory usage of JVM and HTTP web server |
| `beans` | Information about Spring beans created in the application |
| `env` | Exposes the full Spring environment including application configuration |
| `loggers` | List and modify application loggers |

By default, nearly all actuators are active. You can switch off actuators individually in the configuration. The following configuration turns off the `flyway` actuator:

```yaml
management.endpoint.flyway.enabled: false
```

Depending on the configuration, exposed actuators can have HTTP or [JMX](https://en.wikipedia.org/wiki/Java_Management_Extensions) endpoints.
For security reasons, it's recommended to expose only the `health` actuator as a web endpoint, as described in [Health Indicators](#spring-health-checks). All other actuators are recommended for local JMX-based access, as described in [JMX-based Tools](optimizing#profiling-jmx).

#### CDS Actuator { #cds-actuator }

CAP Java SDK plugs in a CDS-specific actuator `cds`. This actuator provides information about:

- The version and commit ID of the currently used `cds-services` library
- All services registered in the service catalog
- Security configuration (authentication type and so on)
- Loaded features such as `cds-feature-xsuaa`
- Database pool statistics (requires `registerMbeans: true` in [Hikari pool configuration](../cqn-services/persistence-services#datasource-configuration))

#### Custom Actuators { #custom-actuators }

Similar to [Custom Health Indicators](#custom-health-indicators), you can add application-specific actuators as done in the following example:

```java
@Component
@ConditionalOnClass(Endpoint.class)
@Endpoint(id = "app", enableByDefault = true)
public class AppActuator {

  @ReadOperation
  public Map<String, Object> info() {
    Map<String, Object> info = new LinkedHashMap<>();
    info.put("Version", "1.0.0");
    return info;
  }
}
```

The `AppActuator` bean registers an actuator with name `app` that exposes a simple version string.

### Availability { #availability }

This section describes how to set up an endpoint for availability or health checks. At first glance, providing such a health check endpoint sounds like a simple task. But some aspects need to be considered:

- Authentication (for example, Basic or OAuth2) increases security but introduces higher configuration and maintenance effort.
- Resource consumption needs to stay low. If you provide a public endpoint, only low overhead is acceptable to avoid denial-of-service attacks.
- Ideally, the health check response shows not only the aggregate status, but also the status of crucial services the application depends on, such as the underlying persistence.

#### Spring Boot Health Checks { #spring-health-checks }

Conveniently, Spring Boot offers out-of-the-box capabilities to report the health of the running application and its components. Spring provides a number of health indicators, especially `PingHealthIndicator` (`/ping`) and `DataSourceHealthIndicator` (`/db`). This set can be extended by [custom health indicators](#custom-health-indicators) if necessary, but most probably, **setting up an appropriate health check for your application is just a matter of configuration**.

To do so, first add a dependency to Spring Actuators, which forms the basis for health indicators:

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
:::

By default, Spring exposes the *aggregated* health status on web endpoint `/actuator/health`, including the result of all registered health indicators. The `info` actuator is also exposed automatically, which might not be desired for security reasons. It's recommended to **explicitly** control web exposure of actuator components in the application configuration. The following configuration snippet is an example suitable for publicly visible health check information:

```yaml [srv/src/main/resources/application.yaml]
management:
  endpoint:
    health:
      show-components: always   # shows individual indicators
  endpoints:
    web:
      exposure:
        include: health         # only expose /health as web endpoint
  health:
    defaults.enabled: false     # turn off all indicators by default
    ping.enabled: true
    db.enabled: true
```

The example configuration makes Spring expose only the health endpoint with health indicators `db` and `ping`. Other indicators ready for auto-configuration, such as `diskSpace`, are omitted.
All components contributing to the aggregated status are shown individually, which helps to understand the reason for an overall status `DOWN`.

::: tip
For multitenancy scenarios, CAP Java replaces the default `db` indicator with an implementation that includes the status of all tenant databases.
:::

In addition, CAP Java offers a health indicator `modelProvider`, which allows you to include the status of the MTX sidecar serving the [Model Provider Service](/java/reflection-api#the-model-provider-service):

```yaml
management:
  health:
    modelProvider.enabled: true
```

::: warning
The `modelProvider` health indicator requires `@sap/cds` version `7.8.0` or higher in the MTX sidecar.
:::

Endpoint `/actuator/health` delivers a response (HTTP response code `200` for up, `503` for down) in JSON format with the overall `status` property (for example, `UP` or `DOWN`) and the contributing components:

```json
{
  "status": "UP",
  "components": {
    "db": {
      "status": "UP"
    },
    "ping": {
      "status": "UP"
    }
  }
}
```

It might be advantageous to expose information on a more detailed level. This configuration is only an option for a [protected](#protected-health-checks) health endpoint:

```yaml
management.endpoint.health.show-details: always
```

::: warning Be mindful about data exposure and resource consumption
A public health check endpoint must neither disclose system-internal data (for example, health indicator details) nor introduce significant resource consumption (for example, synchronous database requests).
:::

Find all details about configuration options in the [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html) documentation.
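The overall `status` is derived from the component statuses by a worst-wins aggregation. Here's a minimal sketch of that logic (illustrative only — Spring Boot's actual `StatusAggregator` additionally handles statuses such as `OUT_OF_SERVICE` and `UNKNOWN`):

```java
import java.util.List;

// Illustrative worst-wins aggregation as used for /actuator/health:
// if any contributing component reports DOWN, the aggregate is DOWN,
// otherwise the aggregate is UP.
public class StatusAggregation {

  public static String aggregate(List<String> componentStatuses) {
    return componentStatuses.contains("DOWN") ? "DOWN" : "UP";
  }
}
```

This is why a single failing indicator, for example `db`, flips the whole endpoint to `503`/`DOWN`.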
#### Custom Health Indicators { #custom-health-indicators }

In case your application relies on additional, mandatory services not covered by default health indicators, you can add a custom health indicator as sketched in this example:

```java
@Component("crypto")
@ConditionalOnEnabledHealthIndicator("crypto")
public class CryptoHealthIndicator implements HealthIndicator {

  @Autowired
  CryptoService cryptoService;

  @Override
  public Health health() {
    Health.Builder status = cryptoService.isAvailable() ? Health.up() : Health.down();
    return status.build();
  }
}
```

The custom `HealthIndicator` for the mandatory `CryptoService` is registered by Spring automatically and can be controlled with property `management.health.crypto.enabled: true`.

#### Protected Health Checks { #protected-health-checks }

Optionally, you can configure a protected health check endpoint. On the one hand, this gives you more flexibility regarding the detail level of the response; on the other hand, it introduces additional configuration and management effort (for instance, key management). As this highly depends on the configuration capabilities of the client services, CAP doesn't come with an auto-configuration. Instead, the application has to provide an explicit security configuration on top, as outlined with `ActuatorSecurityConfig` in [Customizing Spring Boot Security Configuration](../security#custom-spring-security-config).

# Developer Dashboard

::: warning Only to be used in development
The dashboard is only intended for use in the development environment. It is strictly forbidden to use the dashboard in a production environment, as it allows access to sensitive data and presents a security risk.
:::
![Screenshot of the CAP developer dashboard UI.](assets/dashboard.jpg)

The CAP Developer Dashboard simplifies development by providing a centralized point where developers can efficiently manage and monitor their CAP applications. It offers tools and functions to support the development process and helps developers to quickly identify and resolve problems. Additionally, the dashboard facilitates better integration of CAP components, such as messaging, resilience, and multitenancy, ensuring seamless functionality throughout CAP applications.

You can get a brief overview of the dashboard's features in the [Developer Dashboard Presentation](https://broadcast.sap.com/replay/240604_recap?playhead=2188) at our RECAP 2024 conference.

Add the `cds-feature-dev-dashboard` feature to your Maven dependencies:

```xml [pom.xml]
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-dev-dashboard</artifactId>
</dependency>
```

## Local Setup

By default, the dashboard requires authorized access, which requires the `cds.Developer` role. The default mock user configuration provides the user `developer` already configured with this role. If you use your own mocked users, you must assign them the `cds.Developer` role if you want to give them access to the dashboard.

::: code-group
```yaml [application.yaml]
cds:
  security:
    mock:
      users:
        - name: myUser
          password: myPass
          roles:
            - cds.Developer
```
:::

## Cloud Setup

If you also want to use the CAP Developer Dashboard in your cloud development scenario, you need to take a few more steps to achieve this. Let's take the example of a BTP Cloud Foundry app with Approuter and XSUAA.

1. Deactivate the [production profile](../developing-applications/configuring#production-profile) in the _mta.yaml_.
2. Add the `cds.Developer` role to your security configuration in the *xs-security.json*.
3. Customize the approuter configuration (*xs-app.json*) by enabling support for websocket connections and defining the dashboard routes.

::: code-group
```yaml [mta.yaml]
modules:
  - name: my-cap-app-srv
    [...]
    properties:
      CDS_ENVIRONMENT_PRODUCTION_ENABLED: false
```
```json [xs-security.json]
{
  "xsappname": "dashboard-test",
  [...]
  "scopes": [
    {
      "name": "$XSAPPNAME.cds.Developer",
      "description": "CAP Developer"
    },
    [...]
  ],
  "attributes": [
    {
      [...]
    }
  ],
  "role-templates": [
    {
      "name": "capDeveloper",
      "description": "generated",
      "scope-references": [
        "$XSAPPNAME.cds.Developer"
      ]
    },
    [...]
  ]
}
```
```json [xs-app.json]
{
  ...
  "authenticationMethod": "route",
  "websockets": {
    "enabled": true
  },
  "routes": [
    {
      "source": "^/dashboard",
      "authenticationType": "xsuaa",
      "destination": "backend"
    },
    {
      "source": "^/dashboard/(.*)",
      "authenticationType": "xsuaa",
      "destination": "backend"
    },
    {
      "source": "^/dashboard_api/(.*)",
      "authenticationType": "xsuaa",
      "destination": "backend"
    },
    [...]
  ]
}
```
:::

Now you can deploy the application in BTP and assign the `cds.Developer` role to the users you want to grant access to the CAP Developer Dashboard.

::: warning
For security reasons, the **cds.Developer** role should only be used in conjunction with test users. It is strongly recommended not to use this role with users who could potentially be used in production systems.
:::

## Disable Authorization

In some cases, your application may run in a complex environment and you simply want to access the CAP Developer Dashboard running in your CAP Service Module directly, without using a router in between. For this reason, you can switch off the authorization to grant direct unauthorized access.

1. Switch off authorization using one of the following options:

   ::: code-group
   ```yaml [application.yaml]
   cds:
     dashboard:
       authorization:
         enabled: false
   ```
   ```yaml [mta.yaml]
   modules:
     - name: my-cap-app-srv
       [...]
       properties:
         CDS_DASHBOARD_AUTHORIZATION_ENABLED: false
   ```
   :::

2. Disable authentication.
::: code-group
```java [WebSecurity]
import static org.springframework.security.web.util.matcher.AntPathRequestMatcher.antMatcher;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@Order(1)
public class WebSecurity {

  @Bean
  public SecurityFilterChain appFilterChain(HttpSecurity http) throws Exception {
    return http
        .securityMatchers(m -> m.requestMatchers(antMatcher("/dashboard/**"), antMatcher("/dashboard_api/**")))
        .authorizeHttpRequests(auth -> auth.anyRequest().permitAll())
        .csrf(c -> c.disable())
        .build();
  }
}
```
:::
# Building Plugins

A collection of different mechanisms that can be used to build plugins for CAP Java.

Especially when working with larger projects that consist of many individual CAP Java applications, or when building platform services that need to be integrated with CAP applications, there's a need to extend CAP Java with custom, yet reusable code. In the following sections, the different extension points and mechanisms are explained.

## General Considerations

### Java Version

When building CAP Java plugin modules, keep in mind that the generated Java byte code of the plugin has to be compatible with the Java byte code version of the potential consumers of the plugin. To be on the safe side, we recommend using *Java 17*, as this is in any case the minimum Java version for CAP Java (2.x release) applications. In case you deviate from this, you need to check and align with the potential consumers of the plugin.

### Maven GroupId and Java Packages

Of course, it's up to your project how you name your plugin's Maven groupId and Java packages. To avoid confusion and to make responsibilities clear, `com.sap.cds` is reserved, both as groupId and as Java package name, for components maintained by the CAP Java team and must not be used for external plugins. This rule also includes substructures of `com.sap.cds`, like `com.sap.cds.foo.plugin`.

## Share CDS Models via Maven Artifacts

Before the CAP Java 2.2 release, CDS definitions had to be shared as Node.js modules, also for Java projects. Starting with the 2.2 release, CDS models, CSV import data, and i18n files can be shared through Maven dependencies in addition to npm packages. This means you can now provide CDS models, CSV files, i18n files, and Java code (for example, event handlers) in a single Maven dependency.
### Create the CDS Model in a New Maven Artifact

Simply create a plain Maven Java project and place your CDS models in the `main/resources/cds` folder of the reuse package under a unique module directory (for example, leveraging group ID and artifact ID): `src/main/resources/cds/com.sap.capire/bookshop/`, with `com.sap.capire` being the group ID and `bookshop` being the artifact ID.

You can simplify the creation of such a **plain Maven Java** project by calling the following Maven archetype command:

```shell
mvn archetype:generate -DgroupId=com.sap.capire -DartifactId=bookshop -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```

After the creation, you'll need to maintain the plugin versions as well as the desired Java language version (we recommend version 17).

::: warning Only plain Maven Java projects
Please make sure that your plugin / reuse model project is neither created as a CAP Java project nor as a plain Spring Boot project.
:::

### Reference the New CDS Model in an Existing CAP Java Project

Projects wanting to import the content simply add a Maven dependency to the reuse package to their _srv/pom.xml_ in the `<dependencies>` section.

::: code-group
```xml [srv/pom.xml]
<dependency>
  <groupId>com.sap.capire</groupId>
  <artifactId>bookshop</artifactId>
  <version>1.0.0</version>
</dependency>
```
:::

Additionally, the new `resolve` goal from the CDS Maven Plugin needs to be added, to extract the models into the `target/cds/` folder of the Maven project, in order to make them available to the CDS Compiler.

::: code-group
```xml [srv/pom.xml]
<plugin>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-maven-plugin</artifactId>
  <version>${cds.services.version}</version>
  <executions>
    ...
    <execution>
      <id>cds.resolve</id>
      <goals>
        <goal>resolve</goal>
      </goals>
    </execution>
    ...
  </executions>
</plugin>
```
:::

::: details Reuse module as Maven module
Please be aware that the module that uses the reuse module needs to be a Maven module itself, or a submodule of a Maven module that declares the dependency to the reuse module. Usually, you would declare the dependency in the `srv` module of your CAP Java project and then use the reuse model in the service's CDS files.
In case you want to use the reuse model in your `db` module, you need to make sure that your `db` module is a Maven module and include it in the project's parent `pom.xml` file.
:::

When your Maven build is set up correctly, you can use the reuse models in your CDS files using the standard `using` directive:

```cds
using { CatalogService } from 'com.sap.capire/bookshop';
```

::: details Different resolution rules
The location in the `using` directive differs from the default [CDS model resolution rules](../cds/cdl#model-resolution). The *name* does not refer to a local file/package, nor to an NPM package. Instead, it follows the groupId/artifactId scheme. The name doesn't directly refer to an actual file system location but is looked up in a _cds_ folder in Maven's _target_ folder.
:::

[Learn more about providing and using reuse packages.](../guides/extensibility/composition){.learn-more}

This technique can be used independently or together with one or more of the techniques described on this page.

## Event Handlers for Custom Types and Annotations

In CAP Java, event handlers aren't tightly coupled to the request handling or any other runtime components. Thus, it's easily possible to package event handlers in external libraries (like plugins) in order to provide common but custom functionality to CAP Java applications. You can achieve this by defining custom handlers that react on model characteristics (common types or annotations) or on entity values, for example, for validations.

In most cases, an event handler plugin for a CAP Java application can be a plain Maven project without further dependencies or a special project layout. Since you need to use or implement CAP Java extension points, it's required to define the following dependencies:

```xml
<properties>
  <cds.services.version>2.4.0</cds.services.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.sap.cds</groupId>
      <artifactId>cds-services-bom</artifactId>
      <version>${cds.services.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.sap.cds</groupId>
    <artifactId>cds-services-api</artifactId>
  </dependency>
</dependencies>
```

Inside your plugin module, you can define a custom event handler and a registration hook as plain Java code.
Once this module is deployed to a Maven repository, it can be added to any CAP Java application as a dependency. The contained event handler code is active automatically once your CAP Java application is started along with the new reuse module.

The heart of the plugin module, the event handler, basically looks like any other CAP Java event handler. Take this one as an example:

```java
@ServiceName(value = "*", type = ApplicationService.class)
public class SampleHandler implements EventHandler {

  @After
  public void handleSample(CdsReadEventContext context) {
    // any custom Java code using the event context and CQL APIs
  }
}
```

The shown handler code is registered for any entity type on any [ApplicationService](../guides/providing-services). Depending on the use case, the target scope could be narrowed to specific entities and/or services. The handler registration follows the same rules as for custom handlers that are directly packaged with a CAP Java application.

[Learn more about event handling in our EventHandler documentation](event-handlers/){.learn-more}

Of course, this handler code looks just the same as any other custom or built-in CAP Java handler. The only difference here is that you need to think a bit more about the provisioning of the handler. When you write a custom handler as part of (in the package of) a CAP Java application, you can annotate the handler's class with `@Component`. Then Spring Boot's component scan picks up the class during startup of the application context. When you provide your custom handler as part of a reuse library, external to your application, things change a bit. At first, you need to decide whether you want to use Spring Boot's component model and rely on dependency injection, or if you want to use one of the CAP Java ServiceLoader based extension points.
The decision between the two is straightforward: In case your handler depends on other Spring components, for example relies on dependency injection, you should use the [Spring approach](#spring-autoconfiguration). This applies as soon as you need to access another CAP service like [`CqnService`](./cqn-services/application-services), [`PersistenceService`](./cqn-services/persistence-services), or a service via its [typed service interface](../releases/archive/2023/nov23#typed-service-interfaces). If your custom handler is isolated and, for example, only performs a validation based on provided data or a calculation, you can stick with the [CAP Java ServiceLoader approach](#service-loader), which is described in the following section.

### Load Plugin Code via ServiceLoaders {#service-loader}

At runtime, CAP Java uses the [`ServiceLoader`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/ServiceLoader.html) mechanism to load all implementations of the `CdsRuntimeConfiguration` interface from the application's classpath. In order to qualify as a contributor for a given ServiceLoader-enabled interface, we need to place a plain text file, named after the fully qualified name of the interface, in the directory `src/main/resources/META-INF/services` of our reuse module. This file contains the names of one or more implementing classes. For our `CdsRuntimeConfiguration` implementation, we need to create a file `src/main/resources/META-INF/services/com.sap.cds.services.runtime.CdsRuntimeConfiguration` with the following content:

```txt
com.sap.example.cds.SampleHandlerRuntimeConfiguration
```

With this, you instruct CAP Java's ServiceLoader for `CdsRuntimeConfiguration` to load our new, generic event handler for all read events on all entities of all services. For realistic use cases, the handler configuration can be more concise, of course.
So, in order to have a framework-independent handler registration, the `CdsRuntimeConfiguration` interface needs to be implemented like this:

```java
package com.sap.example.cds;

import com.sap.cds.services.runtime.CdsRuntimeConfiguration;
import com.sap.cds.services.runtime.CdsRuntimeConfigurer;

public class SampleHandlerRuntimeConfiguration implements CdsRuntimeConfiguration {

  @Override
  public void eventHandlers(CdsRuntimeConfigurer configurer) {
    configurer.eventHandler(new SampleHandler());
  }
}
```

### Load Plugin Code with the Spring Component Model {#spring-autoconfiguration}

In case your reuse module depends on other components managed as part of the Spring ApplicationContext (having an `@Autowired` annotation in your class is a good hint for that), you need to register your plugin as a Spring component itself. The most straightforward (but not recommended) way is to annotate the plugin class itself with `@Component`. This is, however, error-prone: [Spring Boot's component scan](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/ComponentScan.html) by default scans downward from the package in which the main `Application` class is declared. This means that you need to place the plugin either in a subpackage or in the same package as the `Application` class. This would hamper the reuse aspect of the plugin, as it would only work for applications in a specific package. You could customize the component scan of the application using your plugin, but this is also error-prone, as you explicitly have to remember to change the `@ComponentScan` annotation each time you include a plugin.

Because of those complications, it's best practice to use the `AutoConfiguration` mechanism provided by Spring Boot in reuse modules that ship Spring components. For further details, please refer to the [Spring Boot reference documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/using.html#using.auto-configuration).
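As a sketch of how such a registration could look with Spring Boot 2.7 or higher: a hypothetical `SampleHandlerAutoConfiguration` class (the name and bean wiring are illustrative, not part of any CAP API) exposes the handler as a bean, and is listed in the module's auto-configuration imports file:

```java
@AutoConfiguration
public class SampleHandlerAutoConfiguration {

  @Bean
  public SampleHandler sampleHandler() {
    // expose the plugin's event handler as a Spring bean,
    // so dependency injection works inside the handler
    return new SampleHandler();
  }
}
```

```txt
// src/main/resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
com.sap.example.cds.SampleHandlerAutoConfiguration
```

This way, the consuming application picks up the handler regardless of its own package layout.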
A complete end-to-end example for reusable event handlers can be found in this [blog post](https://blogs.sap.com/2023/05/16/how-to-build-reusable-plugin-components-for-cap-java-applications/).

## Custom Protocol Adapters {#protocol-adapter}

In CAP, the protocol adapter is the mechanism to implement inbound communication (from another service or the UI) to the CAP service in development. You can read more about protocol adapters in our [architecture documentation](developing-applications/building#protocol-adapters).

Usually, a protocol adapter comes in two parts:

- the adapter
- a factory class that creates an instance of the adapter

The adapter itself is in most cases an extension of the `HttpServlet` abstract class. The factory class also provides information about the paths to which the protocol adapter (the servlet) needs to be registered. The factory interface is called `ServletAdapterFactory`, and implementations of that factory are loaded with the same [`ServiceLoader` approach as described above](#service-loader) in the event handler section.

This is an example implementation of the `ServletAdapterFactory`:

```java
public class SampleAdapterFactory implements ServletAdapterFactory, CdsRuntimeAware {

  /*
   * A short key identifying the protocol that's being served
   * by the new protocol adapter, for example, odata-v4, hcql, ...
   */
  static final String PROTOCOL_KEY = "protocol-key";

  private CdsRuntime runtime;

  @Override
  public void setCdsRuntime(CdsRuntime runtime) {
    /*
     * In case the protocol adapter needs the CdsRuntime,
     * the factory can implement CdsRuntimeAware and will
     * be provided with a CdsRuntime via this method.
     * The create() method below can then use the provided
     * runtime for the protocol adapter.
     */
    this.runtime = runtime;
  }

  @Override
  public Object create() {
    // Create and return the protocol adapter
    return new SampleAdapter(runtime);
  }

  @Override
  public boolean isEnabled() {
    // Determines if the protocol adapter is enabled
  }

  @Override
  public String getBasePath() {
    // Return the base path
  }

  @Override
  public String[] getMappings() {
    /*
     * Return all paths to which the protocol adapter is
     * going to be mapped. Usually, this will be each CDS
     * service with either its canonical or annotated
     * path prefixed with the base path of the protocol
     * adapter (see above).
     */
  }

  @Override
  public UrlResourcePath getServletPath() {
    /*
     * Use the UrlResourcePathBuilder to build and return
     * a UrlResourcePath containing the basePath (see above)
     * and all paths being registered for the protocol key
     * of the new protocol adapter.
     */
  }
}
```

With the factory in place, you can start to build the actual protocol adapter. As mentioned before, most adapters implement HTTP connectivity and are an extension of the Jakarta `HttpServlet` class. Based on the incoming request path, the protocol adapter needs to determine the corresponding CDS `ApplicationService`. Parts of the request path, together with potential request parameters (this depends on the protocol to be implemented), then need to be mapped to a CQL statement, which is then executed on the previously selected CDS `ApplicationService`.

```java
public class SampleAdapter extends HttpServlet {

  private final CdsRuntime runtime;

  public SampleAdapter(CdsRuntime runtime) {
    this.runtime = runtime; // see below for further details
  }

  @Override
  public void service(HttpServletRequest request, HttpServletResponse response) throws IOException {
    // see below for further details
  }
}
```

As mentioned previously, a protocol adapter maps incoming requests to CQL statements and executes them on the right [`ApplicationService`](./cqn-services/application-services) according to the `HttpServletRequest`'s request path.
In order to have all relevant `ApplicationService`s ready at runtime, you can call `runtime.getServiceCatalog().getServices(ApplicationService.class)` in the adapter's constructor to load all `ApplicationService`s. Then select the ones relevant for this protocol adapter and have them ready, for example in a `Map`, for serving requests in `service()`.

When handling incoming requests at runtime, you need to extract the request path and parameters from the incoming `HttpServletRequest`. Then, you can use the CQL API from the `cds4j-api` module to [create CQL](./working-with-cql/query-api) corresponding to the extracted information. This statement then needs to be executed with [`ApplicationService.run()`](./working-with-cql/query-execution). The returned result then needs to be mapped to the result format that is suitable for the protocol handled by the adapter. For REST, it would be some canonical JSON serialization of the returned objects.

REST request:

```http
GET /CatalogService/Books?id=100
```

Resulting CQL statement:

```java
CqnSelect select = Select.from("Books").byId(100);
```

The `CqnSelect` statement can then be executed with the right (previously selected) `ApplicationService` and the result written to the `HttpServletResponse` as a serialized string:

```java
String responsePayload = applicationService.run(select).toJson();
response.getWriter().write(responsePayload);
```

With that, a first iteration of a working CAP Java protocol adapter would be complete. As a wrap-up, these are the tasks that need to be implemented in the adapter:

1. Extract the request path and select the corresponding CDS `ApplicationService`.
2. Build a CQL statement based on the request path and parameters.
3. Execute the CQL statement on the selected service and write the result to the response.
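To make step 1 and 2 concrete, here's a self-contained sketch of the kind of request parsing an adapter's `service()` method has to do before building the CQL statement. The helper names are made up for illustration and are not part of any CAP API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical request parsing for a minimal REST-style adapter:
// "/CatalogService/Books" yields the service and the target entity,
// the query string contributes filter values like id=100.
public class RequestMapper {

  // first path segment, for example "CatalogService"
  public static String serviceSegment(String path) {
    String[] parts = path.split("/");
    return parts.length > 1 ? parts[1] : "";
  }

  // second path segment, for example "Books"
  public static String entitySegment(String path) {
    String[] parts = path.split("/");
    return parts.length > 2 ? parts[2] : "";
  }

  // query string "id=100&x=y" as ordered key/value pairs
  public static Map<String, String> queryParams(String query) {
    Map<String, String> params = new LinkedHashMap<>();
    if (query == null || query.isBlank()) return params;
    for (String pair : query.split("&")) {
      String[] kv = pair.split("=", 2);
      params.put(kv[0], kv.length > 1 ? kv[1] : "");
    }
    return params;
  }
}
```

For `GET /CatalogService/Books?id=100`, these helpers yield `CatalogService`, `Books`, and the parameter `id=100` — exactly the inputs needed to pick the `ApplicationService` and build the `Select.from("Books").byId(100)` statement shown above.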
One final comment on protocol adapters: even a simple protocol adapter like the one sketched in this section enables full support of other CAP features like declarative security, i18n, and of course custom as well as generic event handlers.

## Putting It All Together

As you've learned in this guide, there are various ways to extend the CAP Java framework. You can use one or more of the mentioned techniques and combine them in one or more Maven modules. This totally depends on your needs and requirements.

Most probably, you'll combine the *event handlers for custom types and annotations* mechanism with *sharing reusable CDS models via Maven artifacts*, because the event handler mechanism might rely on shared CDS artifacts. The protocol adapters, on the other hand, are generic and model-independent modules that should be packaged and distributed independently.

# Migration Guides

This chapter contains comprehensive guides that help you to work through migrations, such as from CAP Java 1.x to CAP Java 2.x.

## CAP Java 2.10 to CAP Java 3.0 { #two-to-three }

### Minimum Versions

CAP Java 3.0 increased some minimum required versions:

| Dependency | Minimum Version |
| --- | --- |
| Cloud SDK | 5.9.0 |
| @sap/cds-dk | ^7 |
| Maven | 3.6.3 |

CAP Java 3.0 no longer supports @sap/cds-dk ^6.

### Production Profile `cloud`

The Production Profile now defaults to `cloud`. This ensures that various property defaults suited for local development are changed to recommended secure values for production. One of the effects of the production profile is that the index page is disabled by default. If you are using the root path `/` for a readiness or liveness probe in Kyma, you will need to adjust them. In this case, the recommended approach is to use the Spring Boot actuator's `/actuator/health` endpoint instead.
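For example, a Kyma/Kubernetes probe that previously targeted `/` could be switched to the actuator endpoint along these lines (a sketch — the port and timing settings depend on your deployment):

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health  # instead of the now-disabled index page at /
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
```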
[Learn more about the Production Profile.](developing-applications/configuring#production-profile){.learn-more}

### Removed MTX Classic Support

Support for classic MTX (@sap/cds-mtx) has been removed. Using streamlined MTX (@sap/cds-mtxs) is mandatory for multitenancy. If you're still using MTX Classic, refer to the [multitenancy migration guide](../guides/multitenancy/old-mtx-migration).

In addition, the deprecated `MtSubscriptionService` API has been removed. It has been superseded by the `DeploymentService` API. As part of this change, the compatibility mode for the `MtSubscriptionService` API has been removed. Besides the removal of the Java APIs, this includes the following behavioral changes:

- During unsubscribe, the tenant's content (like the HDI container) is now deleted by default when using the new `DeploymentService` API.
- The HTTP-based tenant upgrade APIs provided by the CAP Java app have been removed; use the [`Deploy` main method](/java/multitenancy#deploy-main-method) instead. This includes the following endpoints:
  - `/mt/v1.0/subscriptions/deploy/**` (GET & POST)
  - `/messaging/v1.0/em/` (PUT)

### Removed feature `cds-feature-xsuaa`

The feature `cds-feature-xsuaa` has been removed. Support for XSUAA and IAS has been unified under the umbrella of `cds-feature-identity`. It utilizes [SAP's `spring-security` library](https://github.com/SAP/cloud-security-services-integration-library/tree/main/spring-security) instead of the deprecated [`spring-xsuaa` library](https://github.com/SAP/cloud-security-services-integration-library/tree/main/spring-xsuaa).

If your application relies on the standard security configuration by CAP Java and depends on one of the CAP starter bundles, it is expected that you won't need to adapt code. If you have customized the security configuration, you need to adapt it to the new library.
If your application had a direct dependency on `cds-feature-xsuaa`, we recommend using one of our starter bundles `cds-starter-cloudfoundry` or `cds-starter-k8s`.

Though CAP does not support multiple XSUAA bindings, it was possible in previous versions to extend the standard security configuration to work with multiple bindings. If you require this, you need to set `cds.security.xsuaa.allowMultipleBinding` to `true` so that all XSUAA bindings are available in custom Spring auto-configurations. Note: CAP Java still does not process multiple bindings and requires a dedicated Spring configuration. In general, applications should refrain from configuring several XSUAA bindings.

[Learn more about the security configuration.](./security#xsuaa-ias){.learn-more}
[Learn more about migration to SAP's `spring-security` library.](https://github.com/SAP/cloud-security-services-integration-library/blob/main/spring-security/Migration_SpringXsuaaProjects.md)

### Proof-Of-Possession enforced for IAS-based authentication

In IAS scenarios, the [Proof-Of-Possession](https://github.com/SAP/cloud-security-services-integration-library/tree/main/java-security#proofofpossession-validation) is now enforced by default for incoming requests, starting with version `3.5.1` of the [SAP BTP Spring Security Client](https://github.com/SAP/cloud-security-services-integration-library/tree/main/spring-security). Because of this, applications calling a CAP Java application need to send a valid client certificate in addition to the JWT token. In particular, applications using an Approuter have to set `forwardAuthCertificates: true` on the Approuter destination pointing to your CAP backend.

[Learn more about Proof-Of-Possession.](./security.md#proof-of-possession){.learn-more}

### Lazy Localization by default

EDMX resources served by the OData V4 `/$metadata` endpoints are now localized lazily by default. This significantly reduces EDMX cache memory consumption when many languages are used.
Note that this requires `@sap/cds-mtxs` version `1.12.0` or higher.

The cds build no longer generates localized EDMX files by default. Instead, it generates templated EDMX files and an `i18n.json` containing text bundles. If you need localized EDMX files to be generated, set `--opts contentLocalizedEdmx=true` when calling `cds build`.

### Star-expand and inline-all are no longer permitted

Previously, you could not use expand or inline without explicit paths on draft-enabled entities. Now, they are rejected for all entities on application service level. For example, the following statement won't be executed when submitted to an instance of [`ApplicationService`](https://www.javadoc.io/doc/com.sap.cds/cds-services-api/latest/com/sap/cds/services/cds/ApplicationService.html):

```java
Select.from(BOOKS).columns(b -> b.expand());
```

This does not impact OData, where `expand=*` is transformed into expands for all associations.

### Adjusted POJO class generation

Some parameter defaults of the goal `generate` have been adjusted:

| Parameter | Old Value | New Value | Explanation |
| --- | --- | --- | --- |
| `sharedInterfaces` | `false` | `true` | Interfaces for global arrayed types with inline anonymous type are now generated exactly once. `sharedInterfaces` ensures such types are not generated as inline interfaces again, if used in events, actions, or functions. |
| `uniqueEventContexts` | `false` | `true` | Determines whether the event context interfaces should be unique for bound actions and functions, by prefixing the interfaces with the entity name. |

Both these changes result in the generation of incompatible POJOs. To get the former POJOs, the new defaults can be overwritten by setting the parameters to the old values.
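For example, restoring the old generator defaults could look like the following `pom.xml` sketch (the execution setup is illustrative; the two parameters from the table above are the relevant part):

```xml
<plugin>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>cds.generate</id>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <!-- restore the pre-3.0 POJO generation defaults -->
        <sharedInterfaces>false</sharedInterfaces>
        <uniqueEventContexts>false</uniqueEventContexts>
      </configuration>
    </execution>
  </executions>
</plugin>
```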
Consider the following example:

```cds
service MyService {
  entity MyEntity {
    key ID: UUID
  } actions {
    // bound action
    action doSomething(values: MyArray);
  }
}

// global arrayed type
type MyArray: many {
  value: String;
}
```

With the new defaults, the generated interface for the `doSomething` action looks like this:

```java
// uniqueEventContexts: true =>
// interface is prefixed with entity name "MyEntity"
public interface MyEntityDoSomethingContext extends EventContext {

  // sharedInterfaces: true => global MyArray type is used
  Collection<MyArray> getValues();

  void setValues(Collection<MyArray> values);
}
```

Formerly, the generated interface looked like this:

```java
// uniqueEventContexts: false =>
// interface is not prefixed with entity name
public interface DoSomethingContext extends EventContext {

  // sharedInterfaces: false => global MyArray type is not used,
  // instead an additional interface Values is generated inline
  Collection<Values> getValues();

  void setValues(Collection<Values> values);

  interface Values extends CdsData {
    // ...
  }
}
```

### Adjusted Property Defaults

Some property defaults have been adjusted:

| Property | Old Value | New Value | Explanation |
| --- | --- | --- | --- |
| `cds.remote.services.<key>.http.csrf.enabled` | `true` | `false` | Most APIs don't require CSRF tokens. |
| `cds.sql.hana.optimizationMode` | `legacy` | `hex` | SQL for SAP HANA is optimized for the HEX engine. |
| `cds.odataV4.lazyI18n.enabled` | `null` | `true` | Lazy localization is now enabled by default in multitenant scenarios. |
| `cds.auditLog.personalData.`<br>`throwOnMissingDataSubject` | `false` | `true` | Raise errors for incomplete personal data annotations by default. |
| `cds.messaging.services.<key>.structured` | `false` | `true` | [Enhanced message representation](./messaging.md#enhanced-messages-representation) is now enabled by default. |

### Adjusted Property Behavior

| Property | New Behavior |
| --- | --- |
| `cds.outbox.persistent.enabled` | When set to `false`, all persistent outboxes are disabled regardless of their specific configuration. |

### Removed Properties

The following table gives an overview of the removed properties:

| Removed Property | Replacement / Explanation |
| --- | --- |
| `cds.auditlog.outbox.persistent.enabled` | `cds.auditlog.outbox.name` |
| `cds.dataSource.csvFileSuffix` | `cds.dataSource.csv.fileSuffix` |
| `cds.dataSource.csvInitializationMode` | `cds.dataSource.csv.initializationMode` |
| `cds.dataSource.csvPaths` | `cds.dataSource.csv.paths` |
| `cds.dataSource.csvSingleChangeset` | `cds.dataSource.csv.singleChangeset` |
| `cds.security.identity.authConfig.enabled` | `cds.security.authentication.`
`authConfig.enabled` |
| `cds.security.xsuaa.authConfig.enabled` | `cds.security.authentication.`<br>`authConfig.enabled` |
| `cds.security.mock.users.<key>.unrestricted` | Special handling of unrestricted attributes has been removed, in favor of [explicit modelling](../guides/security/authorization#unrestricted-xsuaa-attributes). |
| `cds.messaging.services.<key>.outbox.persistent.enabled` | `cds.messaging.services.<key>.outbox.name` |
| `cds.multiTenancy.compatibility.enabled` | MtSubscriptionService API [has been removed](#removed-mtx-classic-support) and compatibility mode is no longer available. |
| `cds.multiTenancy.healthCheck.intervalMillis` | `cds.multiTenancy.healthCheck.interval` |
| `cds.multiTenancy.mtxs.enabled` | MTXS is enabled [by default](#removed-mtx-classic-support). |
| `cds.multiTenancy.security.deploymentScope` | HTTP-based tenant upgrade endpoints [have been removed](#removed-mtx-classic-support). |
| `cds.odataV4.apply.inCqn.enabled` | `cds.odataV4.apply.transformations.enabled` |
| `cds.odataV4.serializer.enabled` | The legacy serializer has been removed. |
| `cds.outbox.persistent.maxAttempts` | `cds.outbox.services.<key>.maxAttempts` |
| `cds.outbox.persistent.storeLastError` | `cds.outbox.services.<key>.storeLastError` |
| `cds.outbox.persistent.ordered` | `cds.outbox.services.<key>.ordered` |
| `cds.remote.services.<key>.destination.headers` | `cds.remote.services.<key>.http.headers` |
| `cds.remote.services.<key>.destination.queries` | `cds.remote.services.<key>.http.queries` |
| `cds.remote.services.<key>.destination.service` | `cds.remote.services.<key>.http.service` |
| `cds.remote.services.<key>.destination.suffix` | `cds.remote.services.<key>.http.suffix` |
| `cds.remote.services.<key>.destination.type` | `cds.remote.services.<key>.type` |
| `cds.sql.search.useLocalizedView` | `cds.sql.search.model` |
| `cds.sql.supportedLocales` | All locales are supported by default for localized entities, as session variables can now be leveraged on all databases. |

### Deprecated Session Context Variables

| Old Variable | Replacement |
| --- | --- |
| `$user.tenant` | `$tenant` |
| `$at.from` | `$valid.from` |
| `$at.to` | `$valid.to` |

### Removed Java APIs

- Removed deprecated classes:
  - `com.sap.cds.services.environment.ServiceBinding`
  - `com.sap.cds.services.environment.ServiceBindingAdapter`
  - `com.sap.cds.services.mt.MtAsyncDeployEventContext`
  - `com.sap.cds.services.mt.MtAsyncDeployStatusEventContext`
  - `com.sap.cds.services.mt.MtAsyncSubscribeEventContext`
  - `com.sap.cds.services.mt.MtAsyncSubscribeFinishedEventContext`
  - `com.sap.cds.services.mt.MtAsyncUnsubscribeEventContext`
  - `com.sap.cds.services.mt.MtAsyncUnsubscribeFinishedEventContext`
  - `com.sap.cds.services.mt.MtDeployEventContext`
  - `com.sap.cds.services.mt.MtGetDependenciesEventContext`
  - `com.sap.cds.services.mt.MtSubscribeEventContext`
  - `com.sap.cds.services.mt.MtSubscriptionService`
  - `com.sap.cds.services.mt.MtUnsubscribeEventContext`
- Removed deprecated methods:
  - `com.sap.cds.services.request.ModifiableUserInfo.addUnrestrictedAttribute`
  - `com.sap.cds.services.request.ModifiableUserInfo.setUnrestrictedAttributes`
  - `com.sap.cds.services.request.ModifiableUserInfo.removeUnrestrictedAttribute`
  - `com.sap.cds.services.request.UserInfo.getUnrestrictedAttributes`
  - `com.sap.cds.services.request.UserInfo.isUnrestrictedAttribute`
  - `com.sap.cds.ql.Insert.cqn(String)`
  - `com.sap.cds.ql.Update.cqn(String)`
  - `com.sap.cds.ql.Upsert.cqn(String)`
- Deprecations:
  - `com.sap.cds.ql.cqn.CqnSearchPredicate`, instead use `CqnSearchTermPredicate`
  - `com.sap.cds.ql.cqn.Modifier.search(String)`, instead use `searchTerm(CqnSearchTermPredicate)`
  - `com.sap.cds.services.messaging.MessageService.emit(String, String)`, instead use `emit(String, Map)` or `emit(String, Map, Map)`

### Removed goals in `cds-maven-plugin`

The goal `addSample` from the `cds-maven-plugin` has been removed. Use the new goal `add` with the property `-Dfeature=TINY_SAMPLE` instead.
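As an example of applying renamed properties, the removed CSV data-source properties map to a nested `csv` section in `application.yaml` (a sketch with placeholder values):

```yaml
cds:
  dataSource:
    csv:
      fileSuffix: .csv      # formerly cds.dataSource.csvFileSuffix
      paths:
        - db/data/          # formerly cds.dataSource.csvPaths
```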
## Cloud SDK 4 to 5 { #cloudsdk5 }

CAP Java `2.6.0` and higher is compatible with Cloud SDK versions 4 and 5. For reasons of backward compatibility, CAP Java assumes Cloud SDK 4 as the default. However, we highly recommend that you use at least version `5.9.0` of Cloud SDK.

If you relied on the Cloud SDK integration package (`cds-integration-cloud-sdk`), you won't need to adapt any code to upgrade your CAP Java application to Cloud SDK 5. In these cases, it's sufficient to add the following Maven dependency to your CAP Java application:

```xml
<dependency>
  <groupId>com.sap.cloud.sdk.cloudplatform</groupId>
  <artifactId>connectivity-apache-httpclient4</artifactId>
</dependency>
```

If you are using Cloud SDK APIs explicitly in your code, consult the [migration guide for Cloud SDK 5](https://sap.github.io/cloud-sdk/docs/java/guides/5.0-upgrade-steps) itself.

## CAP Java 1.34 to CAP Java 2.0 { #one-to-two }

This section describes the changes in CAP Java between the major versions 1.34 and 2.0. It also provides helpful information on how to migrate a CAP Java application to the new major version 2.0. As preparation, we strongly recommend first upgrading to 1.34.x and then following this guide to upgrade to 2.0.x.

### Spring Boot 3

CAP Java 2 uses Spring Boot 3 as the underlying framework. Consult the [Spring Boot 3.0 Migration Guide](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-3.0-Migration-Guide) for changes between Spring Boot 2.7 and Spring Boot 3.0. A CAP Java application is typically only affected by Spring Boot 3 incompatibilities if it uses native Spring APIs.

#### Java 17

Spring Boot 3 requires Java 17 as the minimum version. Maven dependencies that are not managed by CAP Java need to be updated to Java 17 compatible versions.

#### Jakarta EE 10

Spring Boot 3 requires Jakarta EE 10. This includes a switch in package names from `javax` to `jakarta`. For example, all servlet-related classes moved from package `javax.servlet` to `jakarta.servlet`.
For instance, replace

```java
import javax.servlet.http.HttpServletResponse;
```

with

```java
import jakarta.servlet.http.HttpServletResponse;
```

Maven dependencies that are not managed by CAP Java or Spring Boot need to be updated to Jakarta EE 10 compatible versions.

#### Spring Security

Since version 1.27, CAP Java runs on Spring Boot 2.7, which uses Spring Security 5.7. Spring Boot 3 uses Spring Security 6. If you defined custom security configurations, you need to follow the guides describing the [migration from 5.7 to 5.8](https://docs.spring.io/spring-security/reference/5.8/migration/index.html) and the [migration from 5.8 to 6.0](https://docs.spring.io/spring-security/reference/6.0/migration/index.html).

### Minimum Dependency Versions

Make sure that all libraries used in your project are either compatible with Spring Boot 3 / Jakarta EE 10 or offer a new version that you can adopt. CAP Java 2.0 itself requires updated [dependency versions](./versions#dependencies-version-2) of:

- `@sap/cds-dk`
- `@sap/cds-compiler`
- XSUAA library
- SAP Cloud SDK
- Java Logging (replace `cf-java-logging-support-servlet` with `cf-java-logging-support-servlet-jakarta`)

::: warning
The Cloud SDK BOM `sdk-bom` manages XSUAA until version 2.x, which isn't compatible with CAP Java 2.x. You have two options:

* Replace `sdk-bom` with `sdk-modules-bom`, which [manages all Cloud SDK dependencies but not the transitive dependencies.](https://sap.github.io/cloud-sdk/docs/java/guides/manage-dependencies#the-sap-cloud-sdk-bill-of-material)
* Or, add [dependency management for XSUAA](https://github.com/SAP/cloud-security-services-integration-library#installation) before Cloud SDK's `sdk-bom`.
:::

### API Cleanup

Some interfaces, methods, configuration properties, and annotations that had already been deprecated in 1.x are now removed in version 2.0.
Strictly fix all usages of [deprecated APIs](#overview-of-removed-interfaces-and-methods) by switching to the recommended replacements.

::: tip
In your IDE, enable the compiler warning "Signal overwriting or implementing deprecated method".
:::

#### Legacy Upsert

Up to cds-services 1.27, upsert always completely _replaced_ pre-existing data with the given data: it was implemented as a cascading delete followed by a deep _insert_. In the insert phase, the initializations were performed for all elements that were absent in the data: UUID generation, `@cds.on.insert` handlers, and initialization with default values. Consequently, in the old implementation, an upsert with partial data would have reset absent elements to their initial values! To avoid such a reset with the old upsert, the data always had to be complete.

Since version 1.28, the upsert is implemented as a deep _update_ that creates data if it doesn't exist. An upsert with partial data now leaves the absent elements untouched. In particular, UUID values are _not generated_ with the new upsert implementation.

Application developers upgrading from cds-services <= 1.27 need to be aware of these changes. Check if the usage of upsert in your code is compatible with the new implementation, especially:

* Ensure that all key values are contained in the data and that you don't rely on UUID key generation.
* Check if insert is more appropriate.

::: warning
The global configuration parameter `cds.sql.upsert.strategy`, as well as the upsert hint to switch back to the legacy upsert behavior, are no longer supported with 2.0. If you rely on the replace behavior of the legacy upsert, use a cascading delete followed by a deep insert.
:::

#### Representation of Pagination {#limit}

The interfaces `CqnLimit` and `Limit` are removed. Use the methods `limit(top)` and `limit(top, skip)` of `Select` and `Expand` to specify the pagination settings.
Use the methods `top()` and `skip()` of `CqnEntitySelector` to introspect the pagination settings of a `CqnExpand` and `CqnSelect`.

#### Statement Modification {#modification}

##### Removal of Deprecated CqnModifier

The deprecated `CqnModifier`, whose default methods make expensive copies of literal values, is removed. Instead, use the `Modifier` as documented in [Modifying CQL Statements](working-with-cql/query-api#copying-modifying-cql-statements).

If your modifier overrides one or more of the `CqnModifier:literal` methods that take `value` and `cdsType` as arguments, override `Modifier:literal(CqnLiteral literal)` instead. You can create new values using `CQL.val(value).type(cdsType);`.

##### Removal of Deprecated Methods in Modifier {#modifier}

The deprecated methods `ref(StructuredTypeRef)` and `ref(ElementRef)` are removed; instead, implement the new methods `ref(CqnStructuredTypeRef)` and `ref(CqnElementRef)`. Use `CQL.copy(ref)` if you require a modifiable copy of the ref.

```java
Modifier modifier = new Modifier() {
  @Override
  public CqnStructuredTypeRef ref(CqnStructuredTypeRef ref) {
    RefBuilder<StructuredTypeRef> copy = CQL.copy(ref); // try to avoid copy
    copy.targetSegment().filter(newFilter);
    return copy.build();
  }

  @Override
  public CqnValue ref(CqnElementRef ref) {
    List<Segment> segments = new ArrayList<>(ref.segments());
    segments.add(0, CQL.refSegment(segments.get(0).id(), filter));
    return CQL.get(segments).as(alias);
  }
};
CqnStatement copy = CQL.copy(statement, modifier);
```

### Removed Interfaces and Methods Overview {#overview-of-removed-interfaces-and-methods}

#### com.sap.cds

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| ConstraintViolationException | | UniqueConstraintException |
| ResultBuilder | updatedRows | see javadoc |

#### com.sap.cds.ql

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| CQL | literal | val or constant |
| Select | groupBy | groupBy |

#### com.sap.cds.ql.cqn

| Class / Interface | Method / Field |
Replacement |
| --- | --- | --- |
| CqnParameter | getName | name |
| CqnReference.Segment | accept(visitor) | CqnReference.accept(visitor) |
| CqnSelectList | prefix | ref |
| CqnSelectListItem | displayName | asValue + displayName |
| | alias | asValue + alias |
| CqnSortSpecification | item | value |
| CqnSource | isQuery | isSelect |
| | asQuery | asSelect |
| CqnVisitor | visit(CqnReference.Segment seg) | visit(CqnElementRef), visit(CqnStructuredTypeRef) |
| CqnXsert | getKind | isInsert, isUpsert |
| Modifier | ref(StructuredTypeRef ref) | ref(CqnStructuredTypeRef ref) |
| | ref(ElementRef ref) | ref(CqnElementRef ref) |
| | in(Value, Collection) | in(Value, CqnValue) |
| | match(ref, pred, quantifier) | match(CqnMatchPredicate match) |
| | selectListItem(value, alias) | selectListValue(value, alias) |
| | inline(ref, items) | inline(CqnInline inline) |
| | expand(ref, items, orderBy, limit) | expand(CqnExpand expand) |
| | expand(Expand expand) | expand(CqnExpand expand) |
| | limit(Limit limit) | top(long top) and skip(long skip) |

#### com.sap.cds.reflect

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| CdsAssociationType | keys | refs |
| CdsStructuredType | isInlineDefined | isAnonymous |

#### com.sap.cds.services

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| ErrorStatus | getCode() | getCodeString() |
| ServiceException | messageTarget(prefix, entity, path) | messageTarget(parameter, path) |

#### com.sap.cds.services.cds

| Class/Interface | Method | Replacement |
|---------|---------| -----|
| CdsService | | CqnService |

#### com.sap.cds.services.environment

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| ServiceBinding | | [com.sap.cloud.environment.
`servicebinding.api.ServiceBinding`](https://github.com/SAP/btp-environment-variable-access/blob/main/api-parent/core-api/src/main/java/com/sap/cloud/environment/servicebinding/api/ServiceBinding.java) |

::: details

##### Interface `ServiceBinding`

The interface `com.sap.cds.services.environment.ServiceBinding` is deprecated and replaced with the interface [`com.sap.cloud.environment.servicebinding.api.ServiceBinding`](https://github.com/SAP/btp-environment-variable-access/blob/main/api-parent/core-api/src/main/java/com/sap/cloud/environment/servicebinding/api/ServiceBinding.java). For convenience, the adapter class `com.sap.cds.services.utils.environment.ServiceBindingAdapter` is provided, which maps the deprecated interface to the new one.

:::

#### com.sap.cds.services.handler

| Class/Interface | Method | Replacement |
|---------|---------| -----|
| EventPredicate | | n/a |

::: details

##### Interface `EventPredicate`

The interface `com.sap.cds.services.handler.EventPredicate` is removed. Consequently, all methods at the interface `com.sap.cds.services.Service` taking this interface as an argument are removed. All removed methods were marked as deprecated in prior releases.
:::

#### com.sap.cds.services.messages

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| Message | target(prefix, entity, path) | target(start, path) |
| MessageTarget | getPrefix() | getParameter() |
| | getEntity(), getPath() | getRef() |

#### com.sap.cds.services.persistence

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| PersistenceService | getCdsDataStore() | Use PersistenceService |

#### com.sap.cds.services.request

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| ParameterInfo | getQueryParameters() | getQueryParams() |
| UserInfo | getAttribute(String) | getAttributeValues(String) |

#### com.sap.cds.services.runtime

| Class / Interface | Method / Field | Replacement |
| --- | --- | --- |
| CdsModelProvider | get(tenantId) | get(userInfo, features) |
| CdsRuntime | runInChangeSetContext(Consumer) | changeSetContext().run(Consumer) |
| | runInChangeSetContext(Function) | changeSetContext().run(Function) |
| | runInRequestContext(Consumer) | requestContext().run(Consumer) |
| | runInRequestContext(Function) | requestContext().run(Function) |
| Request | CdsRuntime.runInRequestContext(Request, Function\|Consumer) | CdsRuntime.requestContext().run(Function) |
| RequestParameters | CdsRuntime.runInRequestContext(Request, Function\|Consumer) | CdsRuntime.requestContext().run(Function) |
| RequestUser | CdsRuntime.runInRequestContext(Request, Function\|Consumer) | CdsRuntime.requestContext().run(Function) |

::: details

##### Method `CdsRuntime.runInRequestContext(Request, Function|Consumer)`

The interface `Request` and its used interfaces `RequestParameters` and `RequestUser` are removed.
They were still used in the method `CdsRuntime.runInRequestContext(Request, Function|Consumer)`, which was also deprecated and should be replaced by `CdsRuntime.requestContext().run(Function)`.

:::

#### Overview of Removed CDS Properties

Some CdsProperties were already marked as deprecated in CAP Java 1.x and are now removed in 2.x.

| removed | replacement |
| --- | --- |
| cds.dataSource.serviceName | `cds.dataSource.binding` |
| cds.drafts.associationsToInactiveEntities | see [Lean Draft](#lean-draft) |
| cds.locales.normalization.whiteList | `cds.locales.normalization.includeList` |
| cds.messaging.services.\<key\>.queue.maxFailedAttempts | Use custom error handling |
| cds.messaging.services.\<key\>.topicNamespace | `cds.messaging.services.<key>.subscribePrefix` |
| cds.multiTenancy.instanceManager | `cds.multiTenancy.serviceManager` |
| cds.multiTenancy.dataSource.hanaDatabaseIds | obsolete, information is automatically retrieved from bindings |
| cds.odataV4.indexPage | `cds.indexPage` |
| cds.security.authenticateUnknownEndpoints | `cds.security.authentication.authenticateUnknownEndpoints` |
| cds.security.authorizeAutoExposedEntities | if disabled, add auto-exposed entities explicitly into your service definition |
| cds.security.authorization.autoExposedEntities | if disabled, add auto-exposed entities explicitly into your service definition |
| cds.security.defaultRestrictionLevel | `cds.security.authentication.mode` |
| cds.security.draftProtection | `cds.security.authorization.draftProtection` |
| cds.security.instanceBasedAuthorization | if disabled, remove `@requires` / `@restrict` annotations |
| cds.security.authorization.instanceBasedAuthorization | remove `@requires` / `@restrict` annotations |
| cds.security.openMetadataEndpoints | `cds.security.authentication.authenticateMetadataEndpoints` |
| cds.security.openUnrestrictedEndpoints | `cds.security.authentication.mode` |
| cds.security.xsuaa.serviceName | `cds.security.xsuaa.binding` |
| cds.security.xsuaa.normalizeUserNames | obsolete, effectively hard-coded to `false` |
| cds.services | cds.application.services |
| cds.sql.upsert | See [Legacy Upsert](#legacy-upsert) |

### Removed Annotations Overview

- `@search.cascade` is no longer supported. It's replaced by [@cds.search](../guides/providing-services#cds-search).

### Changed Behavior

#### Immutable Values

The implementations of `Value` are now immutable. This change makes [copying & modifying CQL statements](./working-with-cql/query-api#copying-modifying-cql-statements) cheaper, which significantly improves performance. Changing the type of a value via `Value::type` now returns a new (immutable) value or throws an exception if the type change is not supported:

```Java
Literal number = CQL.val(100);
Value string = number.type(CdsBaseType.STRING); // number is unchanged
```

#### Immutable References

In CDS QL, a [reference](../cds/cxn#references) (_ref_) identifies an entity set or an element of a structured type. References can have multiple segments, and ref segments can have filter conditions. The default implementations of references (`ElementRef` and `StructuredTypeRef`), as well as ref segments (`RefSegment`), are now immutable. This change makes [copying & modifying CQL statements](./working-with-cql/query-api#copying-modifying-cql-statements) much cheaper, which significantly improves performance.

##### Set alias or type

`CQL:entity:asRef`, `CQL:to:asRef` and `CQL:get` create immutable refs. Modifying the ref is not supported. The methods `as(alias)` and `type(cdsType)` now return a *new* (immutable) ref:

```java
ElementRef authorName = CQL.get("name").as("Author");
ElementRef nombre = authorName.as("nombre");        // authorName is unchanged
ElementRef string = authorName.type("cds.String");  // authorName is unchanged
```

##### Modify ref segments

The segments of an immutable ref can't be modified in-place any longer either.
Create an immutable ref segment with a filter as follows:

```java
Segment seg = CQL.refSegment("title", predicate);
```

The deprecated `RefSegment:id` and `RefSegment:filter` methods now throw an `UnsupportedOperationException`.

For in-place modification of ref segments, use `CQL.copy(ref)` to create a `RefBuilder`, which is a modifiable copy of the original ref. The `RefBuilder` allows you to modify the segments in-place to change the segment ID or set a filter. Finally, call the `build` method to create an immutable ref.

To manipulate a ref in a [Modifier](#modifier), implementations need to override the new `ref(CqnStructuredTypeRef ref)` and `ref(CqnElementRef ref)` methods.

#### Null Values in CDS QL Query Results

With CAP Java 2.0, `null` values are no longer removed from the result of CDS QL queries. This needs to be considered when using methods that operate on the key set of `Row`, such as `Row:containsKey`, `Row:keySet`, and `Row:entrySet`.

#### Result of Updates Without Matching Entity

The `Result` rows of CDS QL updates are no longer cleared if no entity was updated. To find out if the entity has been updated, check the [update count](./working-with-cql/query-api#update):

```Java
CqnUpdate update = Update.entity(BOOKS).entry(book); // w/ book: {ID: 0, stock: 3}
Result result = service.run(update);
long updateCount = result.rowCount(); // 0 matches with ID 0
```

For batch updates, use `Result::rowCount` with the [batch index](./working-with-cql/query-execution#batch-execution):

```Java
// books: [{ID: 251, stock: 11}, {ID: 252, stock: 7}, {ID: 0, stock: 3}]
CqnUpdate update = Update.entity(BOOKS).entries(books);
Result result = service.run(update);
result.batchCount(); // number of batches (3)
result.rowCount(2);  // 0 matches with ID 0
```

#### Provider Tenant Normalization

The default value of the CDS property `cds.security.authentication.normalizeProviderTenant` is changed to `true`.
With this change, the provider tenant is normalized and set to `null` in the UserInfo by default. If you have subscribed the provider tenant to your application, you need to disable this feature.

### Lean Draft

The property `cds.drafts.associationsToInactiveEntities` has been removed. It enabled a feature that caused associations to other draft documents to combine active and inactive versions of the association target. This mixing of inactive and active data is no longer supported.

In cases where it is still required to connect two independent draft documents through an association, you can annotate this association with `@odata.draft.enclosed`. Note: This ensures that the active version points to an active target, while the inactive version points to an inactive target. It doesn't mix active and inactive data into the same association.

The following table summarizes the behavior of associations between different draft-enabled entities:

| Source Entity | Association Type | Target Entity | Draft Document Boundaries |
| --- | --- | --- | --- |
| active<sup>1</sup> | composition | active | same document |
| inactive<sup>2</sup> | composition | inactive | same document |
| active | [backlink](../cds/cdl#to-many-associations) association | active | same document |
| inactive | backlink association | inactive | same document |
| active | association | active | independent documents |
| inactive | association | active | independent documents |
| active | association with `@odata.draft.enclosed` | active | independent documents |
| inactive | association with `@odata.draft.enclosed` | inactive | independent documents |

<sup>1</sup> `IsActiveEntity = true`
<sup>2</sup> `IsActiveEntity = false`

### Changes to Maven Plugins

#### cds-maven-plugin

The deprecated parameters `generateMode` and `parserMode` are removed from the [goal generate](./assets/cds-maven-plugin-site/generate-mojo.html){target="_blank"}.

#### cds4j-maven-plugin

The deprecated Maven plugin `cds4j-maven-plugin` is removed and no longer available. It's replaced by the [`cds-maven-plugin`](./assets/cds-maven-plugin-site/plugin-info.html){target="_blank"}, which provides the same functionality and more.

## Classic MTX to Streamlined MTX

How to migrate from [classic MTX](./multitenancy) to [streamlined MTX](../guides/multitenancy/) is described [here](../guides/multitenancy/old-mtx-migration).

## CAP Java Classic to CAP Java 1.x

To make the CAP Java SDK, and therefore the applications built on it, future-proof, we revamped the CAP Java SDK. Compared to the classic CAP Java Runtime (also known as the "Java Gateway stack"), the new CAP Java SDK has numerous benefits:

- Starts up much faster
- Supports local development with SQLite
- Has clean APIs to register event handlers
- Integrates nicely with Spring and Spring Boot
- Supports custom protocol adapters (OData V4 support included)
- Has a modular design: Add features as your application grows
- Enables connecting to advanced SAP BTP services like SAP Event Mesh

We strongly recommend adopting the new CAP Java SDK when starting a new project. Existing projects that currently use the classic CAP Java Runtime can adopt the new CAP Java SDK midterm to take advantage of new features and the superior architecture. In the following sections, we describe the steps to migrate a Java project from the classic CAP Java Runtime to the new CAP Java SDK.

### OData Protocol Version

The classic CAP Java Runtime came in several different flavors supporting either the OData V2 or V4 protocol.
The new CAP Java SDK streamlines this by providing a common [protocol adapter layer](./developing-applications/building#protocol-adapters), which enables handling any OData protocol version, or even different protocols, with *one* application backend. Hence, if you decide to change the protocol that exposes your domain model, you no longer have to change your business logic.

::: tip
By default, the CAP Java Runtime comes with protocol adapters for OData V4 and [OData V2 (Beta)](#v2adapter). Therefore, you can migrate your frontend code to the new CAP Java SDK without changes. In addition, you have the option to move from SAP Fiori Elements V2 to SAP Fiori Elements V4 at any time.
:::

### Migrate the Project Structure

Create a new CAP Java project beside the existing one you want to migrate. You can use the CAP Java Maven archetype to create it:

```sh
mvn archetype:generate -DarchetypeArtifactId=cds-services-archetype -DarchetypeGroupId=com.sap.cds -DarchetypeVersion=RELEASE
```
Further details about creating a new CAP Java project and the project structure itself can be found in section [Starting a New Project](./getting-started#new-project).

By default, the Java service module goes to the folder `srv`. If you want to use a different service module folder, you have to adapt it manually: Rename the service module folder to your preferred name and adjust the `<modules>` section in the `pom.xml` file in your project's root folder:

```xml
<modules>
  ...
  <module>srv</module>
  ...
</modules>
```

::: tip
If you've changed the service module folder name, you have to consider this in the next steps.
:::

### Copy the CDS Model

Now, you can start migrating your CDS model from the classic project to the newly created CAP Java project. Therefore, copy your CDS model and data files (_*.cds_ & _*.csv_) manually from the classic project to the corresponding locations in the new project, presumably the `db` folder. If you organize your CDS files within subfolders, also re-create these subfolders in the new project to ensure the same relative paths between the copied CDS files. Otherwise, compiling your CDS model in the new project would fail. Usually the CDS files are located in the following folders:

| Usage | Location in classic project | Location in new CAP Java project |
| --- | --- | --- |
| Database Model | `/db/**` | `/db/**` |
| Service Model | `/srv/**` | `/srv/**` |

If your CDS model depends on other reusable CDS models, add those dependencies to `/package.json`:

```json
...
"dependencies": {
  "@sap/cds": "^3.0.0",
  ... // add your CDS model reuse dependencies here
},
...
```

::: tip
In your CDS model, ensure that you explicitly define the data type of elements whenever an aggregate function (`max`, `min`, `avg`, etc.) is used; otherwise the build might fail.
:::

In the following example, element `createdAt` has an explicitly specified data type (`timestamp`):

```cds
view AddressView as select from Employee.Address {
  street,
  apartment,
  postal_code,
  MAX(createdAt) AS createdAt : timestamp
};
```

#### CDS Configuration

The CDS configuration is also part of `/package.json` and has to be migrated as well from the classic to the new project. Therefore, copy and replace the whole `cds` section from your classic _package.json_ to the new project:

```json
...
"dependencies": {
  "@sap/cds": "^3.0.0",
},
"cds": {
  // copy this CDS configuration from your classic project
  ...
}
...
```

::: tip
If there's also a `/.cdsrc.json` in your classic project to configure the CDS build, copy this file to the new project.
:::

You can validate the final CDS configuration by executing a CDS command in the root folder of the new project:

```sh
cds env
```

It prints the effective CDS configuration on the console. Check that this configuration is valid for your project. Execute this command also in your classic project and compare the results; they should be the same. Further details about the effective CDS configuration can be found in section [Effective Configuration](../node.js/cds-env#cli).

#### First Build and Deployment

After you've copied all your CDS files, maintained additional dependencies, and configured the CDS build, you can try to build your new CAP Java project for the first time. Therefore, execute the following Maven command in the root folder of your new CAP Java project:

```sh
mvn clean install
```

If this Maven build finishes successfully, you can optionally try to deploy your CDS model to an SAP HANA database by executing the following CDS command:

```sh
cds deploy --to hana
```

[See section **SAP HANA Cloud** for more details about deploying to SAP HANA.](../guides/databases-hana){.learn-more}

### Migrate Java Business Logic

#### Migrate Dependencies

Now, it's time to migrate your Java business logic.
If your event handlers require additional libraries that go beyond the already provided Java Runtime API, add those dependencies manually to the `dependencies` section in the file `/srv/pom.xml`, for example:

```xml
<dependencies>
  ...
  <dependency>
    <groupId>com.sap.cds</groupId>
    <artifactId>cds-starter-spring-boot-odata</artifactId>
  </dependency>
  <dependency>
    <groupId>org.xerial</groupId>
    <artifactId>sqlite-jdbc</artifactId>
  </dependency>
  ...
</dependencies>
```

::: tip
Don't add any dependencies of the classic Java Runtime to the new project. Those dependencies are already replaced with the corresponding version of the new CAP Java SDK.
:::

#### Migrate Event Handlers

In the next steps, you adapt your Java classes to be compatible with the new Java Runtime API. That means you copy your event handler classes from the classic to the new project and modify the Java source code to be compatible with the new Java SDK. Usually the event handler classes and tests are located in these folders:

| Usage | Location in classic project | Location in new CAP Java project |
| --- | --- | --- |
| Handler classes | `/srv/src/main/java/**` | `/srv/src/main/java/**` |
| Test classes | `/srv/src/test/java/**` | `/srv/src/test/java/**` |

Copy your Java class files (`*.java`) manually from the classic project to the corresponding locations in the new project. It's important that you re-create the same subfolder structure in the new project as in the classic project, as the subfolder structure reflects the Java package names of your classes.

##### Annotations

Annotate all of your event handler classes with the following annotations and ensure a unique service name:

```java
@org.springframework.stereotype.Component
@com.sap.cds.services.handler.annotations.ServiceName("serviceName")
```

::: tip
All event handler classes also *have* to implement the marker interface `com.sap.cds.services.handler.EventHandler`. Otherwise, the event handlers defined in the class won't get called.
:::

Finally, your event handler class has to look similar to this example:

```java
import org.springframework.stereotype.Component;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.ServiceName;

@Component
@ServiceName("AdminService")
public class AdminServiceHandler implements EventHandler {
  // ...
}
```

The new CAP Java SDK introduces new annotations for event handlers. Replace event annotations at event handler methods according to this table:

| Classic Java Runtime | CAP Java SDK |
| --- | --- |
| `@BeforeCreate(entity = "yourEntityName")` | `@Before(event = CqnService.EVENT_CREATE, entity = "yourEntityName")` |
| `@BeforeDelete(entity = "yourEntityName")` | `@Before(event = CqnService.EVENT_DELETE, entity = "yourEntityName")` |
| `@BeforeRead(entity = "yourEntityName")` | `@Before(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@BeforeQuery(entity = "yourEntityName")` | `@Before(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@BeforeUpdate(entity = "yourEntityName")` | `@Before(event = CqnService.EVENT_UPDATE, entity = "yourEntityName")` |
| `@Create(entity = "yourEntityName")` | `@On(event = CqnService.EVENT_CREATE, entity = "yourEntityName")` |
| `@Delete(entity = "yourEntityName")` | `@On(event = CqnService.EVENT_DELETE, entity = "yourEntityName")` |
| `@Query(entity = "yourEntityName")` | `@On(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@Read(entity = "yourEntityName")` | `@On(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@Update(entity = "yourEntityName")` | `@On(event = CqnService.EVENT_UPDATE, entity = "yourEntityName")` |
| `@AfterCreate(entity = "yourEntityName")` | `@After(event = CqnService.EVENT_CREATE, entity = "yourEntityName")` |
| `@AfterRead(entity = "yourEntityName")` | `@After(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@AfterQuery(entity = "yourEntityName")` | `@After(event = CqnService.EVENT_READ, entity = "yourEntityName")` |
| `@AfterUpdate(entity = "yourEntityName")` | `@After(event = CqnService.EVENT_UPDATE, entity = "yourEntityName")` |
| `@AfterDelete(entity = "yourEntityName")` | `@After(event = CqnService.EVENT_DELETE, entity = "yourEntityName")` |

::: tip
The `sourceEntity` annotation field doesn't exist in the new CAP Java SDK. In case your event handler should only be called for specific source entities, you need to achieve this by [analyzing the CQN](./working-with-cql/query-introspection#using-the-iterator) in custom code.
:::

##### Event Handler Signatures

The basic signature of an event handler method is `void process(EventContext context)`. However, it doesn't provide the highest level of comfort. Event handler signatures can vary on three levels:

- EventContext arguments
- POJO-based arguments
- Return type

Replace types from package `com.sap.cloud.sdk.service.prov.api.request` in the classic Java Runtime by types from package `com.sap.cds.services.cds` as described by the following table:

| Classic Java Runtime | New CAP Java SDK |
| --- | --- |
| `CreateRequest` | `CdsCreateEventContext` |
| `DeleteRequest` | `CdsDeleteEventContext` |
| `QueryRequest` | `CdsReadEventContext` |
| `ReadRequest` | `CdsReadEventContext` |
| `UpdateRequest` | `CdsUpdateEventContext` |
| `ExtensionHelper` | Use dependency injection provided by Spring |

You can also get your entities injected by adding an additional argument with one of the following types:

- `java.util.stream.Stream`
- `java.util.List`

[See section **Event Handler Method Signatures** for more details.](event-handlers/#handlersignature){.learn-more}

Also replace the classic handler return types with the corresponding new implementation:

| Classic Java Runtime | New CAP Java SDK |
| --- | --- |
| return `BeforeCreateResponse` | call `CdsCreateEventContext::setResult(..)` or return `Result` |
| return `BeforeDeleteResponse` | call `CdsDeleteEventContext::setResult(..)` or return `Result` |
| return `BeforeQueryResponse` | call `CdsReadEventContext::setResult(..)` or return `Result` |
| return `BeforeReadResponse` | call `CdsReadEventContext::setResult(..)` or return `Result` |
| return `BeforeUpdateResponse` | call `CdsUpdateEventContext::setResult(..)` or return `Result` |

### Delete Obsolete Files

There are numerous files in your classic project that aren't required or supported anymore in the new project. Don't copy any of the following files to the new project:

```txt
/
├─ db/
│  ├─ .build.js
│  └─ package.json
└─ srv/src/main/
   ├─ resources/
   │  ├─ application.properties
   │  └─ connection.properties
   └─ webapp/
      ├─ META-INF/
      │  ├─ sap_java_buildpack/config/resources_configuration.xml
      │  └─ context.xml
      └─ WEB-INF/
         ├─ resources.xml
         ├─ spring-security.xml
         └─ web.xml
```

### Transaction Hooks

In the classic Java Runtime, it was possible to hook into the transaction initialization and end phase by adding the annotations `@InitTransaction` or `@EndTransaction` to a public method.

The method annotated with `@InitTransaction` was invoked just after the transaction started and before any operation executed. Usually this hook was used to validate incoming data across an OData batch request.

[See section **InitTransaction Hook** for more details about the init transaction hook in classic CAP Java.](./custom-logic/hooks#inittransaction-hook){.learn-more}

The method annotated with `@EndTransaction` was invoked after all the operations in the transaction were completed and before the transaction was committed.

[See section **EndTransaction Hook** for more details about the end transaction hook in classic CAP Java.](./custom-logic/hooks#endtransaction-hook){.learn-more}

The new CAP Java SDK doesn't support these annotations anymore. Instead, it supports registering a `ChangeSetListener` at the `ChangeSetContext`, which provides `beforeClose` and `afterClose` hooks.
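As a minimal sketch of this replacement (the service name, triggering event, and validation logic are illustrative assumptions, not part of this guide), a handler could register such a listener like this:

```java
import org.springframework.stereotype.Component;

import com.sap.cds.services.EventContext;
import com.sap.cds.services.ServiceException;
import com.sap.cds.services.changeset.ChangeSetListener;
import com.sap.cds.services.cds.CqnService;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.Before;
import com.sap.cds.services.handler.annotations.ServiceName;

@Component
@ServiceName("AdminService") // illustrative service name
public class CrossRequestValidationHandler implements EventHandler {

  @Before(event = CqnService.EVENT_CREATE)
  public void registerValidation(EventContext context) {
    // beforeClose() runs after all operations of the ChangeSet, right
    // before the transaction is committed -- the place to perform
    // @InitTransaction/@EndTransaction-style cross-request validation.
    context.getChangeSetContext().register(new ChangeSetListener() {
      @Override
      public void beforeClose() {
        boolean valid = true; // placeholder for real validation logic
        if (!valid) {
          // throwing here cancels the whole ChangeSet (transaction)
          throw new ServiceException("Cross-request validation failed");
        }
      }
    });
  }
}
```

Registering the listener from within a `@Before` handler ensures it's only attached to ChangeSets that actually contain the operations you want to validate.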
[See section **Reacting on ChangeSets** for more details.](./event-handlers/changeset-contexts#reacting-on-changesets){.learn-more}

To replace the `@InitTransaction` handler, you can use the `beforeClose` method instead. This method is called at the end of the transaction and can be used, for example, to validate incoming data across multiple requests in an OData batch *before* the transaction is committed. It's possible to cancel the transaction in this phase by throwing a `ServiceException`. The CAP Java SDK sample application shows how such a validation using the `ChangeSetListener` approach can be implemented; see [here](https://github.com/SAP-samples/cloud-cap-samples-java/blob/cross-validation/srv/src/main/java/my/bookshop/handlers/ChapterServiceHandler.java) for the example code. Note that to validate incoming data for *single* requests, we recommend using a simple `@Before` handler instead.

[See section **Introduction to Event Handlers** for a detailed description of `Before` handlers.](event-handlers/#before){.learn-more}

### Security Settings

For applications based on Spring Boot, the new CAP Java SDK simplifies configuring *authentication* significantly: Using the classic CAP Java Runtime, you had to configure authentication for all application endpoints (including the endpoints exposed by your CDS model) explicitly. The new CAP Java SDK configures authentication for all exposed endpoints automatically, based on the security declarations in your CDS model.

*Authorization* can be accomplished in both runtimes with the CDS model annotations `@requires` and `@restrict`, as described in section [Authorization and Access Control](../guides/security/authorization). Making use of the declarative approach in the CDS model is highly recommended.

In addition, the new CAP Java SDK enables using additional authentication methods. For instance, you can use basic authentication for mock users, which are useful for local development and testing.
See section [Mock Users](./security#mock-users) for more details. An overview of the general security configuration in the new CAP Java SDK can be found in section [Security](security).

#### Configuration and Dependencies

To make use of authentication and authorization with JWT tokens issued by XSUAA on SAP BTP, add the following dependency to your `pom.xml`:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-xsuaa</artifactId>
</dependency>
```

This feature provides utilities to access information in JWT tokens, but doesn't activate authentication by default. Therefore, as in the classic CAP Java Runtime, activate authentication by adding a variant of the [XSUAA library](https://github.com/SAP/cloud-security-xsuaa-integration) suitable for your application (depending on whether you use Spring, Spring Boot, or plain Java) as described in the following sections.

##### Spring Boot

Activate Spring Security with XSUAA authentication by adding the following Maven dependency:

```xml
<dependency>
  <groupId>com.sap.cloud.security.xsuaa</groupId>
  <artifactId>xsuaa-spring-boot-starter</artifactId>
  <version>${xsuaa.version}</version>
</dependency>
```

Maintaining a `spring-security.xml` file or a custom `WebSecurityConfigurerAdapter` or `SecurityFilterChain` isn't necessary anymore, because the new CAP Java SDK runtime *autoconfigures* authentication in the Spring context according to your CDS model:

- Endpoints exposed by the CDS model and annotated with `@restrict` are automatically authenticated.
- Endpoints exposed by the CDS model and *not* annotated with `@restrict` are public by definition and hence not authenticated.
- All other endpoints the application exposes manually through Spring are authenticated.

If you need to change this default behavior, either [manually configure these endpoints](./security#spring-boot) or turn off autoconfiguration of custom endpoints by means of the following application configuration parameter:

```yaml
cds.security.authentication.authenticate-unknown-endpoints: false
```

##### Plain Java

The existing authentication configuration stays unchanged. No autoconfiguration is provided.
#### Enforcement API & Custom Handlers

The new CAP Java SDK offers a technical service called `AuthorizationService`, which serves as a replacement for the former Enforcement APIs. Obtain a reference to this service just like for all other services, either explicitly through a `ServiceCatalog` lookup or per dependency injection in Spring:

```java
@Autowired
AuthorizationService authService;
```

Information about the request user is passed in the current `RequestContext`:

```java
EventContext context;
UserInfo user = context.getUserInfo();
```

or through dependency injection within a handler bean:

```java
@Autowired
UserInfo user;
```

With the help of these interfaces, the classic enforcement API can be mapped to the new API as listed in the following table:

| classic API | new API | Remarks |
| :--- | :--- | --- |
| `isAuthenticatedUser(String serviceName)` | `authService.hasServiceAccess(serviceName, event)` | |
| `isRegisteredUser(String serviceName)` | no substitution required | |
| `hasEntityAccess(String entityName, String event)` | `authService.hasEntityAccess(entityName, event)` | |
| `getWhereCondition()` | `authService.calcWhereCondition(entityName, event)` | |
| `getUserName()` | `user.getName()` | The user's name is also referenced with `$user` and used for the `managed` aspect. |
| `getUserId()` | `user.getId()` | |
| `hasUserRole(String roleName)` | `user.hasRole(roleName)` | |
| `getUserAttribute(String attributeName)` | `user.getAttribute(attributeName)` | |
| `isContainerSecurityEnabled()` | no substitution required | |

[See section **Enforcement API & Custom Handlers in Java** for more details.](./security#enforcement-api){.learn-more}

### Data Access and Manipulation

There are several ways of accessing data. The first and most secure way is to use the Application Service through a `CqnService` instance.
The second is to use the `PersistenceService`; in that case, query execution is done directly against the underlying data source, bypassing all authorization checks available on the service layer. The third is to use the CDS4J component `CdsDataStore`, which also executes queries directly.

#### Access Application Service in Custom Handler and Query Execution

To access an Application Service in a custom handler and to execute queries, perform the following steps:

1) Inject the instance of `CqnService` in your custom handler class:

```java
@Resource(name = "CatalogService")
private CqnService catalogService;
```

[See section **Services Accepting CQN Queries** for more details.](cqn-services/#cdsservices){.learn-more}

2) In each custom handler, replace instances of `DataSourceHandler` as well as `CDSDataSourceHandler` with the `CqnService` instance.

3) Rewrite and execute the query (if any). Example of query execution in the *classic Java Runtime*:

```java
CDSDataSourceHandler cdsHandler = DataSourceHandlerFactory
    .getInstance()
    .getCDSHandler(getConnection(), queryRequest.getEntityMetadata().getNamespace());

CDSQuery cdsQuery = new CDSSelectQueryBuilder("CatalogService.Books")
    .selectColumns("id", "title")
    .where(new ConditionBuilder().columnName("title").IN("Spring", "Java"))
    .orderBy("title", true)
    .build();

cdsHandler.executeQuery(cdsQuery);
```

[See section **CDS Data Source** for more details.](./custom-logic/remote-data-source#cds-data-source){.learn-more}

The corresponding query and its execution in the *new CAP Java SDK* look as follows:

```java
Select query = Select.from("CatalogService.Books")
    .columns("id", "title")
    .where(p -> p.get("title").in("Spring", "Java"))
    .orderBy("title");

catalogService.run(query);
```

[See section **Query Builder API** for more details.](./working-with-cql/query-api){.learn-more}

4) Rewrite and execute the CRUD operations (if any).
| Action | Classic Java Runtime | New CAP Java SDK |
| --- | --- | --- |
| Create | `dsHandler.executeInsert(request.getData(), true)` | `catalogService.run(event.getCqn())` or `catalogService.run(Insert.into("Books").entry(book))` |
| Read | `dsHandler.executeRead(request.getEntityMetadata().getName(), request.getKeys(), request.getEntityMetadata().getElementNames())` | `catalogService.run(event.getCqn())` or `catalogService.run(Select.from("Books").where(b -> b.get("ID").eq(42)))` |
| Update | `dsHandler.executeUpdate(request.getData(), request.getKeys(), true)` | `catalogService.run(event.getCqn())` or `catalogService.run(Update.entity("Books").data(book))` |
| Delete | `dsHandler.executeDelete(request.getEntityMetadata().getName(), request.getKeys())` | `catalogService.run(event.getCqn())` or `catalogService.run(Delete.from("Books").where(b -> b.get("ID").eq(42)))` |

As you can see, in the *new CAP Java SDK* it's possible to either directly execute the CQN statement of the event, or to construct and execute your own custom query.
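As a rough sketch of such a migrated handler (the service, entity, and class names here are illustrative assumptions, not from this guide), an `@On` UPDATE handler can forward the event's CQN statement to the persistence layer and set the result:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import com.sap.cds.services.cds.CdsUpdateEventContext;
import com.sap.cds.services.cds.CqnService;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.On;
import com.sap.cds.services.handler.annotations.ServiceName;
import com.sap.cds.services.persistence.PersistenceService;

@Component
@ServiceName("CatalogService") // illustrative service name
public class BooksUpdateHandler implements EventHandler {

  @Autowired
  private PersistenceService persistence;

  // Replaces a classic @Update handler that returned a response object:
  // run the incoming CQN statement and hand the Result to the context.
  @On(event = CqnService.EVENT_UPDATE, entity = "CatalogService.Books")
  public void onUpdate(CdsUpdateEventContext context) {
    context.setResult(persistence.run(context.getCqn()));
  }
}
```

Delegating to the `PersistenceService` here avoids re-dispatching through the same application service, which could otherwise re-trigger this handler.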
[See section **Query Builder API** for more details.](./working-with-cql/query-api){.learn-more}

#### Accessing `PersistenceService`

If for any reason you decided to use the `PersistenceService` instead of a `CqnService` in your custom handler, you need to inject the instance of `PersistenceService` in your custom handler class:

```java
@Autowired
private PersistenceService persistence;
```

[See section **Persistence API** for more details.](./cqn-services/#persistenceservice){.learn-more}

Example of query execution in the *classic Java Runtime*:

```java
CDSDataSourceHandler cdsHandler = ...;

CDSQuery cdsQuery = new CDSSelectQueryBuilder("CatalogService.Books")
    .selectColumns("id", "title")
    .where(new ConditionBuilder().columnName("title").IN("Spring", "Java"))
    .orderBy("title", true)
    .build();

cdsHandler.executeQuery(cdsQuery);
```

The corresponding query execution in the *new CAP Java SDK* looks as follows:

```java
Select query = Select.from("CatalogService.Books")
    .columns("id", "title")
    .where(p -> p.get("title").in("Spring", "Java"))
    .orderBy("title");

persistence.run(query);
```

#### Accessing `CdsDataStore`

If you want to use the `CdsDataStore` in your custom handler, first perform the steps described in section [Accessing PersistenceService](#accessing-persistenceservice). After that, you can get the instance of `CdsDataStore` using the `persistence.getCdsDataStore()` method:

```java
Select query = ...; // construct the query
CdsDataStore cdsDataStore = persistence.getCdsDataStore();
cdsDataStore.execute(query);
```

### CDS OData V2 Adapter { #v2adapter}

When you generate a new project using the [CAP Java Maven Archetype](./getting-started#new-project), OData V4 is enabled by default. To be able to migrate the backend from the classic Java Runtime without making changes in your frontend code, you can activate the *OData V2 Adapter* as follows:

1. Add the following dependency to the `pom.xml` of your `srv` module:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-adapter-odata-v2</artifactId>
  <scope>runtime</scope>
</dependency>
```

2.
In addition, turn off the OData V4 adapter by replacing the following dependency:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot-odata</artifactId>
</dependency>
```

with

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot</artifactId>
</dependency>
```

if present. Additionally, remove the dependency

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-adapter-odata-v4</artifactId>
</dependency>
```

if present.

3. To make the CDS Compiler generate EDMX for OData V2, add or adapt the following property in the _.cdsrc.json_ file:

```json
{
  ...
  "odata": {
    "version": "v2"
  }
}
```

::: tip
In case you're using [multitenancy](./multitenancy), keep in mind to make the same change in the _.cdsrc.json_ of the _mtx-sidecar_.
:::

After rerunning the Maven build and starting the CAP Java application, Application Services are served as OData V2. By default, the endpoints are available under `/odata/v2/`. The default response format is `xml`; to request `json`, use `$format=json` or the `Accept: application/json` header.

::: tip
The index page, by default available at `http://localhost:8080`, lists the service endpoints of all protocol adapters.
:::

#### Enabling OData V2 and V4 in Parallel

You can also use OData V2 and V4 in parallel. However, by default the Maven build generates EDMX files for only one OData version. Therefore, you have to add an extra compile step for the missing OData version to the Maven build of your application:

1. In _.cdsrc_, choose `v4` for `odata.version`.

2. Add an extra compile command to the subsection `commands` of the section with ID `cds.build` in the *pom.xml* file in the *srv* folder of your project:

```xml
<command>compile ${project.basedir} -s all -l all -2 edmx-v2 -o ${project.basedir}/src/main/resources/edmx/v2</command>
```

This command picks up all service definitions in the Java project base directory (`srv` by default) and generates EDMX for OData V2. It also localizes the generated EDMX files with all available translations. For more information on the previous command, call `cds help compile` on the command line. If your service definitions are located in a different directory, adapt the previous command.
If your service definitions are contained in multiple directories, add the previous command for each directory separately. Make sure to use at least `cds-dk 3.2.0` for this step.

If you are using feature toggles in your CAP Java project, the list of models must also contain the features' root folder:

```xml
<command>compile ${project.basedir} ${session.executionRootDirectory}/fts/* -s all -l all -2 edmx-v2 -o ${project.basedir}/src/main/resources/edmx/v2</command>
```

This command includes the folder _/fts_ and all its subfolders into the CDS model.

3. Make sure that the dependencies to the OData V2 and V4 adapters are present in your *pom.xml* file:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot</artifactId>
</dependency>
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-adapter-odata-v2</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-adapter-odata-v4</artifactId>
  <scope>runtime</scope>
</dependency>
```

4. Optionally, it's possible to configure different serve paths for the application services for different protocols. See [Serve configuration](./cqn-services/application-services#serve-configuration) for more details.

After rebuilding and restarting your application, your Application Services are exposed as OData V2 and OData V4 in parallel. This way, you can migrate your frontend code iteratively to OData V4.

# Choose Your Preferred Tools

{{$frontmatter?.synopsis}}
# CDS Command Line Interface (CLI) {#cli}

To use `cds` from your command line, install the package `@sap/cds-dk` globally:

```sh
npm i -g @sap/cds-dk
```

## cds version

Use `cds version` to get information about your installed package versions:
```sh
> cds version

@cap-js/asyncapi: 1.0.2
@cap-js/cds-types: 0.8.0
@cap-js/db-service: 1.16.2
@cap-js/openapi: 1.1.1
@cap-js/sqlite: 1.7.8
@sap/cds: 8.6.0
@sap/cds-compiler: 5.6.0
@sap/cds-dk (global): 8.6.1
@sap/cds-fiori: 1.2.8
@sap/cds-foss: 5.0.1
@sap/cds-mtxs: 2.4.2
@sap/eslint-plugin-cds: 3.1.2
Node.js: v20.18.1
your-project: 1.0.0
```
Using `--markdown` you can get the information in markdown format:
```sh
> cds version --markdown

| your-project           | <Add your repository here>              |
| ---------------------- | --------------------------------------- |
| @cap-js/asyncapi       | 1.0.2                                   |
| @cap-js/cds-types      | 0.8.0                                   |
| @cap-js/db-service     | 1.16.2                                  |
| @cap-js/openapi        | 1.1.1                                   |
| @cap-js/sqlite         | 1.7.8                                   |
| @sap/cds               | 8.6.0                                   |
| @sap/cds-compiler      | 5.6.0                                   |
| @sap/cds-dk (global)   | 8.6.1                                   |
| @sap/cds-fiori         | 1.2.8                                   |
| @sap/cds-foss          | 5.0.1                                   |
| @sap/cds-mtxs          | 2.4.2                                   |
| @sap/eslint-plugin-cds | 3.1.2                                   |
| Node.js                | v20.18.1                                |
```
## cds completion

The `cds` command supports shell completion with the tab key for several shells and operating systems. For Linux, macOS, and Windows use the following command to activate shell completion:

```sh
cds add completion
```

After that, restart your shell (or source the shell configuration) and enjoy shell completion support for all `cds` commands.

Currently supported shells:

| Operating System | Shell |
|-------------------|-------|
| Linux | bash, fish (version 8 or higher), zsh |
| macOS | bash, fish (version 8 or higher), zsh |
| Windows | PowerShell, Git Bash |
| WSL | bash, fish (version 8 or higher), zsh |

To remove the shell completion, run the following command:

```sh
cds completion --remove
```

Then source or restart your shell.

## cds help

Use `cds help` to see an overview of all commands:
```sh
> cds --help

USAGE
    cds <command> [<args>]
    cds <src>  =  cds compile <src>
    cds        =  cds help

COMMANDS
    i | init        jump-start cds-based projects
    a | add         add a feature to an existing project
      | gen         generate models/data using a descriptive prompt [beta]
    y | bind        bind application to remote services
    m | import      add models from external sources
    c | compile     compile cds models to different outputs
    p | parse       parses given cds models
    s | serve       run your services in local server
    w | watch       run and restart on file changes
      | mock        call cds serve with mocked service
    r | repl        read-eval-event loop
    e | env         inspect effective configuration
    b | build       prepare for deployment
    d | deploy      deploy to databases or cloud
      | subscribe   subscribe a tenant to a multitenant SaaS app
      | unsubscribe unsubscribe a tenant from a multitenant SaaS app
    l | login       login to extensible multitenant SaaS app
      | logout      logout from extensible multitenant SaaS app
      | pull        pull base model of extensible SaaS app
      | push        push extension to extensible SaaS app
    t | lint        run linter for env or model checks
    v | version     get detailed version information
      | completion  add/remove cli completion for cds commands
    ? | help        get detailed usage information

  Learn more about each command using:
  cds help <command> or
  cds <command> --help
```
Use `cds help <command>` or `cds <command> ?` to get specific help:
```sh
> cds repl --help

SYNOPSIS
    cds repl [ <options> ]

    Launches into a read-eval-print-loop, an interactive playground to
    experiment with cds' JavaScript APIs. See documentation of Node.js'
    REPL for details at http://nodejs.org/api/repl.html

OPTIONS
    -r | --run <project>

      Runs a cds server from a given CAP project folder, or module name.
      You can then access the entities and services of the running server.
      It's the same as using the repl's builtin .run command.

    -u | --use <cds feature>

      Loads the given cds feature into the repl's global context. For example,
      if you specify xl it makes the cds.xl module's methods available.
      It's the same as doing {ref,val,xpr,...} = cds.xl within the repl.

EXAMPLES
    cds repl --run bookshop
    cds repl --run .
    cds repl --use cds.ql

SEE ALSO
    cds eval  to evaluate and execute JavaScript.
```
## cds init

Use `cds init` to create new projects. The simplest form creates a minimal Node.js project. For Java, use:

```sh
cds init --java
```

In addition, you can add (most of) the project 'facets' from [below](#cds-add) right when creating the project. For example, to create a project with a sample bookshop model and configuration for SAP HANA, use:

```sh
cds init --add sample,hana
```

::: details See the full help text of `cds init`
> cds init --help

SYNOPSIS
    cds init [<project>] [<options>]

    Initializes a new project in folder ./<project>, with the current
    working directory as default.

OPTIONS
    --java

        Create a CAP Java project.

    --add <feature | comma-separated list of features>

        Add one or more features while creating the project.
        <feature> can be one of the following:

      completion                   - shell completion for cds commands
      java                         - creates a Java-based project
      nodejs                       - creates a Node.js-based project
      esm                          - ESM-compatible Node.js project
      tiny-sample                  - add minimal sample files
      sample                       - add sample files including Fiori UI
      typer                        - type generation for CDS models
      typescript                   - add minimum configuration for a bare TypeScript project
      handler                      - handler stubs for service entities, actions and functions
      mta                          - Cloud Foundry deployment using mta.yaml
      cf-manifest                  - Cloud Foundry deployment using manifest files
      helm                         - Kyma deployment using Helm charts
      helm-unified-runtime         - Kyma deployment using Unified Runtime Helm charts
      containerize                 - containerization using ctz CLI
      multitenancy                 - schema-based multitenancy support
      toggles                      - allow dynamically toggled features
      extensibility                - tenant-specific model extensibility
      side-by-side-extensibility   - logic extensibility via extension points
      mtx                          - multitenancy + toggles + extensibility
      xsuaa                        - authentication via XSUAA
      ias                          - authentication via IAS
      ams                          - authorization via AMS
      hana                         - database support for SAP HANA
      postgres                     - database support for PostgreSQL
      sqlite                       - database support for SQLite
      h2                           - database support for H2
      liquibase                    - database migration using Liquibase
      redis                        - SAP BTP Redis, Hyperscaler Option
      attachments                  - SAP BTP Object Store Service
      malware-scanner              - SAP Malware Scanning Service
      local-messaging              - messaging via local event bus
      file-based-messaging         - messaging via file system
      enterprise-messaging         - messaging via SAP Enterprise Messaging
      enterprise-messaging-shared  - messaging via shared SAP Enterprise Messaging
      redis-messaging              - messaging via Redis
      kafka                        - messaging via Apache Kafka
      approuter                    - dynamic routing using @sap/approuter
      connectivity                 - SAP BTP Connectivity Service
      destination                  - SAP BTP Destination Service
      html5-repo                   - SAP BTP HTML5 Application Repository
      portal                       - SAP BTP Portal Service
      application-logging          - SAP BTP Application Logging Service
      audit-logging                - SAP BTP Audit Logging Service
      notifications                - SAP BTP Notification Service
      workzone-standard            - SAP BTP Work Zone, Standard Edition
      data                         - add CSV headers for modeled entities
      http                         - add .http files for modeled services
      lint                         - configure cds lint
      pipeline                     - CI/CD pipeline integration

    --java:mvn <Comma separated maven archetype specific parameters>

        Add the given parameters to the archetype call.
        See https://cap.cloud.sap/docs/java/developing-applications/building#the-maven-archetype
        for parameters supported by the archetype.

    --force

        Overwrite all files.

EXAMPLES
    cds init bookshop
    cds init bookshop --java
    cds init bookshop --add hana
    cds init bookshop --add multitenancy,mta
    cds init --java --java:mvn groupId=myGroup,artifactId=newId,package=my.company

SEE ALSO
    cds add - to augment your projects later on
:::

## cds add

Use `cds add` to gradually add capabilities ('facets') to projects. The facets built into `@sap/cds-dk` provide you with a large set of standard features that support CAP's grow-as-you-go approach:

| Feature                       | Node.js | Java |
|-------------------------------|:-------:|:----:|
| `tiny-sample`                 |         |      |
| `sample`                      |         |      |
| `mta`                         |         |      |
| `cf-manifest`                 |         |      |
| `helm`                        |         |      |
| `helm-unified-runtime`        |         |      |
| `containerize`                |         |      |
| `multitenancy`                |         |      |
| `toggles`                     |         |      |
| `extensibility`               |         |      |
| `xsuaa`                       |         |      |
| `hana`                        |         |      |
| `postgres`                    |    1    |   1  |
| `sqlite`                      |         |      |
| `h2`                          |         |      |
| `liquibase`                   |         |      |
| `local-messaging`             |         |      |
| `file-based-messaging`        |         |      |
| `enterprise-messaging`        |         |      |
| `enterprise-messaging-shared` |         |      |
| `redis-messaging`             |    1    |      |
| `kafka`                       |         |      |
| `approuter`                   |         |      |
| `connectivity`                |         |      |
| `destination`                 |         |      |
| `html5-repo`                  |         |      |
| `portal`                      |         |      |
| `application-logging`         |         |      |
| `audit-logging`               |         |      |
| `notifications`               |         |      |
| `attachments`                 |         |      |
| [`data`](#data)               |         |      |
| [`http`](#http)               |         |      |
| `lint`                        |         |      |
| `pipeline`                    |         |      |
| `esm`                         |         |      |
| `typer`                       |         |      |
| `typescript`                  |         |      |
| `completion`                  |         |      |
| [`handler`](#handler)         |         |      |

> 1 Only for Cloud Foundry
::: details See the full help text of `cds add`
> cds add --help

SYNOPSIS
    cds add <feature | comma-separated list of features>

    Add one or more features to an existing project - grow as you go.

    Pick any of these:

      completion                   - shell completion for cds commands
      esm                          - ESM-compatible Node.js project
      tiny-sample                  - add minimal sample files
      sample                       - add sample files including Fiori UI
      typer                        - type generation for CDS models
      typescript                   - add minimum configuration for a bare TypeScript project
      handler                      - handler stubs for service entities, actions and functions
      mta                          - Cloud Foundry deployment using mta.yaml
      cf-manifest                  - Cloud Foundry deployment using manifest files
      helm                         - Kyma deployment using Helm charts
      helm-unified-runtime         - Kyma deployment using Unified Runtime Helm charts
      containerize                 - containerization using ctz CLI
      multitenancy                 - schema-based multitenancy support
      toggles                      - allow dynamically toggled features
      extensibility                - tenant-specific model extensibility
      side-by-side-extensibility   - logic extensibility via extension points
      mtx                          - multitenancy + toggles + extensibility
      xsuaa                        - authentication via XSUAA
      ias                          - authentication via IAS
      ams                          - authorization via AMS
      hana                         - database support for SAP HANA
      postgres                     - database support for PostgreSQL
      sqlite                       - database support for SQLite
      h2                           - database support for H2
      liquibase                    - database migration using Liquibase
      redis                        - SAP BTP Redis, Hyperscaler Option
      attachments                  - SAP BTP Object Store Service
      malware-scanner              - SAP Malware Scanning Service
      local-messaging              - messaging via local event bus
      file-based-messaging         - messaging via file system
      enterprise-messaging         - messaging via SAP Enterprise Messaging
      enterprise-messaging-shared  - messaging via shared SAP Enterprise Messaging
      redis-messaging              - messaging via Redis
      kafka                        - messaging via Apache Kafka
      approuter                    - dynamic routing using @sap/approuter
      connectivity                 - SAP BTP Connectivity Service
      destination                  - SAP BTP Destination Service
      html5-repo                   - SAP BTP HTML5 Application Repository
      portal                       - SAP BTP Portal Service
      application-logging          - SAP BTP Application Logging Service
      audit-logging                - SAP BTP Audit Logging Service
      notifications                - SAP BTP Notification Service
      workzone-standard            - SAP BTP Work Zone, Standard Edition
      data                         - add CSV headers for modeled entities
      http                         - add .http files for modeled services
      lint                         - configure cds lint
      pipeline                     - CI/CD pipeline integration

OPTIONS
    --for | -4 <profile>

      Write configuration data for the given profile.

    --force

      Overwrite all files in case the target files already exist.

    --package <name>

      Pull a package from your npm registry.


FEATURE OPTIONS
    cds add audit-logging

      --plan

        Specify the service plan.


    cds add cloud-logging

      --plan

        Override the service plan used for the MTA generation.

      --with-telemetry

        Add telemetry capabilities.


    cds add completion

      --shell | -s

        <optional> Forces completion setup for a given shell and disables auto detection.
        Usually the shell is determined automatically and this is only for cases where the automatic
        detection fails. Valid values: bash, fish, gitbash, ps, zsh.


    cds add data

      --filter | -f

        Filter for entities matching the given pattern. If it contains meta
        characters like '^' or '*', it is treated as a regular expression,
        otherwise as an include pattern, i.e /.*pattern.*/i

      --data:for

        Deprecated. Use '--filter' instead.

      --records | -n

        The number of records to be created for each entity.

      --content-type | -c

        The content type of the data. One of "json" or "csv".

      --out | -o

        The output target folder.


    cds add enterprise-messaging

      --cloudevents | -c

        Use CloudEvents formatting.


    cds add enterprise-messaging-shared

      --cloudevents | -c

        Use CloudEvents formatting.


    cds add handler

      --filter | -f

        Filter for entities, actions or functions matching the given pattern.
        For Node.js, if it contains meta characters like '^' or '*', it is treated as a regular expression,
        otherwise as an include pattern, i.e /.*bookshop.*/i
        For Java, only '*' and '**' as suffix wildcards are allowed, as in 'my.bookshop.*' or 'my.**'

      --out | -o

        Custom output directory.
        For Java, the default is 'handlers'. For Node.js, the default is 'srv'.


    cds add helm

      --y

        If provided, the default values will be used for all prompts.


    cds add http

      --filter | -f

        Filter for services or entities or actions matching the given pattern. If it contains meta
        characters like '^' or '*', it is treated as a regular expression,
        otherwise as an include pattern, i.e /.*pattern.*/i

      --for-app | -a

        Specify the name of the app to generate requests for.
        If not specified, localhost and default auth will be used.

      --out | -o

        The output directory.
        By default, an `http` dir is created in either `test/`, `tests/`, `__tests__/`, or at the root level.

      --dry

        Print the generated requests to the console instead of writing them to a file.


EXAMPLES
    cds add sample
    cds add multitenancy,hana,xsuaa --for production
    cds add data --filter my.namespace.MyEntity
    cds add mta
    cds add helm


SEE ALSO
  cds init
:::

### sample {.add}

Creates a bookshop application including custom code (Node.js or Java) and a UI with [SAP Fiori Elements](../advanced/fiori).

```sh
cds add sample
```

This corresponds to the result of the [_Getting Started in a Nutshell_ guide](../get-started/in-a-nutshell).

### tiny-sample {.add}

Creates a minimal CAP application without UI.

```sh
cds add tiny-sample
```

### data {.add}

Adds files to the project that carry initial data, in either JSON or CSV format. The simplest form of:

```sh
cds add data
```

adds _csv_ files with a single header line for all entities to the _db/data/_ folder. The name of the files matches the entities' namespace and name, separated by `-`.

#### Filtering {#data-filtering}

To create data for some entities only, use `--filter`. For example:

```sh
cds add data --filter books
```

would only create data for entity names that include _books_ (case insensitive). You can use regular expressions for more flexibility and precision. For example, to only match _Books_, but not _Books.texts_, use:

```sh
cds add data --filter "books$"
```

::: details Special characters like `?` or `*` need escaping or quoting in shells
The escape character is usually the backslash, for example, `\?`. Quote characters are `'` or `"`, with varying rules between shells. Consult the documentation for your shell here.
:::

#### Sample records

To create actual data (along with the header line), use `--records` with a number for how many records you wish to have. This example creates 2 records for each entity:

```sh
cds add data --records 2
```

[Watch a short video by DJ Adams to see this in action.](https://www.youtube.com/shorts/_YVvCA2oSco){.learn-more}

#### Formats

By default, the data format is _CSV_.
You can change this to JSON with the `--content-type` option:

```sh
cds add data --content-type json
```

The result could look like this for a typical _Books_ entity from the _Bookshop_ application:

```jsonc
[
  {
    "ID": 29894036,
    "title": "title-29894036",
    "author": { "ID": 1343293 },
    "stock": 94,
    "texts": [ { ... } ]
  }
]
```

::: details Some details on the generated data
- For the _JSON_ format, _structured_ objects are used instead of flattened properties, for example, `author: { ID: ... }` instead of `author_ID`. The flattened properties would work as well during database deployment and runtime, though. Flattened properties are also used in the _CSV_ format.
- `author.ID` refers to a key from the _...Authors.json_ file that is created at the same time. If the _Authors_ entity is excluded, though, no such foreign key is created, which cuts the association off.
- Data for _compositions_, like the `texts` composition to `Books.texts`, is always created.
- A random unique number for each record, _29894036_ here, is added to each string property to help you correlate properties more easily.
- Data for elements annotated with a regular expression using [`assert.format`](../guides/providing-services#assert-format) can be generated using the NPM package [randexp](https://www.npmjs.com/package/randexp), which you need to install manually.
- Other constraints like [type formats](../cds/types), [enums](../cds/cdl#enums), and [validation constraints](../guides/providing-services#input-validation) are respected as well, in a best-effort way.
:::

#### Interactively in VS Code

In [VS Code](./cds-editors#vscode), use the commands _Generate Model Data as JSON / CSV_ to insert test data at the cursor position for a selected entity.

### http {.add}

Adds `.http` files with sample read and write requests. The simplest form of:

```sh
cds add http
```

creates `http` files for all services and all entities.
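The `--filter` matching described for `add data` and `add http` (a regular expression when the pattern contains meta characters, an include pattern otherwise) can be sketched as follows. This is a simplified illustration under stated assumptions, not the actual `cds` implementation:

```javascript
// Hypothetical sketch of the --filter rule: patterns containing regex
// meta characters are used verbatim, plain words become /.*word.*/i.
function toMatcher (pattern) {
  const hasMeta = /[\^$*+?()[\]{}|\\]/.test(pattern)
  return hasMeta
    ? new RegExp(pattern, 'i')          // e.g. "books$" matches Books, not Books.texts
    : new RegExp(`.*${pattern}.*`, 'i') // e.g. "books" matches both

}

const m1 = toMatcher('books')
console.log(m1.test('my.bookshop.Books'))       // true
console.log(m1.test('my.bookshop.Books.texts')) // true

const m2 = toMatcher('books$')
console.log(m2.test('my.bookshop.Books'))       // true
console.log(m2.test('my.bookshop.Books.texts')) // false
```

This explains why the quoted `"books$"` pattern from the filtering section above excludes `Books.texts` while plain `books` matches it.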
#### Filtering {#http-filtering}

See the filter option of [`add data`](#data-filtering) for the general syntax. In addition, you can filter with a service name:

```sh
cds add http --filter CatalogService
```

#### Interactively in VS Code

In [VS Code](./cds-editors#vscode), use the command _Generate HTTP Requests_ to insert request data in an _http_ file for a selected entity or service.

#### Authentication / Authorization

##### To local applications
By default, an authorization header with a [local mock user](../node.js/authentication#mock-users) is written to the `http` file, and `localhost` is the target host.

```http [Node.js]
@server = http://localhost:4004
@auth = Authorization: Basic alice:

### CatalogService.Books
GET {{server}}/odata/v4/admin/Books
{{auth}}
...
```
By default, an authorization header with a [local mock user](../java/security#mock-users) is written to the `http` file, and `localhost` is the target host.

```http [Java]
@server = http://localhost:8080

### CatalogService.Books
GET {{server}}/odata/v4/admin/Books
{{auth}}
...
```
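The `Authorization: Basic alice:` line above uses the plain `user:password` form, which REST clients such as the VS Code REST Client encode on the fly; the raw HTTP header carries the same credentials Base64-encoded. A quick sketch of that encoding in plain Node.js, for illustration only (`alice:` is the mock user `alice` with an empty password):

```javascript
// Base64-encode "user:password" the way a raw Basic auth header carries it.
const credentials = 'alice:' // mock user alice, empty password
const encoded = Buffer.from(credentials).toString('base64')
console.log(`Authorization: Basic ${encoded}`) // Authorization: Basic YWxpY2U6
```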
##### To remote applications

Use `--for-app <name>` to use a JWT token of a remote application. For example:

```sh
cds add http --for-app bookshop
```

assumes a remote app named `bookshop` on Cloud Foundry; a JWT token for this app is written to the request file:

```http
@server = https://...
@auth = x-approuter-authorization: bearer ...
```

::: details Cloud login required
For Cloud Foundry, use `cf login ...` and select org and space.
:::

### handler {.add}

Generates handler stubs for actions and functions for both Java and Node.js projects. To generate handler files, run:

::: code-group

```sh [Node.js]
cds add handler
```

```sh [Java]
mvn compile # let Java know what your model looks like
cds add handler
```

:::

The files contain handlers for

- actions and functions
- service entities (Node.js only)

#### Filtering {#handler-filtering}

Use the `--filter` option to create handlers for specific actions/functions or entities.

```sh
cds add handler --filter submitOrder
cds add handler --filter Books
```

## cds env

Use `cds env` to inspect currently effective config settings:
```log
> cds env requires.db

{
  impl: '@cap-js/sqlite',
  credentials: { url: ':memory:' },
  kind: 'sqlite'
}
```
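The effective configuration shown above is compiled from several layers, such as built-in defaults, `package.json`, and profile-specific blocks, with later layers overriding earlier ones. A much-simplified deep-merge sketch of that idea; this is an illustration only, not how `@sap/cds` actually implements it:

```javascript
// Hypothetical sketch: later config layers override earlier ones,
// while nested objects are merged rather than replaced wholesale.
function deepMerge (target, source) {
  for (const [key, value] of Object.entries(source)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      target[key] = deepMerge(target[key] ?? {}, value)
    } else {
      target[key] = value
    }
  }
  return target
}

const defaults = { requires: { db: { kind: 'sqlite', credentials: { url: ':memory:' } } } }
const packageJson = { requires: { db: { kind: 'hana' } } } // project override

const effective = [defaults, packageJson].reduce(deepMerge, {})
console.log(effective.requires.db)
// { kind: 'hana', credentials: { url: ':memory:' } }
```

Note how the project's `kind` wins while the default `credentials` survive, which is the kind of layering `cds env` makes visible.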
::: details See the full help text of `cds env`
> cds env --help

SYNOPSIS
    cds env [<key>] [<options>]

EXPLANATION
    Displays the effective configuration for the given key, or all of the
    current environment.

OPTIONS
    --sources

       Lists the sources from which the current env has been compiled.

    -k | --keys

       Prints (top-level) keys of matching properties only

    -p | --properties
    -l | --list

       Prints output in .properties format

    -j | --json

       Prints output in JSON format

    -r | --raw

       Prints output with minimum formatting or decoration

    -4 | --for | --profile <profile,...>

       Load configuration for the specified profile(s).
       The development profile is used by default.

    -P | --process-env

       Show properties from Node.js process.env.

    -b | --resolve-bindings

       Resolve remote service bindings configured via cds bind.
:::

## cds compile

Compiles the specified models to [CSN](../cds/csn) or other formats.

[See simple examples in the getting started page](../get-started/in-a-nutshell#cli).{.learn-more}

[For the set of built-in compile 'formats', see the `cds.compile.to` API](../node.js/cds-compile#cds-compile-to).{.learn-more}

In addition, the following formats are available:

### mermaid {.compile}

This produces text for a [Mermaid class diagram](https://mermaid.js.org/syntax/classDiagram.html):

```sh
cds compile db/schema.cds --to mermaid
```

Output:

```log
classDiagram
  namespace sap_fe_cap_travel {
    class `sap.fe.cap.travel.Travel`["Travel"]
    class `sap.fe.cap.travel.Booking`["Booking"]
    class `sap.fe.cap.travel.Airline`["Airline"]
    class `sap.fe.cap.travel.Airport`["Airport"]
    class `sap.fe.cap.travel.Flight`["Flight"]
  }
```

If wrapped in a markdown code fence of type `mermaid`, such diagram text is supported by many markdown renderers, for example, on [GitHub](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-diagrams).

````md
```mermaid
classDiagram
  namespace sap_fe_cap_travel {
    class `sap.fe.cap.travel.Travel`["Travel"]
    ...
  }
```
````

To customize the diagram layout, use these environment variables when calling `cds compile`:

```sh
CDS_MERMAID_ASSOCNAMES=false|true   # show association/composition names
CDS_MERMAID_ELEMENTS=false|all|keys # no, all, or only key elements
CDS_MERMAID_MIN=false|true          # remove unused entities
CDS_MERMAID_NAMESPACES=false|true   # group entities by namespace
CDS_MERMAID_QUERIES=false|true      # show queries/projections
CDS_MERMAID_DIRECTION=TB|BT|LR|RL   # layout direction of the diagram
```
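Several of these variables follow a `false|true` pattern, where a string value rather than a real boolean arrives via the environment. A small sketch of how such string-valued flags can be parsed, shown here as an assumption for illustration, not the compiler's actual code:

```javascript
// Hypothetical parser for false|true style environment flags:
// only the literal string 'false' disables the behavior in this sketch.
function envFlag (name, env = process.env) {
  return env[name] !== 'false'
}

console.log(envFlag('CDS_MERMAID_NAMESPACES', { CDS_MERMAID_NAMESPACES: 'true' }))  // true
console.log(envFlag('CDS_MERMAID_NAMESPACES', { CDS_MERMAID_NAMESPACES: 'false' })) // false
```

The point is that `CDS_MERMAID_MIN=0` or an empty value would not necessarily disable a flag; passing the literal strings `false` or `true` as documented is the safe choice.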
#### Interactively in VS Code

To visualize your CDS model as a diagram in VS Code, open a `.cds` file and use the dropdown in the editor toolbar or the command _CDS: Preview as diagram_:

![The screenshot is described in the accompanying text.](assets/mermaid-preview.png) {}

If you don't see the graphics rendered, but only text, install the [Markdown Preview Mermaid Support](https://marketplace.visualstudio.com/items?itemName=bierner.markdown-mermaid) extension for VS Code.

To customize the diagram layout, use these settings in the _Cds > Preview_ category:

- [Diagram: Associations](vscode://settings/cds.preview.diagram.associations)
- [Diagram: Direction](vscode://settings/cds.preview.diagram.direction)
- [Diagram: Elements](vscode://settings/cds.preview.diagram.elements)
- [Diagram: Minify](vscode://settings/cds.preview.diagram.minify)
- [Diagram: Namespaces](vscode://settings/cds.preview.diagram.namespaces)
- [Diagram: Queries](vscode://settings/cds.preview.diagram.queries)

## cds watch

Use `cds watch` to watch for changed files, restarting your server.

::: details See the full help text of `cds watch`
> cds watch --help

SYNOPSIS
  cds watch [<project>]

  Tells cds to watch for relevant things to come or change in the specified
  project or the current work directory. Compiles and (re-)runs the server
  on every change detected.

  Actually, cds watch is just a convenient shortcut for:
  cds serve all --with-mocks --in-memory?

OPTIONS
  --port <number>

    Specify the port on which the launched server listens.
    If you specify '0', the server picks a random free port.
    Alternatively, specify the port using env variable PORT.

  --ext <extensions>

    Specify file extensions to watch for in a comma-separated list.
    Example: cds w --ext cds,json,js.

  --include <paths,...>

    Comma-separated list of additional paths to watch.

  --exclude <paths,...>

    Comma-separated list of additional paths to ignore.

  --livereload <port | false>

    Specify the port for the livereload server. Defaults to '35729'.
    Disable it with value false.

  --open <url>

    Open the given URL (suffix) in the browser after starting.
    If none is given, the default application URL will be opened.

  --profile <profile,...>

    Specify from which profile(s) the binding information is taken.
    Example: cds w --profile hybrid,production

  --debug / --inspect <host:port | 127.0.0.1:9229>

    Activate debugger on the given host:port.
    If port 0 is specified, a random available port will be used.

  --inspect-brk <host:port | 127.0.0.1:9229>

    Activate debugger on the given host:port and break at start of user script.
    If port 0 is specified, a random available port will be used.

SEE ALSO
  cds serve --help for the different start options.
:::

### Includes and Excludes

Additional watched or ignored paths can be specified via CLI options:

```sh
cds watch --include ../other-app --exclude .idea/
```

Alternatively, you can add these paths through the settings `cds.watch.include: ["../other-app"]` and `cds.watch.exclude: [".idea"]` in your project configuration.

## cds repl

Use `cds repl` to live-interact with cds' JavaScript APIs in an interactive read-eval-print-loop.
```log
$ cds repl
Welcome to cds repl

> cds.parse`
  entity Foo { bar : Association to Bar }
  entity Bar { key ID : UUID }
`
{
  definitions: {
    Foo: {
      kind: 'entity',
      elements: {
        bar: { type: 'cds.Association', target: 'Bar' }
      }
    },
    Bar: ...
  }
}

> SELECT.from(Foo)
cds.ql {
  SELECT: { from: { ref: [ 'Foo' ] } }
}
```
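The backtick call ``cds.parse`...` `` in the transcript is a JavaScript tagged template: the function receives the literal's string parts directly, without parentheses. A minimal generic sketch of the mechanism in plain JavaScript, unrelated to the cds implementation:

```javascript
// A tag function receives the literal's string parts plus any interpolated values.
function tag (strings, ...values) {
  return { strings: [...strings], values }
}

const result = tag`entity Foo { bar : Association to Bar }`
console.log(result.strings[0]) // 'entity Foo { bar : Association to Bar }'
console.log(result.values)     // []
```

So `` cds.parse`...` `` is an ordinary function call; the same source could also be passed as a regular string argument.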
There are a couple of shortcuts and convenience functions:

- `.run` (a [REPL dot command](https://nodejs.org/en/learn/command-line/how-to-use-the-nodejs-repl#dot-commands)) allows you to start Node.js `cds.server`s:

  ```sh
  .run cap/samples/bookshop
  ```

- CLI option `--run` does the same from the command line, for example:

  ```sh
  cds repl --run cap/samples/bookshop
  ```

- CLI option `--use` allows you to use the features of a `cds` module, for example:

  ```sh
  cds repl --use ql # as a shortcut of that within the repl:
  ```

  ```js
  var { expr, ref, columns, /* ...and all other */ } = cds.ql
  ```

- The `.inspect` command displays objects with configurable depth:

  ```sh
  .inspect cds .depth=1
  .inspect CatalogService.handlers .depth=1
  ```

::: details See the full help text of `cds repl`
> cds repl --help

SYNOPSIS
    cds repl [ <options> ]

    Launches into a read-eval-print-loop, an interactive playground to
    experiment with cds' JavaScript APIs. See documentation of Node.js'
    REPL for details at http://nodejs.org/api/repl.html

OPTIONS
    -r | --run <project>

      Runs a cds server from a given CAP project folder, or module name.
      You can then access the entities and services of the running server.
      It's the same as using the repl's builtin .run command.

    -u | --use <cds feature>

      Loads the given cds feature into the repl's global context. For example,
      if you specify xl it makes the cds.xl module's methods available.
      It's the same as doing {ref,val,xpr,...} = cds.xl within the repl.

EXAMPLES
    cds repl --run bookshop
    cds repl --run .
    cds repl --use cds.ql

SEE ALSO
    cds eval  to evaluate and execute JavaScript.
:::

## Debugging with `cds debug` {#cds-debug}

`cds debug` lets you debug applications running locally or remotely on SAP BTP Cloud Foundry. Local applications are started in debug mode, while (already running) remote applications are put into debug mode.

To debug an application on Cloud Foundry, the following is important:

- You're logged in to the space to which the application is deployed.
- You have developer permissions in that space → [Space Developer role](https://help.sap.com/docs/btp/sap-business-technology-platform/about-roles-in-cloud-foundry-environment).
- The app is running and [reachable through SSH](https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html#check-ssh-permissions).

Effectively, run:

```sh
cf login       # select the correct org and space here
cf ssh-enabled # to check if SSH is enabled
```

::: tip Scale to one application instance only
We recommend scaling to a _single_ app instance on SAP BTP Cloud Foundry, as your request is then guaranteed to hit this one instance. If you scale out to more instances, only some of your requests will hit the instance that the debugger is connected to. This can result in 'missed breakpoints'. However, it's possible to [route a request to a specific instance](https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#surgical-routing), which is useful if you can't reduce the number of app instances.
:::

### Node.js Applications

#### Remote Applications

Run the following to debug remote Node.js applications in the currently targeted CF space:
```log
$ cds debug <app-name>

Opening SSH tunnel on 9229:127.0.0.1:9229
Opening Chrome DevTools at devtools://devtools/bundled/inspector.html?ws=...
```
> Keep this terminal open while debugging.
This opens an [SSH tunnel](https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html), puts the application in debug mode, and opens the [debugger of Chrome DevTools](https://developer.chrome.com/docs/devtools/javascript).
::: code-group

```js [lib/add.js]
const cds = require('@sap/cds-dk') //> load from cds-dk
const { copy, path } = cds.utils, { join } = path

module.exports = class extends cds.add.Plugin {

  options() { // [!code ++]
    return { // [!code ++]
      'out': { // [!code ++]
        type: 'string', // [!code ++]
        short: 'o', // [!code ++]
        help: 'The output directory for the pg.yaml file.', // [!code ++]
      } // [!code ++]
    } // [!code ++]
  } // [!code ++]

  async run() {
    const pg = join(__dirname, 'pg.yaml')
    await copy(pg).to('pg.yaml') //> 'to' is relative to cds.root // [!code --]
    await copy(pg).to(cds.cli.options.out, 'pg.yaml') //> 'to' is relative to cds.root // [!code ++]
  }

  async combine() { /* ... */ }
}
```

:::

#### Call `cds add` for an NPM package

Similar to `npx -p`, you can use the `--package/-p` option to directly install a package from an *npm* registry before running the command. This lets you invoke `cds add` for CDS plugins easily with a single command:

```sh
cds add my-facet -p @cap-js-community/example
```

::: details Install directly from your GitHub branch
For example, if your plugin's code is in `https://github.com/cap-js-community/example` on branch `cds-add` and registers the command `cds add my-facet`, then you can run an integration test of your plugin with `@sap/cds-dk` in a single command:

```sh
cds add my-facet -p @cap-js-community/example@git+https://github.com/cap-js-community/example#cds-add
```

:::

## Plugin API

Find here a complete overview of public `cds add` APIs.

### `register(name, impl)` {.method}

Register a plugin for `cds add` by providing a name and plugin implementation:

::: code-group

```js [cds-plugin.js]
/* ... */
cds.add?.register?.('postgres',
  class extends cds.add.Plugin {
    async run() { /* ... */ }
    async combine() { /* ... */ }
  }
)
```

:::

...or use the standard Node.js `require` mechanism to load it from elsewhere:

```js
cds.add?.register?.('postgres', require('./lib/add'))
```

### `run()` {.method}

This method is invoked when `cds add` is run for your plugin. Here, make any modifications that don't depend on other plugins and must run only once.

```js
async run() { // [!code focus]
  const { copy, path } = cds.utils, { mvn, readProject } = cds.add // [!code focus]
  await copy (path.join(__dirname, 'files/pg.yaml')).to('pg.yaml') // [!code focus]
  const { isJava } = readProject() // [!code focus]
  if (isJava) await mvn.add('postgres') // [!code focus]
} // [!code focus]
```

> In contrast to `combine`, `run` is not invoked when other `cds add` commands are run.

### `combine()` {.method}

This method is invoked when `cds add` is run for other plugins. Here, make any modifications that depend on other plugins. These adjustments typically include enhancing the _mta.yaml_ for Cloud Foundry or _values.yaml_ for Kyma, or adding roles to an _xs-security.json_.

```js
async combine() {
  const { hasMta, hasXsuaa, hasHelm } = readProject()
  if (hasMta)   { /* adjust mta.yaml */ }
  if (hasHelm)  { /* adjust values.yaml */ }
  if (hasXsuaa) { /* adjust xs-security.json */ }
}
```

### `options()` {.method}

The `options` method allows you to specify custom options for your plugin:

```js
options() {
  return {
    'out': {
      type: 'string',
      short: 'o',
      help: 'The output directory. By default the application root.',
    }
  }
}
```

We follow the Node.js [`util.parseArgs`](https://nodejs.org/api/util.html#utilparseargsconfig) structure, with an additional `help` field to provide manual text for `cds add help`.

::: details Run `cds add help` to validate...
You should now see output similar to this:
```log
$ cds help add
SYNOPSIS
    ···
OPTIONS
    ···
FEATURE OPTIONS
    ···
    cds add postgres

      --out | -o

        The output directory. By default the application root.
```
:::

::: warning See if your command can do without custom options
`cds add` commands should come with carefully chosen defaults and avoid offloading the decision-making to the end-user.
:::

### `requires()` {.method}

The `requires` function allows you to specify other plugins that need to be run as a prerequisite:

```js
requires() {
  return ['xsuaa'] //> runs 'cds add xsuaa' before plugin is run
}
```

::: warning Use this feature sparingly
Having to specify hard-wired dependencies could point to a lack of coherence in the plugin.
:::

## Utilities API

### `readProject()` {.method}

This method lets you retrieve a project descriptor for the productive environment.

```js
const { isJava, hasMta, hasPostgres } = cds.add.readProject()
```

You can check availability for any facet provided by `cds add`. The readable properties are prefixed with `has` or `is`, and facet names are converted to camel case. A few examples:

| facet | properties |
| ----- | --- |
| `java` | `hasJava` or `isJava` |
| `hana` | `hasHana` or `isHana` |
| `html5-repo` | `hasHtml5Repo` or `isHtml5Repo` |
| ... | ... |

### `merge(from).into(file, o?)` {.method}

CAP provides a uniform convenience API to simplify merging operations on the most typical configuration formats — JSON and YAML files.

::: tip For YAML in particular, comments are preserved
`cds.add.merge` can perform AST-level merging operations. This means even comments in both your provided template and the user YAML are preserved.
:::

A large number of merging operations can be done without specifying additional semantics, but simply specifying `from` and `file`:

```js
const config = { cds: { requires: { db: 'postgres' } } }
cds.add.merge(config).into('package.json')
```

::: details Semantic-less mode merges and de-duplicates flat arrays
Consider this `source.json` and `target.json`:
```js
// source.json
{
  "my-plugin": {
    "x": "value",
    "z": ["a", "b"]
  }
}
```
```js
// target.json
{
  "my-plugin": {
    "y": "value",
    "z": ["b", "c"]
  }
}
```
A `cds.add.merge('source.json').into('target.json')` produces this result:

```js
// target.json
{
  "my-plugin": {
    "x": "value", // [!code ++]
    "y": "value",
    "z": ["b", "c"] // [!code --]
    "z": ["a", "b", "c"] // [!code ++]
  }
}
```
:::

We can also specify options for more complex merging semantics or Mustache replacements:

```js
const { merge, readProject, registries } = cds.add

// Generic variants for maps and flat arrays
await merge(__dirname, 'lib/add/package-plugin.json').into('package.json')
await merge({ some: 'variable' }).into('package.json')

// With Mustache replacements
const project = readProject()
await merge(__dirname, 'lib/add/package.json.hbs').into('package.json', { with: project })

// With Mustache replacements and semantics for nested arrays
const srv = registries.mta.srv4(srvPath)
const postgres = { in: 'resources', where: { 'parameters.service': 'postgresql-db' } }
const postgresDeployer = { in: 'modules', where: { type: 'nodejs', path: 'gen/pg' } }
await merge(__dirname, 'lib/add/mta.yml.hbs').into('mta.yaml', {
  with: project,
  additions: [srv, postgres, postgresDeployer],
  relationships: [{
    insert: [postgres, 'name'],
    into: [srv, 'requires', 'name']
  }, {
    insert: [postgres, 'name'],
    into: [postgresDeployer, 'requires', 'name']
  }]
})
```

### `.registries` {.property}

`cds.add` provides a default registry of common elements in configuration files, simplifying the specification of merging semantics:

```js
const { srv4, approuter } = cds.add.registries.mta
```

...and use it like this:

```js
const project = readProject()
const { hasMta, srvPath } = project
if (hasMta) {
  const srv = registries.mta.srv4(srvPath)
  const postgres = { in: 'resources', where: { 'parameters.service': 'postgresql-db' } }
  await merge(__dirname, 'lib/add/mta.yml.hbs').into('mta.yaml', {
    project,
    additions: [srv, postgres, postgresDeployer],
    relationships: [ ...
    ]
  })
}
```

### `mvn.add()` {.method}

For better Java support, plugins can easily invoke `mvn com.sap.cds:cds-maven-plugin:add` goals using `mvn.add`:

```js
async run() {
  const { isJava } = readProject()
  const { mvn } = cds.add
  if (isJava) await mvn.add('postgres')
}
```

## Checklist for Production

Key to the success of your `cds add` plugin is seamless integration with other technologies used in the target projects. As CAP supports many technologies out of the box, consider the following when reasoning about the scope of your minimum viable product:

- Single- and Multitenancy
- Node.js and Java runtimes
- Cloud Foundry (via MTA)
- Kyma (via Helm)
- App Router
- Authentication

## Best Practices

Adhere to established best practices in CAP-provided plugins to ensure your plugin meets user expectations.

### Consider `cds add` vs `cds build` {.good}

In contrast to `cds build`, `cds add` is concerned with source files outside of your _gen_ folder. Common examples are deployment descriptors such as _mta.yaml_ for Cloud Foundry or _values.yaml_ for Kyma deployment. Unlike generated files, these are usually checked in to your version control system.

### Don't do too much work in `cds add` {.bad}

If your `cds add` plugin creates or modifies a large number of files, this can be an indication of high component coupling. Check whether the configuration for your service can be simplified, and provide sensible defaults. Consider generating the files in a `cds build` plugin instead.

### Embrace out-of-the-box {.good}

From a consumer's point of view, your plugin is integrated by adding it to the _package.json_ `dependencies`, and it provides sensible default configuration without further modification.

### Embrace grow-as-you-go and separate concerns {.good}

A strength of `cds add` is the gradual increase in project complexity. All-in-the-box templates risk adding maintainability and cost overhead for features you might not need.
Decrease dependencies between plugins wherever possible.

# CDS Import API

## cds.import() {.method}

As an application developer, you can convert OData specification (EDMX / XML), OpenAPI specification (JSON), or AsyncAPI specification (JSON) files to CSN via a JavaScript API, as an alternative to the `cds import` command.

> `cds.import` is available in the CDS development tool kit *version 4.3.1* onwards.

The API signature looks like this:

```js
const csn = await cds.import(file, options)
```

##### Arguments:

* `file` — Specify the path to a single input file to be converted to CSN.
* `options` — `cds.import()` supports the following `options`:

#### options.keepNamespace

_This option is only applicable for OData conversion._

| Value | Description |
|---------|----------------------------------------------------|
| `true` | Keep the original namespace from the EDMX content. |
| `false` | Take the filename as namespace. |

> If the option is not defined, the CSN is generated with the EDMX filename as namespace.
#### options.includeNamespaces

_This option is only applicable for OData conversion._

It accepts a list of namespaces whose attributes are to be retained in the CSN / CDS file. To include all the namespaces present in the EDMX, pass `"*"`.

> For OData V2, EDMX attributes with the namespaces "sap" and "m" are captured by default.
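The filtering rule can likewise be sketched in plain JavaScript. This is a hypothetical helper, not the importer's actual code: attributes with a namespace prefix are kept if their namespace is in the list, or if `"*"` is passed:

```js
// Hypothetical illustration of the filtering rule for namespace-prefixed
// attributes such as 'sap:label' (NOT the real importer code)
function keepAttribute (attrName, includeNamespaces) {
  const ns = attrName.includes(':') ? attrName.split(':')[0] : null
  if (!ns) return true                        // no namespace prefix → always kept
  if (includeNamespaces === '*') return true  // wildcard → keep all namespaces
  return includeNamespaces.split(',').includes(ns)
}

console.log(keepAttribute('sap:label', 'sap,c4c')) // true
console.log(keepAttribute('ux:hint', 'sap,c4c'))   // false
```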
## cds.import.from.edmx() {.method}

This API can be used to convert the OData specification file (EDMX / XML) into CSN.

The API signature looks like this:

```js
const csn = await cds.import.from.edmx(ODATA_EDMX_file, options)
```
## cds.import.from.openapi() {.method}

This API can be used to convert the OpenAPI specification file (JSON) into CSN.

The API signature looks like this:

```js
const csn = await cds.import.from.openapi(OpenAPI_JSON_file)
```
## cds.import.from.asyncapi() {.method}

This API can be used to convert the AsyncAPI specification file (JSON) into CSN.

The API signature looks like this:

```js
const csn = await cds.import.from.asyncapi(AsyncAPI_JSON_file)
```
Example:

```js
const cds = require('@sap/cds-dk')
module.exports = async (srv) => {
  const csns = await Promise.all([
    // for odata
    cds.import('./odata_sample.edmx', { includeNamespaces: 'sap,c4c', keepNamespace: true }),
    // for openapi
    cds.import('./openapi_sample.json'),
    // for asyncapi
    cds.import('./asyncapi_sample.json'),
    // for odata
    cds.import.from.edmx('./odata_sample.xml', { includeNamespaces: '*', keepNamespace: false }),
    // for openapi
    cds.import.from.openapi('./openapi_sample.json'),
    // for asyncapi
    cds.import.from.asyncapi('./asyncapi_sample.json')
  ])
  for (let i = 0; i < csns.length; i++) {
    let json = cds.compile.to.json(csns[i])
    console.log(json)
  }
}
```

## OData Type Mappings

The following mapping is used during the import of an external service API, see [Using Services](../../guides/using-services#external-service-api). In addition, the [Mapping of CDS Types](../../advanced/odata#type-mapping) shows import-related mappings.

| OData | CDS Type |
|--------------------------------------------------------|------------------------------------------------------------------------------|
| _Edm.Single_ | `cds.Double` + `@odata.Type: 'Edm.Single'` |
| _Edm.Byte_ | `cds.Integer` + `@odata.Type: 'Edm.Byte'` |
| _Edm.SByte_ | `cds.Integer` + `@odata.Type: 'Edm.SByte'` |
| _Edm.Stream_ | `cds.LargeBinary` + `@odata.Type: 'Edm.Stream'` |
| _Edm.DateTimeOffset<br>Precision : Microsecond_ | `cds.Timestamp` + `@odata.Type:'Edm.DateTimeOffset'` + `@odata.Precision:<>` |
| _Edm.DateTimeOffset<br>Precision : Second_ | `cds.DateTime` + `@odata.Type:'Edm.DateTimeOffset'` + `@odata.Precision:0` |
| _Edm.DateTime<br>Precision : Microsecond_ <sup>1</sup> | `cds.Timestamp` + `@odata.Type:'Edm.DateTime'` + `@odata.Precision:<>` |
| _Edm.DateTime<br>Precision : Second_ <sup>1</sup> | `cds.DateTime` + `@odata.Type:'Edm.DateTime'` + `@odata.Precision:0` |

<sup>1</sup> only OData V2

# CAP Plugins & Enhancements

Following is a curated list of plugins that are available for the SAP Cloud Application Programming Model (CAP) and provide integration with SAP BTP services and technologies, or other SAP products.

::: tip Maintained by CAP and SAP
The `@cap-js`-scoped plugins are created and maintained in close collaboration and shared ownership of CAP development teams and other SAP development teams.
:::

## As _cds-plugins_ for Node.js

For Node.js, all these plugins are implemented using the [`cds-plugin`](../node.js/cds-plugins) technique, which features minimalistic setup and a **plug & play** experience. Usually, usage is as simple as this, as shown for the [Audit Logging](../guides/data-privacy/audit-logging) plugin:

1. Add the plugin:
   ```sh
   npm add @cap-js/audit-logging
   ```
2. Add annotations to your models:
   ```cds
   annotate Customer with @PersonalData ...;
   ```
3. Test-drive locally:
   ```sh
   cds watch
   ```
   > → audit logs are written to console in dev mode.
4. Bind the platform service.
   > → audit logs are written to Audit Log service in production.

## As Plugin for CAP Java

The [CAP Java plugin technique](../java/building-plugins) makes use of _jar_ files, which are distributed as Maven packages. By adding an additional Maven dependency to the project, the plugin automatically adds functionality or extensions to the CDS model. For [Audit Logging V2](../java/auditlog#handler-v2) it looks like this:

1. Add the Maven dependency (in _srv/pom.xml_):
   ```xml
   <dependency>
     <groupId>com.sap.cds</groupId>
     <artifactId>cds-feature-auditlog-v2</artifactId>
     <scope>runtime</scope>
   </dependency>
   ```
2. Add annotations to your model:
   ```cds
   annotate Customer with @PersonalData ...;
   ```
   > → audit logs are written to console in dev mode.
3. Bind the platform service.
   > → audit logs are written to SAP Audit Log service.

## Support for Plugins

Use one of the support channels below, in this order:

1. Open an issue in the **plugin's GitHub repository**.
   Find the link in the plugin list below (if the plugin has a public repository).
2. Ask a question in the [SAP community](/resources/ask-question-vscode). This applies to all plugins, especially those without public repositories, or if you're not quite sure that the problem is caused by the plugin.
3. Open incidents through the [SAP Support Portal](/resources/#support-channels). Note that plugins by external parties, like the [CAP JS](https://github.com/cap-js-community/) community, are out of scope for incidents.

:::tip Public channels help everyone
Prefer public repositories and issues over private/internal ones, as they help everyone using CAP to find solutions quickly.
:::

:::info Complete list of plugins
As CAP is blessed with an active community, there are many useful plugins created by the community. Have a look at the [CAP JS community](https://github.com/cap-js-community) to browse all available plugins. A broader collection of plugins can be found at [bestofcapjs.org](https://bestofcapjs.org/)
:::

## OData V2 Adapter {#odata-v2-proxy}

OData V2 has been deprecated. Use the plugin only if you need to support existing UIs or if you need specific controls that don't work with V4 **yet**, like tree tables (sap.ui.table.TreeTable).

The CDS OData V2 Adapter is a protocol adapter that allows you to expose your services as OData V2 services. For Node.js, this is provided through the [@cap-js-community/odata-v2-adapter](https://www.npmjs.com/package/@cap-js-community/odata-v2-adapter) plugin, which converts incoming OData V2 requests to CDS OData V4 service calls and converts the responses back. For Java, this is built in.

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js-community/odata-v2-adapter#readme) [![Java](../assets/logos/java.svg 'Link to the documentation of the OData feature.'){}](../java/migration#v2adapter)

See also [Cookbook > Protocols/APIs > OData APIs > V2 Support](../advanced/odata#v2-support) {.learn-more}

## WebSocket

Exposes a WebSocket protocol via the WebSocket standard or Socket.IO for CDS services.

```cds
@protocol: 'websocket'
service ChatService {
  function message(text: String) returns String;
  event received { text: String; }
}
```

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js-community/websocket#readme)

## UI5 Dev Server

The UI5 Dev Server is a CDS server plugin that enables the integration of UI5 (UI5 freestyle or Fiori elements) tooling-based projects into the CDS server via the UI5 tooling express middlewares.
It allows serving dynamic UI5 resources, including TypeScript implementations for UI5 controls, which are transpiled to JavaScript by the plugin automatically.

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/ui5-community/ui5-ecosystem-showcase/tree/main/packages/cds-plugin-ui5#cds-plugin-ui5)

## GraphQL Adapter

The GraphQL Adapter is a protocol adapter that generically generates a GraphQL schema for the models of an application and serves an endpoint that allows you to query your services using the [GraphQL](https://graphql.org) query language. All you need to do is add the `@graphql` annotation to your service definitions like so:

```cds
@graphql
service MyService { ... }
```

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/graphql#readme)

## Attachments

The Attachments plugin provides out-of-the-box handling of attachments stored in, for example, AWS S3 through [SAP BTP's Object Store service](https://discovery-center.cloud.sap/serviceCatalog/object-store). To use it, simply add a composition of the predefined aspect `Attachments` like so:

```cds
using { Attachments } from '@cap-js/attachments';
entity Incidents { ...
  attachments: Composition of many Attachments // [!code focus]
}
```

That's all we need to automatically add an interactive list of attachments to your Fiori UIs as shown below.
![Screenshot showing the Attachments Table in a Fiori app](assets/index/attachments-table.png)

Features:

- Pre-defined type `Attachment` to use in entity definitions
- Automatic handling of all upload and download operations
- Automatic malware scanning for uploaded files
- (Automatic) Fiori Annotations for Upload Controls
- Streaming and piping to avoid memory overloads
- Support for different storage backends

Outlook:

- Multitenancy intrinsically handled by the plugin

Available for: [![Node.js logo](../assets/logos/nodejs.svg 'Link to the repository for cap-js attachments.'){}](https://github.com/cap-js/attachments#readme) [![Java](../assets/logos/java.svg 'Link to the repository for cap-java-attachments.'){}](https://github.com/cap-java/cds-feature-attachments#readme)

## SAP Document Management Service {#sdm} {#@cap-js/sdm}

The SAP Document Management Service plugin lets you easily store attachments (documents) in an [SAP Document Management service Repository](https://help.sap.com/docs/document-management-service). To use this CAP-level integration, extend a domain model by using the predefined aspect called Attachments:

```cds
extend my.Incidents with {
  attachments: Composition of many Attachments
}
```

![Screenshot showing the Attachments Table in a Fiori app](assets/index/sdm-table.png)

Features:

- **Pre-defined Type Attachment for Entity Definitions**: Seamlessly integrate attachments into your entity definitions with our pre-defined type, simplifying the process of linking files.
- **Automatic Management of File Operations**: Effortlessly manage file operations, including upload, view, download, delete, and rename functions, with built-in automation. This ensures a smooth and user-friendly experience.
- **Automated Malware Scanning for Uploaded Files**: Enhance security by automatically scanning all uploaded files for malware, ensuring the integrity and safety of your data.
- **Automatic Fiori Annotations for Upload Controls**: Streamlined user interactions with automatic SAP Fiori annotations that enhance upload controls, providing a more intuitive and seamless user experience.
- **Support for SAP Document Management Service-Hosted Cloud Repository**: Leverage the robust capabilities of the SAP Document Management service-hosted cloud repository to store and manage your documents efficiently.
- **Support for Third-Party CMIS-Compliant Repositories**: Integrate with third-party repositories that adhere to the Content Management Interoperability Services (CMIS) standard, offering flexibility and compatibility with various document management systems.
- **Intrinsic Multitenancy Handling**: Benefit from built-in multi-tenancy support, allowing for efficient management of multiple tenants without additional configuration.

Outlook:

- **Support for Versioned Repository**: Ensure better document control and historical tracking with native support for versioned repositories, enabling you to manage document revisions effectively.
- **Permission Management**: Implement granular permission handling to ensure that only authorized users can access, modify, or manage documents, bolstering security and compliance.
- **Native Document Management Features with SAP Document Management Service**: Access a wide array of native document management features provided by the SAP Document Management service, including metadata management, advanced search capabilities, and audit trails.

For more information, see [SAP Document Management Service](https://help.sap.com/docs/document-management-service/sap-document-management-service/what-is-document-management-service).
Available for: [![Node.js logo](../assets/logos/nodejs.svg){}](https://github.com/cap-js/sdm/#readme) [![Java](../assets/logos/java.svg){}](https://github.com/cap-java/sdm/#readme)

## Audit Logging

The new Audit Log plugin provides out-of-the-box support for logging personal data-related operations with the [SAP Audit Log Service](https://discovery-center.cloud.sap/serviceCatalog/audit-log-service). All we need is annotations of the respective entities and fields like this:

```cds
annotate my.Customers with @PersonalData : {
  DataSubjectRole : 'Customer',
  EntitySemantics : 'DataSubject'
} {
  ID           @PersonalData.FieldSemantics: 'DataSubjectID';
  name         @PersonalData.IsPotentiallyPersonal;
  email        @PersonalData.IsPotentiallyPersonal;
  creditCardNo @PersonalData.IsPotentiallySensitive;
}
```

Features:

- Simple, annotation-based usage → automatically logging personal data-related events
- CAP Services-based programmatic client API → simple, backend-agnostic
- Logging to console in development → fast turnarounds, minimized costs
- Logging to [SAP Audit Log Service](https://discovery-center.cloud.sap/serviceCatalog/audit-log-service) in production
- Transactional Outbox → maximised scalability and resilience

Available for: [![Node.js logo](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/audit-logging#readme) ![Java](../assets/logos/java.svg){}

Learn more about audit logging in [Node.js](../guides/data-privacy/audit-logging) and in [Java](../java/auditlog) {.learn-more}

## Change Tracking

The Change Tracking plugin provides out-of-the-box support for automated capturing, storing, and viewing of the change records of modeled entities. All we need is to add `@changelog` annotations to your models to indicate which entities and elements should be change-tracked.
```cds
annotate my.Incidents {
  customer @changelog: [customer.name];
  title    @changelog;
  status   @changelog;
}
```

![Change history table in an SAP Fiori UI.](assets/index/changes.png)

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/change-tracking#readme) [![Java](../assets/logos/java.svg 'Link to the documentation of the change-tracking feature.'){}](../java/change-tracking)

## Notifications

The Notifications plugin provides support for publishing business notifications in SAP Build Work Zone. The client is implemented as a CAP service, which gives us a very simple programmatic API:

```js
let alert = await cds.connect.to ('notifications')
await alert.notify({
  recipients: [ ...supporters ],
  title: `New incident created by ${customer.info}`,
  description: incident.title
})
```

Features:

- CAP Services-based programmatic client API → simple, backend-agnostic
- Logging to console in development → fast turnarounds, minimized costs
- Transactional Outbox → maximised scalability and resilience
- Notification templates with i18n support
- Automatic lifecycle management of notification templates

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/notifications#readme)

## Telemetry

The Telemetry plugin provides observability features such as tracing and metrics, including [automatic OpenTelemetry instrumentation](https://opentelemetry.io/docs/concepts/instrumentation/automatic). By enabling the plugin in your project, various kinds of telemetry data will be automatically collected.
For Node.js, you will find telemetry output written to the console as follows:

```txt
[odata] - GET /odata/v4/processor/Incidents
[telemetry] - elapsed times:
  0.00 → 2.85 = 2.85 ms  GET /odata/v4/processor/Incidents
  0.47 → 1.24 = 0.76 ms  ProcessorService - READ ProcessorService.Incidents
  0.78 → 1.17 = 0.38 ms  db - READ ProcessorService.Incidents
  0.97 → 1.06 = 0.09 ms  @cap-js/sqlite - prepare SELECT json_object('ID',ID,'createdAt',createdAt,'creat…
  1.10 → 1.13 = 0.03 ms  @cap-js/sqlite - stmt.all SELECT json_object('ID',ID,'createdAt',createdAt,'crea…
  1.27 → 1.88 = 0.61 ms  ProcessorService - READ ProcessorService.Incidents.drafts
  1.54 → 1.86 = 0.32 ms  db - READ ProcessorService.Incidents.drafts
  1.74 → 1.78 = 0.04 ms  @cap-js/sqlite - prepare SELECT json_object('ID',ID,'DraftAdministrativeData_Dra…
  1.81 → 1.85 = 0.04 ms  @cap-js/sqlite - stmt.all SELECT json_object('ID',ID,'DraftAdministrativeData_Dr…
```

Telemetry data can be exported to [SAP Cloud Logging](https://help.sap.com/docs/cloud-logging) and Dynatrace. Node.js additionally supports Jaeger.

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/telemetry#readme) [![Java](../assets/logos/java.svg 'Link to the documentation of the telemetry feature.'){}](../java/operating-applications/observability#open-telemetry)

## ORD (Open Resource Discovery)

This plugin enables generation of [Open Resource Discovery (ORD)](https://sap.github.io/open-resource-discovery/) documents for CAP-based applications. When you adopt ORD, your application gains a single entry point, known as the Service Provider Interface. This interface allows you to discover and gather relevant information or metadata. You can use this information to construct a static metadata catalog or to perform a detailed runtime inspection of your actual system landscapes.

![](./assets/index/ordCLI.png){ .mute-dark}

You can get the ORD document in multiple ways; see the readme of the plugin.
Available for: [Link to the cap-js/ord repository](https://github.com/cap-js/ord)

## CAP Operator for Kubernetes {#cap-operator-plugin}

The [CAP Operator](https://sap.github.io/cap-operator/) manages and automates the lifecycle operations involved in running multitenant CAP applications on Kubernetes (K8s) clusters. If you deploy an application using the CAP Operator, you must manually define the custom resources for the application in a Helm chart, which takes time and deep knowledge of Helm concepts. This is where the CAP Operator **plugin** is very useful, as it provides an easy way to generate such a Helm chart, which can then be easily modified.

Available for: [![Node.js logo](../assets/logos/nodejs.svg){}](https://github.com/cap-js/cap-operator-plugin#readme) ![Java logo](../assets/logos/java.svg){}

## SAP Cloud Application Event Hub {#event-broker-plugin}

The plugin provides out-of-the-box support for consuming events from [SAP Cloud Application Event Hub](https://discovery-center.cloud.sap/serviceCatalog/sap-event-hub) -- for example emitted by SAP S/4HANA Cloud -- in stand-alone CAP applications.

```js
const S4Bupa = await cds.connect.to ('API_BUSINESS_PARTNER')
S4Bupa.on ('BusinessPartner.Changed', msg => {...})
```

For more details, please see [Events and Messaging → Using SAP Cloud Application Event Hub](../guides/messaging/#sap-event-broker).

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugins repository.'){}](https://github.com/cap-js/event-broker#readme)

## ABAP RFC

The `@sap/cds-rfc` plugin allows you to import the API of RFC-enabled function modules from ABAP systems and to call these functions in your custom code.

Available for: [![Node.js](../assets/logos/nodejs.svg 'Link to the plugin page.'){}](https://www.npmjs.com/package/@sap/cds-rfc)
# Release Notes

This section provides information about what is new and what has changed in the SAP Cloud Application Programming Model (CAP) since the last release.

:::tip
For important updates on SAP Business Technology Platform (BTP), refer to the [What's New](https://help.sap.com/whats-new/cf0cb2cb149647329b5d02aa96303f56) section published for SAP BTP.
:::
## Major Versions

Here is a list of release notes for [CAP major versions](schedule#yearly-major-releases). They can help you migrate your applications to the most recent major version.

- [cds 8, CAP Java 3 (June 2024)](./archive/2024/jun24)
- [cds 7, CAP Java 2 (June 2023)](./archive/2023/jun23)
- [cds 6 (June 2022)](./archive/2022/jun22)
- [cds 5 (May 2021)](./archive/2021/may21)
- [cds 4 (February 2020)](./archive/2020/feb20)

# CAP Release Schedule

New **major versions** of CAP will be released **every 12 months**, in May 2024, 2025, and so forth. Active CAP-based projects are strongly recommended to adopt new majors as soon as possible, as **former releases will receive critical bug fixes only**. This schedule gives a reliable basis for planning adoption accordingly.

![A kind of Gantt chart, showing the active and maintenance version of CAP](assets/schedule-overview.drawio.svg)

## Major Releases {#yearly-major-releases}

### CAP Node.js

CAP releases are linked to the [Node.js Release Schedule](https://github.com/nodejs/release#release-schedule/): New major releases are triggered by the end of life of Node.js LTS releases, as depicted in the following figure. Active releases always support only the *two* Active and Maintenance LTS versions of Node.js.

![A kind of Gantt chart, showing which CAP version supports which Node.js version.](assets/schedule-yearly-overview.drawio.svg)

Example: CAP v7

- Was released in April 2023, when Node.js 14 reached end of life
- Dropped support for Node 14, as it went out of maintenance
- Supports Node 16 and Node 18

Major version upgrades *may* incorporate **breaking changes** to public APIs, yet we will avoid that as much as possible. Public APIs are explicitly documented in public and official docs only – that is [capire](https://cap.cloud.sap); excluding tutorials, sample code, blogs, or similar.

**Individual components of CAP** can have independent major, minor, and patch version numbers.
Yet, all major version upgrades will be synchronized to the yearly major version upgrades of CAP overall, without intermediate major version upgrades of individual components in between.

### CAP Java

CAP Java **major versions** are usually developed and offered over a period of one or even several years. In general, public APIs are kept *compatible* within a major version. Incompatible changes are only done if *unavoidable* and are documented in [release notes](../releases/) accordingly.

A new major version may introduce incompatible changes to APIs or may adjust the behavior of APIs. Such changes are done for good reason only and are documented in a [migration guide](../java/migration). A new major version might be driven by the release and maintenance schedule of crucial dependencies such as [Spring Boot](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions), or if the minimum JDK version needs to be increased. Hence, there is no fixed schedule of major releases. For instance, CAP Java 2.0 was introduced to support Spring Boot 3 on the basis of JDK 17.

:::tip New major versions are announced several months in advance
You will find information here and in the [release notes](../). → The next major release, CAP Java 3.0, is planned around May 2024.
:::

Only the current major version has the [active status](#active). New (CAP) features are provided in this version only, whereas the previous major version (currently 1.34) has [maintenance status](#maintenance-status). This version will be maintained for a period of time appropriate for migrations.

::: warning Important Announcement❗️
The free [OSS support](https://spring.io/projects/spring-boot#support) for Spring Boot `2.7.x` ended in November.
**Planned end of the current maintenance version CAP Java 1.34 is May 2024.**
:::

::: tip Stay updated
Active CAP-based projects are strongly encouraged to adopt new major versions as soon as possible, as **a version in maintenance status will receive critical bug fixes only**.
:::
[See the release notes of recent major versions.](./#major-versions){.learn-more}

## Monthly Minor Releases {#minor}

Releases in [active status](#active) are equal to the latest development branches of CAP components, hence receiving ongoing feature development. Such new features are *published* in monthly *minor releases*, with accompanying [release notes](../releases/).

Minor version upgrades come **without breaking changes** to public APIs. They *may* incorporate breaking changes to undocumented, hence private, interfaces though, which should never be used in projects using CAP.

**In between official releases**, we publish new [patch versions](#patch) or minor version updates of individual CAP packages to *[npmjs.com](https://www.npmjs.com)* or to *[Maven](https://search.maven.org)*.

## Patch Releases {#patch}

A patch release of a minor release receives critical bug fixes only. It could also include code for new features, which are not considered public until officially released with corresponding documentation. Such features will not be active by default.
## Active Release Status { #active }

New major releases enter *active* status on the date of release. Active releases are updated with [monthly minor releases](#minor) to receive the following:

- New CAP features
- Support for new versions of platform services, including databases
- Support for new major versions of Node.js and Java
- Support for new major versions of 3rd party libraries
- All kinds of minor fixes

They are updated with a [patch release](#patch) in case of urgent hot fixes. Only the latest minor release of the active version receives patches.

::: tip Stay updated
CAP-based projects are strongly encouraged to adopt the [latest minor release of the active version as soon as possible](#adoption-strategy) during their development cycles to benefit from these updates.
:::

## Maintenance Status

Whenever a new major CAP version is released, the former major version enters *maintenance* status. It receives **critical bug fixes only**, and only for a period of at most twelve more months. After this, it reaches [end of life](#end-of-life-status).

A release in *maintenance* status **does not** receive the following:

- Updates with new features at all
- Support for new versions of platform services or databases
- Support for new versions of Node.js and Java
- Fixes for *minor* bugs and gaps
- Support for new (major) versions of 3rd party libraries

In essence, critical bugs are security incidents, and bugs showing up in customer usages of already developed and shipped applications. Gaps and bugs detected in new developments with functional enhancements are not considered critical bugs.

## End of Life Status

After at most twelve months in [maintenance status](#maintenance-status), former releases reach *end of life*. They **don't receive any fixes at all** from that point on – all bug reports are rejected by default.
Projects sticking to *end of life* versions of CAP must also stick to non-changing environments, which means:

- Freeze on Node.js, Spring Boot, or Java versions; only patch updates allowed
- No updates of platform services or databases beyond hotfixes
- No updates of 3rd party libraries beyond patch versions
- No new development beyond hotfixes

In essence, projects sticking to *end of life* releases of CAP can continue to run *'as is'*, but should not be touched beyond hotfixes and cosmetic changes.

## Adoption Strategy

As stated already, projects using CAP are recommended to upgrade to the *latest minor* release of the *active* version as soon as possible. Assuming a project plans a big go-live release R1 to customers (RTC) in May 2022, the project's dev schedule might look like this:

- Current development for R1 is on **CAP v5**
- Dev close for R1 in March/April 2022 → intensive testing
- Start of R1.1 dev cycle in parallel on **CAP v6**
- Release of R1 in May 2022 on CAP v5
- Main development for R1.1 on CAP v6
- Release of R1.1 in September 2022 on CAP v6

In general, upgrading as soon as possible doesn't mean that deployed applications need to upgrade; they continue to run with the latest frozen versions, of course. Also, near-term go-lives should not be endangered by adopting new major versions. But all forward-looking development should happen on *active* releases only.

# January Release

## Prepare for Major Release

We understand that you want to be well prepared for changes in our major releases. This section will be a regular part of all upcoming release notes and includes links to all changes relevant to the next major release. As the legacy variants will be removed after the major release, it's crucial that you start adopting and testing these changes now to ensure a smooth transition. Please provide feedback if you encounter any issues or if you're satisfied with the updates.
#### `@sap/cds-compiler^6`

- [New Parser](#reminder-new-parser)

#### `@sap/cds^9`

- [Upgrade to `@sap/xssec 4`](#upgrade-to-sapxssec-4)
- [OData Containment](dec24#odata-containment)
- [Consolidated Authorization Checks](dec24#consolidated-authorization-checks)
- [@cap-js Database Services](archive/2024/jun24#new-database-services-ga)
- [Protocol Adapters](archive/2024/jun24#new-protocol-adapters-ga)
- [Draft Handler Compatibility](archive/2024/jun24#lean-draft)

## AI-friendly Content in Capire

This documentation site now exposes two new files, [llms.txt](/llms.txt) and [llms-full.txt](/llms-full.txt), that help LLMs better understand the content of the pages. You can link to them in your prompts, for example, to give more context.

[Learn more about llms.txt](https://towardsdatascience.com/llms-txt-414d5121bcb3){.learn-more}

## CDS Language & Compiler {#cds}

### Reminder: New Parser

As already announced with the November '24 release, we're in full swing finalizing the new CDS parser. Replacing the old parser brings a significantly reduced installation size and faster parsing, as well as improved code completion. While we rolled it out as alpha last November, it's in Release Candidate status now.

> [!tip]
>
> 1. The new parser doesn't come with any breaking changes.
> 2. We already started using the new parser by default in all CAP development and tests.
> 3. Current status is **Release Candidate** → you **can, and should** start using it.
The roadmap is as follows:

| Date   | Status            | Remarks                                               |
| ------ | ----------------- | ----------------------------------------------------- |
| Nov 24 | Alpha             | opt-in usage; default still with old parser           |
| Dec 24 | Beta              | opt-in usage; default still with old parser           |
| Jan 25 | Release Candidate | opt-in usage; default still with old parser           |
| May 25 | Release           | new parser by default; **w/o fallback to old parser** |

> [!warning] No fallback as of May '25
>
> As there won't be a fallback to the old parser in May anymore, we **strongly recommend testing your projects** already now with the new parser to detect issues before it becomes the default. Set option cds.cdsc.newParser: true in your private `~/.cdsrc.json` to do so on your local machine. If that’s successful, start using it in development and test pipelines of your project.

### Use Enums for Defaults

It's now possible to use enum symbols when defining a default value.

```cds
type Status : String enum { open = 'O'; closed = 'C' };
entity Orders {
  key id : Integer;
  status : Status default #open; // [!code highlight]
}
```

Enum symbols are going to be supported in more places, for example `where status = #open`, in one of the next releases.

## Node.js {#cds-js}

### Basic Support for cds.Map

The [built-in `cds.Map` type](/cds/types) for storing and retrieving arbitrary structured data is now available. Values of elements with type `Map` are represented as plain JavaScript objects.
```cds
entity Person {
  key ID  : UUID;
  name    : String;
  details : Map; // [!code highlight]
}
```

Given this CDS model using the new `Map` type for `Person.details`, you can store arbitrary data in `details`:

```js
await INSERT.into(Person).entries({
  name: 'Peter',
  details: {                 // [!code highlight]
    age: 40,                 // [!code highlight]
    address: {               // [!code highlight]
      city: 'Walldorf',      // [!code highlight]
      street: 'Hauptstrasse' // [!code highlight]
    }                        // [!code highlight]
  }                          // [!code highlight]
})
await SELECT.from(Person).columns('name', 'details')
```

::: info OData v4 only
This feature is available for OData v4 services only.
:::

::: info Temporary Limitations
In this version, `cds.Map` serves as a _dumb_ object storage which can be retrieved/written as a whole. Filters as well as partial selects, updates, and deletes are not yet supported.
:::

### Upgrade to `@sap/xssec 4` {#upgrade-to-sapxssec-4}

The [authentication strategies](../node.js/authentication#strategies) migrated to the new API of [`@sap/xssec` version 4](https://www.npmjs.com/package/@sap/xssec). Even though there's no change in behavior, you can fall back to the previously used compatibility API of `@sap/xssec` in case of issues with cds.features.xssec_compat = true.

Compatibility as well as support for `@sap/xssec@3` will be dropped with the next major version of `@sap/cds`. **Please upgrade to `@sap/xssec@4` now** if not yet done.

## Java {#cds-java}

### Predicates as Select List Items

Use predicates as select items to evaluate boolean expressions on the database.

```java
Select.from(BOOKS).byId(17).columns(
  b -> b.year().gt(2000).as("isFrom21stCentury"),
  b -> b.author().name().eq("J.K. Rowling").as("byJKRowling"));
```

This query tests whether a given book was written in the 21st century by J.K. Rowling. The query result maps the aliases `isFrom21stCentury` and `byJKRowling` to boolean values indicating the result of the evaluation.
The evaluation is performed on the database without transferring the underlying data to the client.

### Typed Entity References

With the new `CQL.entity(Class, ref)` method you can use a generic `CqnStructuredTypeRef` with generated [model interfaces](../java/cqn-services/persistence-services#staticmodel) to build a query in fluent style:

```java
import static bookshop.Bookshop_.BOOKS; // generated model type
import static com.sap.cds.ql.CQL.entity;

CqnStructuredTypeRef ref; // generic entity reference
Select.from(entity(BOOKS, ref)).where(b -> b.author().name().eq("J.K. Rowling"));
```

### Invoke Functions with Parameter Aliases

OData V4.01 allows you to invoke functions using [implicit parameter aliases](https://docs.oasis-open.org/odata/new-in-odata/v4.01/cn04/new-in-odata-v4.01-cn04.html#sec_NewInvokingFunctionswithImplicitPara). This invocation style is now also supported by the CAP Java runtime. The following example illustrates the usage:

```http
GET sue/stock(id=2)         // traditional syntax
GET sue/stock(id=@ID)?@ID=2 // explicit parameter alias
GET sue/stock?id=2          // implicit parameter alias
```

[Learn more about functions in CAP Java](../java/cqn-services/application-services#actions) {.learn-more}

### Expand all Subnodes in Hierarchy

In the SAP Fiori Tree Table, you can now expand all children of a selected node:

![expandEntireNode.png](assets/jan25/expandEntireNode.png){width=80%}

### Default Runtime Messages Bundle

CAP Java has a built-in mechanism to [localize runtime messages](../java/event-handlers/indicating-errors#formatting-and-localization) being sent to the UI, for example, resulting from input validation. Previously, applications had to [provide resource bundle files](../java/event-handlers/indicating-errors#exporting-the-default-messages) with the translations for such standard runtime messages.
With this update, CAP Java retrieves translated text from a prepared resource bundle file containing all CAP-supported languages, streamlining the localization process. CAP Java now sends more user-friendly message texts to the UI. This enhancement is designed to improve the user experience, while still maintaining detailed technical information in the logs for debugging purposes. To benefit from this feature, you need to set property [cds.errors.defaultTranslations.enabled: true](../java/developing-applications/properties#cds-errors-defaultTranslations-enabled).

::: warning Impact for unit tests
Rewrite unit tests in your application which contain assertions about message texts. Alternatively, use message codes instead.
:::

[Learn more about messages in CAP Java](../java/event-handlers/indicating-errors#messages) {.learn-more}

### Authorization Checks On Input Data

CAP Java can now also validate input data of `CREATE` and `UPDATE` events with regards to instance-based authorization conditions. Invalid input that does not meet the condition is rejected with response code `400`.

Let's assume an entity `Orders` which restricts access to users classified by assigned accounting areas:

```cds
annotate Orders with @(restrict: [
  { grant: '*', where: 'accountingArea = $user.accountingAreas' } ]);
```

A user with accounting areas `[Development, Research]` is not able to send an `UPDATE` request that changes `accountingArea` from `Research` or `Development` to `CarFleet`, for example. Note that `UPDATE`s on instances _not matching the request user's accounting areas_ (for example, `CarFleet`) are rejected by standard instance-based authorization checks.

Activate this feature: cds.security.authorization.instanceBased.checkInputData: true.

### `cds debug` for Java Applications

We have extended [`cds debug`](./nov24#cds-debug) to Java, so that you can easily debug local and remote Java applications.
Without an app name, `cds debug` starts Maven with debug arguments **locally**:
```sh
cds debug
```

```log
Starting 'mvn spring-boot:run -Dspring-boot.run.jvmArguments="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"'
...
Listening for transport dt_socket at address: 8000
...
```
If you add a **remote application** name from the currently targeted Cloud Foundry space, it opens an [SSH tunnel](https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html) and puts the remote application into debug mode:
```sh
cds debug <app-name>
```

```log
...
Debugging has been started.
Address : 8000

Opening SSH tunnel on 8000:127.0.0.1:8000

> Keep this terminal open while debugging.
```
Then connect a debugger in your IDE at the given port. [Learn more about `cds debug`](../tools/cds-cli#cds-debug){.learn-more} ## Tools { #tools} ### `cds watch` with Include and Exclude Paths You can now specify which additional paths `cds watch` watches and ignores: ```sh cds watch --include ../other-app --exclude .idea/ ``` Alternatively, you can add these paths through settings cds.watch.include: ["../other-app"] and cds.watch.exclude: [".idea"] to your project configuration. [Learn more about `cds watch` and its options.](../tools/cds-cli#cds-watch){.learn-more} ### Sample Code for TypeScript When executed in a TypeScript project, `cds add sample` now creates proper TypeScript code for the service handlers instead of `.js` files. This means you can now create a full-fledged TypeScript project with: ```sh cds init bookshop --add typescript,sample ``` [Also see the SFlight application on TypeScript.](https://github.com/SAP-samples/cap-sflight){.learn-more} ### Code Formatting in CDS IntelliJ Plugin [SAP CDS Language Support for IntelliJ](https://plugins.jetbrains.com/plugin/25209-sap-cds-language-support) now provides all CDS formatting options for configuration under **Settings > Editor > Code Style > CDS**. The plugin adds any non-default settings to a _.cdsprettier.json_ file in the root of a CDS project for consumption by the included LSP server. ![intellij-formatting-options.png](assets/jan25/intellij-formatting-options.png){.ignore-dark} Additionally, the most suitable Node.js runtime for the server is now automatically selected from the Node.js interpreters registered under **Settings > Languages & Frameworks > Node.js**. # December Release ## CDS Language & Compiler {#cds} ### New `CQN` Spec Using `.d.ts` We rewrote the [CQN specification](../cds/cqn) using TypeScript declarations ([`.d.ts` files](https://www.typescriptlang.org/docs/handbook/declaration-files/introduction.html)). 
This not only fills in many gaps that we had in our former documentation, it also allows for better IntelliSense and easier integration with other projects.

```tsx
class SELECT { SELECT: {
  distinct? : true
  count?    : true
  one?      : true
  from      : source
  columns?  : column[]
  where?    : xo[]
  having?   : xo[]
  search?   : xo[]
  groupBy?  : expr[]
  orderBy?  : order[]
  limit?    : { rows: val, offset: val }
}}
```

[See the new _CQN specification_](../cds/cqn) {.learn-more}

### Annotating Foreign Keys

Now, you can specifically annotate a _foreign key element_ of a [managed association](../cds/cdl#managed-associations):

```cds
entity Authors {
  key ID : Integer;
}
entity Books {
  author : Association to Authors;
}
annotate Books:author.ID with @label: 'Author'; // [!code highlight]
```

Previously it wasn't possible to specifically annotate the foreign key elements of a managed association. The workaround was a mechanism in the OData API generation that copied all annotations assigned to a managed association to the respective foreign key elements.

## Node.js {#cds-js}

### `cds.on` w/ new `compile` Events

We introduced new [lifecycle events](../node.js/cds-compile#lifecycle-events) emitted by different [`cds.compile`](../node.js/cds-compile) commands. In contrast to [`cds.on('loaded')`](../node.js/cds-server#loaded) that was used before, these new events allow plugins to transform models for specific usages, and, even more importantly, also work for multitenant usages. The individual events are:

- [`compile.for.runtime`](../node.js/cds-compile#compile-for-runtime)
- [`compile.to.dbx`](../node.js/cds-compile#compile-for-dbx)
- [`compile.to.edmx`](../node.js/cds-compile#compile-for-edmx)

> [!note]
>
> You can already try out using these new events, but there's not much documentation yet, and they're still beta, so they could change in the final release. Next, we'll adapt the plugins maintained by us and with that, validate, document, and showcase the new events.
### `cds.env` Enhancements

The [`cds.env`](../node.js/cds-env) module has been optimized and enhanced with these new features:

- Config can be provided also in `.cdsrc.js` and `.cdsrc.yaml` files, also in plugins.
- Profile-specific `.env` files can be used, for example, `.hybrid.env` or `.attic.env`.

::: code-group

```yaml [.cdsrc.yaml]
cds:
  requires:
    db:
      kind: sql
      "[hybrid]":
        kind: hana
```

```js [.cdsrc.js]
module.exports = { cds: { requires: { db: {
  kind: 'sql',
  '[hybrid]': { kind: 'hana' }
}}}}
```

```json [.cdsrc.json]
{ "cds": { "requires": { "db": {
  "kind": "sql",
  "[hybrid]": { "kind": "hana" }
}}}}
```

```properties [.hybrid.env]
cds.requires.db.kind = hana
```

:::

> [!tip]
>
> As these enhancements apply also to any configuration for `cds-dk` and `cds-mtxs`, you can now use the same configuration files for all tools, even in Java projects.

::: warning Do not load `@sap/cds` in _.cdsrc.js_
You can generally use any JavaScript code within a _.cdsrc.js_ file. However, you **must not** import or load any `@sap/cds` module, as this can create circular dependencies in `cds.env`, leading to undefined behaviors.
:::

### `cds.ql` Enhancements

The `cds.ql` module has been optimized, consolidated, and improved for robustness, as well as enhanced with new functions to facilitate programmatic construction of CQN objects.
In detail:

- Besides being a facade for all related features, [`cds.ql`](../node.js/cds-ql) is now also a function to turn any respective input into an instance of respective subclasses of [`cds.ql.Query`](../node.js/cds-ql#class-cds-ql-query); for example, the following all produce an equivalent instance of [`cds.ql.SELECT`](../node.js/cds-ql#select):

  ```js
  let q = cds.ql `SELECT from Books where ID=${201}`
  let q = cds.ql (`SELECT from Books where ID=${201}`)
  let q = cds.ql ({ SELECT: {
    from: { ref: [ 'Books' ] },
    where: [ { ref: [ 'ID' ] }, '=', { val: 201 } ]
  }})
  let q = SELECT.from('Books').where({ID:201})
  ```

- New [CXL-level helper functions](../node.js/cds-ql#expressions) to facilitate construction of CQN objects have been added, which you can use like this:

  ```js
  const { expr, ref, val, columns, expand, where, orderBy } = cds.ql
  let q = { SELECT: {
    from: ref`Authors`,
    columns: [ ref`ID`, ref`name`, expand (ref`books`,
      where`stock>7`, orderBy`title`, columns`ID,title`
    )],
    where: [ref`name`, 'like', val('%Poe%')]
  }}
  await cds.run(q)
  ```

- All `cds.ql` functions, as well as all `cds.parse` functions, and all related `srv.run` methods now consistently support [tagged template literals](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_templates). For example, all of these work now:

  ```js
  await cds.run (cds.parse.cql `SELECT ID,title from Books`)
  await cds.run `SELECT ID,title from Books`
  await cds.ql `SELECT ID,title from Books`
  await SELECT `ID,title from Books`
  await SELECT `ID,title`.from`Books`
  await SELECT.from `Books {ID,title}`
  await cds.read `ID,title from Books`
  await cds.read `Books`
  await cds.read `Books where ID=201`
  ```

> [!warning]
> In the course of this, former globals `CDL`, `CQL`, and `CXL` have been deprecated
> in favor of the respective [`cds.parse.cdl`](../node.js/cds-compile#parse-cdl), [`.cql`](../node.js/cds-compile#parse-cql), and [`.expr`](../node.js/cds-compile#parse-cxl) counterparts.
[Learn more in the reference docs for _`cds.ql`_](../node.js/cds-ql) {.learn-more}
[Note in there the recommendation for _`cds repl`_](../node.js/cds-ql#using-cds-repl) {.learn-more}

### OData Containment

The new config option cds.odata.containment: true lets you switch on containment mode, which maps CDS Compositions to effective OData Containment Navigation Properties as [introduced in OData v4](http://docs.oasis-open.org/odata/odata/v4.0/cos01/part3-csdl/odata-v4.0-cos01-part3-csdl.html#_Toc372793924) and meanwhile supported by Fiori clients. For example, given the following CDS model:

```cds
service Sue {
  entity Orders {
    //...
    Items : Composition of many { /*...*/ }
  }
}
```

This is exposed as follows with containment mode enabled (the removed line indicates what is not exposed anymore):

```xml
// [!code --]
```

Contained entities can only be reached via navigation from their roots, reducing the entry points of the OData service.

> [!tip]
> While we think that containment mode is best for most applications, and fully supported by Fiori clients, we provide it as an opt-in for the time being, for you to test it in your apps. It's planned to become the default in the next major release.

### Function Parameters via Query Options

The [OData V4.01 specification](https://docs.oasis-open.org/odata/new-in-odata/v4.01/cn04/new-in-odata-v4.01-cn04.html#sec_NewInvokingFunctionswithImplicitPara) allows providing parameters of functions as query options, which is now supported by the Node.js runtime. The example below illustrates the usage:

```http
GET sue/stock(id=2) // traditional syntax
GET sue/stock?id=2  // new syntax
```

[Learn more about functions in CDS](../guides/providing-services#calling-actions-functions) {.learn-more}

### Consolidated Authorization Checks

The processing of `@restrict.where` was aligned with the CAP Java stack.
As a result, there are the following behavioral changes in edge cases, each with its own compat feature flag to deactivate the change until the next major release:

- Read restrictions on the entity are no longer taken into consideration when evaluating restrictions on bound actions/functions. Instead, only the restrictions that apply to the bound action/function are evaluated.
  Deactivate via cds.features.compat_restrict_bound: true.
- For `UPDATE` and `DELETE` requests, additional filters (that is, those not originating from key predicates) are no longer considered during the authorization check. For example, assume we get the equivalent of this query:

  ```sql
  UPDATE Books SET title = 'foo' WHERE title = 'bar'
  ```

  The filter `title = 'bar'` is ignored for access control checks, and, effectively, the user needs to be allowed to update all books. Please note that `UPDATE` and `DELETE` requests from a client always contain key predicates, making this change only affect service calls executed in custom handlers. In case you encounter issues with the new behavior, you can deactivate it via cds.features.compat_restrict_where: true.

## Java {#cds-java}

### Important Changes ❗️ { .important #important-changes-in-java}

#### NPM Build-Plugin Support { #npm-build-plugins }

To support a growing number of NPM build-plugins for CDS build, we recommend a slightly different CAP Java project setup which uses the `devDependencies` section in the _package.json_ file. Consequently, the required dependency to `@sap/cds-dk` should now also be added there. To ensure stable versions of the packages, `npm ci` should be configured for the CDS build:

```xml [srv/pom.xml]
<execution>
  <id>cds.npm-ci</id>
  <goals>
    <goal>npm</goal>
  </goals>
  <configuration>
    <arguments>ci</arguments>
  </configuration>
</execution>
```

The goal `install-cdsdk` of the cds-maven-plugin has been deprecated and should be removed from the project.

[Learn how to Migrate From Goal `install-cdsdk` to `npm ci`.](/java/developing-applications/building#migration-install-cdsdk){.learn-more}

::: info New projects in recommended setup
The built-in [Maven Archetype](../java/getting-started#run-the-cap-java-maven-archetype) creates a Java project with the recommended setup.
:::

### SAP Document Management Service Plugin

The new Calesi plugin [com.sap.cds/sdm](https://central.sonatype.com/search?q=com.sap.cds.sdm) is now available as [open source on GitHub](https://github.com/cap-java/sdm).
You can easily add the dependency to your application's dependencies and use the `Attachments` type in your model.

::: code-group

```xml [srv/pom.xml]
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>sdm</artifactId>
  <version>1.0.0</version>
</dependency>
```

:::

![Screenshot showing the attachments table in an SAP Fiori UI.](archive/2024/assets/jun24/sdm-table.png)

[Find more details about the SAP Document Management Service Plugin.](https://github.com/cap-java/sdm#readme)

### IAS Support for Kyma

CAP Java now offers out-of-the-box [integration](../java/security#xsuaa-ias) for [SAP Cloud Identity Authentication](https://help.sap.com/docs/cloud-identity-services) (IAS) in the [SAP BTP, Kyma runtime](https://discovery-center.cloud.sap/serviceCatalog/kyma-runtime). It performs proof-of-possession checks on the client certificates passed by calling IAS applications in the context of the Kyma runtime.

### SAP HANA Connection Pooling Optimized

Multitenant applications configured with a [shared database pool](../java/multitenancy#combine-data-pools) for all tenants help reduce resource consumption from database connections. However, this mode requires logging in with the technical database user of the current business tenant for each request. To optimize performance, CAP Java now skips the login if the pooled connection is already connected to the corresponding user, saving an extra roundtrip and reducing CPU consumption in the database.

### Outbox Message Versioning

Messages written to the [Transactional Outbox](../java/outbox#transactional-outbox) can originate from application instances of different versions. Instances of an outdated version might introduce failures or inconsistencies when trying to collect messages of younger versions. To avoid such a situation, you can now configure CAP Java to write the application version into each outbox message being published. Outbox collectors of an application instance will not collect messages of younger versions.
We recommend using [cds.environment.deployment.version: ${project.version}](../java/developing-applications/properties#cds-environment-deployment-version) to automatically configure the application with the version identifier from the Maven build. This requires the build version to be available in the resources folder:

::: code-group

```xml [srv/pom.xml]
<resource>
  <directory>src/main/resources</directory>
  <filtering>true</filtering>
</resource>
```

:::

CAP Java can only support version identifiers that have an ordering.

[Learn more about Outbox Event Versions.](/java/outbox#outbox-event-versions){.learn-more}

### CDS Config in `.cdsrc.yaml` Files

Alternative to `.cdsrc.json` files, Java projects can now also use `.cdsrc.yaml` files to configure the CDS compiler and `cds-dk`.

[See the respective entry in the Node.js section.](#cds-env-enhancements) {.learn-more}

## Tools { #tools}

### `cds repl` Enhancements

As you know, [`cds repl`](../tools/cds-cli#cds-repl) is your friend when you want to find out how things work. While this is especially relevant for Node.js projects, it also applies to Java projects.
For example, to find out what the CSN or CQN object notation for a given CDL or CQL snippet looks like:

```sh
cds repl # from your command line
```

```js
cds.parse`
  entity Foo { bar : Association to Bar }
  entity Bar { key ID : UUID }
`
```

```js
cds.ql`SELECT from Authors {
  ID, name, books [order by title] {
    ID, title, genre.name as genre
  }
} where exists books.genre[name = 'Mystery']`
```

This release brings a few new enhancements to `cds repl` as follows:

- New [REPL dot command](https://nodejs.org/en/learn/command-line/how-to-use-the-nodejs-repl#dot-commands) `.run` allows you to start Node.js `cds.server`s:

  ```sh
  .run cap/samples/bookshop
  ```

- New CLI option `--run` to do the same from the command line, for example:

  ```sh
  cds repl --run cap/samples/bookshop
  ```

- New CLI option `--use` to easily use the features of a `cds` module, for example:

  ```sh
  cds repl --use ql # as a shortcut of that within the repl:
  ```

  ```js
  var { expr, ref, columns, /* ...and all other */ } = cds.ql
  ```

- New [REPL dot command](https://nodejs.org/en/learn/command-line/how-to-use-the-nodejs-repl#dot-commands) `.inspect` to display objects with configurable depth:

  ```sh
  .inspect cds .depth=1
  .inspect CatalogService.handlers .depth=1
  ```

### `cds watch` for TypeScript

In a TypeScript project, you can now just run `cds watch` as if it was a JavaScript project. It will automatically detect TypeScript mode based on a `tsconfig.json` and run [`cds-tsx`](../node.js/typescript#cds-tsx) under the hood. In other words, it's no longer necessary to use `cds-tsx watch`.

```sh
cap/sflight $ cds watch
Detected tsconfig.json. Running with tsx.
...
[cds] serving TravelService { impl: 'srv/travel-service.ts', path: '/processor' }
...
```

# November Release

## Documentation (Capire)

Quite some time has passed, and many things have happened and been added since we wrote the current versions of our central cookbook guides, so it was time to give them a thorough overhaul... We did that now...
### Revised Getting Started Guides As so many things happened since we wrote our getting started and cookbook guides, many of them need major updates and thorough overhauls. We started that journey now with the Welcome page, and the guides in the Getting Started section: - [Welcome](../) page → got a minor face lift; the bullets in the four boxes are adjusted - [Getting Started](../get-started/) index page → got streamlined and reduced to the gist of initial setup - [Introduction - What is CAP?](../about/) → 90% newly written; replaces former *About CAP* - [Best Practices](../about/best-practices) → 100% new: **key concepts** and **do's**; was always missing - [Anti Patterns](../about/bad-practices) → 100% new: bad practices / the **don'ts** - [Learn More](../get-started/learning-sources) → combines info formerly spread across several pages ![image-20241204173314024](assets/nov24/image-20241204173314024.png){.zoom20 .ignore-dark} ### New: Aspect-oriented Modeling We added a new guide for [*Aspect-Oriented Modeling*](../cds/aspects) to the CDS reference. It explains how you can use aspects for separation of concerns, as well as to reuse and adapt definitions in your models. ![image-20241204173736533](assets/nov24/image-20241204173736533.png){.zoom20 .ignore-dark} ### Improved CDL Reference We improved the structure of the [*Conceptual Definition Language (CDL)*](../cds/cdl) reference, mainly putting *[Language Preliminaries](../cds/cdl#language-preliminaries)* sections to the top, introducing the basics for keywords, identifiers, built-in types, literals, and so on. ![image-20241204174043802](assets/nov24/image-20241204174043802.png){.zoom20 .ignore-dark} ## CDS Language & Compiler {#cds} ### New CDL Parser The CDL Parser currently has an installation size of nearly 2 MB (runtime code and generated files). 
We plan to transition to a new parser with a significant size reduction (to around 200 kB installation size and no package dependencies) in three phases:

- Now: The new parser can be switched on via configuration for testing (see the following snippet).
- Jan/Feb 25: The new parser is switched on by default; the old parser is still installed and can be reactivated, if necessary.
- Next major release: The old parser is completely removed, including its dependency on the ANTLR4 runtime.

Set the following option in your private `~/.cdsrc.json` to switch on the new parser on your local machine:

::: code-group

```json [~/.cdsrc.json]
{
  "cdsc": {
    "newParser": true
  }
}
```

:::

We appreciate any feedback that helps us to detect and fix issues before using the new parser by default.

### Enhanced `@assert.range`

We now support open intervals with `@assert.range` by wrapping *min* or *max* values in parentheses:

```cds
@assert.range: [(0),100]   // 0 < input ≤ 100
@assert.range: [0,(100)]   // 0 ≤ input < 100
@assert.range: [(0),(100)] // 0 < input < 100
```

In addition, an underscore `_` can be used as a stand-in for *infinity*:

```cds
@assert.range: [(0),_] // positive numbers only, _ means +Infinity here
@assert.range: [_,(0)] // negative numbers only, _ means -Infinity here
```

[Learn more in the documentation of `@assert.range`](../guides/providing-services#assert-range) {.learn-more}

## Node.js

### New Plugin for RFC

With the new [cds-rfc plugin](https://www.npmjs.com/package/@sap/cds-rfc) you can import the API of RFC-enabled function modules from an SAP S/4HANA system:

![Shows the VS code editor with the service center pane open and a function of an API is selected. It can be added to the CAP project using a button.](assets/nov24/Screenshot-20241125113652.png){.ignore-dark}

... and call the functions as if you're calling them from a local CAP service:

```js
const S4 = await cds.connect.to('SYS')
const user = await S4.BAPI_USER_GET_DETAIL({
  USERNAME: 'display',
  ...
})
```

[Learn more about the new plugin](../plugins/#abap-rfc){.learn-more}

### New `cds.i18n` API

The new [`cds.i18n` API](../node.js/cds-i18n) is used consistently both for serving localized SAP Fiori UIs and for localized messages at runtime. You can also use it in your own Node.js applications to localize your own messages. Here are some examples:

```js
const cds = require('@sap/cds')
cds.i18n.labels.at('CreatedAt','de') // Erstellt am
cds.i18n.labels.at('CreatedAt')      // Created At
cds.i18n.messages.at('ASSERT_FORMAT',['wrong email',/\w+@\w+/])
```

You can also look up translated UI labels for CSN definitions:

```js
let {Books} = CatalogService.entities, {title} = Books.elements
cds.context = {locale:'fr'} // as automatically set by protocol adapters
cds.i18n.labels.at(Books) //> 'Livre'
cds.i18n.labels.at(title) //> 'Titre'
```

[Learn more about that in the documentation of `bundle.at(key,...)`.](../node.js/cds-i18n#at-key) {.learn-more}

::: warning Fixes to former i18n for runtime messages
With this new implementation used consistently for all i18n, we also fixed some flaws of the former implementation for runtime messages.

- Bundles are always loaded from the [*neighborhood* of *.cds* sources](../node.js/cds-i18n#from-models-neighborhood).
- Only files from the *first* match of [`i18n.folders`](../node.js/cds-i18n#folders) are used, not all.
- [Arguments for `{<>}` placeholders](../node.js/cds-i18n#at-key) aren't recursively localized anymore.

While these changes are unlikely to affect any users or projects, take note of them, and take appropriate action if you relied on the former behavior.
:::

### Fuzzy Search

The default fuzziness threshold used by CAP Node.js is 0.7 and is now configurable. If the default doesn't suit your needs, you can adapt it globally with cds.hana.fuzzy: 0.9. Besides the configurable default, the `@Search.fuzzinessThreshold` and `@Search.ranking` annotations are now also supported by the CAP Node.js runtime.
```cds
entity Books {
  @Search.fuzzinessThreshold: 0.5
  @Search.ranking: HIGH
  title : String;

  @Search.ranking: LOW
  description : String;
}
```

In this example, the `title` is the important criterion, while the search needs to be less exact compared to the default fuzziness. If you don't want to use fuzzy search, you can set cds.hana.fuzzy: false and `LIKE` expressions are used instead.

[Learn more about Fuzzy Search in CAP.](../guides/providing-services#fuzzy-search) {.learn-more}

### `cds debug`

The new CLI command `cds debug` lets you easily debug local or remote Node.js applications in Chrome DevTools.

For local applications, `cds debug` is simply a shortcut for `cds watch --debug`:

```sh
cds debug
```

```log
Starting 'cds watch --debug'
...
Debugger listening on ws://127.0.0.1:9229/...
Opening Chrome DevTools at devtools://devtools/bundled/inspector.html?ws=...
```

For remote applications, add the name of your application in the currently targeted Cloud Foundry space:

```sh
cds debug bookshop-srv
```

```log
Opening SSH tunnel for CF app 'bookshop-srv'
Opening Chrome DevTools at devtools://devtools/bundled/inspector.html?ws=...
```

This command opens an [SSH tunnel](https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html), puts the application in debug mode, and connects and opens the debugger in [Chrome DevTools](https://developer.chrome.com/docs/devtools/javascript).