Consuming Services

Every active thing in CAP is a service. CAP applications can define one or multiple services in their CDS model. The Service Consumption API provides a facade around services and their events. CDS services, as well as technical services (for example, the database) can be consumed through this API. The API is centered around CDS QL statements and provides a uniform and protocol-agnostic layer.


A Central API

Services in CAP do nothing more than dispatch events to event handlers. They never implement behavior themselves, but always use events and their handlers to achieve something. Therefore, services provide generic capabilities to process synchronous as well as asynchronous events and offer a user-friendly API layer around these events.

In Java, every service implements the Service interface. This interface offers the generic event processing capabilities through its emit(EventContext) method. The emit method takes care of processing an event and its parameters (represented by the event context) by dispatching it to all event handlers registered on that event. All capabilities a service offers, can be consumed through this central emit method. Asynchronous and synchronous events can be processed by this central API.
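For example, an event could be dispatched through this generic API roughly as follows (a sketch; the event name "myEvent" and the service variable are illustrative):

```java
// create a generic event context for a custom event
EventContext context = EventContext.create("myEvent", null);
// parameters can be set on the context before emitting
service.emit(context); // dispatches to all handlers registered on "myEvent"
```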

Concrete services in Java usually implement an interface that extends the Service interface. They provide a more user-friendly API layer around the emit method for the events defined by the service. Examples of such services are the CDS service and its specialized versions, the persistence service and the draft service, which define APIs around the CRUD events they define. CAP also treats technical components as services, for example the AuthorizationService or the MessagingService.

Consuming Services

The Service Provisioning API chapter describes how to get access to the service objects in Java.

Triggering Custom Events

CAP provides possibilities to define and implement custom events, for example through actions and functions. These custom events can be triggered through the emit method of the respective service that defines the custom event. As the event isn’t known to CAP, the more user-friendly API layer around the emit method needs to be built manually, as shown in the example at the end of this section. The emit method requires an event context, which first needs to be created and filled with the input parameters of the event. Then, the event context is emitted and dispatched to the event handlers. After that, the return value of the event (if available) can be retrieved from the event context. If a custom event context is available for the event, we recommend using it over the generic event context.

The following example shows how the action defined in the example given in the Service Provisioning API chapter can be triggered using the emit method. It shows how a Spring bean can be built that encapsulates triggering the custom event and exposes it as a more user-friendly API.

import static bookshop.Bookshop_.BOOKS;

public class CatalogServiceAPI {

    CdsService catalogService; // get access to the service

    public Reviews review(String bookId, Integer stars) {
        // create the event context for the review event
        ReviewEventContext context = EventContext.create(ReviewEventContext.class, Books_.CDS_NAME);
        // set the target entity
        context.setCqn(Select.from(BOOKS).byId(bookId));
        // set the input parameters
        context.setStars(stars);
        // emit the event
        catalogService.emit(context);
        // return the result
        return context.getResult();
    }

}


CDS Services

The most prominent kind of service in CAP is the CDS service. A CDS service is defined in the CDS model using the service keyword. These services are typically served automatically through HTTP, for example by the OData V4 protocol adapter. A Java object, implementing the CdsService interface, is created by CAP for each such service. The interface can be used to consume the service directly in code.

A similar object is available for the persistence service, which is encapsulating the database and provides the same API as the CDS service.

The CdsService interface provides the ability to run CDS QL statements against the service and the entities it defines. A CDS service only accepts statements targeting the entities that are defined as part of the service.

Event handlers that are registered on the service are automatically triggered, as the different run methods only provide a thin logical layer around the generic emit method of the service.

The following two sections explain how to run these CDS QL statements on a CDS service and how to interpret the returned result.

Query Execution

CDS QL statements can be executed using the run method of the CdsService:

CdsService service = ...

CqnSelect query = Select.from("bookshop.Books")
    .columns("title", "price");

Result result = service.run(query);

Parameterized Execution

Queries, as well as update and delete statements, can be parameterized with positional or named parameters.

Positional Parameters

The following query uses two positional parameters defined through param():

import static com.sap.cds.ql.CQL.param;

CqnSelect query = Select.from("bookshop.Books")
    .where(b -> b.get("ID").eq(param()).or(b.get("ID").eq(param())));

Result result = service.run(query, 101, 102);

Before the execution of the statement the values 101 and 102 are bound to the defined parameters.

Named Parameters

The following query uses two parameters named “id1” and “id2”. The parameter values are given as a map:

import static com.sap.cds.ql.CQL.param;

CqnSelect query = Select.from("bookshop.Books")
    .where(b -> b.get("ID").eq(param("id1")).or(b.get("ID").eq(param("id2"))));

Map<String, Object> paramValues = new HashMap<>();
paramValues.put("id1", 101);
paramValues.put("id2", 102);

Result result = service.run(query, paramValues);

Querying Parameterized Views on SAP HANA

To query views with parameters on SAP HANA, you need to build a select statement and execute it with the corresponding named parameters.

Let’s consider the following Book entity and a parameterized view that returns the ID and title of Books with number of pages less than numOfPages:

entity Book {
    key ID : Integer;
    title  : String;
    pages  : Integer;
}

entity BookView(numOfPages : Integer) as SELECT from Book {ID, title} WHERE pages < :numOfPages;

The Java query that returns books with number of pages less than 200:

CqnSelect query = Select.from("BookView");

Result result = service.run(query, Collections.singletonMap("numOfPages", 200));

Pessimistic Locking

To ensure that data returned by query execution isn’t modified by a concurrent transaction, you can set an exclusive write lock on it. To do that:

  1. Start a transaction (either manually or let the framework take care of it);
  2. Query the data and set a lock on it;
  3. Perform the processing and modify the data inside the same transaction (if required);
  4. Commit (or roll back) the transaction, which releases the lock.

To query and lock the data until the transaction is completed, call the lock() method and optionally set the timeout parameter.

In the following example, a book with ID 1 is selected and locked until the transaction is finished. Thus, one can avoid situations when other threads or clients are trying to modify the same data in the meantime.

// Start transaction

// Obtain and set a write lock on the book with ID 1
service.run(Select.from("bookshop.Books").byId(1).lock());

// Update the book locked earlier
Map<String, Object> data = Collections.singletonMap("title", "new title");
service.run(Update.entity("bookshop.Books").data(data).byId(1));

// Finish transaction

The lock() method has an optional parameter timeout that indicates the maximum number of seconds to wait for the lock acquisition. If a lock can’t be obtained within the timeout, a CdsLockTimeoutException is thrown. If timeout isn’t specified, a database-specific default timeout is used.
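For example, a timeout of 5 seconds could be passed to lock() like this (a sketch; the service variable and entity name follow the examples above):

```java
// wait at most 5 seconds to acquire a write lock on the book with ID 1
CqnSelect query = Select.from("bookshop.Books").byId(1).lock(5);
try {
    Result result = service.run(query);
    // modify the locked data within the same transaction
} catch (CdsLockTimeoutException e) {
    // the lock couldn't be acquired within 5 seconds
}
```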

Data Manipulation

The CDS Service API allows you to manipulate data by executing insert, update, delete, or upsert statements.


The update operation can be executed as follows.

Map<String, Object> book = new HashMap<>();
book.put("title", "CAP");

CqnUpdate update = Update.entity("bookshop.Books").data(book).where(b -> b.get("ID").eq(101));
long updateCount = service.run(update).rowCount();

Working with Structured Documents

It’s possible to work with structured data as the insert, update, and delete operations are cascading along compositions.

Cascading Over Associations

To enable cascading insert, update and delete operations over associations, use the @cascade annotation.

Given the following CDS model with two entities and an association between them, only insert and update operations are cascaded through author.

entity Book {
  key ID : Integer;
  title  : String;

  @cascade: {insert, update}
  author : Association to Author;
}

entity Author {
  key ID : Integer;
  name   : String;
}

Annotating an association with @cascade: {insert, update, delete} enables deep updates via the association. As a short form @cascade: {all} can be used.

Deep Insert / Upsert

Insert and upsert statements for an entity have to include the keys and optionally data for the entity’s composition targets, which are then inserted or upserted along with the root entity. A deep upsert is equivalent to a cascading delete followed by a deep insert.

Iterable<Map<String, Object>> books = ...;

CqnInsert insert = Insert.into("bookshop.Books").entries(books);
Result insertResult = service.run(insert);

CqnUpsert upsert = Upsert.into("bookshop.Books").entries(books);
Result upsertResult = service.run(upsert);
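The data passed to entries is built from plain maps and lists: the entries of a composition target are nested under the composition's element name. A minimal sketch, assuming a hypothetical composition element items on Books:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// one composed item entry (element names are illustrative)
Map<String, Object> item = new HashMap<>();
item.put("ID", 1);
item.put("quantity", 2);

// the book entry nests its items under the composition's element name
Map<String, Object> book = new HashMap<>();
book.put("ID", 101);
book.put("title", "CAP");
book.put("items", List.of(item));

// the nested data would then be passed to the statement builder, for example:
// CqnInsert insert = Insert.into("bookshop.Books").entries(List.of(book));
// Result result = service.run(insert); // inserts the book and its item
```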

Cascading Delete

The delete operation is cascaded along the entity’s compositions. All composition targets that are reachable from the (to be deleted) entity are deleted as well.

The following example deletes the order with ID 1000 including all its items.

CqnDelete delete = Delete.from("bookshop.Orders").matching(singletonMap("OrderNo", 1000));
long deleteCount = service.run(delete).rowCount();

Updatable Views

On some views, the runtime supports Insert, Upsert, Update, and Delete operations. If possible, it resolves the projection to the underlying entity or view and performs the respective operation.

Operations on views that can’t be resolved by the runtime are directly executed on the database. In this case, it’s database-dependent if the operation can be executed.

Views using only columns and excluding clauses are updatable. For example:

// Supported
entity Order as projection on bookshop.Order;
entity OrderExcluding as SELECT from bookshop.Order excluding {status};
entity OrderStatus as projection on bookshop.Order {OrderNo, status as state};

The columns clause must ensure the following for a view to be updatable:

  1. All elements with not null constraint must be included unless a default value is specified.
  2. Only elements and no functions/expressions are used.
  3. All key elements are used in the projection. However, for Insert, if the key element’s value is generated, it need not be used in the projection.
  4. No elements of associated entities are accessed via path expressions.

// Supported
entity AliasOrderHeader as projection on bookshop.OrderHeader { key HeaderID, createdAt, status as headerStatus, shippingAddress};
entity OrderWithHeader as projection on bookshop.Order excluding { items, fulfillment, fulfillment_id };

Currently, any other clause in a view definition renders the view read-only, unless the operation has native database support and can be executed by the corresponding database. In the view Books below, the element name of the associated entity author is selected via a path expression, which isn’t permitted. Using joins or a where clause also makes a view read-only, as shown by the JoinOrder and DeliveredOrders views, respectively.

// Unsupported
entity Books as SELECT from my.Books { *, as author
  } excluding { createdBy, modifiedBy };
entity JoinOrder as SELECT from bookshop.Order inner join bookshop.OrderHeader on Order.header.HeaderID = OrderHeader.HeaderID { Order.OrderNo, Order.items, OrderHeader.status };
entity DeliveredOrders as select from bookshop.Order where status='delivered';

Using I/O Streams in Queries

As described in the section Predefined Types, it’s possible to stream the data if the element is annotated with @Core.MediaType. The following example demonstrates how to allocate the stream for the element coverImage, pass it through the API to the underlying database, and close the stream.

Entity Books has an additional annotated element coverImage : LargeBinary:

entity Books {
  key ID : Integer;
  title  : String;
  coverImage : LargeBinary;
}

Java snippet for creating the element coverImage from the file IMAGE.PNG using an input stream:

// Transaction started

Result result;
try (InputStream resource = getResource("IMAGE.PNG")) {
    Map<String, Object> book = new HashMap<>();
    book.put("title", "My Fancy Book");
    book.put("coverImage", resource);

    CqnInsert insert = Insert.into("bookshop.Books").entry(book);
    result = service.run(insert);
}

// Transaction finished

Query Result Processing

The result of a query is abstracted by the Result interface, which is an iterable of Row. A Row is just a Map augmented with some convenience methods.

You can iterate over a Result:

Result result = ...

for (Row row : result) {
    System.out.println(row.get("title"));
}

Or process it with the Stream API:

Result result = ...

result.forEach(r -> System.out.println(r.get("title"))); -> r.get("title")).forEach(System.out::println);

If your query is expected to return exactly one row, you can access it with the single method:

Result result = ...

Row row = result.single();

If the result may be empty, as for a find by ID, you can obtain the optional first row using first:

Result result = ...

Optional<Row> row = result.first();
row.ifPresent(r -> System.out.println(r.get("title")));

Typed Result Processing

The element names and their types are checked only at runtime. Alternatively you can use interfaces to get typed access to the result data:

interface Book {
  String getTitle();
  Integer getStock();
}

Row row = ...;
Book book =;

String title = book.getTitle();
Integer stock = book.getStock();

Interfaces can also be used to get a typed list or stream over the result:

Result result = ...

List<Book> books = result.listOf(Book.class);

Map<String, Integer> titleToStock =
  result.streamOf(Book.class).collect(Collectors.toMap(Book::getTitle, Book::getStock));

For the entities defined in the data model, CAP Java SDK can generate interfaces for you through a Maven plugin.

Getting Entity References

If a result set row unambiguously originates from a single instance of an entity, a reference to this instance can be obtained by the row’s ref() method.

// SELECT from Author[101]
CqnSelect query = Select.from(AUTHOR).byId(101);
Author authorData = service.run(query).single(Author.class);

String authorName = authorData.getName();    // data access
Author_ author    = authorData.ref();        // typed reference to Author[101]

Similar for untyped results:

Row authorData = service.run(query).single();
StructuredType<?> author = authorData.ref(); // untyped reference to Author[101]

Using these entity references you can easily write queries on the source entity, which can then be executed on the same or on a different service.

Author_ author = authorData.ref();

// SELECT from Author[101].books { sum(stock) as stock }
CqnSelect q = Select.from(author.books())
     .columns(b -> func("sum", b.stock()).as("stock"));

CqnInsert i = Insert.into(author.books())
     .entry("title", "The Work of " + authorData.getName());

CqnUpdate u = Update.entity(author.books())
     .data("price", 7.90).where(b -> b.stock().lt(10));

CqnDelete d = Delete.from(author.books())
     .where(b -> b.stock().lt(1));

The Persistence Service

Applications usually need to access the database to store and retrieve the entities they’ve defined in their domain model. Typically, an SAP HANA database is used in production. For test and development, it’s also possible to use a lightweight, in-memory database such as SQLite.

The persistence service provides an API to access the data stored in the database, independently of the concrete database type in use. It exposes the data via the same API offered by CDS services, which is based on CDS QL statements. This consistent approach allows the Generic Providers of a CDS service to simply forward the statements they receive to the persistence service, if the data is stored in the database. Eventually, the persistence service executes the required operations on the database, leveraging the CDS Data Store.

In Java, the PersistenceService interface is available. The PersistenceService simply extends the CdsService interface and provides access to the CDS Data Store.

The persistence service also takes care of lazily initializing and maintaining database transactions. The service ensures that transactions are managed as part of the active changeset context.

The persistence service isn’t bound to a particular service and its entities. It gives access to all entities defined in the CDS model of the CAP application, as long as they’re stored on the database. It’s possible to register event handlers for the CRUD events on the persistence service.
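An event handler can be registered on the persistence service like on any other service. A rough sketch using the Spring integration (the handler class and entity name are illustrative):

```java
@Component
@ServiceName(PersistenceService.DEFAULT_NAME)
public class PersistenceServiceHandler implements EventHandler {

    // runs after the persistence service has read data from the database
    @After(event = CdsService.EVENT_READ, entity = "bookshop.Books")
    public void afterBooksRead(CdsReadEventContext context) {
        // inspect or post-process context.getResult() here
    }

}
```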

The CDS Data Store

The Data Store API is used to execute CQN statements against the underlying data store (typically a database). It’s a technical component for executing CDS QL statements. The CDS Data Store is used to implement the persistence service, but is also available independently of the CAP Java SDK. It’s not a service and therefore isn’t based on events and event handlers.

The CdsDataStore API is similar to the CdsService API. The only difference is that the run method is called execute.

CdsDataStore dataStore = ...;
Select query = Select.from("bookshop.Books").where(b -> b.get("ID").eq(17));
Result result = dataStore.execute(query);

Use the CdsDataStore API to set user session context information. Utilize the SessionContext API, which follows a builder pattern, as shown below.

SessionContext sessionContext = SessionContext.create()
    .setUserContext(UserContext.create().setLocale(Locale.US).build()).build();
dataStore.setSessionContext(sessionContext);

When implementing a CAP application, using the PersistenceService is preferred over the CDS Data Store.

Draft Services

As soon as a single entity within a service is draft enabled, the CDS service additionally implements the DraftService interface. It provides an API layer around the draft-specific events and allows you to create new draft entities, patch, cancel, or save them, and put active entities back into edit mode. These APIs and events only operate on entities in draft mode. The run APIs provided by the CDS service operate on active entities only. There’s one exception to this behavior, the READ event: when reading from a draft service, active and inactive entities are both queried and the results are combined.

Important: The persistence service isn’t draft-aware. Use the respective CDS service when running draft-aware queries.

The following example shows the usage of the draft-specific APIs.

import static bookshop.Bookshop_.ORDERS;

DraftService adminService = ...;

// create draft
Orders order = adminService.newDraft(Insert.into(ORDERS)).single(Orders.class);

// set values

// patch draft
adminService.patchDraft(Update.entity(ORDERS).data(order)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(false))));

// save draft
CqnSelect orderDraft = Select.from(ORDERS)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(false)));
adminService.saveDraft(orderDraft);

// put draft back to edit mode
CqnSelect orderActive = Select.from(ORDERS)
    .where(o -> o.ID().eq(order.getId()).and(o.IsActiveEntity().eq(true)));
adminService.editDraft(orderActive, true);

// read entities in draft mode and activated entities -> o.ID().eq(order.getId())));