Executing CQL Statements
API to execute CQL statements on services accepting CQN queries.
Query Execution
CDS Query Language (CQL) statements can be executed using the run method of any service that accepts CQN queries:
CqnService service = ...
CqnSelect query = Select.from("bookshop.Books")
.columns("title", "price");
Result result = service.run(query);
Parameterized Execution
Queries, as well as update and delete statements, can be parameterized with named or indexed parameters. Update and delete statements with named parameters can be executed in batch mode using multiple parameter sets.
Named Parameters
The following statement uses two parameters named id1 and id2. The parameter values are given as a map:
import static com.sap.cds.ql.CQL.param;
CqnDelete delete = Delete.from("bookshop.Books")
.where(b -> b.get("ID").eq(param("id1"))
.or(b.get("ID").eq(param("id2"))));
Map<String, Object> paramValues = new HashMap<>();
paramValues.put("id1", 101);
paramValues.put("id2", 102);
Result result = service.run(delete, paramValues);
WARNING
The parameter value map must be of type Map<String, Object>, otherwise the map is interpreted as a single positional/indexed parameter value, which results in an error.
Indexed Parameters
The following statement uses two indexed parameters defined through param(i):
import static com.sap.cds.ql.CQL.param;
CqnDelete delete = Delete.from("bookshop.Books")
.where(b -> b.get("ID").in(param(0), param(1)));
Result result = service.run(delete, 101, 102);
Before the statement is executed, the values 101 and 102 are bound to the defined parameters.
Batch Execution
Update and delete statements with named parameters can be executed as a batch with multiple parameter sets. The named parameters example from above can be expressed as a batch delete with a single parameter and two value sets:
import static java.util.Arrays.asList;
import static java.util.Collections.singletonMap;
CqnDelete delete = Delete.from("bookshop.Books").byParams("ID");
Map<String, Object> paramSet1 = singletonMap("ID", 101);
Map<String, Object> paramSet1 = singletonMap("ID", 102);
Result result = service.run(query, asList(paramSet1, paramSet2));
long deletedRows = result.rowCount();
From the result of a batch update/delete, the total number of updated/deleted rows can be determined with rowCount(), while rowCount(batchIndex) returns the number of updated/deleted rows for a specific parameter set of the batch. The number of batches can be retrieved via the batchCount() method. Batch updates also return the update data.
The maximum batch size for update and delete can be configured via cds.sql.max-batch-size and has a default of 1000.
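Continuing the batch delete above, the per-batch counts can be read as sketched here; the int return type of batchCount() is an assumption, the rest follows the methods described in this section:
long deletedForFirstSet = result.rowCount(0);  // rows deleted for the first parameter set
long deletedForSecondSet = result.rowCount(1); // rows deleted for the second parameter set
int batches = result.batchCount();             // number of executed batches: 2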
Querying Parameterized Views on SAP HANA
To query views with parameters on SAP HANA, you need to build a select statement and execute it with the corresponding named parameters.
Let's consider the following Book entity and a parameterized view that returns the ID and title of Books with number of pages less than numOfPages:
entity Book {
key ID : Integer;
title : String;
pages : Integer;
}
entity BookView(numOfPages : Integer) as SELECT FROM Book {ID, title} WHERE pages < :numOfPages;
The following Java query returns books with fewer than 200 pages:
CqnSelect query = Select.from("BookView");
Map<String, Object> params = Collections.singletonMap("numOfPages", 200);
Result result = service.run(query, params);
Adding Query Hints for SAP HANA
To add a hint clause to a statement, use the hints method and prefix the SAP HANA hints with hdb.:
CqnSelect query = Select.from(BOOKS).hints("hdb.USE_HEX_PLAN", "hdb.ESTIMATION_SAMPLES(0)");
WARNING
Hints prefixed with hdb. are directly rendered into SQL for SAP HANA and therefore must not contain external input!
Pessimistic Locking
Use database locks to ensure that data returned by a query isn't modified in a concurrent transaction. Exclusive locks block concurrent modification and the creation of any other lock. Shared locks, however, only block concurrent modifications and exclusive locks but allow the concurrent creation of other shared locks.
To lock data:
- Start a transaction (either manually or let the framework take care of it).
- Query the data and set a lock on it.
- Perform the processing and, if an exclusive lock is used, modify the data inside the same transaction.
- Commit (or roll back) the transaction, which releases the lock.
To query and lock the data until the transaction is completed, call the lock() method and optionally set the timeout parameter.
In the following example, a book with ID 1 is selected and locked until the transaction is finished. Thus, one can avoid situations where other threads or clients try to modify the same data in the meantime:
// Start transaction
// Obtain and set a write lock on the book with id 1
service.run(Select.from("bookshop.Books").byId(1).lock());
...
// Update the book locked earlier
Map<String, Object> data = Collections.singletonMap("title", "new title");
service.run(Update.entity("bookshop.Books").data(data).byId(1));
// Finish transaction
The lock() method has an optional parameter timeout that indicates the maximum number of seconds to wait for the lock acquisition. If a lock can't be obtained within the timeout, a CdsLockTimeoutException is thrown. If timeout isn't specified, a database-specific default timeout will be used.
The parameter mode lets you specify whether an EXCLUSIVE or a SHARED lock should be set.
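A minimal sketch combining both parameters is shown below; the exact lock() overload and the CqnLock.Mode enum used here are assumptions based on the timeout and mode parameters described above:
import com.sap.cds.ql.cqn.CqnLock.Mode; // assumed location of the lock mode enum
// Acquire a shared lock on the book with ID 1, waiting at most 5 seconds;
// a CdsLockTimeoutException is thrown if the lock can't be obtained in time
service.run(Select.from("bookshop.Books").byId(1).lock(Mode.SHARED, 5));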
Data Manipulation
The CQN API lets you manipulate data by executing insert, update, delete, or upsert statements.
Update
The update operation can be executed as follows:
Map<String, Object> book = new HashMap<>();
book.put("title", "CAP");
CqnUpdate update = Update.entity("bookshop.Books").data(book).where(b -> b.get("ID").eq(101));
long updateCount = service.run(update).rowCount();
Working with Structured Documents
It's possible to work with structured data as the insert, update, and delete operations cascade along compositions.
Cascading over Associations
By default, insert, update, and delete operations cascade over compositions only. For associations, this can be enabled using the @cascade annotation.
WARNING
Cascading operations over associations isn't considered good practice and should be avoided.
Annotating an association with @cascade: {insert, update, delete} enables deep updates/upserts through this association. Given the following CDS model with two entities and an association between them, only insert and update operations are cascaded through author:
entity Book {
key ID : Integer;
title : String;
@cascade: {insert, update}
author : Association to Author;
}
entity Author {
key ID : Integer;
name : String;
}
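To illustrate what cascading through the author association means, here is a minimal sketch of a deep update; the qualified entity name bookshop.Book and the key values are assumptions for illustration:
// Deep update: the book data contains nested author data, which is cascaded
// through the association because of @cascade: {insert, update}
Map<String, Object> author = new HashMap<>();
author.put("ID", 201);
author.put("name", "Jane Doe");
Map<String, Object> book = new HashMap<>();
book.put("ID", 101);
book.put("title", "CAP");
book.put("author", author);
CqnUpdate update = Update.entity("bookshop.Book").data(book);
service.run(update);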
WARNING
For inactive draft entities, @cascade annotations are ignored.
WARNING
The @cascade annotation is not respected by foreign key constraints on the database. To avoid unexpected behaviour, you might have to disable a foreign key constraint with @assert.integrity:false.
Deep Insert / Upsert
Insert and upsert statements for an entity have to include the keys and (optionally) data for the entity's composition targets. The targets are inserted or upserted along with the root entity.
Iterable<Map<String, Object>> books = ...;
CqnInsert insert = Insert.into("bookshop.Books").entries(books);
Result insertResult = service.run(insert);
CqnUpsert upsert = Upsert.into("bookshop.Books").entries(books);
Result upsertResult = service.run(upsert);
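As a sketch of what such a deep entry could look like, the following map inserts an order together with its items; the bookshop.Orders entity, its items composition, and the item element names are assumptions chosen to match the cascading-delete example below:
// One order entry with nested data for its items composition;
// the items are inserted along with the root order
Map<String, Object> item = new HashMap<>();
item.put("ID", 1);
item.put("book_ID", 101);
item.put("quantity", 2);
Map<String, Object> order = new HashMap<>();
order.put("OrderNo", 1000);
order.put("items", Arrays.asList(item));
CqnInsert deepInsert = Insert.into("bookshop.Orders").entry(order);
service.run(deepInsert);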
Cascading Delete
The delete operation is cascaded along the entity's compositions. All composition targets that are reachable from the (to be deleted) entity are deleted as well.
The following example deletes the order with ID 1000 including all its items:
CqnDelete delete = Delete.from("bookshop.Orders").matching(singletonMap("OrderNo", 1000));
long deleteCount = service.run(delete).rowCount();
Resolvable Views and Projections
The CAP Java SDK aims to resolve statements on non-complex views and projections to their underlying entity. When delegating queries between Application Services and Remote Services, statements are resolved to the entity definitions of the targeted service. When using the Persistence Service, only modifying statements are resolved before database queries are executed. This allows executing Insert, Upsert, Update, and Delete operations on database views. For Select statements, database views are always leveraged, if available.
Views and projections can be resolved if the following conditions are met:
- The view definition does not use any clause other than columns and excluding.
- The projection includes all key elements, with the exception of insert operations with generated UUID keys.
- The projection includes all elements with a not null constraint, unless they have a default value.
- The projection must not include calculated fields when running queries against a remote OData service.
- The projection must not include path expressions using to-many associations.
For Insert or Update operations, if the projection contains functions or expressions, these values are ignored. Path expressions navigating to-one associations can be used in projections, as shown by the Header view in the following example. The Header view includes the element country from the associated entity Address.
// Supported
entity Order as projection on bookshop.Order;
entity Order as projection on bookshop.Order { ID, status as state };
entity Order as projection on bookshop.Order excluding { status };
entity Header as projection on bookshop.OrderHeader { key ID, address.country as country };
If a view is too complex to be resolved by the CDS runtime, the statement remains unmodified. Views that use join, union, or a where clause, for example, cannot be resolved by the CDS runtime.
- For the Persistence Service, this means the runtime attempts to execute the write operation on the database view. Whether this execution is possible is database dependent.
- For Application Services and Remote Services, the targeted service will reject the statement.
Example of a view that can't be resolved:
// Unsupported
entity DeliveredOrders as select from bookshop.Order where status = 'delivered';
entity Orders as SELECT from bookshop.Order inner join bookshop.OrderHeader on Order.header.ID = OrderHeader.ID { Order.ID, Order.items, OrderHeader.status };
Using I/O Streams in Queries
As described in section Predefined Types, it's possible to stream the data if the element is annotated with @Core.MediaType. The following example demonstrates how to allocate the stream for the element coverImage, pass it through the API to an underlying database, and close the stream.
Entity Books has an additional annotated element coverImage : LargeBinary:
entity Books {
key ID : Integer;
title : String;
...
@Core.MediaType
coverImage : LargeBinary;
}
Java snippet for creating element coverImage from file IMAGE.PNG using java.io.InputStream:
// Transaction started
Result result;
try (InputStream resource = getResource("IMAGE.PNG")) {
Map<String, Object> book = new HashMap<>();
book.put("title", "My Fancy Book");
book.put("coverImage", resource);
CqnInsert insert = Insert.into("bookshop.Books").entry(book);
result = service.run(insert);
}
// Transaction finished
Using Native SQL
CAP Java doesn't have a dedicated API to execute native SQL statements. However, when using Spring as the application framework, you can leverage Spring's features to execute native SQL statements. See Execute SQL statements with Spring's JdbcTemplate for more details.
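A minimal sketch with Spring's JdbcTemplate could look as follows; the table name BOOKSHOP_BOOKS is an assumption based on the default CDS-to-database naming, and a configured DataSource is presumed:
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
@Autowired
JdbcTemplate jdbcTemplate;
// Execute a native SQL query and read the result rows as maps
List<Map<String, Object>> rows = jdbcTemplate.queryForList(
    "SELECT ID, TITLE FROM BOOKSHOP_BOOKS WHERE STOCK < ?", 10);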
Query Result Processing
The result of a query is abstracted by the Result interface, which is an iterable of Row. A Row is a Map<String, Object> with additional convenience methods and extends CdsData.
You can iterate over a Result:
Result result = ...
for (Row row : result) {
System.out.println(row.get("title"));
}
Or process it with the Stream API:
Result result = ...
result.forEach(r -> System.out.println(r.get("title")));
result.stream().map(r -> r.get("title")).forEach(System.out::println);
If your query is expected to return exactly one row, you can access it with the single method:
Result result = ...
Row row = result.single();
If the query may or may not return a row, like a find by id would, you can obtain the optional first row using first:
Result result = ...
Optional<Row> row = result.first();
row.ifPresent(r -> System.out.println(r.get("title")));
The Row's getPath method supports paths to simplify extracting values from nested maps. This also simplifies extracting values from results with to-one expands using the generic accessor. Paths with collection-valued segments and infix filters are not supported.
CqnSelect select = Select.from(BOOKS).columns(
b -> b.title(), b -> b.author().expand()).byId(101);
Row book = dataStore.execute(select).single();
String author = book.getPath("author.name");
Null Values
A result row may contain null values for an element of the result if no data is present for the element in the underlying data store.
Use the get methods to check if an element is present in the result row:
if (row.get("name") == null) {
// handle missing value for name
}
Avoid using containsKey to check for the presence of an element in the result row. Also, when iterating the elements of the row, keep in mind that the data may contain null values:
row.forEach((k, v) -> {
if (v == null) {
// handle missing value for element k
}
});
Typed Result Processing
With the generic accessors, element names and their types are checked only at runtime. Alternatively, you can use interfaces to get typed access to the result data:
interface Book {
String getTitle();
Integer getStock();
}
Row row = ...
Book book = row.as(Book.class);
String title = book.getTitle();
Integer stock = book.getStock();
Interfaces can also be used to get a typed list or stream over the result:
Result result = ...
List<Book> books = result.listOf(Book.class);
Map<String, Integer> titleToStock =
result.streamOf(Book.class).collect(Collectors.toMap(Book::getTitle, Book::getStock));
For the entities defined in the data model, CAP Java SDK can generate interfaces for you through a Maven plugin.
Using Entity References from Result Rows in CDS QL Statements
For result rows that contain all key values of an entity, you get an entity reference via the ref() method. This reference addresses the entity via the key values from the result row.
// SELECT from Author[101]
CqnSelect query = Select.from(AUTHOR).byId(101);
Author authorData = service.run(query).single(Author.class);
String authorName = authorData.getName(); // data access
Author_ author = authorData.ref(); // typed reference to Author[101]
Similar for untyped results:
Row authorData = service.run(query).single();
StructuredType<?> author = authorData.ref(); // untyped reference to Author[101]
This also works for Insert and Update results:
CqnUpdate update = Update.entity(AUTHOR).data("name", "James Joyce").byId(101);
Author_ joyce = service.run(update).single(Author.class).ref();
Using entity references you can easily write CDS QL statements targeting the source entity:
// SELECT from Author[101].books { sum(stock) as stock }
CqnSelect q = Select.from(joyce.books())
.columns(b -> func("sum", b.stock()).as("stock"));
CqnInsert i = Insert.into(joyce.books())
.entry("title", "Ulysses");
CqnUpdate u = Update.entity(joyce.biography())
.data("price", 29.95);
CqnDelete d = Delete.from(joyce.books())
.where(b -> b.stock().lt(1));
Introspecting the Row Type
The rowType method allows you to introspect the element names and types of a query's Result. It returns a CdsStructuredType describing the result in terms of the Reflection API:
CqnSelect query = Select.from(AUTHOR)
.columns(a -> a.name().as("authorName"), a -> a.age());
Result result = service.run(query);
CdsStructuredType rowType = result.rowType();
rowType.elements(); // "authorName", "age"
rowType.getElement("age").getType().getQualifiedName(); // "cds.Integer"
rowType.findElement("ID"); // Optional.empty()