Integrity and confidentiality of data transferred between any communication endpoints need to be guaranteed. In particular, this holds true for communication between client and server (public zone resp. platform zone), but also for service-to-service communication (within a platform zone). That means the communication channels are established in a way that rules out undetected data manipulation or disclosure.
SAP BTP exclusively establishes encrypted communication channels based on HTTPS/TLS as shown in the architecture overview and hence fulfills the requirements out of the box. For all deployed (CAP) applications and platform services, the platform's API gateway resp. ingress router provides TLS endpoints that accept incoming requests and forward them to the backing services via HTTP. The HTTP endpoints of microservices are only accessible to the router in terms of network technology (perimeter security) and therefore aren't visible to clients in the public and platform zone. Likewise, microservices serve only a single network port, which the platform has opened for the hosting container.
The router endpoints are configured with an up-to-date TLS protocol version and a state-of-the-art cipher suite. Server authentication is given by X.509 server certificates signed by a trusted certificate authority.
It's mandatory for public clients to authenticate the server and to verify the server's identity by matching the target host name with the host name in the server certificate.
As platform services and other applications deployed to BTP are only accessible via exposed TLS router endpoints, outbound connections are automatically secured as well. Consequently, technical clients have to validate the server certificate for proper server authentication. Here, too, CAP application developers don't need to deal with HTTPS/TLS connection setup provided the client code is built on CAP offerings such as HANA Cloud Service or CloudSDK integration.
The CAP application needs to ensure adequate protection of secrets that are injected into CAP microservices, for example, credentials of bound service instances.
Depending on the target platform, closely coupled microservices of the application zone might also communicate via trusted network channels instead of using outbound connections. For instance, a CAP service could communicate to a CAP sidecar, which is deployed to the same container via localhost HTTP connection.
CAP allows alternative communication channels to be used, but application operators are responsible for setting them up in a secure manner.
CAP applications don't have to deal with TLS, communication encryption, or certificates, for inbound as well as outbound connections.
Reducing attack surface by filtering communication from or to public zone increases the overall security protection level. By default, the platform comes with a standard set of services and configurations to protect network communication building on security features of the underlying hyperscaler.
Measures to further restrict web access to your application can be applied at platform level and aren't offered by CAP. For instance, CF Route service can be used to implement route-specific restriction rules.
Non-public resources may only be accessed by authenticated users. Hence, authentication plays a key role in product security on different levels:
Business users consume the application via web interface. In multitenant applications, they come from different subscriber tenants that need to be isolated from each other.
Platform users operate the application and have privileged access to its components on OS level (containers, configurations, logs etc.). Platform users come from the provider tenant.
Managing user pools, providing a logon flow, and processing authentication are complex and highly security-critical tasks that shouldn't be tackled by applications. Instead, applications should rely on an identity service provided by the platform which is seamlessly integrated by CAP.
User authentication flows (OpenID Connect), for example, multifactor authentication
Federation of custom identity providers (IdPs)
Password and session policies etc.
The central platform service provides applications with a large set of industry-proven security features, which is why applications don't have to develop their own extensions and run the risk of security flaws.
CAP doesn't require any specific authentication strategy, but it provides out of the box integration with the platform identity service. On configured authentication, all CAP endpoints are authenticated by default.
❗ CAP applications need to ensure that an appropriate authentication method is configured. It's highly recommended to establish integration tests to safeguard a valid configuration.
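In a CAP Node.js project, for instance, the authentication strategy can be pinned in the cds configuration of package.json. A minimal sketch, assuming a bound XSUAA service instance for production:

```json
{
  "cds": {
    "requires": {
      "auth": { "kind": "xsuaa" }
    }
  }
}
```

With this in place, all CAP endpoints reject unauthenticated requests by default, which an integration test can then verify.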
CAP microservices consume remote services and hence need to be authenticated as technical clients as well. Similar to request authentication, CAP saves applications from having to implement a secure setup of service-to-service communication:
CAP microservices require authentication of all requests, but they don't support logon flows for UI clients. Being stateless, they don't establish a session with the client either, so login information such as an OAuth 2.0 token needs to be passed in each server request.
To close this gap, UI-based CAP applications can use an Application Router instance or service as reverse proxy as depicted in the diagram. The Application Router redirects the login to the identity service, fetches an OAuth2 token, and stores it into a secure session cookie.
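A route configuration in the Application Router's xs-app.json might look as follows. This is an illustrative sketch; `srv-api` is an assumed destination name pointing to the CAP backend:

```json
{
  "authenticationMethod": "route",
  "routes": [
    {
      "source": "^/catalog/(.*)$",
      "target": "$1",
      "destination": "srv-api",
      "authenticationType": "xsuaa",
      "csrfProtection": true
    }
  ]
}
```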
❗ The Application Router endpoints don't hide CAP endpoints in the service backend. Hence, authentication is still mandatory for CAP microservices.
To run a CAP application that authenticates users and consumes remote services, it isn't required to manage any secrets such as keys, tokens, or passwords. CAP doesn't store any of them either, but relies on platform injection mechanisms or destinations.
According to the segregation-of-duties paradigm, user administrators need to control how different users may interact with the application. Critical combinations of authorizations must be avoided. Basically, access rules for business users differ from those for platform users.
To align with the principle of least privilege, applications need to enforce fine-grained access control for business users from the subscriber tenants.
Depending on the business scenario, users need to be restricted to the operations they may perform on server resources, for example, reading an entity collection. Moreover, they might also be limited to a subset of data entries, that is, they may only operate on a filtered view of the data. The set of rules that apply to a user reflects a specific conceptual role that describes the interaction with the application to fulfill a business scenario. Obviously, the business roles depend on the scenarios and hence need to be defined by the application developers.
Enforcing authorization rules at runtime is highly security-critical and shouldn't be implemented by the application as this would introduce the risk of security flaws. Instead, CAP authorizations follow a declarative approach allowing applications to design comprehensive access rules in the CDS model.
Resources in the model such as services or entities can be restricted to users that fulfill specific conditions as declared in @requires or @restrict annotations. According to the declarations, server-side authorization enforcement is guaranteed for all requests. It's executed just before accessing the corresponding resources.
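For illustration, such declarations could look as follows in CDS (service, entity, and role names are made up for this sketch):

```cds
service CatalogService @(requires: 'authenticated-user') {
  @restrict: [
    { grant: 'READ', to: 'Viewer', where: 'country = $user.country' },
    { grant: ['CREATE', 'UPDATE'], to: 'Admin' }
  ]
  entity Orders as projection on db.Orders;
}
```

Here, all service endpoints require an authenticated user, reads on Orders are additionally filtered by a user attribute, and write operations are reserved for a dedicated role.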
❗ By default, CAP services and entities aren't authorized. Application developers need to design and test access rules according to the business need.
To verify CAP authorizations in your model, it's recommended to use CDS lint rules.
The rules prepared by application developers are applied to business users according to grants given by the subscriber's user administrator, that is, they're applied tenant-specifically.
CAP authorizations can be defined depending on user claims such as XSUAA scopes or attributes, which are deployed by application developers and granted by the user administrator of the subscriber. Hence, CAP provides seamless integration with the central identity service without technical lock-in.
You can generate the xs-security.json descriptor file of the application's XSUAA instance by executing cds add xsuaa in the project root folder. The XSUAA scopes, roles, and attributes are derived from the CAP authorization model.
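The generated descriptor might contain entries along these lines (a simplified, hypothetical excerpt; names depend on your project):

```json
{
  "xsappname": "bookshop",
  "tenant-mode": "shared",
  "scopes": [
    { "name": "$XSAPPNAME.Admin", "description": "Administrate catalog" }
  ],
  "role-templates": [
    { "name": "Admin", "scope-references": ["$XSAPPNAME.Admin"] }
  ]
}
```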
CAP authorization enforcement doesn't automatically log successful and unsuccessful authorization checks. Applications need to add corresponding custom handlers to support it.
In general, responses created by standard CAP handlers and services are created on a need-to-know basis. This means authorized users only receive server information according to their privilege. Therefore, business users won't gain information about server host names, versions of application server components, generated queries etc.
Based on the CDS model and the configuration of CDS services, the CAP runtime exposes the following endpoints:
Internal, technical user1, or technical roles cds.ExtensionDeveloper resp. cds.UIFlexDeveloper
1 The microservice running the MTXS CAP service needs to be deployed to the application zone and hence has established trust with the CAP application client, for instance, given by a shared XSUAA instance.
Authentication for a CAP sidecar needs to be configured just like any other CAP application.
❗ Ensure that technical roles such as cds.Subscriber, mtcallback, or emcallback are never included in business roles.
Similar to business consumption, different scenarios apply on the operator level that need to be separated by dedicated access rules: deployment resp. configuration, monitoring, support, audit logs etc. CAP doesn't cover authorization of platform users. Please refer to the security documentation of the underlying SAP BTP runtime environment:
Multitenant SaaS applications need to take care of security aspects on a higher level. Different subscriber tenants share the same runtime stack to interact with the CAP application. Ideally, from the perspective of a single tenant, the runtime should look like a self-contained virtual system that doesn't interfere with any other tenant.
All directly or indirectly involved services that process a business request need to provide isolation with regard to several dimensions:
Having configured multitenancy in CAP, when serving a business request, CAP automatically targets an isolated HDI container dedicated to the request tenant to execute DB statements. Here, CAP's data query API based on CQN is orthogonal to multitenancy, that is, custom CAP handlers can be implemented agnostic to multitenancy.
During the tenant onboarding process, CAP triggers the HDI container creation via SAP HANA Cloud Services. The containers have separate DB schemas and dedicated technical DB users for access. CAP guarantees that code for business requests runs on a DB connection opened for the technical user of the tenant's container.
Although CAP microservices are stateless, the CAP Java runtime (including generic handlers) needs to cache data in-memory for performance reasons. For instance, filters for instance-based authorization are constructed only once and are reused in subsequent requests.
To minimize risk of a data breach by exposing transient data at runtime, the CAP Java runtime explicitly refrains from declaring and using static mutable objects in Java heap. Instead, request-related data such as the EventContext is provided via thread-local storage. Likewise, data is stored in tenant-maps that are transitively referenced by the CdsRuntime instance.
To achieve tenant-isolation, the CAP Node.js runtime dynamically adds data to the tenant's cds.model. Request-related data is propagated down the call stack (for instance cds.context).
Make sure that custom code doesn't break tenant data isolation.
Tenant-aware microservices also need to handle resource consumption of tenants, in particular with regards to CPU, memory, and network connections. Excessive use of resources requested by a single tenant could cause runtime problems for other consumers (noisy neighbor problem).
CAP helps to control resource usage:
In CAP Java, business requests run in isolated threads, so OS thread scheduling ensures a fair distribution of CPU shares.
In CAP Node.js, requests are processed in fine-granular steps (CAP handlers) to avoid disproportionate blocking times of the event loop.
By default, tenants have dedicated DB connection pools.
Make sure that custom code doesn't introduce excessive memory or CPU consumption within a single request.
Because OS resources are strictly limited in a virtualized environment, a single microservice instance can handle the load of a limited set of tenants only. Adequate sizing of your microservice is mandatory, that is, adjusting memory settings, connection pool sizes, request size limits etc. according to the business needs.
Last but not least, you need to implement a scaling strategy to meet increasing load requirements with additional microservice instances.
❗ Sizing and scaling is up to application developers and operators. CAP default values aren't suitable for all applications.
CAP's intrinsic data querying engine is immune to SQL injection introduced by query parameter values derived from malicious user input. CQL statements are transformed into prepared statements that are executed on SQL databases such as SAP HANA. Be aware that injection is still possible even via CQL when the query structure (e.g. target entity, columns etc.) is based on user input:
```java
String entity = <from user input>;
String column = <from user input>;
validate(entity, column); // validate entity and column, e.g. compare with positive list
Select.from(entity).columns(b -> b.get(column));
```

```js
const entity = <from user input>
const column = <from user input>
validate(entity, column) // validate entity and column, e.g. compare with positive list
SELECT.from(entity).columns(column)
```
Be careful with custom code that creates or modifies CQL queries. Additional input validation is needed whenever the query structure depends on the request's input.
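Such a positive-list check could be sketched as follows in Node.js (entity and column names are illustrative, not from an actual model):

```javascript
// Positive list of entities and columns that may appear in dynamic queries.
// Anything not listed is rejected outright instead of being sanitized.
const ALLOWED = {
  Books: ['ID', 'title', 'author'],
  Authors: ['ID', 'name'],
};

function validate(entity, column) {
  if (!Object.hasOwn(ALLOWED, entity)) {
    throw new Error(`Entity not allowed: ${entity}`);
  }
  if (!ALLOWED[entity].includes(column)) {
    throw new Error(`Column not allowed: ${column}`);
  }
  return { entity, column };
}
```

Rejecting unknown names is preferable to escaping, since entity and column identifiers can't be parameterized in prepared statements.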
Cross-Site Scripting (XSS) is used by attackers to inject a malicious script, which is executed in the browser session of an unsuspecting user. By default, there are some protection mechanisms in place. For instance, the CAP OData V4 adapter renders responses with content type application/json, which prevents the browser from misinterpreting the content. On the client side, SAPUI5 provides input validation for all typed element properties and automatic output encoding in all standard controls.
Untrusted data being transferred may contain malware. SAP Malware Scanning Service can scan provided input streams for viruses and is regularly updated.
❗ Currently, CAP applications need to add custom handlers to scan data being uploaded or downloaded.
Path traversal attacks aim to access parts of the server's file system outside the web root folder. As part of the application zone, an Application Router serves the static UI content of the application. The CAP microservice doesn't need to serve web content from the file system. Apart from that, the underlying web server frameworks such as Spring or Express already have adequate protection mechanisms in place.
CAP Node.js offers a CRLF-safe logging API that should be used for application logs.
❗ Currently, CAP applications need to take care of escaping user data that is used as input for application logging. It's recommended to use an existing encoder such as OWASP ESAPI.
Deserialization of untrusted data can lead to serious exploits including remote code execution. The OData adapter converts JSON payload into an object representation. Here it follows a hardened deserialization process where the deserializer capabilities (e.g. no default types in Jackson) are restricted to a minimum. A strong input validation based on EDMX model is done as well. Moreover, deserialization errors terminate the request and are tracked in the application log.
In general, to achieve perfect injection resistance, applications should have input validation, output encoding, and a proper Content-Security-Policy in place.
CAP provides built-in support for input validation. Developers can use the @assert annotation to define field-specific input checks.
❗ Applications need to validate or sanitize all input variables according to the business context.
With respect to output encoding, CAP OData adapters have proper URI encoding for all resource locations in place. Moreover, OData validates the JSON response according to the given EDMX schema. In addition, client-side protection is given by SAPUI5 standard controls.
Applications should meet basic Content Security Policy (CSP) compliance rules to further limit the attack vector on client side. CSP-compatible browsers only load resources from web locations that are listed in the allowlist defined by the server. Content-Security-Policy header can be set as route-specific response header in the Application Router. SAPUI5 is CSP-compliant as well.
❗ Applications have to configure Content Security Policy to meet basic compliance.
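One way to set such a header centrally is the Application Router's httpHeaders environment variable. A minimal, assumed policy for illustration; tighten the allowlist to your application's actual sources:

```json
[
  {
    "Content-Security-Policy": "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'"
  }
]
```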
Cross-Site Request Forgery (CSRF) attacks trick end users into executing unwanted actions on the server while they have a valid web session established. By default, the Application Router, which manages the session with the client, enforces CSRF token protection (on the basis of x-csrf-token headers). Hence, CAP services don't have to deal with CSRF protection as long as they don't maintain sessions with the client. SAPUI5 supports CSRF tokens on the client side out of the box.
Denial-of-service (DoS) attacks attempt to reduce service availability for legitimate users. This can happen by erroneous server behavior upon a single large or a few specially crafted malicious requests that bind an excessive amount of shared OS resources such as CPU, memory, or network connections.
Since OS resource allocations are distributed over the entire request, DoS prevention needs to be addressed at all layers of the runtime stack:
The used web server frameworks such as Spring/Tomcat or Express start with reasonable default limits, e.g.:
Maximum size of the HTTP request header.
Maximum size of the HTTP request body.
Maximum queue length for incoming connection requests.
Maximum number of connections that the server accepts and processes at any given time.
Connection timeout.

Additional size limits and timeouts (request timeout) are established by the reverse proxy components, API Gateway and Application Router.
If you want to apply an application-specific sizing, consult the corresponding framework documentation.
Moreover, CAP adapters automatically introduce query result pagination in order to limit memory peaks (customizable with @cds.query.limit). The total number of requests in OData batches can be limited by application configuration.
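For example, paging limits can be declared per entity in the CDS model (service and entity names as well as the values are illustrative):

```cds
annotate CatalogService.Books with @cds.query.limit: { default: 20, max: 100 };
```

Clients then receive at most 100 rows per page even if they request more, and 20 rows if they don't specify a page size.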
Settings `cds.odataV4.batch.maxRequests` resp. `cds.odataV2.batch.maxRequests` specify the corresponding limits.
❗ CAP applications have to limit the number of $expands per request in a custom handler. Also, the maximum number of requests per $batch request needs to be configured via the settings mentioned above.
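In CAP Java, these limits go into the application configuration, for instance in application.yaml (the values are illustrative):

```yaml
cds:
  odataV4:
    batch:
      maxRequests: 100
  odataV2:
    batch:
      maxRequests: 100
```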
Open transactions are expensive as they bind many resources such as a database connection as well as memory buffers. To minimize the amount of time a transaction must be kept open, the CAP runtime offers an Outbox Service that allows remote calls issued during the business transaction to be scheduled asynchronously. Hence, the time to process a business request that requires a remote call (such as to an audit log server or messaging broker) is minimized and independent of the response time of the remote service.
Avoid synchronous requests to remote systems during a transaction.
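The underlying transactional-outbox idea can be sketched as follows. This is a plain in-memory illustration of the pattern, not the CAP Outbox Service API:

```javascript
// Transactional outbox sketch: remote calls are only recorded while the
// business transaction is open and dispatched after commit, so the
// transaction never waits on a remote service's response time.
class Outbox {
  constructor(dispatch) {
    this.dispatch = dispatch; // async function performing the actual remote call
    this.pending = [];
  }
  schedule(message) {
    this.pending.push(message); // cheap, in-transaction bookkeeping only
  }
  async flushAfterCommit() {
    const batch = this.pending.splice(0); // take and clear pending messages
    for (const message of batch) {
      await this.dispatch(message); // runs outside the business transaction
    }
  }
}
```

A production-grade outbox would persist pending messages in the same database transaction to survive crashes; the sketch only shows the decoupling of scheduling and dispatch.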
As already outlined, database connections are an expensive resource. To limit overall usage, the CAP runtime creates connection pools per subscriber tenant by default. Similarly, the DB driver settings such as SQL query timeout and buffer size have reasonable default values.
As outlined before, even a well-sized microservice instance can't prevent service downtime when excessive workload initiated by an attacker exceeds the available capacity. Rate limiting is a possible countermeasure to restrict the frequency of calls of a client.
❗ Applications need to establish an adequate rate limiting strategy.
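For illustration, a fixed-window limiter per client could look like this. It's an in-memory sketch only; production setups would rather use a gateway feature or a shared store so that all instances enforce the same limits:

```javascript
// Fixed-window rate limiter: at most `limit` requests per client per window.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counters = new Map(); // clientId -> { windowStart, count }
  }
  allow(clientId, now = Date.now()) {
    const entry = this.counters.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request of a new window: reset the counter.
      this.counters.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count += 1;
      return true;
    }
    return false; // caller should respond with HTTP 429 Too Many Requests
  }
}
```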
There's also the possibility to introduce request filtering and rate limiting on platform level via Route Service. It has the advantage that requests can be controlled centrally before they touch application service instances.
In addition, the number of instances needs to be scaled horizontally according to the current load. This can be achieved automatically by consuming the Application Autoscaler.
There are additional attack vectors to consider. For instance, naive URL handling in server endpoints frequently introduces security gaps. Luckily, CAP applications don't have to implement HTTP/URL processing on their own as CAP offers sophisticated protocol adapters such as OData V2/V4 that have the necessary security validations in place. The adapters also transform the HTTP requests into a corresponding CQN statement. Access control is performed at CQN level according to the CDS model, and hence HTTP Verb Tampering attacks are avoided. HTTP method override via the X-Http-Method-Override or X-Http-Method header is also not accepted by the runtime.
The OData protocol allows field values to be encoded in query parameters of the request URL or in response headers. This is, for example, used to specify:
Applications need to ensure by means of CDS modeling that fields reflecting sensitive data are excluded and don't appear in URLs.
It's recommended to serve all application endpoints via CAP adapters. Securing custom endpoints is left to the application.
In addition, CAP runs on a virtual machine with a managed heap that protects from common memory corruption vulnerabilities such as buffer overflow or range overflows.
CAP also brings some tools to effectively reduce the attack surface for race condition vulnerabilities. These might be exposed when the state of resources can be manipulated concurrently and a consumer faces an unexpected state. CAP provides basic means of concurrency control on different layers, e.g. ETags and pessimistic locks. Moreover, messages received from the message queue are always processed in order.
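Optimistic concurrency via ETags, for instance, can be enabled in the CDS model; a sketch with made-up entity and element names:

```cds
entity Orders {
  key ID : UUID;
  @odata.etag
  modifiedAt : Timestamp @cds.on.update : $now;
}
```

Update and delete requests then have to carry the current ETag (If-Match header), and stale writes are rejected instead of silently overwriting concurrent changes.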
Applications have to ensure a consistent data processing taking concurrency into account.