Best Practices
From generic Node.js best practices like dependency management and error handling to CAP-specific topics like transaction handling and testing, this guide collects tips and tricks to improve the developer experience and avoid common pitfalls, based on common customer issues. The following sections explain these best practices in detail.
Managing Dependencies
Projects using CAP need to manage dependencies to the respective tools and libraries in their package.json and/or pom.xml respectively. Follow the guidelines to make sure that you consume the latest fixes and avoid vulnerabilities and version incompatibilities. These guidelines apply to you as a consumer of reuse packages as well as a provider of such reuse packages.
Always Use the Latest Minor Releases → for Example, ^7.2.0
This applies to both @sap packages and open source ones. It ensures your projects receive the latest features and important fixes during development. It also leverages npm's dedupe to make sure bundles have a minimal footprint.
Example:
"dependencies": {
"@sap/cds": "^5.5.0",
"@sap/some-reuse-package": "^1.1.0",
"express": "^4.17.0"
}
We recommend using the caret form, for example ^1.0.2, which is also the default for npm install, as that format clearly captures the minimum patch version.
Keep Open Ranges When Publishing for Reuse
Let's explain this by looking at two examples.
Bad
Assume that you've developed a reusable package, and you consume a reuse package yourself. You decided to violate the previous rule and use exact versions in your package.json:
"name": "@sap/your-reuse-package",
"version": "1.1.2",
"dependencies": {
"@sap/cds": "3.0.3",
"@sap/foundation": "2.0.1",
"express": "4.16.3"
}
The effect would be as follows:
- Consuming projects get duplicate versions of each package they also use directly, for example, @sap/cds, @sap/foundation, and express.
- Consuming projects don't receive important fixes for the packages used in your package.json unless you also provide an update.
- It wouldn't be possible to reuse CDS models from common reuse packages (for example, this would already fail for @sap/cds/common).
Good
Therefore, the rules when publishing packages for reuse are:
- Keep the open ranges in your package.json (just don't touch them).
- Do an npm update before publishing and test thoroughly (ideally automated in your CI/CD pipeline).
- Do the vulnerability checks for your software and all open-source software used by you or by packages you used (→ Minimize Usage of Open Source Packages).
- Don't do npm shrinkwrap → see also npm's docs: "It's discouraged for library authors to publish this file, ..."
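Putting these rules together, the reuse package from the example above would simply keep the caret ranges in its package.json (the version numbers are just examples):
{
  "name": "@sap/your-reuse-package",
  "version": "1.1.2",
  "dependencies": {
    "@sap/cds": "^3.0.3",
    "@sap/foundation": "^2.0.1",
    "express": "^4.16.3"
  }
}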
TIP
If both your package and a consuming project use the same CDS models but depend on different pinned versions, loading those models fails: it's impossible to automatically merge the two versions, nor is it possible to load two independent versions. The reason is that reused models share the same single definitions.
Lock Dependencies Before Deploying
When releasing a service or an application to end consumers, use npm install or npm update to produce a package-lock.json file that freezes your dependencies. This guarantees that the release works as it did the last time you tested it and checked it for vulnerabilities.
Overall, the process for your release should include these steps:
npm config set package-lock true # enables package-lock.json
npm update # update it with latest versions
git add package-lock.json # add it to version control
# conduct all tests and vulnerability checks
The package-lock.json file in your project root freezes all dependencies and is deployed with your application. Subsequent npm installs, such as by cloud deployers or build packs, always get the same versions, which you checked upon your release.
This ensures that the deployed tool/service/app doesn't receive new vulnerabilities, for example, through updated open source packages, without you being able to apply the necessary tests as prescribed by our security standards.
Run npm update frequently to receive the latest fixes regularly
Tools like Renovate or GitHub's Dependabot can help you automate this process.
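For example, with Renovate a minimal renovate.json along these lines enables automated dependency update pull requests (the exact preset name may differ depending on your Renovate version and setup):
{
  "extends": ["config:recommended"]
}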
Minimize Usage of Open Source Packages
This rule for keeping open ranges for dependencies during development, as well as when publishing for reuse, also applies to open source packages.
Because open source packages are less reliable with respect to vulnerability checks, this means that end-of-chain projects have to ensure respective checks for all the open source packages they use directly, as well as those they 'inherit' transitively from reuse packages.
So, always take into account these rules:
- When releasing to end consumers, you always have to conduct vulnerability checks for all open source packages that you use directly or transitively (see the sketch below).
- As a provider of reuse packages, keep the usage of open source packages to a reasonable minimum.
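With recent npm versions, the built-in audit command is one way to run such checks locally and in your pipeline; whether it's sufficient depends on your security requirements:
npm audit --audit-level=high   # exits non-zero if vulnerabilities of level high or above are found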
Q: Why not freeze open source dependencies when releasing for reuse?
A: Because that would only affect directly consumed packages, while packages from transitive dependencies would still reach your consumers.
A good approach is to also provide certain features in combination with third-party packages, but to keep them, and hence the dependencies, optional; for example, express.js does this.
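For illustration, one way to model such an optional integration in a reuse package's package.json is npm's peerDependenciesMeta; the package name some-feature-lib is just a placeholder:
{
  "peerDependencies": {
    "some-feature-lib": "^2.0.0"
  },
  "peerDependenciesMeta": {
    "some-feature-lib": { "optional": true }
  }
}
At runtime, the reuse package would then require() the optional package lazily, for example inside a try/catch, and switch off the corresponding feature if it isn't installed.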
Upgrade to Latest Majors as Soon as Possible
As providers of evolving SDKs we provide major feature updates, enhancements, and improvements in 6-12 month release cycles. These updates come with an increment of major release numbers.
At the same time, we can't maintain and support unlimited numbers of branches with fixes. The following rules apply:
- Fixes and nonbreaking enhancements are made available frequently in upstream release branches (current major).
- Critical fixes also reach recent majors in a 2-month grace period.
To keep receiving fixes, adopt the latest major releases in a timely fashion in your actively maintained projects, that is, following the 6-12 month cycle.
Additional Advice
Use npm-shrinkwrap.json only if you want to publish CLI tools or other 'sealed' production packages to npm. Unlike package-lock.json, it is packaged and published to npm registries. See the npm documentation for more details.
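In that case, the file can be produced from an existing package-lock.json right before publishing:
npm shrinkwrap   # converts the existing package-lock.json into npm-shrinkwrap.json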
Securing Your Application
To keep builds as small as possible, the Node.js runtime doesn't bring any potentially unnecessary dependencies and, hence, doesn't automatically mount any express middlewares, such as the popular helmet.
However, application developers can easily mount custom or best-practice express middlewares using the bootstrapping mechanism.
Example:
// local ./server.js
const cds = require('@sap/cds')
const helmet = require('helmet')
cds.on('bootstrap', app => {
  app.use(helmet())
})
module.exports = cds.server // > delegate to default server.js
Consult sources such as Express's Production Best Practices: Security documentation for state-of-the-art application security.
Content Security Policy (CSP)
Creating a Content Security Policy (CSP) is a major building block in securing your web application.
helmet provides a default policy out of the box that you can also customize as follows:
cds.on('bootstrap', app => {
  app.use(
    helmet({
      contentSecurityPolicy: {
        directives: {
          ...helmet.contentSecurityPolicy.getDefaultDirectives(),
          // custom settings
        }
      }
    })
  )
})
Find the required directives in the OpenUI5 Content Security Policy documentation.
Cross-Site Request Forgery (CSRF) Token
Protect against cross-site request forgery (CSRF) attacks by enabling CSRF token handling through the App Router.
For a SAPUI5 (SAP Fiori/SAP Fiori Elements) developer, CSRF token handling is transparent
There's no need to program or configure anything in addition. In case the server rejects a request with status 403 and the header "X-CSRF-Token: required", the UI sends a HEAD request to the service document to fetch a new token.
Learn more about CSRF tokens and SAPUI5 in the Cross-Site Scripting documentation.
Alternatively, you can add a CSRF token handler manually.
This request must never be cacheable
If a CSRF token is cached, it can potentially be reused in multiple requests, defeating its purpose of securing each individual request. Always set appropriate Cache-Control headers to no-store, no-cache, must-revalidate, proxy-revalidate to prevent caching of the CSRF token.
Using App Router
The App Router is configured to require a CSRF token by default for all protected routes and all HTTP request methods except HEAD and GET. Thus, by adding the App Router as described in the Deployment Guide: Using App Router as Gateway, endpoints are CSRF protected.
Learn more about CSRF protection with the App Router
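For illustration, a route in the App Router's xs-app.json could look as follows; the route pattern and the destination name srv-api are placeholders, and csrfProtection is shown explicitly although it defaults to true:
{
  "routes": [
    {
      "source": "^/catalog/(.*)$",
      "target": "$1",
      "destination": "srv-api",
      "authenticationType": "xsuaa",
      "csrfProtection": true
    }
  ]
}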
Manual Implementation
On the backend side, in addition to handling the HEAD request mentioned previously, handlers for each CSRF-protected method and path should be added. In the following example, the POST method is protected.
TIP
If you use SAP Fiori Elements, requests to the backend are sent as batch requests using the POST method. In this case, these POST requests to $batch need to be protected.
As already mentioned, if the server rejects a request because of a bad CSRF token, a response with status 403 and the header "X-CSRF-Token: required" should be returned to the UI. For this purpose, the error handling in the following example is extended:
const cds = require('@sap/cds')
const express = require('express')
const cookieParser = require('cookie-parser') // cookie-parser and csurf need to be added as dependencies
const csrf = require('csurf')

const csrfProtection = csrf({ cookie: true })
const parseForm = express.urlencoded({ extended: false })

cds.on('bootstrap', app => {
  app.use(cookieParser())
    // Must: Provide actual <service endpoint>s of served services.
    // Optional: Adapt for non-Fiori Elements UIs.
    .head('/<service endpoint>', csrfProtection, (req, res) => {
      res.set({
        'X-CSRF-Token': req.csrfToken(),
        'Cache-Control': 'no-store, no-cache, must-revalidate, proxy-revalidate'
      }).send()
    })
    // Must: Provide actual <service endpoint>s of served services.
    // Optional: Adapt for non-Fiori Elements UIs.
    .post('/<service endpoint>/$batch', parseForm, csrfProtection, (req, res, next) => next())
    .use((err, req, res, next) => {
      if (err.code !== 'EBADCSRFTOKEN') return next(err)
      res.status(403).set('X-CSRF-Token', 'required').send()
    })
})
Learn more about backend coding in the csurf documentation.
Use App Router CSRF handling when scaling Node.js VMs horizontally
Handling CSRF at the App Router level ensures consistency across instances. This avoids potential token mismatches that could occur if each VM handled CSRF independently.
Cross-Origin Resource Sharing (CORS)
With Cross-Origin Resource Sharing (CORS), a server that provides resources can tell the browser which origins, such as the server hosting the UI, it trusts to access them. In addition, so-called "preflight" requests let the browser check whether the cross-origin server will process a request with a specific method and from a specific origin.
If not running in production, CAP's built-in server.js allows all origins.
Custom CORS Implementation
For production, you can add CORS to your CAP server as follows:
const ORIGINS = { 'https://example.com': 1 }
cds.on('bootstrap', app => app.use((req, res, next) => {
  if (req.headers.origin in ORIGINS) {
    res.set('access-control-allow-origin', req.headers.origin)
    if (req.method === 'OPTIONS') // preflight request
      return res.set('access-control-allow-methods', 'GET,HEAD,PUT,PATCH,POST,DELETE').end()
  }
  next()
}))
Learn more about CORS in CAP in this article by DJ Adams.
Learn more about CORS in general in the MDN Web Docs.
Configuring CORS in App Router
The App Router has full support for CORS. Thus, by adding the App Router as described in the Deployment Guide: Using App Router as Gateway, CORS can be configured in the App Router configuration.
Learn more about CORS handling with the App Router
Avoid configuring CORS in both App Router and CAP server
Configuring CORS in multiple places can lead to confusing debugging scenarios. Centralizing CORS settings in one location decreases complexity, and thus, improves security.
Availability Checks
To proactively identify problems, projects should set up availability monitoring for all the components involved in their solution.
Anonymous Ping
An anonymous ping service should be implemented with the least overhead possible. Hence, it should not use any authentication or authorization mechanism, but simply respond to whoever is asking.
From @sap/cds 7.8 onwards, the Node.js runtime provides such an endpoint for availability monitoring out of the box at /health, which returns { status: 'UP' } with status code 200.
You can override the default implementation and register a custom express middleware during bootstrapping as follows:
cds.on('bootstrap', app => app.get('/health', (_, res) => {
  res.status(200).send(`I'm fine, thanks.`)
}))
More sophisticated health checks, like database availability for example, should use authentication to prevent Denial of Service attacks!
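As a rough sketch of such a protected check, assuming a shared secret in a hypothetical HEALTH_CHECK_TOKEN environment variable and an entity from @sap/cds/common to query (adapt both to your project):
const cds = require('@sap/cds')
const { SELECT } = cds.ql

cds.on('bootstrap', app => app.get('/health/db', async (req, res) => {
  // reject anonymous callers so the check can't be abused for Denial of Service
  if (req.headers.authorization !== `Bearer ${process.env.HEALTH_CHECK_TOKEN}`) return res.status(401).end()
  try {
    const db = await cds.connect.to('db')
    await db.run(SELECT.one.from('sap.common.Languages')) // any cheap query against your own model works
    res.json({ status: 'UP' })
  } catch (e) {
    res.status(503).json({ status: 'DOWN' })
  }
}))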
Error Handling
Good error handling is important to ensure the correctness and performance of the running app and developer productivity. We will give you a brief overview of common best practices.
Error Types
We need to distinguish between two types of errors:
- Programming errors: These occur because of programming mistakes (for example, cannot read 'foo' of undefined). They need to be fixed.
- Operational errors: These occur during operation (for example, when a request is sent to an erroneous remote system). They need to be handled.
Guidelines
Let It Crash
'Let it crash' is a philosophy coming from the Erlang programming language (Joe Armstrong), which can also be (partially) applied to Node.js.
The most important aspects for programming errors are:
- Fail loudly: Do not hide errors and silently continue. Make sure that unexpected errors are correctly logged. Do not catch errors you can't handle.
- Don't program in a defensive way: Concentrate on your business logic and only handle errors if you know that they occur. Only use try/catch blocks when necessary.
Never attempt to catch and handle unexpected errors, promise rejections, etc. If it's unexpected, you can't handle it correctly. If you could, it would be expected (and should already be handled). Even though your apps should be stateless, you can never be 100% certain that any shared resource wasn't affected by the unexpected error. Hence, you should never keep an app running after such an event, especially in multi-tenant apps that bear the risk of information disclosure.
This will make your code shorter, clearer, and simpler.
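As a small, non-CAP-specific illustration of these guidelines (the config-file scenario is made up for this example): handle only the error you expect and rethrow everything else.
const fs = require('fs/promises')

async function readConfig(path) {
  try {
    return JSON.parse(await fs.readFile(path, 'utf8'))
  } catch (e) {
    if (e.code === 'ENOENT') return {} // expected operational error: no config file, fall back to defaults
    throw e                            // everything else is unexpected: fail loudly and let it crash
  }
}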
Don't Hide Origins of Errors
If an error occurs, it should be possible to know the origin. If you catch errors and re-throw them without the original information, it becomes hard to find and fix the root cause.
Example:
try {
  // something
} catch (e) {
  // augment instead of replace details
  e.message = 'Oh no! ' + e.message
  e.additionalInfo = 'This is just an example.'
  // re-throw same object
  throw e
}
In rare cases, throwing a new error is necessary, for example, if the original error has sensitive details that should not be propagated any further. This should be kept to an absolute minimum.
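With recent Node.js versions, one way to do this without completely losing the origin for your own logs is the cause option of Error (the message below is just an example):
try {
  // something
} catch (e) {
  // propagate a sanitized message, but keep the original error attached for logging
  throw new Error('Processing failed, see logs for details', { cause: e })
}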
Timestamps
When using timestamps (for example, for managed dates), the Node.js runtime offers a way to easily deal with them without knowing the format of the time string. The req object contains a property timestamp that holds the current time (specifically new Date(), which is comparable to CURRENT_TIMESTAMP in SQL). It stays the same until the request is finished, so if it is used in multiple places in the same transaction or request, it will always be the same.
Example:
srv.before("UPDATE", "EntityName", (req) => {
const now = req.timestamp;
req.data.createdAt = now;
});
Internally, the timestamp is a JavaScript Date object that is converted to the right format when sent to the database. So if a date string is needed, the best solution is to initialize a Date object, which is then translated to the correct UTC string for the database.
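If you ever do need a string representation yourself, for example when hand-crafting a query for a remote system, derive it from the same Date object (the filter below is only an illustration):
const since = req.timestamp.toISOString() // UTC ISO 8601, e.g. '2024-01-01T12:00:00.000Z'
const filter = `modifiedAt ge ${since}`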
Custom Streaming beta
When using Media Data, the Node.js runtime offers the possibility to return a custom stream object as the response to READ requests like GET /Books/coverImage.
Example:
const { Readable } = require('stream')

srv.on('READ', 'Books', (req, next) => {
  if (coverImageIsRequested) { // e.g., derived from req.query / the requested element
    const readable = new Readable()
    return {
      value: readable,
      $mediaContentType: 'image/jpeg',
      $mediaContentDispositionFilename: 'cover.jpg', // > optional
      $mediaContentDispositionType: 'inline' // > optional
    }
  }
  return next()
})
In the returned object, value is an instance of stream.Readable and the properties $mediaContentType, $mediaContentDispositionFilename, and $mediaContentDispositionType are used to set the respective headers.
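For illustration, one way to fill such a stream when the image data is already in memory, assuming a Buffer named imageBuffer (a placeholder, not part of the API):
const { Readable } = require('stream')

// Readable.from wraps an iterable; passing [imageBuffer] emits the buffer as a single chunk
const readable = Readable.from([imageBuffer])
In practice, the stream will more often come straight from the database or a file, in which case you can return that stream as value directly.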
Custom $count
When you write custom READ on-handlers, you should also support requests that contain $count, such as GET /Books/$count or GET /Books?$count=true. For more details, consider the following example:
srv.on('READ', 'Books', function (req) {
  // simple '/$count' request
  if (req.query.SELECT.columns?.length === 1 && req.query.SELECT.columns[0].as === '$count')
    return [{ $count: 100 }]
  // support other '/$count' requests
  ...
  const resultSet = [ ... ]
  // request contains $count=true
  if (req.query.SELECT.count === true) resultSet.$count = 100
  return resultSet
})