SAP Cloud Application Programming Model Getting Started (CAP)
This article describes how to get started with building Cloud Application Programming Model (CAP) projects and the corresponding artefacts with Core Data Services (CDS).
For local native development, see SAP HANA Local Native Development.
Sources and Further Reading
Description | URL | Comment |
---|---|---|
All-in-one Quick Start | https://github.com/SAP-samples/cloud-cap-walkthroughs/blob/master/exercises-node/intro/README.md | |
Back to basics SAP Cloud Application Programming Model (CAP) | https://www.youtube.com/playlist?list=PL6RpkC85SLQBHPdfHQ0Ry2TMdsT-muECx https://github.com/qmacro/capb2b | SAP Developers YouTube Playlist and GitHub Repository |
Qmacro / DJ Adams | https://qmacro.org/tags/cap/ | |
Qmacro @ SAP Community | https://community.sap.com/t5/user/viewprofilepage/user-id/53 | |
SAP CAP Documentation | https://cap.cloud.sap/ | |
SAP CAP Sample Repositories | https://github.com/SAP-samples/cloud-cap-samples | Use opensap*-Branches |
GitHub Repositories of CAP and Fiori Showcases | https://github.com/SAP-samples/cap-sflight https://github.com/SAP-samples/fiori-elements-feature-showcase | |
ABAP Freak Show Ep. 1 - HANA Cloud and the Business Application Studio | https://www.youtube.com/watch?v=a3WPQwmpbvI&list=PLoc6uc3ML1JR38-V46qhmGIKG07nXwO6X&index=71 | |
Installation of Prerequisites
Relevant Tools from SAP
npm i -g @sap/cds
npm i -g @sap/cds-dk
Additional 3rd Party Tools
npm i -g hana-cli
Setup of CAP Projects
Create CDS Project
cds init bookshop
or
cds init MyCDSProject --add hana,mta
Download and install dependencies:
npm install
Start Service
cds watch
cds w      # short form
This also monitors changes to the underlying files and restarts when a file has changed.
Prevent Reloading
To prevent the service from reloading, for example when saving temporary files in the project directory, save the files in one of the directories that are excluded from monitoring for file changes:
- _out
- node_modules
- @types
- @cds-models
For more details see: https://qmacro.org/blog/posts/2024/04/10/avoid-design-time-cap-server-restarts-when-maintaining-local-data-files/
Add Persistency to CDS
In case of SQLite first install SQLite3 packages by executing in the root of the project folder:
npm i sqlite3 -D
Deploy data model to different database types:
cds deploy
This deploys the CDS entity models and CSV files to the database specified in package.json under cds.requires.db, which should look like this:
{ "cds":
{ "requires": {
"db": {
"kind": "sqlite",
"credentials": { "url": "db.sqlite" }
}
}
}
}
For more explanations on SQLite see https://cap.cloud.sap/docs/guides/databases-sqlite
cds deploy --to sqlite        # Deploys to sqlite.db
cds deploy --to sqlite:my.db  # Deploys to my.db
cds deploy --to hana          # Deploys to HDI container on HANA (requires Cloud Foundry login)
Unlike older cds versions, this no longer updates the package.json section cds.requires.db.
To see the actual generated SQL statements:
cds compile srv/cat-service.cds --to sql   # Creates SQL statements
cds compile srv/cat-service.cds --to hana  # Creates hdbcds or hdbtable/hdbview artefacts
After deploying to an SQLite database, use this statement to view its contents:
sqlite3 my.db -cmd .dump
Create CSV header files in the db data location (by default db/data) for the modeled entities:
cds add data
Deployment to SAP Business Technology Platform (BTP)
Login to CF space:
cf login
Build Deployment Package
cds add hana
package.json needs to contain "kind": "hana". (really? - for which CF / HANA Cloud Version?)
Also, when deploying to BTP (which internally runs HANA 4.0), "deploy-format": "hdbtable" has to be set and @sap/hdi-deploy should be at least version 4. Example:
{
...
"devDependencies": {
"@sap/hdi-deploy": "^4"
},
...
"cds": {
"hana": {
"deploy-format": "hdbtable"
},
"requires": {
"db": {
"model": [
"db",
"srv"
],
"kind": "hana"
}
}
}
}
Deploy via Cloud Foundry CLI
Build the deployment artifacts:
cds build/all
cds build --production   # Only production profile?
Create a hana service instance for the HDI container. The stated HDI container name must match the service name in /gen/db/src/manifest.yaml.
cf create-service hana hdi-shared <app-db-hdi-container>
Now push the DB deployer application and also the actual service application:
cf push -f gen/db
cf push -f gen/srv --random-route
Deploy via MTA
Install MTA Build Tool
npm install -g mbt
Generate MTA project descriptor file mta.yaml and build MTA archive:
cds add mta
mbt build
Deploy MTA archive to CF space:
cf deploy mta_archives/<buildmta>.mtar
Troubleshooting
Real-time log output of deployed CF applications can be displayed in the BAS terminal with:
cf logs <appname>
CDS Commands Cheatsheet
Noteworthy Commands
Command | Description |
---|---|
cds env | Displays the effective configuration... |
cds env ls | ... in .properties format |
cds compile services.cds | Compile JSON-like database structure |
cds compile services.cds | jq | Ditto, but in actual JSON |
cds compile services.cds --to sql | Compile SQL statements from CDS |
cds compile services.cds --to edmx | Compile EDMX file (metadata) from CDS |
cds compile services.cds --to csn | Output CSN = Internal CDS Schema Notation... |
cds compile services.cds --to csn | jq '.definitions | keys' | ... and process output with JSON processor by extracting keys of definition nodes |
cds compile schema.cds --to yaml | Output Yaml of database schema |
cds compile services.cds --to yaml | Output Yaml of all services including their db schema |
cds add data | Create CSV header files in db location (by default db/data) for modeled entities |
Using CDS REPL & JavaScript APIs
cds repl
cds r      # short form
This launches a read-eval-print loop (REPL) which can be used as an interactive playground to experiment with the CDS JavaScript APIs.
API | Description |
---|---|
cds.utils.uuid() | Returns a UUID |
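As a small, hedged example of such a playground session (assuming the REPL is started in the root of the bookshop project; the path srv/cat-service.cds is an assumption based on that sample):
// inside `cds repl` the cds facade is available as a global and top-level await works
cds.version                                       // version of the loaded @sap/cds package
cds.utils.uuid()                                  // generate a UUID, see table above
var csn = await cds.load('srv/cat-service.cds')   // load a model into its CSN representation
Object.keys(csn.definitions)                      // list the names of all definitions in the model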
Accessing OData Services from Command Line
For further details on OData Services see OData Cheat Sheet
Examples for OData V4 Get Requests via URL
The following table shows examples of what can be requested with a given URL.
What | URL |
---|---|
Bookshop service root | http://localhost:4004/odata/v4/bookshop |
Bookshop service, all Books entities | http://localhost:4004/odata/v4/bookshop/Books |
Bookshop service, Books entity with specified key ID in parentheses | http://localhost:4004/odata/v4/bookshop/Books(3b1e1d9e-9563-4cd3-a448-69b8b8f1c384) |
Bookshop service, Books entity with filter on title field | http://localhost:4004/odata/v4/bookshop/Books?$filter=contains(title,%27The%20Player%20Of%20Games%27) |
Bookshop service, value from field title from Books entity with key ID in parentheses | http://localhost:4004/odata/v4/bookshop/Books(3b1e1d9e-9563-4cd3-a448-69b8b8f1c384)/title |
totalStock function from Bookshop service | http://localhost:4004/odata/v4/bookshop/totalStock() |
Reading from Services
Read output of services via curl and pipe it through jq for nicer formatting:
curl -s 'localhost:4004/odata/v4/bookshop/Orders' | jq
Output (example for Bookshop repo):
{
"@odata.context": "$metadata#Orders",
"value": [
{
"ID": "6091d4ab-650e-4afa-90bb-163305e473a2",
"comment": "second order"
},
{
"ID": "ac5aeb9f-c7cd-4f52-ab4a-9c0313ded402",
"comment": "first order"
}
]
}
Reading from Services while Returning Response Headers
curl -is 'localhost:4004/odata/v4/bookshop/Books'
Output:
HTTP/1.1 200 OK
X-Powered-By: Express
X-Correlation-ID: c8dbb1e1-6124-473f-b482-4efe091d1f5b
OData-Version: 4.0
content-type: application/json;odata.metadata=minimal
Date: Sat, 22 Jun 2024 14:57:24 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Content-Length: 491
{"@odata.context":"$metadata#Books","value":[{"ID":"3b1e1d9e-9563-4cd3-a448-69b8b8f1c384","title":"The Player Of Games","author_ID":"d5adee57-ef4a-441e-bfa7-9acac6e647e4","stock":5},{"ID":"dc3c659f-29c5-4b21-b3b0-44c58bae5306","title":"The Hitch Hiker's Guide To The Galaxy","author_ID":"01afafdf-0b4a-475b-b107-77fd3c9157da","stock":42},{"ID":"f584c9c9-c076-423a-9638-4c5917a4cbac","title":"Mostly Harmless","author_ID":"01afafdf-0b4a-475b-b107-77fd3c9157da","stock":100,"urgency":"HIGH"}]}
Continuous Reading from Services after File Change
This section is not specific to CAP, but at design time it can help to re-run, for example, the service reads from the previous section after the source code has changed.
ls srv/* | entr -c bash -c 'curl -s localhost:4004/odata/v4/bookshop/Orders | jq .'
entr is a tool that watches a list of supplied files for changes and executes a command. Here, ls srv/* supplies the file list and pipes it to entr, which in turn executes the bash command with curl to query the OData service. After every change to a file in the srv subdirectory the result would then be, for example, the following:
{
"@odata.context": "$metadata#Orders",
"value": [
{
"ID": "6091d4ab-650e-4afa-90bb-163305e473a2",
"comment": "second order"
},
{
"ID": "ac5aeb9f-c7cd-4f52-ab4a-9c0313ded402",
"comment": "first order"
}
]
}
Remarks: When a file in the srv subdirectory is saved while cds watch is running, this triggers a restart of the CDS server. As this takes some time, the entr command could run the curl command before the CDS server has fully restarted. A possible solution for this issue is described below in section Custom Application Logic Within CAP, subsection Custom Server.
Deep Insert into Service with POST
To perform a deep insert we first need a structure that is at least two levels deep.
The following neworder.json file with such a two-level structure is given:
{
"comment": "New Order",
"Items": [
{
"pos": 1,
"quantity": 10
},
{
"pos": 2,
"quantity": 20
}
]
}
To perform the deep insert, POST the JSON file to the service using curl. The filename needs to be prefixed with @. As we are sending JSON to a web service, we also need to add a header specifying the content type, in this case application/json:
curl --verbose --data @neworder.json --header 'content-type: application/json' --url 'localhost:4004/odata/v4/bookshop/Orders'
By default, curl with the --data parameter uses the POST method. For more details on the HTTP methods see Notable RFCs, section HTTP Methods GET, POST, PUT, DELETE.
We should receive a response similar to the following, showing that we connected to localhost, posted to the OData service, and got back HTTP status 201 Created:
* Trying 127.0.0.1:4004...
* Connected to localhost (127.0.0.1) port 4004 (#0)
> POST /odata/v4/bookshop/Orders HTTP/1.1
> Host: localhost:4004
> User-Agent: curl/7.74.0
> Accept: */*
> content-type: application/json
> Content-Length: 193
>
* upload completely sent off: 193 out of 193 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 201 Created
< X-Powered-By: Express
< X-Correlation-ID: 4eee1d5a-5b43-4577-8257-a99b026a0c43
< OData-Version: 4.0
< content-type: application/json;odata.metadata=minimal
< Location: Orders(9416b371-2355-402a-b06d-34f5b2c61704)
< Date: Sat, 20 Apr 2024 12:23:52 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Content-Length: 280
<
* Connection #0 to host localhost left intact
{"@odata.context":"$metadata#Orders(Items())/$entity","ID":"9416b371-2355-402a-b06d-34f5b2c61704","comment":"New Order","Items":[{"parent_ID":"9416b371-2355-402a-b06d-34f5b2c61704","pos":1,"quantity":10},{"parent_ID":"9416b371-2355-402a-b06d-34f5b2c61704","pos":2,"quantity":20}]}
Update Records via Service with PATCH
Following up on the deep insert example above, let's assume we want to update the Order with the comment New Order. For this we create a neworder-update.json with different content, for example:
{
"comment": "New Order",
"Items": [
{
"pos": 1,
"quantity": 1000
}
]
}
To perform the update we need to refer to the ID of the previously created Order. This is done by taking the ID 9416b371-2355-402a-b06d-34f5b2c61704 from the server response above and adding it to the URL after Orders in parentheses:
curl -X PATCH --verbose --data @neworder-update.json --header 'content-type: application/json' --url 'localhost:4004/odata/v4/bookshop/Orders(9416b371-2355-402a-b06d-34f5b2c61704)' | jq .
The expected result is HTTP code 200 OK:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1:4004...
* Connected to localhost (127.0.0.1) port 4004 (#0)
> PATCH /odata/v4/bookshop/Orders(9416b371-2355-402a-b06d-34f5b2c61704) HTTP/1.1
> Host: localhost:4004
> User-Agent: curl/7.74.0
> Accept: */*
> content-type: application/json
> Content-Length: 121
>
} [121 bytes data]
* upload completely sent off: 121 out of 121 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Correlation-ID: 9a21bb8f-cfcc-4641-bc68-2f1bcbb7d5a8
< OData-Version: 4.0
< content-type: application/json;odata.metadata=minimal
< Date: Sat, 20 Apr 2024 12:34:13 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Content-Length: 198
<
{ [198 bytes data]
100 319 100 198 100 121 19800 12100 --:--:-- --:--:-- --:--:-- 31900
* Connection #0 to host localhost left intact
{
"@odata.context": "$metadata#Orders/$entity",
"ID": "9416b371-2355-402a-b06d-34f5b2c61704",
"comment": "New Order",
"Items": [
{
"parent_ID": "9416b371-2355-402a-b06d-34f5b2c61704",
"pos": 1,
"quantity": 1000
}
]
}
Check the result by querying the service again:
curl -s 'localhost:4004/odata/v4/bookshop/Orders?$expand=Items' | jq
{
"@odata.context": "$metadata#Orders(Items())",
"value": [
{
...
},
{
"ID": "9416b371-2355-402a-b06d-34f5b2c61704",
"comment": "New Order",
"Items": [
{
"parent_ID": "9416b371-2355-402a-b06d-34f5b2c61704",
"pos": 1,
"quantity": 1000
}
]
},
{
...
}
]
}
Custom Application Logic Within CAP
Custom Event Handlers
Create a .js file with the same name as the service definition file, in the same subdirectory.
Example: a srv/main.js file for the srv/main.cds service definition. The simplest version just shows that the file gets picked up when the server starts:
console.log("Hello, World!")
A more useful implementation registers custom event handlers via cds.service.impl:
const cds = require('@sap/cds')
module.exports = cds.service.impl(function () {
  console.log("I am in the anonymous function")
  this.on('READ', 'Books', (req, next) => {
    console.log("Handling READ of Books")
    return next()   // delegate to the generic handler so that data is still returned
  })
})
This handler fires on every READ of the Books entity and then delegates to the generic handler via next().
For more details see: https://cap.cloud.sap/docs/guides/providing-services#custom-event-handlers
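As an additional hedged example of post-processing (the stock field exists in the bookshop sample, while the discount field is purely hypothetical), an after handler can enrich the result set before it is returned:
const cds = require('@sap/cds')
module.exports = cds.service.impl(function () {
  // runs after the generic READ handler, i.e. the data has already been read
  this.after('READ', 'Books', (result) => {
    const books = Array.isArray(result) ? result : [result]
    for (const book of books) {
      if (book && book.stock > 100) book.discount = '10%'   // 'discount' is a hypothetical, transient field
    }
  })
})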
Custom Server
The following is an example of how to execute the command explained in section Continuous Reading from Services after File Change above only after the server has properly restarted.
Create a server.js file:
const fs = require('node:fs/promises')
require('@sap/cds').on('listening', async () => {
  await fs.writeFile('listening', '')
})
This creates a file called listening after the server restart has completed. We use this file as the trigger to read again after a file change:
ls listening | entr -c bash -c 'curl -s localhost:4004/odata/v4/bookshop/Orders | jq .'
For more details see:
https://cap.cloud.sap/docs/node.js/cds-server#custom-server-js
Custom Logger
Replace console.log with a custom logger:
const cds = require('@sap/cds')
const logger = cds.log('mylog')
module.exports = cds.service.impl(function () {
  logger("Hello World")
  this.after('READ', 'Books', (data, req) => {
    logger(data);
  })
})
In line 2 a logger is defined, which is then used within the service implementation. This leads to custom labeled output in the console, recognizable by the [mylog] prefix:
[mylog] - Hello World
[cds] - using auth strategy { kind: 'mocked', impl: 'node_modules/@sap/cds/lib/auth/basic-auth' }
[cds] - serving bookshop { impl: 'srv/main.js', path: '/odata/v4/bookshop' }
[cds] - server listening on { url: 'http://localhost:4004' }
[cds] - launched at 6/1/2024, 2:27:09 PM, version: 7.7.2, in: 294.413ms
[cds] - [ terminate with ^C ]
[odata] - GET /odata/v4/bookshop/Books
[mylog] - [
{
ID:...
That way log levels can be implemented too, as sketched below. For more details see the cds.log documentation.
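A minimal hedged sketch of level-specific logging (the logger name mylog is reused from above; the info/debug methods follow the cds.log logger API, the rest is purely illustrative):
const cds = require('@sap/cds')
const LOG = cds.log('mylog')   // same logger name as above

module.exports = cds.service.impl(function () {
  this.after('READ', 'Books', (data) => {
    const count = Array.isArray(data) ? data.length : 1
    LOG.info('READ Books returned', count, 'record(s)')   // typically shown at the default log level
    LOG.debug('full result:', data)                       // only shown when the level for 'mylog' is raised to debug
  })
})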