SAP AI Core Architecture and Objects for Beginners

I am writing this blog to give an overview of the SAP AI Core architecture.

There are many types of objects in SAP AI Core, so it takes time to learn and digest them.  In addition, SAP AI Core works together with other systems, such as repository managers and data storage services.

When I learned AI Core with the official tutorials, I was confused by the relations among the objects in SAP AI Core.  To clarify them, I sorted out the architecture and objects as below.

Here is the top-level architecture overview of SAP AI Core and SAP AI Launchpad.

On the top left side, there is the “Python SDK”, which connects to SAP AI Core via its REST API.  Actually, there are two Python SDKs:

  1. SAP AI Core SDK: It provides tools to manage objects on SAP AI Core.
  2. SAP AI API Client SDK: It provides tools to work with any implementation of the AI API, so it has no functions for SAP AI Core-specific objects.
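Both SDKs ultimately talk to SAP AI Core over the AI API's REST interface.  As a rough sketch of what such a call looks like (the endpoint path and the `AI-Resource-Group` header follow the AI API convention; the base URL, token, and helper function name here are my own placeholders, not part of either SDK):

```python
# Sketch: building an AI API REST request for listing scenarios.
# Assumes an OAuth token has already been obtained from the XSUAA service
# bound to the SAP AI Core instance. Helper name is hypothetical.

def build_list_scenarios_request(base_url: str, token: str,
                                 resource_group: str = "default"):
    """Return URL and headers for GET /v2/lm/scenarios."""
    url = f"{base_url.rstrip('/')}/v2/lm/scenarios"
    headers = {
        "Authorization": f"Bearer {token}",
        "AI-Resource-Group": resource_group,  # scopes the call to one resource group
    }
    return url, headers

# Usage with the `requests` library (not executed here, since it needs
# a live SAP AI Core instance and credentials):
# import requests
# url, headers = build_list_scenarios_request("https://<AI_API_URL>", token)
# scenarios = requests.get(url, headers=headers).json()
```

The SDKs wrap exactly this kind of call, plus token handling, behind Python objects.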

I was also confused by the Python packages, since there are two more Python packages:

  1. sap-ai-core-metaflow: A Metaflow plug-in which generates Argo Workflows.
  2. sap-computer-vision-package: It helps to develop computer vision implementations on SAP AI Core.

There are many objects in SAP AI Core.  I sorted them out and drew them and their linkages below.


I believe that the workflow and serving templates are among the most important objects.

Let’s look at a template from the tutorial “Generate Metrics and Compare Models in SAP AI Core”.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: house-metrics-train # executable ID, must be unique across all your workflows (YAML files)
  annotations:
    scenarios.ai.sap.com/description: "Learning how to ingest data to workflows"
    scenarios.ai.sap.com/name: "House Price (Tutorial)" # Scenario name should be the use case
    executables.ai.sap.com/description: "Generate metrics"
    executables.ai.sap.com/name: "training-metrics" # Executable name should describe the workflow in the use case
    artifacts.ai.sap.com/housedataset.kind: "dataset" # Helps in suggesting the kind of artifact that can be attached.
    artifacts.ai.sap.com/housemodel.kind: "model" # Helps in suggesting the kind of artifact that can be generated.
  labels:
    scenarios.ai.sap.com/id: "learning-datalines"
    ai.sap.com/version: "2.0"
spec:
  imagePullSecrets:
    - name: credstutorialrepo # your Docker registry secret
  entrypoint: mypipeline
  arguments:
    parameters: # placeholder for string-like inputs
      - name: DT_MAX_DEPTH # identifier local to this workflow
  templates:
    - name: mypipeline
      steps:
        - - name: mypredictor
            template: mycodeblock1
    - name: mycodeblock1
      inputs:
        artifacts: # placeholder for cloud storage attachments
          - name: housedataset # a name for the placeholder
            path: /app/data/ # where to copy the dataset into the Docker image
      outputs:
        artifacts:
          - name: housepricemodel # name of the artifact generated, and folder name when placed in S3; the complete directory will be `../<execution_id>/housepricemodel`
            globalName: housemodel # identifier local to the workflow, also used above in the annotation
            path: /app/model/ # folder in the Docker image whose contents (after running the workflow step) are copied to cloud storage
            archive:
              none: # specify not to compress while uploading to cloud
                {}
      container:
        image: <YOUR_DOCKER_USERNAME>/house-price:04 # your Docker image name
        command: ["/bin/sh", "-c"]
        env:
          - name: DT_MAX_DEPTH # name of the environment variable inside the Docker container
            value: "{{workflow.parameters.DT_MAX_DEPTH}}" # value set from the workflow-local parameter DT_MAX_DEPTH
        args:
          - "python /app/src/"
```
  • kind: WorkflowTemplate or ServingTemplate
  • metadata
    • name: executable ID
    • labels
      • scenario ID (`scenarios.ai.sap.com/id`)
  • spec
    • imagePullSecrets
      • name: Docker registry secret
    • arguments
      • parameters
        • name: parameters for training/serving.  Values for the parameters are defined in configurations.
    • templates
      • inputs/outputs
        • artifacts: artifact IDs for training/serving.  Values are defined in configurations.
      • container
        • image: Docker image name in the registry referenced by the Docker registry secret
      • metadata
        • labels
          • resource plan for running (`ai.sap.com/resourcePlan`)
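To make the mapping concrete, here is a minimal sketch that represents a fragment of the template above as a Python dict and extracts the fields the outline refers to.  The field names and label keys are taken from the template itself; this is only an illustration, not an SDK API:

```python
# Sketch: a fragment of the workflow template above as a Python dict,
# with the fields from the outline (executable ID, scenario ID,
# parameters) extracted programmatically.
template = {
    "kind": "WorkflowTemplate",
    "metadata": {
        "name": "house-metrics-train",  # executable ID
        "labels": {"scenarios.ai.sap.com/id": "learning-datalines"},  # scenario ID
    },
    "spec": {
        "imagePullSecrets": [{"name": "credstutorialrepo"}],  # Docker registry secret
        "arguments": {"parameters": [{"name": "DT_MAX_DEPTH"}]},  # placeholders
    },
}

executable_id = template["metadata"]["name"]
scenario_id = template["metadata"]["labels"]["scenarios.ai.sap.com/id"]
parameters = [p["name"] for p in template["spec"]["arguments"]["parameters"]]

print(executable_id, scenario_id, parameters)
```

Note that the template only declares placeholders; the concrete values come later from configurations.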

Though the resource plan label `ai.sap.com/resourcePlan` is optional, it should be defined explicitly for clarity.  When it is not defined, the “Starter” plan is selected.  This default value “starter” may change, since there is no explanation of it in the help documentation.
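As noted in the outline, the parameter and artifact placeholders declared in a template get their concrete values from a configuration.  Here is a sketch of the request body such a configuration would carry (the field names follow the AI API's configuration shape as I understand it; the helper function name and the artifact ID are placeholders of my own):

```python
# Sketch: building a configuration payload that binds concrete values to the
# placeholders declared in the template (parameters and input artifacts).
# Field names assume the AI API configuration shape; helper is hypothetical.

def build_configuration_payload(name, scenario_id, executable_id,
                                parameters=None, input_artifacts=None):
    return {
        "name": name,
        "scenarioId": scenario_id,      # must match the template's scenario label
        "executableId": executable_id,  # must match the template's metadata.name
        "parameterBindings": [
            {"key": k, "value": v} for k, v in (parameters or {}).items()
        ],
        "inputArtifactBindings": [
            {"key": k, "artifactId": v} for k, v in (input_artifacts or {}).items()
        ],
    }

payload = build_configuration_payload(
    name="house-price-config",
    scenario_id="learning-datalines",
    executable_id="house-metrics-train",
    parameters={"DT_MAX_DEPTH": "3"},                 # fills arguments.parameters
    input_artifacts={"housedataset": "<ARTIFACT_ID>"},  # placeholder artifact ID
)
```

An execution created from this configuration would then run the executable with `DT_MAX_DEPTH=3` and the bound dataset artifact.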