Hands-On Tutorial: Manage & Deploy Machine Learning in Kubernetes with Kyma 2.0 in the Business Technology Platform

I am very excited that the new open-source Kyma 2.0 is now available for the Kyma Runtime in our SAP Business Technology Platform. If you want to know more about all the new features, have a look at the following link. Of course, I want to update my older blog post on deploying a machine learning model in the BTP Kyma Runtime. Recall that Kyma is an open-source project built on top of Kubernetes that allows you to build extensions through microservices and serverless functions. I hope you are curious to try it out.

What are the requirements?

  • Python Environment (Version: 3.8)
  • Docker and Docker Hub
  • BTP Kyma Runtime (Version: Kyma 2.0.4)

What will you learn in this Hands-On tutorial?

  1. Create an API endpoint that enables a user to create predictions on the fly with a trained machine learning model.
  2. Containerize the Python script for our app in a Docker image with all needed requirements and dependencies.
  3. Deploy and manage our container using Kubernetes in the Kyma runtime, bringing the open-source world and the SAP world together.

Creating a good machine learning model takes a lot of time and hard work. For this Hands-On tutorial this is already taken care of. You can find all the needed materials in the following GitHub repository. But what does our machine learning model actually do? To put it simply, our optimized and trained random forest model predicts whether a transaction is fraudulent or not. Of course, before we can use it in our app, we must import the required packages. Then, we load the trained model from our working directory and start building our API with Flask. Next, we create the API endpoint "predict" and write the function which will create our predictions based on a new observation. Further, we need to transform the data from the GET request into a format which our model can digest. After that, the data is passed into the predict function of our model and the result is saved into a new variable. At last, we return the result of our prediction and run the app on a local host and given port.

If you execute the Python script, you should see the following output in your terminal.

To test the API endpoint, we can for example use Postman. Please, execute the following GET request:

http://<hostname:port>/predict?c=0&a=4900&obo=10000&nbo=5100&obd=1000&nbd=5900&dl=1

We can also see the successful GET request in our terminal. This also allows us to debug our app in case of an error.

Now to our second task. Let us package our app into a Docker image, so that we can easily share it with customers or colleagues. Our Dockerfile looks as follows:

In the Dockerfile we start from a Python base image for our app. Then the requirements file is added, which contains the packages needed to run our Python script. After the installation of the required packages, the app is added into the container. Finally, we expose the port of our Flask app and run it. Make sure you have all the needed files in one folder, including the app, the Dockerfile, the requirements file and the machine learning model.
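A minimal Dockerfile along these lines could look like this; the file names (`app.py`, `requirements.txt`) and the base image tag are assumptions, so match them to your own project folder:

```dockerfile
# Python base image matching the tutorial's Python 3.8 requirement
FROM python:3.8-slim

WORKDIR /app

# Add and install the required packages first (better layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the app, including the trained model file
COPY . .

# Port of the Flask app
EXPOSE 8001

CMD ["python", "app.py"]
```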

Let’s move to the folder in our command-line tool. I am working with a Windows laptop.

cd C:\HandsOnKyma

Then, build the image.

docker build -t ml-app .

Just like before, you can test the API through Postman once the container is running:

docker run -p 8001:8001 ml-app

Of course, you may want to check the running containers and stop them through the following commands:

docker container ls

Stop the container through:

docker stop <CONTAINER ID>

Then log in to Docker Hub.

docker login

Let’s tag our Docker Image.

docker tag ml-app <YourDockerID>/ml-app-image

Next, we push our Docker image to our Docker Hub repository.

docker push <YourDockerID>/ml-app-image

Please confirm that the push was successful.

Congratulations! We arrived at our third task: managing our container using Kubernetes in the Kyma runtime. To do so, move to your Kyma environment.

Choose namespaces and create a new one.

Further, choose "Deploy new workload" and "Create Deployment".

Go into the Advanced settings and enter a Deployment Name. In addition, add your Docker image, for example:

<YourDockerID>/<YourDockerTag>

Further, click "Expose separate Service" and change the Port as well as the Target Port to 8001.

Then choose "Create" and wait until the deployment is successful.
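Under the hood, the console form creates a Kubernetes Deployment and Service for you. A roughly equivalent manifest could look like the sketch below; the names and labels are placeholders, and you could apply it yourself with `kubectl apply -n <your-namespace> -f deployment.yaml` instead of using the form:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ml-app
  template:
    metadata:
      labels:
        app: ml-app
    spec:
      containers:
        - name: ml-app
          image: <YourDockerID>/ml-app-image
          ports:
            - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: ml-app
spec:
  selector:
    app: ml-app
  ports:
    - port: 8001
      targetPort: 8001
```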

You can click on the Pods and check whether your pod is running successfully.

Then click on API Rules on the left and create a new API Rule.

Add a name and a subdomain, all in lowercase letters, and choose your service. Of course, you can incorporate different access strategies. Then click on "Create".
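Behind the form, Kyma creates an APIRule custom resource. As a rough sketch (field names follow the `v1alpha1` APIRule CRD that shipped with Kyma 2.0; the resource name, host and gateway are assumptions you would adjust), it could look like this, here with the unauthenticated `allow` access strategy:

```yaml
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: ml-app
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: ml-app        # the Service exposed above
    port: 8001
    host: ml-app        # becomes the subdomain of your cluster domain
  rules:
    - path: /.*
      methods: ["GET"]
      accessStrategies:
        - handler: allow
```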

Copy the URL of the API rule to your clipboard.

Back in Postman, execute the same GET request as before, using the host of your API rule instead of the local host and port:

https://<your-api-rule-host>/predict?c=0&a=4900&obo=10000&nbo=5100&obd=1000&nbd=5900&dl=1

I hope this Hands-On tutorial helped you get started with Kyma and bring more machine learning models into production. There are of course many other services and intelligent technologies available in our SAP Business Technology Platform. Have a look in our Service Catalog and get started with your use case. Of course, there is also a machine learning mission available with Kyma.

If you want to dig deeper into Kyma, try out the following Hands-On materials:

I want to thank Sarah Detzler, Stojan Maleschlijski and Mike Khatib for their support while writing this Hands-On tutorial.

Cheers!
Yannick Schaper