Cloud-based VPN to access behind-the-firewall applications for remote employees

The spread of coronavirus (COVID-19) is causing organizations to adopt remote work models on an accelerated timeline. One of the biggest challenges that many organizations are facing: their existing VPN solutions are set up to only accommodate a small percentage of their workforce, because most employees haven’t typically worked remotely. That is shifting dramatically, as organizations converge on a 100% remote workforce. Existing VPN setups are struggling to scale to accommodate the massive increase in load — and they’re certainly not doing it in a fast enough or a cost-efficient manner.

Cloudflare for Teams can help solve that. And we’re offering Cloudflare for Teams at no cost to organizations of all sizes through September 1, 2020 (to see how, see the end of this post).

How Cloudflare for Teams works

Cloudflare for Teams includes Cloudflare Access, which enables you to transform any behind-the-firewall application — including SAP applications — to a full zero trust model on the Internet, so that remote employees are able to securely access those applications from anywhere around the world. The advantage is that they can access these applications without connecting to a VPN. Cloudflare built this product for itself, and we’re using it across a wide range of VPN-only applications: not just SAP applications, but also the Atlassian suite and custom-developed in-house applications. A number of organizations have moved their behind-the-firewall applications onto Access in order to lower the load on their VPN for widely used applications.

Cloudflare for Teams also includes Cloudflare Gateway, designed to keep remote devices secure.

The legacy approach:

Cloudflare for Teams:

How to leverage this offer of help:
Cloudflare is offering Cloudflare for Teams to organizations of any size at no cost through September 1 to help with this. The program includes an optional 30-minute onboarding session with a technical expert.

To learn more about Teams and take advantage of this offer, please visit

From here, you can either fill out the form or begin the sign-up process and schedule an onboarding session.

Analyze ABAP Performance Traces with the Profile Data Analyzer

In this blog, we would like to introduce the “profile data analyzer”, a new standalone tool to analyze performance traces, including the idea behind it and the analysis steps. The profile data analyzer supports both ABAP performance traces and SAP JVM Profiler performance traces (*.prf format) as documented in KBA 2879724, but we will focus on the analysis of ABAP performance traces in this blog.

The profile data analyzer is inspired by Brendan Gregg’s FlameGraphs. Compared to SAT/SE30/ST12, it provides additional graphical and interactive views that make performance analysis easier, for example, in the following scenarios.

    • Get an overall picture of the traced program, including the program logic and how the time is spent.
    • Find the bottleneck in large and complex traces, especially when the time is spread evenly across many different methods.
    • Compare two traces in the graph view. Sometimes it is helpful to compare two traces, for example, to understand why there is a performance regression after a support package upgrade.

In the following parts of this blog, we will demonstrate how to use the profile data analyzer to understand both the performance bottleneck and the program logic for optimization.

“The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.”
– Donald Knuth, “Computer Programming as an Art” (1974)

To start the analysis, we need only two prerequisites:

    • Download the profile data analyzer from here following KBA 2879724. The profile data analyzer is a standalone jar file which needs Java 8 or higher. It can be run on Windows, Linux, and macOS, either by double-clicking it or with the command java -jar ProfileDataAnalyzer.jar.
    • Collect a performance trace following KBA 2881237. The profile data analyzer needs an ABAP trace recorded either without aggregation or aggregated by call stack, both of which contain more information than a trace aggregated by call position. The file name is “AT*”. We can get this file at OS level from the dialog instance data folder (/usr/sap/<System-ID>/<Instance>/data) or via SE30 following KBA 2881237.

This blog will use two SM21 traces as examples to show the usage of the profile data analyzer. The main difference between these two SM21 traces is the data volume. We will show how to use the profile data analyzer to understand SM21’s logic and the performance of these two traces, including their difference.

    • The short SM21 trace: change the selection criteria in SM21 (e.g. time) to make sure that no log entry is shown on the screen.
    • The long SM21 trace: change the selection criteria in SM21 (e.g. time) to make sure that many log entries are shown on the screen.

Now we can drag and drop the collected ABAP trace into the profile data analyzer and click the “Analysis Report” button to show the report and flame graph.

Here is the flame graph with the default settings for the short SM21 trace (Tip: please zoom this page in your browser to view the details of the flame graph). In the flame graph:

    • Every column is a call stack. The main program is at the bottom (e.g. Transaction SM21 in the following picture). The lower method calls the upper method.
    • The width in the graph is proportional to the actual time used by that call stack.
    • We can zoom in or out in the graphic view by clicking a method on the call stack.
    • Special method types have dedicated colors. For example, DB methods are blue.
    • We can search for a keyword with the “Search” button, with regular expressions supported. The search results will be highlighted in purple.
    • The call count of a call stack is shown like “Calls: XXX” on top of the call stack.
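Conceptually, the flame graph is built from aggregated call stacks, much like Brendan Gregg’s folded-stack format: the total time per unique stack determines the column width, and the call count gives the “Calls: XXX” label. A simplified sketch of this aggregation (illustrative only, with invented sample numbers — not the tool’s actual implementation):

```python
from collections import defaultdict

# Each trace record: (call stack from the main program upward, time in microseconds).
# The stacks and times below are invented sample data.
records = [
    (("Transaction SM21", "MAIN", "RSLG_DISPLAY"), 700),
    (("Transaction SM21", "MAIN", "RSLG_DISPLAY"), 300),
    (("Transaction SM21", "MAIN", "CL_SYSLOG.GET_INSTANCE_BY_FILTER"), 400),
]

# Sum time and call count per unique call stack; the width of a column
# in the flame graph is proportional to this total time.
totals = defaultdict(lambda: [0, 0])  # stack -> [total time, call count]
for stack, spent in records:
    totals[stack][0] += spent
    totals[stack][1] += 1

for stack, (spent, calls) in sorted(totals.items()):
    print(" -> ".join(stack), f"time={spent}us", f"calls={calls}")
```

With these sample numbers, RSLG_DISPLAY aggregates to 1000us over 2 calls, so its column would be drawn wider than GET_INSTANCE_BY_FILTER’s 400us column.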

Looking at the above flame graph for the short SM21 trace, the following methods are highlighted in purple by searching with “MAIN|RSLG_DISPLAY|GET_INSTANCE_BY_FILTER”.


In the above graph, we can see that:

    • The form MAIN is the main part of Program RSYSLOG / TRANSACTION SM21.
    • In form MAIN, the important calls are function module “RSLG_DISPLAY” and class method “CL_SYSLOG.GET_INSTANCE_BY_FILTER”.
    • Because “RSLG_DISPLAY” has a longer execution time than “GET_INSTANCE_BY_FILTER”, it is wider in the flame graph.
    • Moving the mouse over a method shows a tooltip with the detailed time information.

We can click “CL_SYSLOG.GET_INSTANCE_BY_FILTER” to focus on its details and hide the other parts. Here is the focused view for “GET_INSTANCE_BY_FILTER”. In this focused view, we can see that:

    • “GET_INSTANCE_BY_FILTER” calls other methods and function modules, e.g. “READ_ENTRIES” and “RSLG_READ_SLG”, which are used to read the system log entries.

Another interesting part of this example is that there are some “Not shown methods” on top of “RSLG_DISPLAY” and “CALL_SCREEN”. These are SYSTEM programs that are filtered out; the filter rule is the same as the default SAT setting. We can show these filtered methods by checking the “Show technical details” option.

If we check the “Show technical details” option and click the “Analysis Report” button again, we get the following unfiltered graph. In this graph, we can see that:

    • “RSLG_DISPLAY” calls some screen function modules and finally uses the RFC “OLE_FLUSH_CALL” to communicate with SAP GUI.
    • Usually the percentage of “Not shown methods” is low, because in most cases the application programs consume more time. We recommend using the default profile data analyzer options at the very beginning, and only checking the “Show technical details” option when the time of “Not shown methods” is long.

Now let’s look at the long SM21 trace. Here is its flame graph. In this graph, we can see that:

    • “GET_INSTANCE_BY_FILTER” is longer than in the short SM21 trace because we select more data in the long SM21 trace.

Looking at the above graph alone, the exact difference from the short SM21 trace is still not clear. To improve this, we developed the diff function. We can compare the two SM21 traces with the following steps:

    • Drag and drop the short SM21 trace into the profile data analyzer. The first trace will be used as the benchmark.
    • Click the “Diff …” button and then select the long SM21 trace. The second one is the performance trace to analyze.
    • Then the diff report will be shown in your browser automatically.

Here is the flame graph in the diff report, along with some tips for reading this diff graph:

    • In the diff graph, increased time is shown in green and marked with “+”; reduced time is shown in red and marked with “-”.
    • To understand which methods spend more time in the second trace compared to the benchmark, we should check the green parts with “+” (increased time).
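At its core, the diff compares per-stack times between the benchmark and the second trace. A simplified sketch of this comparison (illustrative only, with invented sample data — not the tool’s internals):

```python
# Total time per call stack, e.g. produced by aggregating each trace.
# The stacks and times below are invented sample data.
benchmark = {"MAIN/RSLG_DISPLAY": 700,
             "MAIN/GET_INSTANCE_BY_FILTER": 400}
second    = {"MAIN/RSLG_DISPLAY": 900,
             "MAIN/GET_INSTANCE_BY_FILTER": 1600,
             "MAIN/GET_INSTANCE_BY_FILTER/APPEND_ENTRY": 800}

# Positive delta = more time in the second trace (green, "+" in the diff graph);
# negative delta = less time (red, "-"). Stacks missing from the benchmark,
# like APPEND_ENTRY here, show up entirely as increased time.
diff = {stack: second.get(stack, 0) - benchmark.get(stack, 0)
        for stack in set(benchmark) | set(second)}

for stack, delta in sorted(diff.items()):
    sign = "+" if delta >= 0 else "-"
    print(f"{stack}: {sign}{abs(delta)}")
```
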

In the above diff graph, we can see that the time of both “RSLG_DISPLAY” and “CL_SYSLOG.GET_INSTANCE_BY_FILTER” has increased (see the green parts). Let’s take “GET_INSTANCE_BY_FILTER” as an example. Here is the focused graph for “GET_INSTANCE_BY_FILTER”, obtained by clicking it. In this graph, we can see that:

    • Most of the increased time in “GET_INSTANCE_BY_FILTER” is caused by “CL_SYSLOG.APPEND_ENTRY”.
    • In the short SM21 trace, “CL_SYSLOG.APPEND_ENTRY” is not visible. This is because it is either not called or its execution time is too short.
    • In the long SM21 trace, “CL_SYSLOG.APPEND_ENTRY” is longer. Its execution time and call count (“Calls: XXX” on top of a call stack) are shown in the graph.

Now let’s have a look at the class “CL_SYSLOG” in SE24. Compared to the short SM21 trace (where no log entry is selected and displayed), the long SM21 trace needs to call “APPEND_ENTRY” many times to append entries and also call “DETERMINE_SYSLOG_TEXT” to determine each system log entry’s text; as a consequence, the long SM21 trace runs longer than the short SM21 trace.

If memory information is collected in the SE30/SAT trace, the “Allocations” information will be shown in the analysis report; otherwise, it is hidden. The “With memory use” option in the following picture controls whether memory information is collected. KBA 2881237 contains more detailed information about the trace variant.

Here is an example of the allocation flame graph obtained by tracing SM21, including the normal view and the diff view. The allocation flame graph follows the same analysis rules as the performance flame graph.

    • Focus on the widest flame in the graph. The width in the graph is proportional to the actual memory allocated in that call stack.
    • The tooltip of a method shows details, such as allocation information.
    • Zoom in or zoom out in the graphic view by clicking a method on the call stack.
    • Search a keyword with the “Search” button with regular expressions. The search results will be highlighted in purple in the flame graph.
    • In the diff graph, compared to the first benchmark trace, increased memory allocation is shown in green and marked with “+”, and reduced memory allocation is shown in red and marked with “-”.

For advanced analysis, the ABAP traces can be converted to the SAP JVM Profiler format (*.prf) via the export button and then analyzed with the SAP JVM Profiler.

Detailed information and analysis steps can be found in the following documents.

Here we only list some screenshots to give a general idea of analyzing the converted ABAP trace in the SAP JVM Profiler.

Please feel free to post any feedback about the profile data analyzer on this blog. If the profile data analyzer is not working as expected or needs to be fixed, please let us know. Thanks!

Enhancing Safety For Essential Businesses (COVID-19) Free Access #C19FREEACCES

We’re facing unprecedented times. COVID-19 has fundamentally changed how we live. Organizational safety is no longer just the responsibility of the H&S leadership. Safety is now the responsibility of every employee, leader, partner and customer of an organization. Safety is the way of doing business.

Health & Safety is a joint responsibility of employees and employers. One critical outcome of COVID-19 has been that employees, as well as employers, want to ensure that all safe work practices are being followed. This heightened focus on health, safety and wellbeing is bringing a new emphasis on shared ideas, innovations, and responsibilities. Employers are going above and beyond to ensure that the workplace is safe and that all employees are properly trained and aware of their work surroundings as well as the employer’s expectations. At the same time, employees have taken ownership of reporting any unsafe events and hazards.

As many of our customers are considered “essential businesses”, we want to assure you that Sodales is well-placed to continue to provide your workforce with enhanced safety tools for adhering to the continuously evolving precautionary health procedures recommended by the relevant government agencies. We also understand the localized site-based safety requirements of every business, which are changing hour by hour. We know COVID-19 has made things difficult for many of our customers and we’re doing our part to help make things a little easier.

We have launched a specialized “Safety Impact Inspection” functionality within the Sodales EHSEM software, specially designed to enable safety in the workplaces of essential businesses. This functionality is completely free of charge. We built it by working with the Health and Safety leaders of customers whose sites (such as grocery stores, gas stations, construction sites, logistics, transportation and manufacturing) are still operating.

The Safety Impact Inspection tool is specifically designed for essential businesses to assess real-time job-safety impacts by utilizing the Sodales EHSEM Audit module for safety and risk assessment procedures.

The Safety Impact Inspection checklist is a newly launched job safety audit tool for essential businesses that covers all safety aspects to satisfy the needs of employers and employees working in their facilities (such as grocery stores, construction sites, logistics, transportation and manufacturing).

For employers, it provides the ability for supervisors to:

  • Conduct and store start-of-shift safety talks with all employees
  • Inspect to ensure all employees have the necessary PPE
  • Capture all pre-job safety inspections in an easy-to-fill checklist
  • Note employee health statuses before the start of daily work
  • Record any workplace hazards and inform all employees
  • Capture corrective actions to rectify and close risks and hazards
  • Record any health & safety-related questions and concerns of employees
  • Capture the attendees by taking their consent

The solution also provides hazard and risk management capabilities to:

  • Identify any hazards and risks at work e.g. another employee not wearing PPE or coughing
  • Stay anonymous while reporting hazards
  • Select hazards from a pre-existing list of various hazards or type in a new type of hazard
  • Provide a ranking for the identified hazards, from critical to minimal

Analytical Reports and Program Management

Using the Safety Impact Inspection, H&S, HR, and management can run 360-degree analytical reports to do comparative analysis across all job sites and understand the gaps between safety checklist results, action items, and critical hazards reported by employees. The 360-degree reports enable the organization to urgently close all gaps in their work to ensure the safety of all employees. Workplace safety teams can easily put a program together to ensure that all sites pay special attention to the safety of employees.

To learn more, please contact us to schedule a demonstration at

Or contact us at the SAP App Center

Leadership coach John C. Maxwell stated that “Healthy organizations are not about the one person who leads them, they are about everyone who’s in them.”

On behalf of Sodales, thank you for your trust, support and understanding during this time.

Please, be safe and stay well.

Sodales Solutions


As a tribute to our colleagues who have been working for SAP Labs Poland for over
a decade, we decided to interview them about the past, their memories and future plans. In this series, we would like to thank them all and wish them all the best in their careers. Below is the first anniversary interview, with Radosław Michalczyk!

SAP LABS Poland: When exactly did you start working in our company?
It depends on what we mean by “our company”. I haven’t changed my job for 14.5 years, but the companies I worked for were successively taken over by larger ones.
The only CV I have ever written is in Polish and it is about 15 years old.

  • 14.5 years ago I started working at Fargo
  • 12.5 years ago we started working with hybris
  •  8 years ago SAP bought hybris (if I’m not mistaken)

Perhaps 10 years ago, I officially switched to a hybris contract (from Fargo). Honestly, I don’t remember exactly when, because it was only a change on paper.

SLP: What position did you start at and where are you now?
At Fargo, I worked as an intern programmer. At hybris, I was a Java developer
(no one cared about titles in this company; there were no senior or junior positions) and it stayed that way until the SAP times. I’m a Senior Business Analyst now, and a Technology Principal Consultant according to the internal nomenclature at SAP.

SLP: What was the biggest change for you in the last 10 years?
The creation of the Project Delivery team by Toby Dyer. He showed how to organize a team’s work professionally and how to care for people. It was a big change for the better at a moment when a lot of people had left the company and all the others were thinking about it.

SLP: 10 years has already passed. How do you feel about it?
Strangely, I have no strong emotions associated with that. It is a pity that so few people from the initial team from Gliwice and Munich have stayed at the company.

SLP: What was your plan for your career? Are you now in a completely different place than you originally planned to be?
I wanted to be a developer for a long time, despite the fact that I chose to study mathematics. I just thought it would be easier. On the other hand, I didn’t want to remain a programmer forever. To be a good developer you have to be passionate and constantly improve your knowledge of languages and solutions. I prefer difficult problems that can be solved with common sense and ingenuity. The work of an analyst suits this perfectly. Each time I start a new project for a new customer, it’s a bit like changing jobs. The same cool team remains, but the challenges change, including the technologies. There is a reset and the possibility to correct things that didn’t work well last time. I think I’m exactly where I wanted to be.

SLP: What is the funniest memory or story from the 10 years?
Over the years, there have been a lot of funny things; unfortunately, most of what comes to my mind is either too long a story or one that shouldn’t be shared with anybody 😊 Something quick: a skill acquired on the terrace during a cool barbecue evening (quite warm for late autumn) was making mulled non-alcoholic beer with an electric kettle. The beer boiled over a little, and non-alcoholic wine would have been better, but the fun was great.

SLP: Where do you see yourself in the next 10 years? 😊
A beach in the Dominican Republic? A hut in the Alps? To be honest, I don’t know. We live in a dynamic world. On one hand, I’m too lazy to change anything; on the other hand, it seems to me that the experience gathered over the years, working in various teams and projects for different customers, can be used outside the IT world. Our industry shows how to work efficiently under time pressure, and in some places such knowledge could be almost priceless. We will see.

Photo: Ten years ago

Photo: Currently

Thank you, Radek, for the interview. We wish you continued success in your career. Happy Anniversary! The next story will be published soon.

Written by SAP Labs Poland

Cloud Integration – How to Connect to an Amazon MQ service using the AMQP Adapter

This blog describes how you can connect to an Amazon MQ service, a managed message broker service for Apache ActiveMQ, to configure asynchronous message processing using the AMQP (Advanced Message Queuing Protocol) adapter. The AMQP adapter is available for SAP Cloud Platform Integration customers with the 08-December-2019 release. Kindly read the blog from Mandy Krimmel to learn more about the configuration, prerequisites and limits of this new AMQP adapter.

Note: This blog covers non-SAP integration; the screenshots and configuration options given below might differ in visual appearance and technical capabilities due to future upgrades of the Amazon MQ service.

Before you can use Amazon MQ, you must complete the following steps:

Now that you’re prepared to work with Amazon MQ, follow these steps to create an ActiveMQ message broker:

  1. Log in to your AWS account and navigate to the Amazon MQ home page.
  2. Click the Get started button inside the Create brokers tile.
  3. Select deployment and storage type: Based on your requirement, choose the appropriate options. For this blog, I have chosen the options as shown in the given screenshot; then click the Next button.
  4. Configure settings: Based on your requirement, do the required configuration. For this blog, I have configured the minimum options, i.e. Broker Name, Broker instance type, and the Username and Password for ActiveMQ Web Console access, as shown in the given screenshot; then click the Create broker button.
  5. This starts creating the broker, which takes about 15 minutes. Refresh the screen to check the status change.
  6. Once the status changes to Running, click on the newly created broker.
  7. Scroll down to Connections; this lists the ActiveMQ Web Console URL and the wire-level protocol endpoints, including AMQP. By default all inbound traffic is restricted. To be able to access your broker’s ActiveMQ Web Console URL or wire-level protocol endpoints, you must configure security groups to allow inbound traffic.

Enable connections to your broker

  1. In the broker Details section, under Security and network, choose the name of your security group.
  2. The Security Groups page of the EC2 Dashboard is displayed. From the security group list, choose your security group.
  3. At the bottom of the page, choose the Inbound rules tab, and then click on the Edit inbound rules button.
  4. In the Edit inbound rules dialog box, we need to add the following two rules:
    1. A rule for ActiveMQ Web Console access from your system IP.
      • Choose Add Rule.
      • For Type, leave Custom TCP selected.
      • For Port Range, type the ActiveMQ Web Console port i.e. 8162.
      • For Source, select one of the three options, i.e. Custom, Anywhere, or My IP, depending on where you want to be able to access the ActiveMQ Web Console from.
    2. A rule for AMQP endpoint access from your CPI tenant. Based on the CPI tenant region, we need to add all the IP ranges of that region as per the given help documentation.
      • Choose Add Rule.
      • For Type, leave Custom TCP selected.
      • For Port Range, type the AMQP endpoint port i.e. 5671.
      • For Source, leave Custom selected and then type the IP ranges of your CPI tenant region (for neo-eu2, add the IP ranges listed in the help documentation).
  5. Save the changes by clicking on the Save rules button.
  6. Your broker can now accept inbound connections. Click on the ActiveMQ Web Console URL to access it.
  7. Also do the connectivity test from your CPI tenant to the AMQP server.
    1. Open your CPI tenant web tooling and navigate to the Monitor tab.
    2. In the Manage Security section, click on the Connectivity Tests tile.
    3. Open the AMQP tab, provide the AMQP details as given in the screenshot and then click on the Send button.
      • If you get a javax.jms.JMSException: connection timed out exception, this means the inbound rules for the AMQP endpoint have not been set up properly. Kindly follow Step 4 again carefully.
      • If you get a javax.jms.JMSException: General SSLEngine problem exception, this means the AMQP endpoint is accessible but the CPI tenant is not able to validate the Amazon MQ server certificate.
    4. To validate the Amazon MQ server certificate, uncheck the Validate Server Certificate checkbox and click the Send button. This will display the Amazon MQ server certificate chain; download it and upload the root certificate of the chain into your CPI tenant keystore.
    5. Once you have uploaded the root certificate of the chain into the CPI keystore successfully, test the AMQP connectivity again with the Validate Server Certificate checkbox checked. This time you should get a successful response.
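When you save the downloaded chain as a PEM file, the root CA certificate is normally the last PEM block in it. A small sketch for picking it out (the splitting logic and the placeholder certificate contents are illustrative assumptions, not an official SAP or AWS utility):

```python
# Split a PEM chain into individual certificates and take the last one,
# which is normally the root CA certificate to upload to the CPI keystore.
def root_certificate(pem_chain: str) -> str:
    marker = "-----BEGIN CERTIFICATE-----"
    blocks = [marker + part for part in pem_chain.split(marker) if part.strip()]
    return blocks[-1].strip()

# Placeholder chain standing in for the real downloaded certificates.
chain = """-----BEGIN CERTIFICATE-----
...server cert...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...root CA cert...
-----END CERTIFICATE-----"""

print(root_certificate(chain).splitlines()[1])  # -> ...root CA cert...
```
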

Create a queue in Amazon ActiveMQ message broker

To be able to connect to queues or topics in the message broker, you have to create them first. Follow these steps to create a queue in Amazon ActiveMQ:

  1. Click the Manage ActiveMQ broker link on the landing page of the ActiveMQ Web Console. Provide the Username and Password which you set earlier while creating the message broker, and then click the Sign in button.
  2. Click the Queues tab and create a queue with the name Success_Queue. Kindly note that, as per the default broker configuration, all inactive queues get deleted automatically after 10 minutes. An ‘inactive’ queue is one that has had no messages pending and no consumers connected for some configured period of time.

In many integration scenarios, sender and receiver message processing has to be decoupled asynchronously to ensure that retries are done from the integration system/message broker rather than from the sender system.

Follow the steps described below to set up the sample scenario using the Amazon ActiveMQ message broker and the AMQP adapter in SAP CPI.

Setup Scenario With Asynchronous Decoupling

To configure the decoupling of inbound and outbound message processing, you need to configure two processes: one process to receive the inbound message and store it in the Amazon ActiveMQ queue, and a second process to send the message from the Amazon ActiveMQ queue to the receiver backend. This blog describes the configuration using two separate integration flows.
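The store-and-forward pattern behind this setup can be sketched with a simple in-memory queue (purely illustrative: in the real scenario the queue lives in Amazon ActiveMQ and the two sides are separate integration flows; the student records are invented sample payloads):

```python
import queue

success_queue = queue.Queue()  # stands in for Success_Queue in ActiveMQ

# Integration flow 1: receive the inbound message, split it, enqueue the records.
students = ["<student id='1'/>", "<student id='2'/>", "<student id='3'/>"]
for record in students:
    success_queue.put(record)  # the sender is done once the record is stored

# Integration flow 2: consume from the queue and deliver to the receiver backend.
delivered = []
while not success_queue.empty():
    delivered.append(success_queue.get())  # retries happen here, not at the sender

print(delivered)
```

The key property is that the sender only talks to the queue; delivery failures on the consuming side never propagate back to the sender system.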

Configure the Integration Flow Receiving the Message

The first integration flow is configured to receive the message via any inbound adapter. In this sample setup we use the HTTP adapter to receive the student records in XML format, process them with an Iterating Splitter, and then move the individual student records to the Amazon ActiveMQ queue.

Configure the AMQP Receiver Channel

Create the integration flow with the inbound channel required by your scenario, and use the AMQP adapter with the TCP protocol as the outbound adapter. You have to configure the AMQP endpoint details as given in the screenshot. To learn more about each of its configuration options, kindly read the blog

Deploy the Integration Flow

Now you can deploy the integration flow. In this case, the queue with the name Success_Queue has already been created in the message broker, but whatever queue name you provide in the AMQP adapter will be created automatically in the message broker.

Configure the Integration Flow doing the Retry

To consume the messages from the Amazon ActiveMQ queue, you configure a second integration flow with an AMQP sender channel and the outbound adapter needed for your scenario. In this sample configuration we use the HTTP adapter.

Configure the AMQP Sender Channel

Create the integration flow with the outbound channel required by your scenario, and use the AMQP adapter with the TCP protocol as the inbound adapter. You have to configure the AMQP endpoint details as given in the screenshot. Use the same queue name as in the receiving integration flow. To learn more about each of its configuration options, kindly read the blog

Retry Configuration

If an error occurs during the processing of the consumed message in Cloud Platform Integration, the message is not removed from the messaging system but is retried immediately. There is no option to configure a delay in retry processing in the AMQP adapter, because this is not supported by the AMQP protocol. To learn more about this, kindly read the blog
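This redelivery behavior can be illustrated as follows (a conceptual sketch of AMQP-style immediate redelivery, not CPI internals; the flaky backend and attempt limit are invented for the example): as long as processing fails, the message is not acknowledged and is delivered again right away.

```python
def consume_with_redelivery(message, process, max_attempts=5):
    """Deliver a message until processing succeeds; failed attempts are
    retried immediately, with no configurable delay (as in the AMQP adapter)."""
    for attempt in range(1, max_attempts + 1):
        try:
            process(message)
            return attempt  # message acknowledged and removed from the queue
        except Exception:
            continue  # message stays in the messaging system, redelivered at once
    raise RuntimeError("message could not be processed")

# Simulated backend that fails twice before succeeding.
attempts = {"n": 0}
def flaky(msg):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("backend temporarily unavailable")

print(consume_with_redelivery("student-record", flaky))  # -> 3
```
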

Deploy the Integration Flow

Now you can deploy the integration flow.

Execute the scenario

From Postman, make a POST call to CPI HTTPS endpoint with student records:

It will enqueue 5 messages in Amazon ActiveMQ Success_Queue.

The other integration flow will then automatically start polling the messages and POST them to a REST mock service in Beeceptor.

hana-cli: XSA MTA project in VS Code

Recently I wanted to play with hana-cli. I walked through the guide provided by @Thomas Jung and wanted to check out what I can achieve with it.

If you are not familiar with hana-cli, please first check the blog posts/videos prepared by Thomas:

I came up with a simple task for myself: to set up an existing XSA project for development in Visual Studio Code.

It took me a few hours, I learned a lot, and I prepared this blog post. I hope it will help someone.

Why would I want to develop an XSA MTA application in VS Code anyway? There could be a few reasons:

  • GIT console
  • Faster IDE
  • Node modules

But I treat it as a pure fun experiment and learning lesson.

Environment preparation

1. Install Node.js version 10.x or 12.x
2. Clone your XSA project into a folder using the git clone command
3. Add the SAP Registry to your NPM configuration

npm config set @sap:registry=

4. Install hana-cli as a global module

npm install -g hana-cli

Connection setup

Now, to set up the connection between the HANA container and the local project (hana-cli), we need to create a default-env.json file in the db folder. It is a JSON file which contains a set of environment variables and their values. There we need to provide the variables describing the HDI container parameters which were generated for our XSA project.

`default-env.json` example file with a target container binding and a user-provided service:

{
  "TARGET_CONTAINER": "target-service",
  "VCAP_SERVICES": {
    "hana": [
      {
        "name": "target-service",
        "label": "hana",
        "tags": [ "hana", "database", "relational" ],
        "plan": "hdi-shared",
        "credentials": {
          "schema": "SCHEMA",
          "hdi_user": "USER_DT",
          "hdi_password": "PASSWORD_DT",
          "certificate": "-----BEGIN CERTIFICATE-----\nABCD...1234\n-----END CERTIFICATE-----\n",
          "host": "host",
          "port": "30015"
        }
      }
    ],
    "user-provided": [
      {
        "name": "GRANTING_SERVICE",
        "label": "user-provided",
        "tags": [ ],
        "credentials": {
          "schema": "SYS",
          "user": "GRANT_USER",
          "password": "PASSWORD",
          "procedure_schema": "PRIVILEGE_PROCEDURE_GRANTOR_DEFINER",
          "procedure": "GRANT",
          "type": "procedure",
          "tags": [ "hana" ]
        }
      }
    ]
  }
}

Where can we find those properties for our project? – The XSA Cockpit.

Open SAP HANA XS Advanced Cockpit and navigate through:
Organization -> Space -> Your application, and select Environment Variables from the left menu.

In the System-Provided section you should see a configuration similar to the template above. You can copy the sensitive data, but the JSON has to be adjusted according to the above template.

To check if the configuration is correct, type in the terminal:

hana-cli status

If everything is fine, run:

npm start

This runs the start script ("start": "node node_modules/@sap/hdi-deploy/deploy.js --exit --auto-undeploy") and builds the project into the defined container. It is the same step that Web IDE performs when building the db folder.
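For reference, the start script lives in the package.json of the db module; a minimal sketch (the dependency version below is illustrative):

```json
{
  "name": "db",
  "dependencies": {
    "@sap/hdi-deploy": "^3.11.0"
  },
  "scripts": {
    "start": "node node_modules/@sap/hdi-deploy/deploy.js --exit --auto-undeploy"
  }
}
```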

If you did everything well, you can continue working on your project in VS Code now 🙂 I found a lot of useful information in the README of the @sap/hdi-deploy node module. There are also many explanations of how this module works and how to configure it correctly. I recommend reading it to better understand the topic.

Bonus: Cloud MTA Build Tool

As a bonus to this task, I did a test run of the Cloud MTA Build Tool. Using MBT, we can build the whole project into an .mtar file.

1. Install MBT as a global module

npm install -g mbt

2. Build the project with the XSA flag

mbt build -p=XSA

And that’s it!

The *.mtar file should be created in your project directory.

Powering SAP NetWeaver on RHEL 8

This post was originally published here.

SAP NetWeaver forms the technical foundation for many of the SAP Business Applications. SAP and Red Hat have worked jointly to deliver timely support of the SAP technology stack on Red Hat’s latest release of Red Hat Enterprise Linux. On February 27th, SAP officially announced support for SAP NetWeaver-based applications, including SAP Business Suite, on Red Hat Enterprise Linux (RHEL) 8 in production environments. This adds to the existing SAP support for its major database products on RHEL 8, including SAP MaxDB and SAP ASE on Intel 64, and SAP HANA on both Intel 64 and IBM’s Power 9 platform.

Whether it’s supplier relationship management (SRM), customer relationship management (CRM), supply chain management (SCM), product lifecycle management (PLM), or enterprise resource planning (ERP), SAP NetWeaver is at the core of making it possible to integrate these applications and deliver a unified business experience.

With the certification of SAP NetWeaver on RHEL 8, customers can now benefit from RHEL 8’s stable, flexible, and highly available OS foundation across their entire production SAP landscape. Customers may find further details and configuration best practices at SAP Note 2772999.

So, why should you consider RHEL for SAP Solutions in your NetWeaver deployments?

Record-setting performance

When it comes to running NetWeaver on RHEL 8, application response time is an important performance characteristic to consider.

Administrators using the sap-netweaver profile can tune RHEL for running SAP NetWeaver-based applications and improve performance. For example, one of the several variables that the sap-netweaver tuned profile adjusts is related to shared memory. For the SAP NetWeaver application server, shared memory is important for buffering database table data, and for handling data objects shared in the cross-transaction ABAP application buffer (using commands like EXPORT TO SHARED MEMORY / SHARED BUFFER). This can result in faster application response times.
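As an illustration, activating the profile on RHEL typically looks like this (package and profile names as shipped in the tuned-profiles-sap package; run as root):

```shell
# Install the SAP tuned profiles and activate the NetWeaver one
yum install -y tuned-profiles-sap
tuned-adm profile sap-netweaver

# Verify the active profile and inspect one of the adjusted
# kernel shared-memory settings
tuned-adm active
sysctl kernel.shmmax
```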

Enhanced high-availability

Since SAP NetWeaver is at the heart of many critical business applications, downtimes are costly for the business. To help provide high-availability for SAP NetWeaver environments, RHEL offers pacemaker-based cluster resource agents (SAPDatabase and SAPInstance). These resource agents are compatible with Standalone Enqueue Server version 1 (ENSA1), used with SAP NetWeaver deployments, and Standalone Enqueue Server version 2 (ENSA2), used with S/4 HANA 1809 or newer.

To learn more about setting up a highly available SAP NetWeaver deployment check out the Deploying Highly Available SAP NetWeaver-based Servers Using Red Hat Enterprise Linux HA add-on with Pacemaker knowledge base article for guidelines.
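For example, an ABAP SAP Central Services (ASCS) instance resource managed by the SAPInstance agent is created along these lines (the SID, instance number, and profile path are illustrative; see the knowledge base article above for the full cluster setup):

```shell
# Define a pacemaker resource for an ASCS instance using the
# SAPInstance resource agent, pinned into its own resource group
pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
    InstanceName="NW1_ASCS00_ascsnode" \
    START_PROFILE=/sapmnt/NW1/profile/NW1_ASCS00_ascsnode \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 \
    --group g-NW1_ASCS
```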

Efficient, proactive management and security tools

When SAP landscapes are deployed at scale, efficient and proactive management and security tools are needed. Red Hat Enterprise Linux for SAP Solutions includes Red Hat Smart Management and Red Hat Insights to meet these requirements.

Red Hat Smart Management combines Red Hat Satellite with Red Hat Cloud Management Services for Red Hat Enterprise Linux. Together they can help you to provision, patch, configure, and control your development, test, and production systems based on Red Hat Enterprise Linux, regardless of where they are running.

When operating at scale, system management can be a challenge. With Red Hat Smart Management, you can more easily check that your systems have the latest security patches and quickly remediate configuration drift. Additionally, you get auditing capabilities and can report on the historical state of your systems.

Red Hat Insights delivers predictive operating system analytics that help you rapidly identify and remediate threats to availability, security, stability, and performance. Proactive, automated, targeted issue resolution helps your environment operate optimally and avoid problems and unplanned downtime. Red Hat Insights includes more than 1,000 rules, including many specific to SAP system configuration requirements and best practices, to identify vulnerabilities before they impact critical operations.

Enterprise-class stability

Built on Red Hat Enterprise Linux, the RHEL for SAP Solutions subscription offers Red Hat Update Services for up to four years from general availability — including important and critical security patches and fixes — for select minor releases of Red Hat Enterprise Linux. This means that when it comes to building a stable foundation for your critical SAP workloads including SAP NetWeaver, Red Hat has you covered.

World-class support

With Red Hat, support is simple and hassle-free. Red Hat works with SAP and certified hardware and cloud providers to deliver integrated support for your entire environment. Support teams across Red Hat and SAP work together to identify the underlying issue and resolve the problem quickly and efficiently.

To meet the evolving needs of modern businesses, SAP NetWeaver continues to serve as a foundation of the SAP technology stack. When powered by Red Hat Enterprise Linux 8, customers choose an infrastructure platform that lets them focus on their SAP business environment, while relying on an SAP-optimized OS platform and a trusted vendor to support them on the journey towards the Intelligent Enterprise.

Ready to get started with SAP NetWeaver on RHEL? Red Hat Enterprise Linux System Roles for SAP NetWeaver (for example, sap-netweaver-preconfigure) can be used to assist with the OS-specific configuration and setup of SAP NetWeaver. Check out the blog or reach out to us!
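A minimal playbook applying the roles might look like this (the host group name is illustrative; the roles ship in the rhel-system-roles-sap package):

```yaml
---
# Apply the general SAP OS preparation first, then the
# NetWeaver-specific configuration
- hosts: sap_servers
  roles:
    - sap-preconfigure
    - sap-netweaver-preconfigure
```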

How AI unites siloed data and reveals the probability of accuracy across insights

Two of the greatest challenges faced by organizations today are the rising volume of data and the lack of confidence to act on the insights this data reveals. Fortunately, there are AI-fueled data management solutions that directly address these two challenges to make data simple and accessible.

Databases should be both powered by AI and built for AI, meaning they use embedded AI capabilities to improve their day-to-day functionality (powered by AI) while also supporting AI initiatives throughout the entire business (built for AI). For example, marketing analysts could gain access to more extensive, robust data for insights, or shop floor managers could use natural language functionality to ask, with a Google-like request, why a machine might be failing regularly.

The eBook Db2 – The AI Database discusses eight capabilities that make a database powered by AI and built for AI. Two of the “powered-by-AI” capabilities are discussed here, which provide a single view of the overall data and build trust in insights: data virtualization and confidence-based querying.

43 percent say data availability is a barrier to implementing AI

Data Virtualization

Data has not only risen in volume, but in variety as well. It is stored on-premises, on private clouds, and across multiple public clouds in both SQL and NoSQL formats. Because of this, organizations risk their data becoming siloed, or find themselves spending too much time trying to join it together.

Data virtualization, achieved through a combination of data federation and an abstraction layer, helps eliminate these concerns by allowing all users to interact with multiple data sources from a single access point. This remains true even when the data diverges in terms of format, type, size, and location. The single access point provides greater simplicity for data professionals, allowing them to see and use all data across the organization without wasting time moving it around with ETL (extract, transform, load) processes.

One access point also aids governance and security, allowing a single point of entry to be monitored rather than one for each data repository. There are also cost savings on latency and bandwidth issues due to the reduced need for data transfers. So, no matter how divergent or voluminous data becomes, data virtualization helps access all of it in a simple, meaningful way.

Confidence-based querying

Even when data is accessible, some still find the insights produced difficult to trust. Answers to queries may lack the nuance required to find close matches. It is a very binary process: either the information matches the query and the result is returned, or it does not and nothing is returned.

Confidence-based querying delivers SQL query results based on probabilities or “best matches” rather than a yes-or-no answer. This is accomplished by adding machine learning extensions to SQL through the implementation of deep feed-forward neural nets. Simply put, it identifies when “likeness” and the likelihood of a match are high.

One of the best examples of this is the identification of a potential suspect from a police database using eyewitness testimony. Because the eyewitness won’t be exact on height, weight, and other physical attributes, it is often necessary to manually create a SQL statement that looks for a range of values around what they reported. Using a confidence-based query, a probabilistic SQL statement can be used instead, which provides the best match against the overall witness profile. This is particularly valuable when a close match would otherwise have been excluded because it fell outside the manually created range on just one dimension.
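The difference is easy to see outside any database; a toy sketch in the shell (invented data and plain awk scoring, not Db2’s actual SQL extensions), where the witness reports roughly 175 cm and 75 kg:

```shell
# Invented suspect list: name, height_cm, weight_kg
cat > suspects.txt <<'EOF'
alice 170 65
bob 183 90
carol 179 76
EOF

# Binary range filter (+/- 3 on each attribute): carol's height of 179
# falls just outside [172, 178], so nothing is returned at all
awk '$2 >= 172 && $2 <= 178 && $3 >= 72 && $3 <= 78 {print $1}' suspects.txt

# Scored "best match": rank by squared distance to the reported profile;
# carol is the closest overall despite the one-dimension miss
awk '{d = ($2-175)^2 + ($3-75)^2; print d, $1}' suspects.txt | sort -n | head -1
```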

In this way, confidence-based querying extends what SQL engineers can accomplish, allowing them to run similarity and dissimilarity queries, inductive reasoning queries, queries related to pattern anomalies, and more.

Where data scientists would previously have been necessary, SQL engineers can act on their own, saving time and increasing the value of their work. Data scientists, who are already overburdened with tasks, will also appreciate the relief.

How to set up your organization for robust, confident insights

Implementing data virtualization and confidence-based querying may be easier than you think. Both are core components of IBM’s data management strategy anchored by IBM Db2 and IBM Cloud Pak for Data, which is built on Red Hat OpenShift Container Platform.

To learn more about technologies IBM uses to deliver data management that’s both powered by AI and built for AI read our latest eBook, Db2 – The AI Database. It has more information on data virtualization, confidence-based querying, and six additional features positioned to help you succeed on the Journey to AI.

Read the AI Database eBook

Got Questions? Ask our Experts!

Schedule a free one-on-one consultation with our experienced data professionals and distinguished engineers who have helped thousands of clients build winning data management strategies.

Accelerate your journey to AI.

Forecast Configuration Guidelines to Cope With Hoarding and Out-of-Stocks in the Retail Supply Chain

The current uncertain economic environment and legal restrictions have made it hard for retailers to plan their operations; for forecasting and replenishment planning in particular, the effects of the coronavirus are dramatic. In most countries many shelves were out of stock for several weeks, including paper products, pasta, rice, and several sauces and soups. The usual planning methods no longer work, since consumer demand is shifting: for some products demand is exploding, for others it is drastically reduced. Products are out of stock because of hoarding, and people have to cook more often and more in general, with kids being at home all day (I can assure you of this one!). Some categories, such as hand soaps, are selling out very quickly even after the initial post-hoarding replenishment runs. Instead of buying as needed on a very frequent basis, planning ahead is now required so that people go out just once a week, which probably has a negative impact on fresh food such as bakery, meat, and fish. Store traffic is also different for stores in the vicinity of closed borders.

In some cases these effects will, most likely and hopefully, last only for a short period of time, but long-term shifts in consumer behavior can also be expected, e.g. that consumers will keep the online shopping habits they have started for some products. Nobody really knows at this point exactly how things will develop. There is also, as now evidenced in Asian countries, the possibility of a second wave of corona-related infections even after the first wave has subsided. But one thing is clear: manual intervention is now needed to stabilize the supply chain and to help set it up again for smooth operation after the crisis. And the mitigation strategy needs to vary by category and location.

This is why we collected some food for thought around forecast configuration in note 2909006, covering both short- and longer-term strategies. The idea is to provide recommendations and advice on how to configure the Unified Demand Forecast (UDF) or the forecast of SAP Forecasting & Replenishment, in order to help minimize the impact of the crisis on forecast accuracy and the downstream processes.

We very much hope that you find this useful and that it helps you run your business better.