Bank Validation in Payment Information Portlet in SAP Employee Central

This is certainly not a new topic, but I have observed that the related information is available only in bits and pieces. Therefore, I would like to bring that information together and add some important points on top that can come in handy while configuring bank validations or troubleshooting payment information.

Payment Information in EC is a self-service portlet, so there is a high probability that an employee may end up entering incorrect information. Bank details should be maintained in compliance with country-specific requirements. One important attribute of payment information is the routing number, and customers want to make sure that it is correct.

SAP provides a few deep validations. The one relevant for us here is Bank Account Validations (Company System and Logo Settings -> Enable Bank Account validations).

The validations are pre-delivered content for all countries supported by SAP. They can be downloaded from SAP Help and imported into your system like any other data: go to 'Import and Export Data' and import the file by selecting the object 'Country Specific Validations'.

These validations can also be customized for every country (Manage Data -> Country Specific Validations). Changing these validations is not recommended, because SAP has configured them according to the legal requirements of each country; however, if there is a legal change or a customer-specific requirement, for example to support integrations, you can update this configuration.

Firstly, we have an MDF object for Bank, but not every customer enables it, for various reasons. So I would like to divide this blog into two parts: when the Bank MDF is enabled and when it is not.

  • Bank MDF is enabled in SuccessFactors Employee Central (EC)

In this case the bank master data can be created manually or imported into EC. The system performs the deep validations when you create or import bank data. If you don't want that check, you can turn it off (Company System and Logo Settings -> uncheck Bank Account validations).

The 'Bank' field can be enabled on the Payment Information Detail object (Configure Object Definition -> Payment Information Detail -> enable the field 'Bank'). The Bank object and the payment information of an employee are then connected via this 'Bank' field. When the employee selects the bank, the routing number/Business Identifier Code is defaulted from the Bank object. This ensures that correct bank information is stored in the payment information. Furthermore, fields like the routing number can be made read-only so that the employee can't change them.

  • Bank MDF is not enabled in SuccessFactors Employee Central (EC)

In this case, when an employee updates the bank details in Payment Information, they are validated by the country-specific validations as shown below.

In the above screenshot you can see that there are three options to validate the routing number. For example, if CAN (Canada) supports only the ISO SWIFT format validation, then you cannot select Algorithm here; if you try, you will get the error shown below. The validation is performed according to the ISO SWIFT format selected (the attribute above Validation Type). In our case, 9c means up to 9 characters are allowed, so basically only the length is validated. To validate the complete routing number you can write a business rule; this may become complex, depending on the validation checks required.

For some countries that have algorithm-based validation, such as the US, not only the length but the complete format is validated (a US ABA routing number, for example, must be exactly nine digits and satisfy a checksum). In short, different countries have different validation formats.

The better way to validate bank details is the first option (Bank MDF enabled), because you don't need to maintain or worry about extra checks. You just need to make sure that your bank master data has been imported correctly.

Migrating SAP Fiori Applications from Neo to Cloud Foundry using SAP Business Application Studio


Overview

SAP Business Application Studio enables you to develop SAP Fiori applications for various scenarios, and deploy to different target runtimes.

One of the most prominent scenarios is to develop an SAP Fiori application and deploy it to the cloud environment.

In the following blog post I will guide you through migrating an application that was developed using SAP Web IDE Full-Stack and deployed to the Neo environment into an application built with SAP Business Application Studio and deployed to Cloud Foundry.

In case your application is deployed to the ABAP runtime, please follow this guide.

Environment Setup

  1. Create a subaccount in your global account, or use an existing subaccount and skip to step 3.
    Note: You have to enable Cloud Foundry, since the SAP Fiori application runtime is the Cloud Foundry environment.
  2. For a trial account, make sure that your subaccount has an SAP Business Application Studio entitlement. If the entitlement is missing, add it from the Entitlements menu.
    In addition, add an entitlement for Portal/Launchpad.
  3. Navigate to the Subscriptions menu and search for SAP Business Application Studio. Click the tile and click Subscribe.
  4. In case you plan to use the managed approuter, subscribe to Launchpad or to Portal.
  5. In a non-trial account, assign yourself the developer role; otherwise skip this step:
    1. Navigate to Security > Trust Configuration.
    2. Click the sap.default link.
    3. Enter your email address.
    4. Click Assign Role Collection.
    5. Select "Business_Application_Studio_Developer" from the dialog box.

Trust

Role

6. Launch SAP Business Application Studio:

    1. Navigate to Subscriptions menu and locate the SAP Business Application Studio tile.
    2. Click Go to Application.

The SAP Business Application Studio Landing page or Dev Spaces UI opens.

7. Create an SAP Fiori Dev Space:

    1. Click Create Dev Space.
    2. Enter a name for the dev space.
    3. Select the SAP Fiori type (from the left side).
    4. Click Create Dev Space.
    5. Launch the dev space by clicking on the dev space name.

8. Import your destination:

    1. Navigate to your Neo account and export the existing destination required for the SAP Fiori app to your PC.
    2. Navigate to Connectivity > Destinations in your new subaccount and import the existing destination.
    3. Make sure that the following properties are set in the Additional Properties of the destination:
      1. HTML5.DynamicDestination = true
      2. WebIDEEnabled = true
      3. WebIDEUsage = odata_abap,dev_abap
        In case your service is not an ABAP catalog, use WebIDEUsage = odata_gen instead.
        Do NOT maintain both odata_abap and odata_gen as WebIDEUsage.
      4. HTML5.Timeout = 60000
    4. Make sure that your subaccount is connected to your on-premise system with a Cloud Connector.

Migrating your Project

During migration, you import your existing project into a temporary folder, create a shell MTA project, add an HTML5 module, replace the generated webapp folder with the one from the temporary folder, and adjust three files in the migrated project.

Then you can build, deploy, and run the migrated application on Cloud Foundry.

Import your project

You can clone your project from Git using the command line in SAP Business Application Studio.
Note: See Connecting to External Systems for more information on using a destination with the Cloud Connector for an on-premise Git setup.

Alternatively, import your project as follows:

  1. Export your project (e.g. “migrationNeo” project) from SAP Web IDE Full-Stack to your PC.
  2. Unzip the project on your PC.
  3. In SAP Business Application Studio click File > Open Workspace and create a workspace in the projects folder.
  4. Drag and drop the “migrationNeo” project to the SAP Business Application Studio.

Create MTA project

  1. From the top toolbar, click File > New Project from Template.

  2. In the editor that opens, choose the "Basic Multitarget Application" tile and press Start.

3. Enter the name of your root project, e.g. MTA, and press Finish. The project is generated with an
mta.yaml file.


Add HTML5 Module

  1. Right-click the mta.yaml file and choose the option to create a new MTA module from a template.
  2. In the editor that opens, choose "SAP Fiori Freestyle Module" and press Start.
  3. Enter the name of the module; it should be the same as your original project from SAP Web IDE Full-Stack, e.g. "migrationNeo".
  4. Choose the "SAPUI5 Application" template tile and press Next.
  5. Choose the approuter configuration. Use "Managed Approuter" if you have subscribed to Portal/Launchpad, and give a business name to your application, e.g. "myservice", which will be reflected in the runtime URL. Using "Standalone Approuter" requires an application runtime for each project and is not the approach proposed in this guide.
  6. The next two steps can be kept with their default values. An HTML5 module named migrationNeo is then added to your MTA project.

Adjust your HTML5 Module

  1. Replace the existing webapp folder with the original webapp folder of the project copied from SAP Web IDE Full-Stack.
  2. Open the index.html file, or any HTML page used for running the application.
    Update the SAPUI5 bootstrap to an absolute URL: src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
  3. Open the manifest.json
    1. Add the sap.cloud service on the root level.

          "sap.cloud": {
              "service": "myservice"
          },

    2. Update the datasource URL from absolute to relative, i.e. remove the leading "/". If you access the datasource in JavaScript code, remove the leading "/" there as well.
      For example, "uri": "/sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT/" is changed to "uri": "sap/opu/odata/iwfnd/RMTSAMPLEFLIGHT/",

    3. Open xs-app.json and ensure you have a route to your destination.
      The route originally resided in the neo-app.json file.

          {
            "authenticationType": "none",
            "csrfProtection": false,
            "source": "^/sap/opu/odata/",
            "destination": "<your destination name>"
          },

Build Deploy and Run your SAP Fiori Application

  1. Right-click the mta.yaml file and choose "Build MTA". As a result, a transportable MTA_0.0.1.mtar file is created under the MTA archives folder.

2. Connect to your target Cloud Foundry space (this is done once):

    1. From the toolbar, choose View > Find Command.
    2. In the dialog, type CF Login.
    3. Enter your credentials.
    4. Choose organization and space.

3. Right-click the "MTA_0.0.1.mtar" file and deploy your MTA project to Cloud Foundry.

4. After the deployment is completed, go to your cockpit, choose the "HTML5 Applications" menu,
and run your application.

Conclusions

You have accomplished the following tasks:

  1. Set up the SAP Business Application Studio development environment.
  2. Migrated an SAP Fiori application, developed with SAP Web IDE Full-Stack and consuming data from the Neo environment, to the Cloud Foundry environment using SAP Business Application Studio.
  3. Ran your deployed application on Cloud Foundry.

You can continue and learn about SAP Fiori development using SAP Business Application Studio in the formal documentation here and in this blog post.

SAP Cloud Platform: Open Innovation for Increasingly Complex Industry Challenges

Executives are always closely watching shifts in regulations, competition, market strength, and supply networks. During uncertain times, it is more critical than ever for executives to understand how to transition to digital models faster, access talent that facilitates this shift, and build capabilities that allow their business to grow.

Unfortunately, the reality of everyday operations is often messier than any one person expects. Accounting may bypass payroll processing by cutting an employee a check for missing wages so it can close month-end financials on time. Meanwhile, facility management may develop relationships with new suppliers on their own, yet forget to register them as an approved vendor.

While these events may be considered mere nuisances, they should never be taken lightly. Core business practices are shaped by the environment and people that use them – and when left unchecked, they can become too complex to address critical challenges that business leaders face during times of uncertainty.

Innovating the business core with cloud industry solutions

At a moment’s notice, seemingly minor issues can turn into complex challenges. A facility partner can become a major competitor, and a competitor can sign on as a strategic partner. Boundaries between organizations may shift and impact each other in vastly different ways. Even vendor partnerships in one business area can be forged or broken in another – without either one knowing anything about it.

Yes, there’s always more complexity than anticipated. However, with the introduction of SAP’s industry cloud solutions built on SAP Cloud Platform, businesses can openly innovate new capabilities and applications to handle complex challenges in core functions with resilience, efficiency, and flexibility.

This recent advancement leverages SAP Cloud Platform, which is part of the SAP Business Technology Platform portfolio, to help each SAP customer become one forged ecosystem focused on the industry in which it competes. This new reality is brought to life by combining the power of all organizational functions, partners, and technologies to deliver more value to their customers.

Part of the value of SAP’s industry cloud is the opportunity to use an open cloud platform and solutions to rethink core business functions in industry-relevant ways. Yet, I find more joy in sharing how our customers are uncovering their unique edge in ways that improve people’s way of life.

Eneco: Supporting green electricity with cloud integration

Eneco Holding N.V.​ is one of the most prominent investors in sustainable energy, serving 5.7 million customers and partners in the Netherlands, Belgium, and the United Kingdom. Like most utility providers, the company faces rapid change and transformation across its processes.

But with a cloud-first strategy, Eneco is ready to keep up with every evolution. The company started with SAP Cloud Platform Integration Suite, which is scalable enough to support cloud-to-cloud integration and hybrid deployment models. This strategic move reduced its IT spend for integration middleware to between 50% and 60% of its annual total cost of ownership. Simultaneously, the ease of use and flexibility of the solution helped the company go live in a matter of weeks.

Danone: Fostering stewardship with capital expenditures

Like most consumer goods companies, Danone S.A. had a legacy capital expenditure (CAPEX) system built more than a decade ago and based primarily on e-mail. This system led to poor user experiences such as no tracking and reporting capabilities, cumbersome executive approval processes, and a lack of flexibility to adapt to shifts in workflows and business requirements. Even more frustrating was the inability to implement key performance indicators for planet sustainability in the approval process.

By running SAP HANA, SAP Fiori apps, and intelligent technologies from SAP on SAP Cloud Platform, Danone created a next-generation CAPEX approval process that supports an easy-to-use mobile front end for anytime, anywhere tracking and analytics. This one source of truth provides the financial visibility needed to measure and approve CAPEX projects that will achieve innovative and ambitious 2030 environmental goals, including a 50% reduction in CO2 and 100% renewable electricity.

Taronga Conservation Society: Inspiring lasting connections efficiently

 Committed to conservation, research, and education, Taronga Conservation Society Australia (Taronga) operates two local zoos in Sydney and Dubbo. The public organization continuously seeks to improve its animal habitats and enhance its visitors’ experience – but this mission also meant that it needed to become as efficient as possible.

Taronga decided to replace aging legacy systems, cumbersome processes, and paper-based reporting with SAP S/4HANA Cloud, SAP Cloud Platform Workflow Management, and SAP Analytics Cloud. This accelerated data transformation project opened the door to enterprise-wide information governance for general ledger accounts, providing an optimal user experience and greater transparency for admissions revenue and expense management. The organization also embedded analytics into existing applications to automate financial and management reporting and adapt quickly to rapidly changing market needs with insights from scenario analysis, forecasts, and growth predictions.

Driving growth with industry sustainability

All business innovations start with an idea inspired by a desire to fix a problem and unlock more value from current experiences. This fact is true whether a ground-breaking product or service is created or an accounts payable process is reimagined. One may gain more attention than the other, but they are equally crucial to the company’s overall success.

This appreciation of the core business makes SAP’s industry cloud solutions such a promising development for SAP Cloud Platform. The solutions form an innovation space that holds all the tools and content necessary to build and deliver change with agility and speed to remain competitive.

I’d love to hear how you’re using the SAP Cloud Platform for your cloud innovations so please share your feedback or comments.

To learn more, go to www.sap.com/BTP.

Deconsolidation with Nested Handling Units


Overview

This blog explains how to conduct deconsolidation of nested handling units (HUs) in an EWM environment. It primarily focuses on understanding the settings of a deconsolidation POSC and on exploring whether EWM can create HU warehouse tasks (HU WTs) for the lower-level HUs once the higher-level HUs are condensed at the IB03 step.

There can be a business case where products are packed into two levels, say box level and case level, making them nested HUs.

The requirement can then be to deconsolidate only the top-level (case-level) HU and to put away by box-level HU.

The process incorporates three steps:

  1. IB01 - Unload
  2. IB02 - Deconsolidation (we will check the side effects of keeping IB02 for nested HUs in detail)
  3. IB03 - Putaway

Before proceeding into the details, it is recommended to activate the single follow-on task creation in the customizing node below.

When this checkbox is selected, the system creates one queue for N handling units (a 1:N ratio) instead of one queue per HU, which makes the system faster; this is also recommended by SAP.

If the setting is not activated, queues are created as shown below, in the form WMTH<HANDLING UNIT>. If the number of handling units is large, queue processing will be slow and system performance will suffer.

If the checkbox is enabled, the system creates one queue for N HUs, written as shown below, which makes the system faster.

Now, coming to the main point: how deconsolidation with nested HUs behaves with the IB01, IB02 and IB03 steps.

Case I: When IB02 is enabled with Product WT.

Consider that the packing specification is created as below:

   1 Box= 5 EA

       1 Case= 10 Boxes= 50 EA

With automatic packing in the inbound delivery, nested HUs are created: the top case-level HUs are numbered from the 7 series and the box-level sub-HUs from the 8 series number range.

Now, the moment the IB01 unload task is created and confirmed, the system directly creates the WT for IB03 first and then for IB02.

The final product WT, in waiting status 'B', is created before the deconsolidation step, so we lose the option to put away by box-level HU; no HU WT is created.

After the deconsolidation step and closure of the case-level HU, we are left with just the final product WT, which is not what we want. Hence the deconsolidation step IB02 does not meet the requirement in the case of nested HUs.

Case II: IB01 is enabled with Product WT.

With IB01 configured with Product WT, the system first creates the WT for IB03 and then for IB01.

Once again the final tasks are product warehouse tasks, which is not what we intended; the system should condense the top-level HU and allow a final task with the sub-level HU.

After closing the HU at the deconsolidation station we are left with the tasks below: the final tasks are open, but as product WTs.

It looks like this setup (IB02 deconsolidation in a POSC combined with nested HUs) will not work. I even tried an HU type check in the final storage type; this also did not work, because in the program logic for IB03 the system ignores the HU type check.

The SAP documentation suggests that an HU WT can be created only if no deconsolidation is required, which implicitly means that IB02 should not be contained in the POSC steps.

 Conclusion:

From all of the evidence above, it looks like for deconsolidation the system always creates a product WT at the IB03 step. I would be quite happy and eager to learn if anybody has achieved this feat using IB02 and IB03, with the final task as an HU WT; at least I could not achieve it.

How can the final task be created as an HU task for nested HUs?

It looks like this can be achieved not with deconsolidation (IB02) but with packing (IPK1) as an external step. Hence the POSC incorporates IB01, IPK1 and IB03.

Confirm the complex unloading HU WT and the system creates the task for IPK1, which is packing; finally an HU warehouse task :). The HU is still nested.

Confirm the IPK1 HU WT and condense the higher-level HU 70010000072; the system creates the final IB03 task as an HU task, which is what we were trying to achieve.

Put away the HU.

The stock is updated in the warehouse monitor at the box-level HU, which is what we were aiming for.

Thanks for reading; feedback and corrections for this blog are appreciated.

Shailesh Mishra

Why Intelligent Finance Has Reached A New Tipping Point

Your ‘How To’ Guide To Finance Value Creation

At SAP, we’ve been rethinking the very purpose of an ERP solution for finance. Rather than simply focusing on finance efficiency for transactional processing, we’ve shifted our attention to value-creating activities with a simplified architecture. We’ve reshaped and revisited each finance process. The result is a new tipping point in the era of intelligent finance.

Whether your own need for change stems from evolving customer expectations and behaviours, adoption of new business models with complex transactions around usage-based revenue, demand for greater agility and resilience or simply the need to reduce operational costs, the finance function is evolving significantly, impacting the demands of the system landscape.

By harnessing the intelligence of Machine Learning and AI, our customers are now able to drive finance strategy, business insights, risk, compliance and cost reduction with assurance and automation like never before. We’ve accelerated time to value in ERP migrations, solving the need to balance change with business continuity through fast finance transformation with a 360-degree view. We’ve empowered internal and external stakeholders with more confidence and visibility via streamlined, standardized treasury processes, giving more control to the treasurer.

And we’ve added intelligent capabilities into the principle of continuous closing, increasing efficiencies, enriching financial data with additional analysis and multi-dimensional managerial insights for faster business steering.

We’d like to show you how this disruptive technology leapfrog is a fundamental game changer for your business.

Our webinar series, Empowering Finance Value Creation – The New Tipping Point shows you customer examples of how your peers are benefitting from advanced intelligent finance and how they’ve achieved it. Through five webinars running every Thursday for 45 minutes, you can learn how to unlock new levels of performance across your organization, simplify key decision making, and automate time-consuming finance processes.

Webinar topics include:

  1. 11 Feb: The Catalyst For Change
    How to increase efficiency and automation and calculate your business case for finance transformation.
  2. 18 Feb: Perfect Process – Prechecks To Publishing
    Advantages of the end-to-end continuous close and customer examples in action.
  3. 25 Feb: Plan Your Journey To 360º View of Finance
    How to realize fast finance transformation with SAP's revolutionary approach.
  4. 4 March: Real Time Financial Risk Management and Smart Trading at SAP
    How SAP's Treasury organization manages financial risks with real-time insights, increased efficiency and automation.
  5. 11 March: Real Time Cash Visibility and Payment Optimization at SAP
    How SAP's Treasury organization manages Cash and Payment Operations efficiently in a truly virtual and connected environment.

We believe your ERP system should act like your smartphone, pushing notifications to you when something changes, providing easy communication, and acting as a digital assistant to ensure you’re always connected and able to receive information as soon as possible in a simple and intuitive format.

Finance has reached its own digital transformation tipping point and has changed forever. Find out how much it can do for your finance department. You can register for the Webinar Series here.

Chris Johnston is Head of Finance and Risk, SAP EMEA North Customer Solution Advisors.

Replication of Cost Center Manager from SAP ERP Controlling to Employee Central

Introduction

Program ODTF_REPL_CC is used for replication of Cost Centers from SAP ERP Controlling to Employee Central, including the following data: Long Code with the Controlling Area in it, Short Code, Start Date, Status, Name, Description, Legal Entity (aka Company Code) and Cost Center Manager.

In this blog post, we will share details about replication of the Cost Center Manager field.

Details

In SAP ERP Controlling, the Cost Center Manager is stored in table CSKS in field VERAK_USER (User Responsible).

In Employee Central, the Cost Center Manager is stored in object Cost Center in the standard field costCenterManager.

Program ODTF_REPL_CC provides the following four options for replication of Cost Center Manager:

  1. Read UserID from field VERAK_USER (User Responsible);
  2. Read UserID from field VERAK (Person Responsible);
  3. Set UserID in BAdI ODTF_CO_REPL_IDOC_COST_CENTERS;
  4. Leave UserID empty.

Options 1 and 2 are not used in production, as values of fields VERAK and VERAK_USER cannot be accepted by a User field of Employee Central, which is explained below.

Option 4 is used for replication of Cost Centers without the Cost Center Manager field, which is the most probable scenario in production.

Option 3 with the BAdI is the only option that can be used for replication of the Cost Center Manager in production, as evaluation of the EC UserID requires field VERAK_USER of table CSKS as well as Infotypes 0000 and 0105, which means that the Infotypes must be synchronized with the Job Information in Employee Central. We will share an example of the BAdI below in this blog post.

The table below summarizes differences between a user field in Employee Central and field VERAK_USER in SAP ERP Controlling.

Table 1. Differences between fields VERAK_USER (User Responsible in ERP) and costCenterManager (Cost Center Manager in EC)

Field VERAK_USER of table CSKS in ERP:
• The value of the field is the SAP User Name limited to 12 characters, which comes from transaction SU01 in ERP (e.g. CGRANT).
• The field does not check the assignment** of the Personnel Number to the SAP User. Moreover, the field can be time-independent.***

Field costCenterManager of Cost Center in EC:
• The value of the field is the EC UserID, which matches* the Personnel Number in SAP ERP HCM (e.g. 00001234).
• The field is time-dependent and checks whether the Employee is active in EC on the start date of the Cost Center.

* For details about the mapping of the EC UserID to the SAP Personnel Number, see the guide "Employee Central Core Hybrid: Handling Employee Identifiers".
** The Personnel Number is assigned to the SAP User in Infotype 0105 Subtype 0001.
*** The time-dependency can be checked in transaction OKEH in ERP.
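As a quick side check before setting up the replication, you can see which cost centers currently have a user responsible maintained in ERP with a simple selection on table CSKS. The following is only a sketch; the controlling area value '1000' and the selected fields are illustrative:

select kostl, datab, datbi, verak, verak_user
  from csks
 where kokrs = '1000'
   and verak_user <> ''
 order by kostl;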

If Option 3 "Set Responsible Manager in BAdI" is selected, then BAdI ODTF_CO_REPL_IDOC_COST_CENTERS evaluates the EC UserID. It is saved to the IDoc field ASS_MGR_EE_TEXT and sent to Employee Central as shown in the diagram below:

Figure 1 Replication chain of Cost Center Manager from ERP to EC


* Externalized parameter PERSON_RESP_TARGET_FIELD is used for mapping of the IDoc field ASS_MGR_EE_TEXT to field costCenterManager of Cost Center. For details about the mapping, see guide “Replicating Cost Centers from SAP ERP to Employee Central Using SAP Cloud Platform Integration as the Middleware”, § 5.3.5, p. 28.

Let us suppose that field VERAK_USER in table CSKS is time-independent and contains up-to-date information about Cost Center Managers, that Infotype 0000 contains up-to-date information about hire and termination dates, and that Subtype 0001 of Infotype 0105 contains up-to-date information about the assignment of Personnel Numbers to SAP Users. Then we can use the following code to evaluate the UserID of the Cost Center Manager.

Code 5. Evaluation of UserID of Cost Center Manager with use of BAdI ODTF_CO_REPL_IDOC_COST_CENTERS.

METHOD if_odtf_co_repl_idoc_cost_cent~modify_cost_center_extractor.
  DATA: lt_p0000     TYPE STANDARD TABLE OF p0000,
        ls_p0000     TYPE p0000,
        lv_pernr     TYPE pernr_d,
        lv_start_old TYPE datum,
        lv_start_new TYPE datum.

  LOOP AT cs_cost_centers_idoc-cost_centre ASSIGNING FIELD-SYMBOL(<ls_cost_centre>).
    LOOP AT <ls_cost_centre>-cost_centre_attributes ASSIGNING FIELD-SYMBOL(<ls_cost_centre_attributes>).
      CLEAR: lv_pernr, lv_start_new, lv_start_old.

      " Convert the SAP user name (VERAK_USER) to the Personnel Number
      CALL FUNCTION 'ODTF_CC_GET_PERNR_FOR_USER'
        EXPORTING
          user  = <ls_cost_centre_attributes>-ass_mgr_ee_user_account_id
        IMPORTING
          pernr = <ls_cost_centre_attributes>-ass_mgr_ee_text.

      IF sy-subrc <> 0.
        CLEAR <ls_cost_centre_attributes>-ass_mgr_ee_text.
      ELSE.
        WRITE <ls_cost_centre_attributes>-ass_mgr_ee_text TO lv_pernr.
        lv_start_old = <ls_cost_centre_attributes>-validity_period_start_date.

        " Read Infotype 0000 (Actions) once per Personnel Number
        READ TABLE lt_p0000 TRANSPORTING NO FIELDS WITH KEY pernr = lv_pernr.
        IF sy-subrc = 4.
          CALL FUNCTION 'HR_READ_INFOTYPE'
            EXPORTING
              tclas           = 'A'
              pernr           = lv_pernr
              infty           = '0000'
              begda           = '19000101'
              endda           = '99991231'
            TABLES
              infty_tab       = lt_p0000
            EXCEPTIONS
              infty_not_found = 1
              invalid_input   = 2
              OTHERS          = 3.
          IF sy-subrc <> 0.
            CLEAR <ls_cost_centre_attributes>-ass_mgr_ee_text.
          ENDIF.
        ENDIF.

        " Check whether the employee is active (STAT2 = '3') on the cost center start date
        LOOP AT lt_p0000 TRANSPORTING NO FIELDS
          WHERE pernr = lv_pernr
            AND stat2 = '3'
            AND begda <= lv_start_old
            AND endda >= lv_start_old.
        ENDLOOP.

        IF sy-subrc <> 0.
          " Not active on the start date: assign the manager only from the hire date on
          CLEAR <ls_cost_centre_attributes>-ass_mgr_ee_text.
          SORT lt_p0000 DESCENDING BY pernr begda.
          LOOP AT lt_p0000 INTO ls_p0000
            WHERE pernr = lv_pernr
              AND stat2 = '3'
              AND begda >= lv_start_old
              AND endda <= <ls_cost_centre_attributes>-validity_period_end_date.
            lv_start_new = ls_p0000-begda.
          ENDLOOP.
          IF sy-subrc = 0.
            " Split the validity period at the date the employee becomes active
            <ls_cost_centre_attributes>-validity_period_start_date = lv_start_new.
            APPEND <ls_cost_centre_attributes> TO <ls_cost_centre>-cost_centre_attributes.
            <ls_cost_centre_attributes>-validity_period_start_date = lv_start_old.
            <ls_cost_centre_attributes>-validity_period_end_date   = lv_start_new - 1.
          ENDIF.
        ENDIF.
      ENDIF.
    ENDLOOP.
  ENDLOOP.
ENDMETHOD.

In the code, the value of field VERAK_USER is converted to a Personnel Number and added to the IDoc node "Cost Center Attribute" starting from the date when the Personnel Number became active.

Conclusion

Replication of Cost Center Manager is possible only with use of BAdI and requires synchronized data of Infotypes 0000 and 0105 in SAP ERP and Job Information in Employee Central.

Reference list

  1. Latyshenko, V. V., 2020. The Core Hybrid integration model on the example of Cost Centers. SAPinsider, [online]. Available at https://www.sapinsideronline.com/wp-content/uploads/2020/12/The-Core-Hybrid-Integration-Model-on-the-Example-of-Cost-Centers.pdf (Accessed: 13 January 2021).
  2. Replicating Cost Centers from SAP ERP to Employee Central Using SAP Cloud Platform Integration as the Middleware. Document Version 2H 2020 – 2020‑10‑16. Available at https://help.sap.com/doc/6e943d18c1f347b88e91b1e605d502e2/2011/en-US/SF_ERP_EC_CC_HCI_en-US.pdf (Accessed: 13 January 2021).
  3. Employee Central Core Hybrid: Handling Employee Identifiers. Document Version 1.4 – 2020‑05‑31. Available at https://d.dam.sap.com/a/Q3ABoSy/IDP%20-%20Employee%20Central%20Core%20Hybrid%20-%20Employee%20Identifiers%20V1.4.pdf (Accessed: 13 January 2021).

Data Integration & Data Source Connectivity in SAP Data Warehouse Cloud


SAP Data Warehouse Cloud helps us build a stable virtual business and semantic layer across a heterogeneous, ever-growing and ever-changing system landscape. Technology changes rapidly, but the ideas, values and guidelines our analytics business is based on typically remain stable and change only very slowly. If they change, they do so because our surrounding business changes, not because our infrastructure moves to a newer technology.

Unfortunately, the last part of the above statement does not always hold true. When there is a change in the technology we use to run our daily business, we often make decisions concerning our analytics strategy based on that technology. Technology influences the way we do analytics. But it should really be the other way round, if not disconnected entirely. The best-case scenario is an environment where our analytics business remains untouched by any technology changes and only needs adjustments if our real-world business environment demands it.

Therefore it is a key requirement to separate semantics (how is a specific KPI calculated?) from technology (how and where is my data stored?).

SAP Data Warehouse Cloud offers an environment where this is possible. With tools like the Business Builder, Graphical View Builder or SQL View Builder and Data Flows at hand one can create a semantic layer which stays independent of any change in the underlying data layer. Any analytics tool connected to the semantic layer is not impacted by changes to the data-managing basis.

In order to establish this separation, SAP Data Warehouse Cloud offers different ways in which remote data can be integrated. The questions typically asked fall into these categories:

  • Which systems can SAP Data Warehouse Cloud connect to?
  • What artefacts / entities from connected systems can be consumed in SAP Data Warehouse Cloud?
  • Which data access methods are supported?
  • Which authentication techniques can be used?
  • Which tooling is required to integrate on-premise data sources?

With SAP Data Warehouse Cloud we follow different approaches in parallel to cover all the aforementioned points and to integrate your data.


Connectivity Overview

Out-of-the-box Connections

Integrating data sources by pulling (Pull) data into SAP Data Warehouse Cloud is part of its out-of-the-box connectivity. The number of remote data sources supported directly by SAP Data Warehouse Cloud is growing over time and covers different aspects such as SAP and non-SAP data source support, hyperscaler connectivity, and cloud and on-premise data sources. The SAP Road Map Explorer receives frequent updates with new and planned connectivity options for SAP Data Warehouse Cloud.


Connections overview. Items marked with * are subject to change.

This option is available from the Connections section of your Space Management in your SAP Data Warehouse Cloud tenant. Connections created this way can be used in the different tools like the SQL View Builder, Graphical View Builder and Data Flows to create and fill your data models. Data from the connected sources can be acquired live (virtual / federated; not talking about authentication mechanisms here yet ;)), replicated to, or extracted into SAP Data Warehouse Cloud.


Connection types in SAP Data Warehouse Cloud

Connecting to on-premise sources (all sources not directly accessible on the public internet) requires the setup of either the Smart Data Integration (SDI) Data Provisioning Agent or the SAP Cloud Connector, or both, depending on which sources you need to connect to. These agents act like proxies or gateways for SAP Data Warehouse Cloud into your local network. Without these agents you would have to expose your systems to the public internet to allow SAP Data Warehouse Cloud to connect, which is not what you want, trust me. 🙂

Generic SQL Connection Capabilities

However, we, the SAP Data Warehouse Cloud development organisation, know that the number of data sources customers want to connect and integrate is so huge that we can probably never, at least not as of now, cover all these connectivity needs with the natively integrated connection options shown in the pictures above alone.

Therefore SAP Data Warehouse Cloud additionally offers the possibility to let you push (Push) data with any third-party SQL client into your SAP Data Warehouse Cloud space. By creating so-called Database Users (SQL endpoints) in your space management overview, you can connect any tool which is capable of connecting to a SQL target. Pro tip: The Database User functionality can be used to consume data in a third-party application, too! 😉

Creating a new Database User (SQL interface / endpoint)

This generic option to move data physically into your SAP Data Warehouse Cloud space is your gate-opener to integrating any data source for which you can find a tool that can connect to your source and write data into a SQL-based target. You can also build your own SQL client, if you want to, to move data into your SAP Data Warehouse Cloud tenant. 😉 Heads up: SAP Data Warehouse Cloud will soon add the generic SDI Apache Camel JDBC adapter to its set of native connections. Stay tuned for another blog of mine explaining the beauty of this adapter and the benefits it brings to the data warehousing table.
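To illustrate the push approach, here is a minimal sketch of what a third-party tool or your own script could execute after connecting with the credentials of such a Database User; the table and column names are purely illustrative:

-- create a target table in the Database User's open SQL schema and push some rows into it
create column table "SALES_PUSH" (
    "ORDER_ID" integer,
    "REGION"   nvarchar(20),
    "AMOUNT"   decimal(15,2)
);

insert into "SALES_PUSH" values (1001, 'EMEA', 2500.00);
insert into "SALES_PUSH" values (1002, 'APJ',  1800.50);

-- the pushed table can then be consumed as a source for views and Data Flows in the space
select "REGION", sum("AMOUNT") as "TOTAL_AMOUNT" from "SALES_PUSH" group by "REGION";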

Partner Connectivity Platforms

Another strategy we are pursuing is embedding partners as dedicated connection options into SAP Data Warehouse Cloud as well as enabling partners like Adverity, SnapLogic, APOS, Datazeit, Precog, Informatica, and others to write data into your space in SAP Data Warehouse Cloud using their connectivity platforms.


Example of Adverity Integration into SAP Data Warehouse Cloud

However, this option is not part of the actual SAP Data Warehouse Cloud offering and customers are required to license the partner solution in order to connect their SAP Data Warehouse Cloud tenant to the partner’s platform.

Authentication Methods

As of today, SAP Data Warehouse Cloud offers only Basic Authentication when connecting to remote sources. Whenever you connect to a remote source that requires you to authenticate before you can access its data, you have to specify the credentials at design time when creating the connection. Currently SAP Data Warehouse Cloud does not allow creating real live connections as you may know them from SAP Analytics Cloud, which connects to remote sources using Single Sign-On and SAML assertions.


Authentication methods when creating Generic OData connections

Connectivity Components

All the aforementioned connectivity options are represented by different components in the SAP Data Warehouse Cloud architecture. At a high level, components 2) and 3) are part of SAP HANA Cloud: Smart Data Access and Smart Data Integration, as well as the Database Users component, are wrapped by the SAP Data Warehouse Cloud application to offer the functionality to its users. The Data Flow component 1) is contributed by SAP Data Intelligence Cloud. Partners are the fourth component, contributing to the rich connectivity of SAP Data Warehouse Cloud and connecting to the SQL endpoints of SAP Data Warehouse Cloud.


Connectivity components

Connections created in your SAP Data Warehouse Cloud space can be used in the different modeling tools, but be careful: not every connection can be used in both the Graphical or SQL View Builder and Data Flows. The Create Connection dialog in your SAP Data Warehouse Cloud space tells you which connection type can be used with which tool (check out my other blog for all the details).


Comparing View Builder and Data Flow

The different tools are mainly built for two different purposes: Building virtual / federated data models by default (Graphical & SQL View Builder) with the option to persist the models if needed (see Data Integration Monitor), for example for performance reasons, and building your ETL processes for extracting, transforming and storing data in your SAP Data Warehouse Cloud space using Data Flows.

Both tools share a basic set of actions and transformations like join, union, calculations, but at the same time both tools come with unique functionality like SQL script as part of the two View Builders and Python scripting capabilities as part of Data Flows.
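As an illustration, here is a minimal sketch of the kind of statement you might enter in the SQL View Builder to define a federated view on top of a connected source; the table and column names are purely illustrative:

select "REGION",
       sum("AMOUNT") as "TOTAL_AMOUNT"
  from "S4_SALES_ORDERS"
 group by "REGION";

The same aggregation could equally be modeled with join and aggregation nodes in the Graphical View Builder, or built as a Data Flow if the result should be extracted and stored rather than remain virtual.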

The various data integration and connectivity options, combined with the different tools for data modeling, allow you to establish a semantic layer that is only loosely coupled to the underlying data layer: it stays stable and reliable and allows non-disruptive changes to the data layer for any data-consuming application on top.

With the ever-growing connectivity capabilities in SAP Data Warehouse Cloud, you can make it your central piece for connecting your applications via a harmonised, virtual or materialised semantic and data layer to your heterogeneous system landscape, hiding all the hard-to-understand and hard-to-maintain specifics and edge cases from your frontend-facing applications.

Check out the SAP Data Warehouse Cloud Road Map for upcoming connectivity enhancements like SAP HANA Cloud, SAP BW/4HANA, SFTP, SAP S/4HANA Cloud, PostgreSQL, Teradata, MySQL and more!

You can get yourself a free 30-day SAP Data Warehouse Cloud trial tenant with all features enabled. Check out our free trial page here.

Let me know in the comments or ask your question in the Q&A area.

Using Side-by-Side Extensibility to Process Down Payments for Sales Orders

In collaboration with: Praveen Kumar Appari

This blog post describes how to create a side-by-side application for managing sales orders that are eligible for down payment on SAP S/4HANA Cloud. You can create this custom app using the side-by-side extensibility feature of the SAP S/4HANA Cloud system and deploy it on SAP Cloud Platform. With this app, it is possible to automate the process of down payments for multiple sales order line items in one step.

For example, in the standard system the down payment process in a sales order is done one line item at a time: when creating down payment requests for a billing item, you would need to enter details such as the Customer Down Payment Amount for every Sales Order Item and Sales Order Number. Using the custom app, it is possible to automate the down payment for multiple line items within a sales order.

 

You need administrative access to SAP S/4HANA Cloud and implementation experience with it. Coding experience is necessary for creating the custom application, since this extensibility solution requires implementing custom logic. The following business roles are used in this scenario:

  • Configuration Expert – Business Network Integration (Business Role ID: SAP_BR_CONF_EXPERT_BUS_NET_INT)
  • Master Data Specialist – Business Partner Data (Business Role ID: MASTER_SPECIALIST)
  • Extensibility (Business Role ID: SAP_BC_CORE_EXT)
  • Administrator (Business Role ID: SAP_BR_ADMINISTRATOR)
  • Communication Management (Business Role ID: SAP_CORE_BC_COM)

 

High Level Overview

Solution Architecture

The implementation has three parts:

1. Creating the custom app:

  • Implement a backend Java application for managing down payments for sales order items. The communication arrangements are defined to support remote system access, which in this case is to SAP S/4HANA Cloud. Note: To create destinations, you need to enter the user name, the URL of the system, and the authentication type, along with other configuration data.
  • Implement the User Interface (UI) with SAPUI5 using JavaScript in Web IDE. This UI implementation needs to be deployed on SAP Cloud Platform, which in turn must connect with SAP S/4HANA Cloud.

2. Setting up SAP S/4HANA Cloud and SAP Cloud Platform for the Custom App

The setup process toggles between steps in the SAP S/4HANA Cloud and SAP Cloud Platform systems.

  • On SAP S/4HANA Cloud, create a communication system:
    • Make a note of user data as this is required when creating the communication arrangement.
    • Create a communication arrangement by using the communication ID of the user created in the previous step.
    • Create a new communication system by providing the system details. This is used for inbound calls; therefore, it is not necessary to specify a host name. In the user for inbound communication section, add the user details created in the previous steps. Note: A communication scenario bundles inbound and outbound communication design-time artifacts. Since it allows communication between systems, each communication arrangement must be based on a communication scenario. You need to create a new custom scenario and specify the following:
      • SAP_COM_0002 – Finance – Posting Integration
      • SAP_COM_0109 – Sales Order Integration
      • SAP_COM_0093 – Identity Management Integration
  • Create the custom CDS views for the existing tax environment to allow retrieval of the tax codes and to expose them via OData.
  • Create a new Custom Field using the Custom Fields and Logic app. Choose Sales: Billing Document, because the down payment process is within sales orders. Select the Delivery checkbox within the Sales Document at the Item level within the Business Scenario. Enable the BAPI CREATE_MULTIPLE for billing document creation.

3. Create integration in the SAP Cloud Platform

Creation of the launching application in S/4HANA Cloud:

  • To launch the application, create a Custom Tile on SAP S/4HANA Cloud using the extensibility cockpit. Enter the browser URL without any parameters, for example, https://XXXXX.dispatcher.hanatrial.ondemand.com
  • You must assign catalogs to the relevant business roles, for example, SAP_FIN_BC_AR_INC_PAYM_PC – Accounts Receivable – Incoming Payments. After assigning the catalogs you need to publish the custom tile.
  • Create the SAP Identity and Authentication Service in the cockpit using the tenant's ID. Access the tenant's administration console for SAP Cloud Platform Identity Authentication service by using the console's URL. Using the Administrator tile, create the new administrator by choosing the system. Ensure that Manage Users is enabled and provide the password details in the fields. If the user is setting the password for API authentication for the first time, these fields must be empty.

Testing the Extension Scenario

  • Create the standard sales orders with a predefined down payment percentage as the condition type ZZWA at the header level in S/4HANA Cloud.
  • After completing all the above steps, you should be able to use the newly created side-by-side application. Log on with an SAP S/4HANA Cloud user to which the custom tile is assigned. Navigate to the custom tile and try out the application, which lists the sales orders created for creating down payment requests.
  • By accessing the sales order from the new custom application, you can set the tax codes for each Sales Order Item, as these are predefined in the beginning. After setting the tax codes, the next steps are defining the down payment amount percentage and creating the Down Payment request, specifying the journal entry date. The successful creation of a down payment request generates a request ID, which is the journal entry number. This reference number can also be verified in the defined custom field at the Sales Order Item level. The entries are also created in the Manage Journal Entries application with all mandatory details, using the journal entry reference.

Conclusion

With this, we have described how you can create a side-by-side application for managing sales orders and automating the down payment process in SAP S/4HANA Cloud.

HANA Data Lake Size and Row Monitoring

Are you looking for a programmatic way to monitor the size of your data lake objects?  How about viewing the row counts of the objects as well?

As you will have seen from my previous blog posts about the SAP HANA Data Lake, it is based on SAP IQ (formerly Sybase IQ) technology.  Some of the monitoring that we would have done in years past in an SAP IQ landscape can be ported to work with the SAP HANA Data Lake.

The basic premise is simple: develop code that runs inside the data lake to capture the proper statistics and then expose that code via a stored procedure.  There are (or will be, when the next evolution of the data lake is released in Q1 2021 per the roadmap) two ways to interface with the data lake.  Since Q1 2020, the data lake has been exposed through SAP HANA Cloud.  In the next generation of the data lake we will be exposing a native SQL endpoint or interface.  This blog will cover both interfaces.

First, the code.  As mentioned, we must develop code which captures the proper statistics.  In this case, we want to capture the size of each table as well as the number of rows in the table.  This can be done via a SQL stored procedure that is created in the data lake.

If you will be exposing the data lake via SAP HANA Cloud, this code should be used.  It puts the proper syntax and wrappers around the code so that HANA can send it to the data lake for creation.

CALL SYSRDL#CG.REMOTE_EXECUTE('
drop procedure if exists sp_iqsizes;
create procedure sp_iqsizes()
begin
    declare local temporary table size_res (
        table_owner varchar(128)
      , table_name  varchar(128)
      , sizeKB      bigint
      , rowcount    bigint
    ) in SYSTEM;
    declare sizeKB bigint;
    declare rc bigint;
    declare o varchar(128);
    declare t varchar(128);
    declare ot varchar(255);
    declare blksz int;
    -- SYSIQINFO not exposed in HDL yet, but HDL uses 32KB blocks
    -- can revert to commented out line in next gen HDL
    --select first block_size/512/2 into blksz from sys.SYSIQINFO;
    set blksz=32;
    FOR FORLOOP as FORCRSR CURSOR FOR
        select table_name, user_name table_owner
          from sys.systable, sys.sysuser
         where creator <> 3
           and server_type = ''IQ''
           and table_type <> ''PARTITION''
           and creator = user_id
    do
        set sizeKB=0;
        set rc=0;
        set o=table_owner;
        set t=table_name;
        set ot=o+''.''+t;
        select convert(bigint, NBlocks * blksz) into sizeKB from dbo.sp_iqtablesize(ot);
        execute ( ''select count(*) into rc from ''||ot );
        insert into size_res values ( table_owner, table_name, sizeKB, rc );
    end for;
    select * from size_res;
end;
drop view if exists V_HDL_ALL_SIZES;
create view V_HDL_ALL_SIZES as select * from sp_iqsizes();
');

If you will be using the SQL Endpoint for the HANA Data Lake, then this code would be used.  The only difference is the REMOTE_EXECUTE wrapper and the single quote escaping that is done by using two single quotes.

drop procedure if exists sp_iqsizes;
create procedure sp_iqsizes()
begin
    declare local temporary table size_res (
        table_owner varchar(128)
      , table_name  varchar(128)
      , sizeKB      bigint
      , rowcount    bigint
    ) in SYSTEM;
    declare sizeKB bigint;
    declare rc bigint;
    declare o varchar(128);
    declare t varchar(128);
    declare ot varchar(255);
    declare blksz int;
    -- SYSIQINFO not exposed in HDL yet, but HDL uses 32KB blocks
    -- can revert to commented out line in next gen HDL
    --select first block_size/512/2 into blksz from sys.SYSIQINFO;
    set blksz=32;
    FOR FORLOOP as FORCRSR CURSOR FOR
        select table_name, user_name table_owner
          from sys.systable, sys.sysuser
         where creator <> 3
           and server_type = 'IQ'
           and table_type <> 'PARTITION'
           and creator = user_id
    do
        set sizeKB=0;
        set rc=0;
        set o=table_owner;
        set t=table_name;
        set ot=o+'.'+t;
        select convert(bigint, NBlocks * blksz) into sizeKB from dbo.sp_iqtablesize(ot);
        execute ( 'select count(*) into rc from '||ot );
        insert into size_res values ( table_owner, table_name, sizeKB, rc );
    end for;
    select * from size_res;
end;
drop view if exists V_HDL_ALL_SIZES;
create view V_HDL_ALL_SIZES as select * from sp_iqsizes();

Regardless of which code path you have used, the net result is that the data lake now has a procedure called sp_iqsizes() and a view called V_HDL_ALL_SIZES.  When run, sp_iqsizes takes no parameters and collects the size in kilobytes (KB) and the row count for each table in the data lake that is not a system table.

The view is needed to expose the code logic (stored procedure) to a remote system, SAP HANA Cloud.
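If you are connected directly to the data lake (for example over the upcoming native SQL endpoint) as the user that owns these objects, you can also use them right there; a minimal sketch:

-- run the collection procedure and look at the gathered statistics
call sp_iqsizes();
select * from V_HDL_ALL_SIZES;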

If you wish to expose the data lake procedure and view to SAP HANA Cloud, this code can be used to do just that:

drop table HDL_ALL_SIZES;
CREATE VIRTUAL TABLE "HDL_ALL_SIZES" AT "SYSRDL#CG_SOURCE"."<NULL>"."SYSRDL#CG"."V_HDL_ALL_SIZES";

At this point, we have created the necessary data lake objects and mapped them into SAP HANA Cloud for programmatic use there.  To access the data from within HANA Cloud, you can run this command:

select * from HDL_ALL_SIZES;

I ran this via the HANA Cloud Database Explorer and have this as output in the test system:


HANA Cloud DBExplorer Output
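Once the virtual table is in place, the query can of course be refined; for example, a small sketch that lists the ten largest data lake tables first:

-- largest tables first (column 3 is sizeKB as defined above)
select top 10 *
  from HDL_ALL_SIZES
 order by 3 desc;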

Hope this helps you in better monitoring the space consumption of your objects, programmatically, in your data lake deployment.  As an added benefit, if you are an existing SAP IQ user, the stored procedure will also work in an on-premise deployment of SAP IQ 16.