Replicate data from SuccessFactors to Commissions using Speech-To-Text

This blog post gives you an overview of how you can replicate information from SAP SuccessFactors to SAP Commissions using the REST API. Sounds pretty basic, right? How about we leverage SAP Conversational AI and also use Speech-To-Text to accomplish our goal? Sounds interesting? Let us get started.

Requirements:

  1. An account at https://cai.tools.sap/ . This is where we create our chatbot.
  2. An SAP BTP trial account
  3. Access to an SAP SuccessFactors instance
  4. Access to an SAP Commissions instance
  5. A code editor, for example Visual Studio Code.

I’ve broken down this blog post into 3 sections.

  1. Create an iFlow to replicate employee info from SuccessFactors to Commissions
  2. Create a bot using SAP CAI to initiate the replication process
  3. Create a simple UI5 app and embed the chatbot with Speech-To-Text functionality.

1. Create an iFlow to replicate employee info from SuccessFactors to Commissions

Ensure your BTP trial account is set up and that you are subscribed to SAP Integration Suite. Then you can start creating your iFlow. Please note: before you start creating the iFlow, set up the basic auth credentials for your SuccessFactors and Commissions tenants under Monitor > Integrations > Security Material.

Once done, create and set up the iFlow as shown in the images below.

Set Participant Properties is a groovy script. Below is the script.

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // read the SuccessFactors OData XML response from the message body
    def body = message.getBody(java.lang.String) as String;
    def parseXML = new XmlParser().parseText(body);
    // pull the first name and e-mail address out of the PerPerson payload
    String name = "${parseXML.PerPerson.personalInfoNav.PerPersonal.firstName.text().toString()}";
    String email = "${parseXML.PerPerson.emailNav.PerEmail.emailAddress.text().toString()}";
    // expose them as exchange properties for the subsequent iFlow steps
    message.setProperty("name", name);
    message.setProperty("email", email);
    return message;
}

Deploy the iFlow once the setup is done. Once deployed, copy the endpoint URL of your iFlow; we will need it when we create our bot.
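Before wiring the bot up, it can help to smoke test the deployed endpoint directly. Below is a minimal Node.js sketch for doing that; the endpoint URL, credentials, and the request body shape are placeholders and depend on how your iFlow reads the employee id, so adjust them to your setup.

```javascript
// Hedged sketch: smoke test the deployed iFlow endpoint from Node.js (18+).
// ENDPOINT_URL, USER and PASSWORD are placeholders for your own iFlow URL
// and the basic auth credentials configured in Security Material.
function buildAuthHeader(user, password) {
  // HTTP Basic auth: "Basic " + base64("user:password")
  return "Basic " + Buffer.from(user + ":" + password).toString("base64");
}

async function triggerReplication(endpointUrl, user, password, employeeId) {
  const res = await fetch(endpointUrl, {
    method: "POST",
    headers: {
      Authorization: buildAuthHeader(user, password),
      "Content-Type": "application/json",
    },
    // assumption: the iFlow reads the id from the body — match your own design
    body: JSON.stringify({ employeeid: employeeId }),
  });
  return res.status; // expect 200 for a successful run
}
```

A 401 here usually means the Security Material credentials do not match, which is worth ruling out before debugging the bot.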

2. Create a bot using SAP CAI to initiate the replication process

Next, let us create a bot in SAP CAI as shown in the images below. Create a new bot and give it any name you want. Now it is time to define our intent and entities.

Create a Free entity named employeeid under the Entities tab.

Then go to the Intents tab. Create a new intent, name it replicateemployee, and add training expressions as shown below.

Once the intent is set up as shown above, tag the employeeid entity to the numeric values as shown below.

Now it is time to create our skill. Go to the Build tab and create a new skill. You can name it replicateemployee. Set up the skill as shown below.

The next step is to create a web client channel. Go to the Connect tab and, under primary channel, expand SAP Conversational AI Web Client and create a new web client as shown below.

Do not change anything here; leave the settings at their defaults and click Create.

Copy the Generated Web Client script for embedding.

3. Create a simple UI5 app and embed the chatbot with Speech-To-Text functionality

Next, let us create a simple UI5 app and embed our chatbot in it. You can scaffold the app by following any of the blogs or tutorials available online.

Once you’ve created your app, we will need to create three additional files under the webapp > controller folder.

  • webclient.js
  • webclientBridge.js
  • webclientBridgeImpl.js

The Speech-To-Text functionality in our case is handled by the browser's speech recognition service. Ensure the controller file is set up as shown below before proceeding with creating the above-mentioned files.

Add the below function to the controller file.
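As a hedged sketch of what such a controller function could look like: the helper below loads our three files in order, so that window.webclientBridgeImpl and window.sapcai exist before the web client boots. The function names and the base path are assumptions, not the exact code from the original app; adapt them to your controller.

```javascript
// Hypothetical helper for the view controller: returns the script paths in
// the order they must load — implementation first, then the bridge that
// forwards calls to it, then webclient.js which embeds the bot itself.
function getChatbotScripts(basePath) {
  return [
    basePath + "webclientBridgeImpl.js",
    basePath + "webclientBridge.js",
    basePath + "webclient.js",
  ];
}

// Appends each script to the page, waiting for one to load before the next.
function loadChatbot(basePath) {
  return getChatbotScripts(basePath).reduce(
    (chain, src) =>
      chain.then(
        () =>
          new Promise((resolve, reject) => {
            const tag = document.createElement("script");
            tag.src = src;
            tag.onload = resolve;
            tag.onerror = reject;
            document.body.appendChild(tag);
          })
      ),
    Promise.resolve()
  );
}
```

Calling something like loadChatbot("./controller/") from onInit keeps the load order deterministic, which matters because the bridge checks for window.webclientBridgeImpl at call time.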

Below is how you will need to create your webclient.js file. The values it needs can be found in the generated web client script you copied earlier.
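As a hedged sketch, assuming the generated snippet from the Connect tab is a script tag with data-* attributes: webclient.js can recreate that tag programmatically. The channelId and token below are placeholders; copy the real values (and the exact CDN URL) from your own generated embedding script.

```javascript
// Placeholder values — replace with the ones from your generated
// Web Client script in the CAI Connect tab.
const CAI_CONFIG = {
  src: "https://cdn.cai.tools.sap/webclient/bootstrap.js",
  channelId: "<your-channel-id>",
  token: "<your-token>",
  expanderType: "CAI",
};

// Turn the config into the data-* attributes the bootstrap script reads.
function buildCaiAttributes(cfg) {
  return {
    "data-channel-id": cfg.channelId,
    "data-token": cfg.token,
    "data-expander-type": cfg.expanderType,
  };
}

(function embedWebClient() {
  if (typeof document === "undefined") return; // only meaningful in a browser
  const tag = document.createElement("script");
  tag.src = CAI_CONFIG.src;
  const attrs = buildCaiAttributes(CAI_CONFIG);
  Object.keys(attrs).forEach((key) => tag.setAttribute(key, attrs[key]));
  document.body.appendChild(tag);
})();
```

Keeping the config in one object makes it easy to swap channels (for example, between a test and a productive bot) without touching the injection logic.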

Below is the code for webclientBridge.js.

const webclientBridge = {
  callImplMethod: async (name, ...args) => {
    console.log(name)
    if (window.webclientBridgeImpl && window.webclientBridgeImpl[name]) {
      return window.webclientBridgeImpl[name](...args)
    }
  },
  // if this function returns an object, WebClient will enable the microphone button.
  sttGetConfig: async (...args) => {
    return webclientBridge.callImplMethod('sttGetConfig', ...args)
  },
  sttStartListening: async (...args) => {
    return webclientBridge.callImplMethod('sttStartListening', ...args)
  },
  sttStopListening: async (...args) => {
    return webclientBridge.callImplMethod('sttStopListening', ...args)
  },
  sttAbort: async (...args) => {
    return webclientBridge.callImplMethod('sttAbort', ...args)
  },
  // only called if useMediaRecorder = true in sttGetConfig
  sttOnFinalAudioData: async (...args) => {
    return webclientBridge.callImplMethod('sttOnFinalAudioData', ...args)
  },
  // only called if useMediaRecorder = true in sttGetConfig
  sttOnInterimAudioData: async (...args) => {
    // send interim blob to STT service
    return webclientBridge.callImplMethod('sttOnInterimAudioData', ...args)
  },
}

window.sapcai = {
  webclientBridge,
}

Below is webclientBridgeImpl.js.

// Handles working with the browser speech recognition API
class SpeechToText {
  constructor(onFinalised, onEndEvent, onAnythingSaid, language = 'en-US') {
    if (!('webkitSpeechRecognition' in window)) {
      throw new Error("This browser doesn't support speech recognition. Try Google Chrome.")
    }
    const SpeechRecognition = window.webkitSpeechRecognition
    this.recognition = new SpeechRecognition()
    // set interim results to be returned if a callback for it has been passed in
    this.recognition.interimResults = !!onAnythingSaid
    this.recognition.lang = language
    let finalTranscript = ''
    // process both interim and finalised results
    this.recognition.onresult = (event) => {
      let interimTranscript = ''
      // concatenate all the transcribed pieces together (SpeechRecognitionResult)
      for (let i = event.resultIndex; i < event.results.length; i += 1) {
        const transcriptionPiece = event.results[i][0].transcript
        // check for a finalised transcription in the cloud
        if (event.results[i].isFinal) {
          finalTranscript += transcriptionPiece
          onFinalised(finalTranscript)
          finalTranscript = ''
        } else if (this.recognition.interimResults) {
          interimTranscript += transcriptionPiece
          onAnythingSaid(interimTranscript)
        }
      }
    }
    this.recognition.onend = () => {
      onEndEvent()
    }
  }

  startListening() {
    this.recognition.start()
  }

  stopListening() {
    this.recognition.stop()
  }
}

// Contains callbacks for when results are returned
class STTSpeechAPI {
  constructor(language = 'en-US') {
    this.stt = new SpeechToText(this.onFinalResult, this.onStop, this.onInterimResult, language)
  }

  startListening() {
    this.stt.startListening()
  }

  stopListening() {
    this.stt.stopListening()
  }

  abort() {
    this.stt.recognition.abort()
    this.stt.stopListening()
  }

  onFinalResult(text) {
    const m = { text, final: true }
    window.sap.cai.webclient.onSTTResult(m)
  }

  onInterimResult(text) {
    const m = { text, final: false }
    window.sap.cai.webclient.onSTTResult(m)
  }

  onStop() {
    const m = { text: '', final: true }
    window.sap.cai.webclient.onSTTResult(m)
  }
}

// Contains methods SAP Conversational AI needs for handling chatbot UI events
let stt = null
const sttSpeech = {
  sttGetConfig: async () => {
    return { useMediaRecorder: false }
  },
  sttStartListening: async (params) => {
    const [metadata] = params
    const { language } = metadata
    stt = new STTSpeechAPI(language)
    stt.startListening()
  },
  sttStopListening: () => {
    stt.stopListening()
  },
  sttAbort: () => {
    stt.abort()
  },
}

window.webclientBridgeImpl = sttSpeech

Once the above setup is done, run the below command in your terminal

ui5 serve -o index.html

You should see the UI5 page open with our bot showing ‘Chat with me’. Click on it and you should see the chatbot open. You should also see a microphone icon.

Click on the microphone icon on your bot and say ‘Replicate employee <employeeId>’.

Please note that the employeeId here is the personIdExternal, the unique identifier for the PerPerson entity in SuccessFactors. So ensure you provide a valid personIdExternal that exists in SuccessFactors but is not yet present in Commissions as a participant.

And there you have it, a voice enabled chatbot that replicates information from SuccessFactors to Commissions. Thanks for reading and happy learning!

You may also want to try out IBM Watson’s STT service. Below are the references that helped me write this blog.

https://github.com/SAPConversationalAI/WebClientDevGuide/tree/main/examples/WebClientBridge

https://blogs.sap.com/2022/03/31/how-to-implement-the-new-speech-to-text-in-chatbots/

https://answers.sap.com/questions/13631383/speech-to-text-for-the-sap-cai-web-client-using.html

https://developers.sap.com/tutorials/conversational-ai-speech-2-text-simple.html