ABAP: Fast JSON Serialization

tl;dr: Install https://github.com/timostark/abap-json-serialization and enjoy the fastest possible JSON serialization (be warned: it comes at the cost of considerable side effects).

Oh no – another JSON serialization blog post? Hey – at least it's not one about Excel exports 🙂

So why do we need yet another JSON serialization? The reason is simple: runtime! Especially when working with custom REST services with a big payload, you will notice a lot of runtime being lost in JSON serialization. Losing 30% of the runtime in a dumb JSON serialization makes me very unhappy – especially right after I have just optimized my much more complex business class.

So, what are our goals:

  1. Fast AND
  2. Support Camel-Case
  3. Support real booleans and numbers
  4. Does not need to be easy or generic (I will accept a hard life as a developer if it is fast and reliable).

There are already multiple solutions out there – the most important ones include /UI2/CL_JSON and plain CALL TRANSFORMATION with the ID transformation.

So how do they behave from a runtime perspective? Let's take a very simple example and serialize 5,000 lines of SFLIGHT as well as a very complex and deeply nested structure:

So what does that tell us? Not really surprisingly, the only feasible solution on an ABAP stack is CALL TRANSFORMATION – it is executed directly in the kernel and therefore does not depend on slow ABAP string concatenation and/or field-symbol traversal.
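
For reference, the kernel-based serialization is as simple as this (a minimal sketch; lt_flights is an assumed internal table of SFLIGHT):

" Plain CALL TRANSFORMATION ID into an sXML JSON writer - fast, but with the
" quality problems listed below. lt_flights is an assumed table of SFLIGHT.
DATA(lo_writer) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
CALL TRANSFORMATION id SOURCE root = lt_flights RESULT XML lo_writer.
DATA(lv_json) = cl_abap_codepage=>convert_from( lo_writer->get_output( ) ).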

There are, however, quality problems when using CALL TRANSFORMATION ID:

  1. No Camel-Case
  2. No real “booleans” (instead ‘X’ is printed… try explaining that to somebody outside of the SAP world)
  3. No real numbers for NUMC fields (instead strings with leading zeros are printed)
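
To illustrate with made-up values (the flag field is hypothetical – SFLIGHT itself has no boolean column): a NUMC field CONNID = '0017' and a flag of type abap_bool come out of the ID transformation roughly as

{"CONNID":"0017","FLAG":"X"}

whereas a consumer outside the SAP world would rather expect something like

{"connid":17,"flag":true}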

There is one solution, already mentioned in another blog post, that uses a custom ABAP transformation to at least support camel case. Unfortunately, this throws away the performance benefit, as the wonderfully fast kernel module has to call back up into the ABAP stack for a simple “to-camel-case” conversion.

My suggested solution is to use CALL TRANSFORMATION for what it is actually meant for: transforming data using XSLT transformations maintained in transaction “STRANS”. This means we create a dedicated XSLT transformation for every single (root) structure/table type we want to serialize.

Let’s see an example transformation for the table SFLIGHT (shortened):
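
(A representative sketch – field list shortened, lowercase/camel-case member names written as literal name attributes, and the asXML row elements matched generically via ROOT/*. The transformation turns the asXML of “SOURCE root = lt_flights” into JSON-XML, which the sXML JSON writer renders directly as JSON.)

<xsl:transform version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:asx="http://www.sap.com/abapxml">
  <xsl:strip-space elements="*"/>

  <xsl:template match="/asx:abap/asx:values">
    <array>
      <xsl:for-each select="ROOT/*">
        <object>
          <str name="carrid"><xsl:value-of select="CARRID"/></str>
          <num name="connid"><xsl:value-of select="number(CONNID)"/></num>
          <str name="fldate"><xsl:value-of select="FLDATE"/></str>
          <num name="price"><xsl:value-of select="PRICE"/></num>
          <str name="currency"><xsl:value-of select="CURRENCY"/></str>
          <num name="seatsmax"><xsl:value-of select="SEATSMAX"/></num>
          <num name="seatsocc"><xsl:value-of select="SEATSOCC"/></num>
          <!-- ... remaining fields omitted; a (hypothetical) flag field would map to
               <bool name="cancelled"><xsl:value-of select="CANCELLED = 'X'"/></bool> -->
        </object>
      </xsl:for-each>
    </array>
  </xsl:template>
</xsl:transform>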

Nobody wants to write that code (and certainly nobody in their right mind wants to keep that transformation up to date) – but let's first look at the runtime impact.

We gained a lot of quality (booleans, numbers, camel case) and lost only ~15 ms (yes, almost double compared to the plain ID variant :-)).

==> The solution is around 10 times faster than /UI2/CL_JSON.

As already said, of course nobody wants to write these XSLT mappings – especially for deeply nested structures this is horrible.

Therefore, I’ve published a small helper program ZJSON_TO_XSLT under the MIT license on GitHub, which allows you to generate those transformations directly for any structure/table.

[Screenshot: the program’s output, next to the generated transformation]

Execute the transformation using a normal CALL TRANSFORMATION call:

" lt_flights is an internal table of SFLIGHT; ZSFLIGHT is the generated transformation.
DATA(lo_writer_json) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
CALL TRANSFORMATION ZSFLIGHT SOURCE root = lt_flights RESULT XML lo_writer_json.
" get_output( ) returns the UTF-8 xstring written by the JSON writer.
DATA(lv_json) = cl_abap_codepage=>convert_from( lo_writer_json->get_output( ) ).

In my customer projects I call the API used by this program from a regular job (together with a mapping table), which updates the transformations on the development system periodically.

==> Using this approach you get an extremely fast JSON serialization while still maintaining high output quality. The solution works as long as you know the exported JSON types upfront (i.e. you have static data types).

A big word of warning: this solution is intended for performance-critical development. It comes at a very high cost: you have to take care of an additional development object (the transformation). Even if it is updated automatically, it can become outdated, you can forget about it, or it can get corrupted. Do not waste your time on this solution if you do not need it.