Code push-down in ABAP Development

S/4 HANA presents a new opportunity for code optimization.

If you are into ABAP development, then by now you must already be familiar with the code-to-data paradigm. In simple terms, this means pushing much of the data processing down to where the data resides, i.e. the database.

Historically, SAP ABAP has supported two kinds of languages for interacting with the database system:

  1. Open SQL – SAP’s own, database-independent way of performing SQL data interaction.
  2. Native SQL – statements written in the underlying database’s own SQL dialect, using that database’s specific features.
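A quick sketch of the difference, using SAP’s standard SPFLI flight demo table (the concrete statements are illustrative, not from the original post):

```abap
" Open SQL: database-agnostic; the ABAP database interface
" translates this into the SQL of whatever DB sits underneath.
SELECT carrid, connid
  FROM spfli
  INTO TABLE @DATA(lt_conn)
  WHERE carrid = 'LH'.

" Native SQL: the text between EXEC SQL and ENDEXEC is sent
" to the database as-is, so it is not portable across DBs.
DATA lv_count TYPE i.
EXEC SQL.
  SELECT COUNT(*) INTO :lv_count FROM spfli
ENDEXEC.
```

Note that the Open SQL statement is checked by the ABAP compiler, while the Native SQL block is only checked at runtime by the database itself.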

Let me remind you that SAP as a software product supports most of the common relational databases, MS SQL Server and Oracle being the ones I have encountered most often. As an established rule, for a database to be called relational, it should support the structured query language, SQL. However, in addition to supporting standard SQL, database vendors add their own unique constructs, which make their product stronger than other databases for certain purposes.

SAP therefore supports Native SQL, which means that if you know the underlying database, you can exploit its potential by using statements specifically supported by that database. In contrast, Open SQL allows SAP developers to code in a database-agnostic way. In other words, Open SQL statements will always be understood and executed by the underlying database (the database interface translates them into the database’s own SQL), while Native SQL may or may not be understood by the DB. For this reason, Native SQL statements are considered a big no-no within the ABAP development community. In my programming career I have hardly used Native SQL, maybe five times, and mostly the %_HINTS addition to optimize a program’s SQL performance.

Check out this blog for some examples to understand %_HINTS in MSSQL.

Find more on HINTS in the SAP Help here.
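As a minimal sketch of what such a hint looks like: the hint text is database-specific, and the Oracle full-table-scan hint below is purely an illustration.

```abap
" %_HINTS passes a vendor-specific optimizer hint through
" Open SQL; hints addressed to other databases are ignored.
SELECT *
  FROM spfli
  INTO TABLE @DATA(lt_spfli)
  WHERE carrid = 'LH'
  %_HINTS ORACLE 'FULL("SPFLI")'.
```

Because the hint is only advisory and tied to one vendor, the same statement still runs unchanged on any other database.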

So what is the code-to-data paradigm, and why should I be concerned about it?

With S/4 HANA, SAP now uses HANA as its native database to store data. But HANA is much more than just a database. Among other advantages and features, such as

  • row and column data stores,
  • data compression,
  • support for both OLTP and OLAP patterns within one application,

it offers in-memory computing.
The imperatives of in-memory computing are that you have to

  1. avoid (unnecessary) movement of large data volumes, and
  2. perform data-intensive calculations in the database.

One of the key differences when developing ABAP applications for HANA is that you can push data-intensive computations and calculations down to the HANA DB layer, instead of bringing all the data up to the ABAP layer and then processing it there. This is what is termed the code-to-data paradigm in the context of developing ABAP applications optimized for HANA.

The code-to-data paradigm is experienced in three levels or stages in SAP HANA, each with increasing complexity and performance improvements:

  1. Transparent optimizations: fast data access, table buffer enhancements.
  2. Advanced SQL in ABAP: Open SQL enhancements, CDS views.
  3. SAP HANA native features: AMDP, Native SQL.

Most of the time one would be satisfied with the gains achieved at level 2 itself; level 3 is really about squeezing out the final bit of optimization. But each of them is interesting in its own applications.
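To give a taste of level 2, here is a minimal CDS view that lets the database do the aggregation; the view and field names are made up for illustration.

```abap
@AbapCatalog.sqlViewName: 'ZVCONNCNT'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Connections per carrier'
define view Z_Connection_Count
  as select from spfli
{
  key carrid,                      // carrier ID
      count(*) as connection_count // aggregated in the DB, not in ABAP
}
group by carrid
```

An ABAP program can then simply `SELECT` from `Z_Connection_Count` and receive the already-aggregated result set, instead of looping over all SPFLI rows in the application server.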

Top-Down Approach for Development

The code-to-data paradigm means that data-intensive operations should be pushed down to the database. The obvious implementation of this would be the bottom-up approach, in which we would program stored procedures and views directly in the HANA database and then consume them in the application server as needed. Since the procedures are coded at the database level itself, they would run faster. Correct?

Yes, true. Even SAP thought the same way prior to NW 7.4 SP2, but later realized that this approach poses problems for general consumption:

  1. As a developer you would have to work in two environments: the HANA DB to create the DB artifacts, and ABAP to consume those artifacts as remote proxies. So far we have been insulated from the database layer; in fact, I have never directly logged in to the database layer at all.
  2. You will have to bear the responsibility of keeping your HANA and ABAP artifacts in sync and take care of the life cycle management.

So from NW 7.5 SP5 onwards, a change in methodology, the top-down approach, was adopted. The top-down approach is our usual way of working with ABAP development objects: you develop HANA-based ABAP artifacts in the ABAP application server itself and deploy (activate) them on the HANA database. It is just like our usual ABAP report development, where we develop the report at the application server level; the report is then activated, and a transport request is generated which can be released to move the object across systems.

Currently, the top-down approach is used for CDS views and ABAP Managed Database Procedures (AMDP).
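A minimal AMDP sketch illustrates the top-down idea (the class and method names, and the SPFLI-based logic, are assumptions for this example): the class is a normal ABAP artifact, created and transported in the ABAP stack, but the method body is SQLScript executed inside HANA.

```abap
CLASS zcl_demo_amdp DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.  " tags the class as AMDP-enabled
    TYPES tt_spfli TYPE STANDARD TABLE OF spfli WITH EMPTY KEY.
    METHODS get_connections
      IMPORTING VALUE(iv_carrid) TYPE s_carr_id
      EXPORTING VALUE(et_conn)   TYPE tt_spfli.
ENDCLASS.

CLASS zcl_demo_amdp IMPLEMENTATION.
  " Developed and activated like any ABAP class (top-down),
  " but the body below is SQLScript running inside HANA.
  METHOD get_connections BY DATABASE PROCEDURE
                         FOR HDB
                         LANGUAGE SQLSCRIPT
                         OPTIONS READ-ONLY
                         USING spfli.
    et_conn = SELECT * FROM spfli
                WHERE carrid = :iv_carrid;
  ENDMETHOD.
ENDCLASS.
```

On activation, the ABAP runtime creates and manages the corresponding database procedure in HANA for you, which addresses both problems of the bottom-up approach listed above.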

There are already a number of blogs on this topic, and I am planning to post a few on CDS views. What are your findings on S/4 HANA? Have you already started working on it, or are you interested in learning more about these topics? Let me know your views.

14 Comments

  1. Kripa Rangachari

    Very interesting post.

    I started following your blogs and YouTube videos. Thanks for sharing the knowledge.

    Regards,
    Kripa Rangachari.

  2. Michael

    It seems to be a bad idea to use these paradigms even if they are technically appealing. You explained yourself why it is not a good idea to push down.

    1. As a developer you would have to work in two environments, HANA DB to create the DB artifacts and ABAP to consume those artifacts as remote proxies.

    To keep your code maintainable over long years, or even generations of programmers, you would normally try to avoid distributing your program logic across two layers: the ABAP code, and some logic located in the database layer and possibly bound to a database vendor or database release. I.e., do not use any predefined procedure at the database level, even if it gives you some performance advantage.

    2. You will have to bear the responsibility of keeping your HANA and ABAP artifacts in sync and take care of the life cycle management.

    Keeping the database and application layer artefacts (manually) in sync is a risk for your productive environment.

    Overall, the new paradigm is bad from the viewpoint of maintainability and service stability, and thus for total cost of ownership.

  3. Raj

    We have heard the proverb “old wine in a new bottle.” In the early days, I remember client/server programming like VB/SQL Server or VB/Oracle: we used to develop stored procedures in the backend and consume them in the frontend in VB or some other tool.

    The performance of those applications was great, in spite of other challenges. Now we are getting there again, but in different flavors.

    The one thing that always confuses me in the SAP world: the concepts are more or less simple, but the terminology, wording, and branding SAP uses confuse me a lot.

    Good article, Linkin. Keep it up.

  • Yes Raj, I echo your thoughts. Old wine in a new bottle; a good way to put it.
    Initially it was a great idea when data processing was pushed to the database layer, but later there were considerable challenges to this approach. Hence, software later moved towards a solid three-tier architecture and kept data processing in the middle layer.

    But HANA and in-memory databases are challenging this thinking once again. The advantages are clearly visible, until of course we hit another snag. But for now, it seems this is the direction SAP is focusing on.

      Thanks for the comment and for being a subscriber.

  4. Manoj Priyadarshi

    It’s a great article, explaining SAP HANA in simple words for technical folks. Even though I have worked on one S/4 implementation project, it seems I need to learn a lot about S/4 HANA. Keep me updated.

    • Hey Harshit, good you asked that – I totally forgot to elaborate on it.

      Software is broadly classified into two types: OLTP and OLAP.

      OLTP stands for Online Transaction Processing. Software used to record daily transactions and computations comes under this category. For a typical OLTP system to function well, the underlying database needs to be in a normalized form. Usually this means real-time transactional data.

      OLAP, on the other hand, stands for Online Analytical Processing. Such software, as the name suggests, is used for reporting and data analysis. Such systems usually run on historical data which is not updated daily, and the database needs to be in a denormalized form to support faster data access and reporting.

      Because of the nature of these systems and the constraints they place on the underlying database design, a system could traditionally support either OLTP or OLAP, not both. HANA, through its column-store and row-store data storage principles, breaks this constraint and can therefore allow the same system to support both OLTP and OLAP at the same time, based on the application requirement.
