Key Design Decisions
Financial Products Subledger Integration

Introduction

This technical overview presents a comprehensive discussion on the system integration architecture for Financial Products Subledger (FPSL) in a single integrated environment, specifically focusing on the S/4HANA platform. The overview provides an in-depth exploration of the architectural direction, emphasizing the need to handle Extract, Transform, Load (ETL) outside of the S/4HANA platform to minimize customizations and reduce the overall Total Cost of Ownership (TCO).

Executive Summary

The overview explores the complexities of integrating with the SAP/FIONEER subledger solution, recommending a hybrid approach in which part of the integration is handled in S/4HANA while generic functions are performed with external integration tools. It delves into the critical design decision of where validations and error handling should occur and the implications of this choice on the integration build, toolset, and overall design.

The overview further discusses the concept of a common financial language, emphasizing the importance of data validation before importing data into the subledger. It also explores the challenges of handling complex transformations outside of the SAP S/4HANA platform and the need for external ETL tools to transform data to FPSL specific objects.

The overview also addresses the decision between packaged solutions and custom builds, highlighting factors such as integration with external data sources, validation and substitution capabilities, error handling, and tool selection. It also discusses the need for a control framework to manage the data load process and a reconciliation framework for financial system integration.

In conclusion, this technical overview provides a detailed guide on the system integration architecture for FS-CD, FPSL, and G/L, offering valuable insights and recommendations for professionals navigating this complex technical landscape.

Integration Architecture

When considering data integration, one of the first big questions to be addressed is where the integration should happen — outside of or on top of the S/4HANA platform. The image below outlines msg global's recommended system integration architecture for FS-CD, FPSL, and G/L in a single integrated environment.

[Figure: msg global FPSL systems integration architecture]

The general architectural direction is to handle ETL outside of the S/4HANA platform as much as possible. This minimizes the customizations required within SAP, reduces the work needed when upgrading to future releases or adopting possible future strategies such as public cloud offerings (in which the number of possible customizations may be limited), and lowers the TCO of the S/4HANA platform.

In practice, it is often cost-prohibitive to handle all integration work outside the SAP platform, considering the in-depth knowledge required from an experienced SI to effectively integrate with the SAP/FIONEER subledger solution. We recommend a hybrid approach, in which part of the integration is handled in S/4HANA, while generic functions can be performed in a more cost-effective manner through external integration tools.

Validation and Error Handling

The most significant design decision is determining where validations and error handling should occur, since it will add significant effort to the integration build and influence the toolset and the overall integration design. While validations are easily added, the challenge is managing bad data. Users need the ability to correct data — with the necessary restrictions and a corresponding audit trail — to report, and to comply with data retention policies.

The three most common options for SAP when it comes to validations and error handling are (a sketch illustrating how these strategies differ follows the list):

  • No Validations: Rely on upstream systems to ensure data quality.
  • Hard Error: The ability to partially reject data or to reject the entire data package (batch).
  • Data Correction: The ability to manually correct and reprocess data.
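
To make the trade-off concrete, the minimal Python sketch below shows how the three strategies differ in what happens to a failing record. The policy names, the validate callback, and the batch handling are illustrative assumptions, not FPSL or ETL tool functionality.

```python
from enum import Enum, auto

class ErrorPolicy(Enum):
    NO_VALIDATION = auto()    # trust upstream data quality
    HARD_ERROR = auto()       # reject the record or the entire batch
    DATA_CORRECTION = auto()  # park bad records for manual correction

def process_batch(records, policy, validate, reject_whole_batch=True):
    """Route records according to the chosen error-handling policy.

    `validate` is a caller-supplied function that returns a list of error
    messages for a single record (an empty list means the record is clean).
    """
    accepted, rejected, parked = [], [], []
    for rec in records:
        errors = [] if policy is ErrorPolicy.NO_VALIDATION else validate(rec)
        if not errors:
            accepted.append(rec)
        elif policy is ErrorPolicy.HARD_ERROR:
            rejected.append((rec, errors))
        else:  # DATA_CORRECTION: hold for manual fix and reprocessing
            parked.append((rec, errors))
    if policy is ErrorPolicy.HARD_ERROR and rejected and reject_whole_batch:
        # a single failure invalidates the entire package (batch)
        return [], [(rec, ["rejected with batch"]) for rec in records], []
    return accepted, rejected, parked
```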

An external integration tool with appropriate native capabilities should be chosen for validation and error handling, since building this in SAP requires significant custom development effort that should be avoided if possible. This may be a scary thought at first, but clients that walk down the path of building this in SAP usually also build the exact same functionality in upstream systems or the external integration tool, which means you are building redundant functionality and increasing your TCO.

There are, however, certain situations in which it is better to validate in the subledger. If data is received from multiple sources that must tie together in the subledger, then at least an “ability to fully/partially reject data” approach is preferred. For example, if your data source for master and transactional data is different, or if you receive actuals from admin systems and estimates from actuarial systems.

These types of validations are better handled in the subledger, since proper linkage of this data is critical for subledger processes to work correctly. Performing such validations requires access to the subledger data itself, as well as extensive knowledge of the internal data model. To handle such validations in an external tool, you would first have to integrate the subledger internal data model into the tool and understand the internal processes and linkage requirements. During an implementation, those are still under development and may change at a moment's notice, which makes such an approach very risky and highly cost prohibitive.

Any of these integration approaches is viable. It is most important that you consciously decide and that this decision is clear to all parties involved, so they can plan their efforts accordingly.

Common Financial Language

Common financial language describes a client's set of common attributes that are used for financial accounting and reporting, and their relationship with each other — attributes such as business direction (direct, assumed, ceded), line of business, and profit center.

Validation of the data before importing it into the subledger is of utmost importance, since failure to properly validate would lead to an inability to process or report on the data, as well as downstream integration issues when passing accounting results to the general ledger. One of the core concepts for ensuring the integrity of an accounting solution is that accounting documents cannot be changed; they can only be reversed. Accounting documents also cannot be modified before passing them downstream to the general ledger, since doing so would lead to irreconcilable differences between subledger and general ledger.

In addition, the general ledger has usually been around since before the introduction of a subledger and has been customized with its own set of complex validations. Neither an external tool nor Financial Products Subledger can directly leverage G/L substitutions and validations. Documents that have been posted in the subledger but do not pass general ledger validation will be stuck, and passing them on would require either removing validation rules or extremely slow, highly manual data manipulation in the subledger. All necessary validations must therefore be rebuilt. Many of these validations may need to be redesigned from the ground up, since G/L account determination is performed inside the subledger, while the validations must be performed prior to ingesting the data into the subledger. Due to this added complexity, a complete lift and shift of all G/L validation and substitution rules is most likely cost prohibitive. It is therefore critical to evaluate and redefine only the rules that are required in the context of the subledger implementation.

In typical implementations, we follow an 80/20 approach: roughly eighty percent of validations are performed by the external ETL, and only those validations that require integration of SAP data sources, or that would be cost-prohibitive to build externally, are implemented on top of SAP S/4HANA during the transformation process, before ingesting data into FPSL. Generally, it is more strategic and more cost-effective to handle validations in an external ETL tool, where the investment may also be shared with integrations to other platforms and the rules are easier to set up. This scale may tip, however, if it is necessary to integrate additional data sources that are already available within S/4HANA and would not require additional integration effort.

Common Source Template

Common source template (CST) describes the layout in which the data will be provided to the subledger. From an integration perspective, the key design decision is whether these CSTs should be aligned with the source data or with the target data model in the subledger.

Complex transformations should be handled outside of SAP S/4HANA within the external integration tool. At a bare minimum, we recommend using external ETL tools to transform the data into FPSL-specific objects, such as contract, coverage, portfolio, business transactions, and preliminary subledger documents.

Generally, external ETL tools should perform all transformations into the target format, as well as all validations and substitutions that can be performed solely based on the source data provided — those that do not require in-depth knowledge of the target data model or that are based on value sets unknown to SAP (custom code block dimensions). In case the data does not fulfill data validation requirements, the ETL solution must have automated rules to correct the data, offer the ability to hold and correct incorrect data (error correction), or be able to reject the data. It is the source system's responsibility to provide corrected data.

Subledger-specific validations include, but are not limited to, the following (a sketch of a few of these checks follows the list):

  • Contract/Coverage Versioning: FPSL cannot accept multiple contract versions for the same effective timestamp. Multiple versions (same contract, different attributes) must be sequenced by assigning appropriate timestamps. FPSL will only load the final version for a given data load.
  • Document Grouping / Splitting: A document in FPSL is identified by a number under its respective header attributes, such as contract, profit center, and cost center. Documents are limited to 999 line items.
  • Balanced Documents: A preliminary subledger document can only be accepted if debits and credits balance to 0 and conditions for document header criteria are met (e.g., a document may only belong to one posting date, company code, profit center, and contract).
  • Business Transaction ID: A BT ID must be unique.
  • Date: Dates must be valid (e.g., 2/30/2023 is not a valid date) and certain dates are related to each other (e.g., expiration dates cannot be earlier than effective dates).
  • Special/Hidden/Invalid Character Removal: The ETL tool should protect the application from all special/hidden/invalid characters, which may negatively impact reporting or program logic within FPSL.
  • Valid Values: The ETL solution should protect FPSL from invalid values, especially for custom/non-SAP specific elements.
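
The short Python sketch below illustrates a few of these checks — final-version selection, document balancing, business transaction ID uniqueness, and date validity. All function and field names (contract_id, effective_ts, amount, bt_id, etc.) are hypothetical placeholders rather than actual FPSL interface fields.

```python
from datetime import datetime

def keep_final_contract_versions(versions):
    """FPSL accepts only one version per contract and effective timestamp:
    keep the last version delivered for each (contract_id, effective_ts)."""
    final = {}
    for v in versions:  # assumes delivery order reflects the version sequence
        final[(v["contract_id"], v["effective_ts"])] = v
    return list(final.values())

def check_balanced_document(lines, tolerance=0.005):
    """A preliminary subledger document must balance to zero and carry a
    single combination of header attributes."""
    if abs(sum(l["amount"] for l in lines)) > tolerance:
        return ["debits and credits do not balance to 0"]
    headers = {(l["posting_date"], l["company_code"],
                l["profit_center"], l["contract_id"]) for l in lines}
    return [] if len(headers) == 1 else ["document spans more than one header combination"]

def check_unique_bt_ids(transactions):
    """Business transaction IDs must be unique within the delivery."""
    seen, errors = set(), []
    for t in transactions:
        if t["bt_id"] in seen:
            errors.append(f"duplicate business transaction ID {t['bt_id']}")
        seen.add(t["bt_id"])
    return errors

def check_valid_date(value):
    """Reject impossible dates such as 2/30/2023."""
    try:
        datetime.strptime(value, "%m/%d/%Y")
        return []
    except ValueError:
        return [f"invalid date {value!r}"]
```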

Source to Target Mapping

Source to target mapping describes the process of mapping the source data to the target common source templates (CST), which are ultimately expected by the subledger API. The source data may be received from a large number of source systems, possibly for different entities and geographic locations. All these systems maintain similar information in different ways, and implementation teams must decide who should be responsible for mapping from source to target.

As already mentioned under Common Source Template (CST), the subledger should be shielded from such complexities, which leaves this transformation to the source (decentralized approach) or to an intermediary ETL tool (centralized approach).

One of the main considerations when choosing the right approach is that whoever performs the mapping must have a thorough understanding of the source as well as the target data. While every source may be different, the target is a finite number of common source templates (CST), each of which comes with its own complexities. For example, for a contract master data CST one must ensure there is no duplicate version within a given delivery and, if duplicates are present, that the right one (the most current version) is provided to the subledger.

While during an initial implementation the scope may be limited to one or very few sources, we highly recommend establishing a central intermediary ETL layer to handle source to target mapping. While the initial effort may be higher, since the team must be familiar with the source as well as the target, the scale will quickly tip as additional source systems are added. ETL tools excel at maintaining mapping rules, and maintaining a central repository ensures that complexities within the target mapping are centrally addressed and can be consistently applied across all source to target mappings.
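
As an illustration of the centralized approach, the following sketch keeps all mapping rules in one repository and applies target-side complexities (such as duplicate version handling) in a single place. The source system names, field names, and rule structure are assumptions for illustration only.

```python
# Hypothetical central mapping repository: one rule set per source system,
# all producing the same contract master data CST layout.
MAPPING_RULES = {
    "POLICY_ADMIN_US": {"contract_id": "pol_no", "profit_center": "pc",
                        "effective_ts": "eff_date"},
    "REINSURANCE_EU":  {"contract_id": "treaty_ref", "profit_center": "profit_ctr",
                        "effective_ts": "valid_from"},
}

def map_to_cst(source_system, record):
    """Apply the centrally maintained rules to produce one CST record."""
    rules = MAPPING_RULES[source_system]
    return {target: record[source] for target, source in rules.items()}

def latest_version_per_contract(cst_records):
    """Target-side complexity handled once, centrally: keep only the most
    current version of each contract within a delivery."""
    latest = {}
    for rec in cst_records:
        key = rec["contract_id"]
        if key not in latest or rec["effective_ts"] > latest[key]["effective_ts"]:
            latest[key] = rec
    return list(latest.values())
```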

Packaged Solution vs. Custom Build

While software development cycles can be greatly shortened with modern development tools, custom-built solutions often still lack sophistication compared to packaged solutions. The overall effort may be underestimated, or the final solution may lack capabilities that a packaged integration framework would provide with very little effort.

Factors to consider are:


Integration with External Data Sources

In many cases, integration with SAP S/4HANA requires integrating data from various external sources. ETL tools excel in this aspect, offering connectors and adapters for various data formats and sources. Custom solutions may require additional development to accommodate these external data sources, making ETL tools a preferable choice for quicker and more efficient integration. This pertains to integrating source systems that originate the financial information to be transformed and to reference systems such as MDG, which are the source for data validations or mapping tables required to enrich the financial data with additional financial code block dimensions.


Validations and Substitutions

ETL tools offer robust validation capabilities, enabling the extraction of data from multiple sources and validating it against predefined rules. These tools often provide built-in functions for data cleansing, transformation, and substitution. Custom-built solutions require significant development effort to implement similar validation and substitution functionalities.


Error Handling and Error Correction Capabilities

ETL tools typically provide comprehensive error handling mechanisms, allowing for easy identification, logging, and tracking of errors during the data integration process. Additionally, they often support error correction capabilities, enabling automated or manual corrections of data issues. Custom solutions require extensive coding efforts to implement similar error handling and correction functionalities.


Tool Selection

As an SAP specialist, msg global solutions inc. is most experienced with and recommends SAP products such as SAP Integration Suite (PaaS), SAP Data Intelligence, and SAP Data Services (BODS). However, we often find ourselves integrating non-SAP external solutions and suggest leveraging the customer's preferred, well-established tools to maximize the benefits of existing skills, knowledge, and pre-existing rules, validations, and substitutions. While FPSL may require data for its interfaces to be provided in very specific formats, custom financial code block validations and substitutions are likely to be very similar between different financial system interfaces.

For a list of popular integration services, you may also refer to Gartner's 2023 Magic Quadrant for Integration Platform as a Service.

Data Transfer (FTP, HTTP, ODBC)

While packaged solutions typically include the ability to publish data in target applications, we often find ourselves having to define solutions for data transfer as well. This means the financial information required for FPSL is available fully transformed (contract, coverage, portfolio, business transaction, preliminary subledger document, etc.), but still must be transferred from a file to S/4HANA.

We recommend making data available to FPSL in staging tables on S/4HANA. While data load from a file is possible with additional development effort, we don’t recommend it, since external applications can write to staging tables, and handling files on S/4HANA would further increase the effort regarding data governance and possibly require more complex load balancing procedures to handle large data volumes.

To handle data transfers to S/4HANA, SAP offers Smart Data Integration (SDI), Smart Data Access (SDA), SAP BW Data Sources/Open Hub Connections, and SAP Data Services (BODS) as transfer mechanisms with which we have extensive experience. BODS runs on additional hardware, so technically it is an application external to S/4HANA in the context of this document.

It is also important to note that a client's ability to write directly to the S/4HANA database may be limited by the licensing agreement and must be carefully evaluated during the implementation.
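
As a hedged illustration of writing fully transformed data into an S/4HANA staging table, the sketch below uses the SAP HANA Python client (hdbcli). The staging table ZFPSL_STG_BT and its columns are hypothetical, and whether direct database writes are permissible depends on the licensing caveat noted above.

```python
from hdbcli import dbapi  # SAP HANA Python client

def write_to_staging(rows, host, port, user, password):
    """Write fully transformed business transactions into a hypothetical
    staging table ZFPSL_STG_BT on the S/4HANA database."""
    conn = dbapi.connect(address=host, port=port, user=user, password=password)
    try:
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO ZFPSL_STG_BT (BATCH_ID, BT_ID, CONTRACT_ID, AMOUNT) "
            "VALUES (?, ?, ?, ?)",
            rows,  # list of (batch_id, bt_id, contract_id, amount) tuples
        )
        conn.commit()
    finally:
        conn.close()
```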

Master Data

Subledger systems come with their own proprietary set of master data, at the level of which subledger processes are configured. Master data objects are key when implementing a subledger, ensuring processes can be executed effectively and controlling system behavior when processing transactions for a master data object. FPSL requires master data to be provided through a separate API before loading transactions for the corresponding object. Master data in an insurance subledger usually represents policies, treaties, claims, or actuarial cohorts.


Master Data On-the-Fly

There are two challenges when trying to create master data from a separate data source:

  • The higher the granularity of the master data, the less likely it is that there is a comparable data source within the client’s financial systems that can reliably be used as a source for the data.
  • Though master data may be closely aligned with a client’s known objects, such as policies and cohorts, the subledger often requires the master data to be created at a slightly different granularity to best leverage the subledger functionality in alignment with the requirements.

One way to minimize processing issues due to missing master data is to create master data on-the-fly out of transactional data. Depending on the complexity of the solution, however, this may also drive additional significant effort.

For example: To calculate IBNR, the subledger requires net premium earned (NPE), paid claims, case reserves, and loss ratio. From a subledger perspective, all this information is expected to be available at the cohort level; however, the data may arrive at different granularities. While NPE may be calculated at the policy level and rolled up to a cohort, loss ratios may be provided at a profit center/year level and must be pushed down to all of their respective cohorts.

We recommend creating master data on-the-fly to minimize integration issues and processing delays due to missing master data.
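
A minimal sketch of the on-the-fly approach, assuming hypothetical key attributes (company code, policy number, profit center): master data records are derived from the distinct keys found in the transactional data and created only where they do not yet exist.

```python
def derive_master_data_on_the_fly(transactions, existing_master_keys):
    """Create missing master data records from transactional data so that
    postings are not rejected due to missing master data.

    The key attributes (company_code, policy_no, profit_center) are
    illustrative placeholders.
    """
    new_master_records = []
    seen = set(existing_master_keys)
    for t in transactions:
        key = (t["company_code"], t["policy_no"], t["profit_center"])
        if key not in seen:
            seen.add(key)
            new_master_records.append({
                "company_code": key[0],
                "policy_no": key[1],
                "profit_center": key[2],
                # remaining master data attributes defaulted or derived here
            })
    return new_master_records
```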


Master Data ID Assignment

Master Data ID assignment describes the key attributes that form the master data object, such as company + policy number + profit center.

As mentioned in the previous section, the granularity of master data in the subledger is likely to be different from anywhere else in the client’s system landscape to best leverage subledger processing in accordance with the requirements. This also means the granularity may change over time, especially during an initial implementation, to match a change in requirements.

It is best to handle master data ID assignment in the subledger to be able to flexibly control the granularity of subledger master data objects. If the subledger provides granular data back to an external system (e.g., an actuarial system for calculations), this interface should be built based on external attributes and not the subledger internal ID.
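
A minimal sketch of subledger-side ID assignment, assuming a hypothetical key of company + policy number + profit center: external systems only ever see the external attributes, while the internal ID and its granularity remain under the control of the subledger integration.

```python
import itertools

class MasterDataIdAssigner:
    """Assigns subledger-internal IDs based on external key attributes.

    Keeping this mapping inside the subledger integration means the key
    granularity (here: company + policy number + profit center) can change
    without impacting external systems, which continue to interface on the
    external attributes only.
    """
    def __init__(self):
        self._ids = {}
        self._sequence = itertools.count(1)

    def internal_id(self, company, policy_no, profit_center):
        key = (company, policy_no, profit_center)
        if key not in self._ids:
            self._ids[key] = f"MD{next(self._sequence):010d}"
        return self._ids[key]
```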


Master Data Delta Detection

If master data is created on the fly and master data ID assignment is handled within the subledger, the subledger will most likely have to handle multiple redundant data loads for the same subledger object. Delta detection describes the ability to identify duplicate and redundant data loads and discard the unnecessary ones accordingly.

At a minimum, one must consider the grouping and sorting of master data objects within a given data load (batch); depending on the processing requirements, however, it may also be necessary to avoid redundant loads across batches, since they may have performance implications for the subledger processes.
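
One possible way to implement delta detection is to hash the non-key attributes of each master data object and compare the hash against the previously loaded version, as in the sketch below; the key fields and hashing scheme are illustrative assumptions.

```python
import hashlib
import json

def attribute_hash(master_record,
                   key_fields=("company_code", "policy_no", "profit_center")):
    """Hash all non-key attributes to detect whether a redelivered master
    data object actually changed."""
    payload = {k: v for k, v in sorted(master_record.items()) if k not in key_fields}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True, default=str).encode()
    ).hexdigest()

def detect_deltas(incoming, previously_loaded_hashes):
    """Discard redundant loads: keep only new or changed master data objects.
    `previously_loaded_hashes` maps the key tuple to the last loaded hash."""
    deltas = []
    for rec in incoming:
        key = (rec["company_code"], rec["policy_no"], rec["profit_center"])
        h = attribute_hash(rec)
        if previously_loaded_hashes.get(key) != h:
            deltas.append(rec)
            previously_loaded_hashes[key] = h
    return deltas
```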

Integration Layers

Integration layers describe the steps the data must go through in S/4HANA before it can be ingested by the native subledger API.

  • One Layer: If all transformations, substitutions, and validations occur in the external integration tool, then a one-layered staging approach may be chosen. The data is delivered in exactly the format and granularity required by the system, and the staging layer is solely leveraged to support load balancing when loading transactions into the system.
  • Three Layers: If any form of transformation, validation, or enrichment is performed in the subledger, a three-layered approach should be chosen to support reconciliation and optimal performance and to avoid complicated modifications.


Acquisition Layer

The acquisition layer is where the data from external systems is delivered. For reconciliation and traceability, retain this data as is and do not modify it in any way. We generally recommend that this data be delivered into a staging table, since the subledger application can work directly with tables and this reduces the administrative overhead of managing staging data. If the data is delivered via files, a custom process must be built to load the data into an acquisition table, and additional processes are required to manage the lifecycle of the file (e.g., moving it from an inbound to an archive folder and finally deleting it after a specified retention period).


Transformation Layer

Key for the transformation layer is that processing of the data must occur in parallel to maximize the throughput of the data ingestion process. In the transformation layer, the system performs validations and substitutions, and the granularity of the data may change — for example, if the data is stored at a more granular level in the subledger. The transformation layer should handle all processing-intensive operations.

As discussed in Validation and Error Handling, Common Financial Language, and Common Source Template, the goal is to minimize the number of transformations required in the integration layer of the subledger, although this cannot always be avoided. The key is to avoid building redundant validations and substitutions and to ensure a clear rationale for which logic is handled in the subledger integration versus the external integration tool, in order to minimize the overall implementation cost.


Data Load Layer

Finally, the data must be provided to the subledger API through a data load layer. In the data load layer, all data must be validated and conform to all subledger specifications. Only operations which cannot be performed in parallel processing should be performed during the transfer from the transformation to the data load layer.

For example, when assigning document numbers in the subledger you may have to consider the entire batch and properly group the transactions by document header attributes, such as company, policy number, and profit center.
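
A simplified sketch of such batch-level document number assignment is shown below: line items are grouped by hypothetical header attributes and split at the 999 line item limit mentioned earlier. A real implementation would additionally have to keep each split document balanced.

```python
from collections import defaultdict
from itertools import count

MAX_LINES_PER_DOCUMENT = 999  # FPSL line item limit per document

def assign_document_numbers(batch_lines, number_sequence=None):
    """Group line items by document header attributes and assign document
    numbers for the entire batch, splitting groups that exceed the limit.

    Header attributes (company_code, policy_no, profit_center) are
    illustrative placeholders.
    """
    seq = number_sequence or count(1)
    groups = defaultdict(list)
    for line in batch_lines:
        header = (line["company_code"], line["policy_no"], line["profit_center"])
        groups[header].append(line)
    for header, lines in groups.items():
        for start in range(0, len(lines), MAX_LINES_PER_DOCUMENT):
            doc_no = next(seq)
            for line in lines[start:start + MAX_LINES_PER_DOCUMENT]:
                line["document_number"] = doc_no
    return batch_lines
```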

The data load layer also usually integrates with the control framework to only make fully enriched and validated packages available to the subledger application.

If possible, the data load layer should be expressed through a HANA view on top of the transformation layer, which can reduce complexity and efforts for the integration build. It is important to note, however, that HANA views require a HANA enterprise license and may not always be available to the SI.

Integration Layer Technologies

When building the integration, one must choose from various available technologies in SAP, such as HANA tables, Dictionary tables, and BW ADSOs.

  • HANA Tables: The key issue with native HANA tables is that they are not available through most SAP standard applications, such as SE16 (Data Browser). This is a major disadvantage: most resources on a project will not be able to access these tables themselves, and reconciling and validating between staging tables and the subledger application will take up a significant amount of time during integration testing. It is critical that the project team has access to this information.
  • Dictionary Tables: Dictionary tables are defined in the SAP Data Dictionary and most people are very familiar with these tables. They are easily created and maintained, and standard tools are available to access these tables.
  • BW ADSOs: Advanced DataStore Objects are modeled in BW and are specifically designed to support data ingestion processes. They come with basic, standard functionality to support data loads, data deletion, parallel processing of transformations, and error handling. Given the additional functionality, there is also more administrative effort to configure ADSOs, and standard APIs must be used to read from and write to them.
  • HANA Views: HANA views provide a powerful toolset to expose the data to the subledger without the need to replicate the data. This makes HANA views an ideal tool to replace the data load layer; however, as mentioned in the previous section, the ability to use HANA views may depend on your licensing agreement.

Generally, we recommend the use of BW ADSOs due to the significant amount of out-of-the-box functionality available to support the data integration process.

Data Load

Once the data is fully transformed, validated, and available in the data load layer, it is ready to be consumed via the FPSL Data Load Layer (DLL) process. This process is used to load the data from staging tables into FPSL via pre-defined interfaces. The DLL framework is extremely powerful when it comes to managing even the largest data loads and loading the data quickly into the system.

While FPSL handles some validations out of the box for internally required FPSL characteristics, and some capabilities exist to enhance validations within DLL depending on the interface, we generally do not find these validations sufficient to protect the application from bad data, which would lead to reporting inconsistencies or downstream integration issues when passing the data on to the general ledger, for example.

One of the core principles of a subledger is that once a document is posted, it can no longer be modified; it can only be reversed. At the same time, no document can be excluded when passing information from the subledger to the general ledger, as this would lead to inconsistencies and an inability to reconcile between subledger and general ledger — to the point at which the subledger could no longer be trusted as the single source of truth.

For these reasons, it is extremely important to thoroughly validate the data before accepting it into the subledger. Typically, we review existing validations and substitutions in the general ledger and, where applicable, ensure they are applied in the integration layer for the subledger. Depending on the integration scenario, these may have to be redesigned, since in the subledger, validations may have to occur at the master data level rather than the transaction data level, or G/L account level validations must be translated to the business event level.

Control Framework

Control Framework refers to a custom data model that operates on a package (batch) level. Once a package is delivered, the external integration tool makes the package available to the subledger by placing a package control record into the framework.

The subledger integration uses this control record to manage the data load and updates the status to indicate the different stages of the integration process, as well as successful/unsuccessful load.
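
A minimal sketch of such a package control record and its status lifecycle is shown below; the status values and attributes are illustrative assumptions rather than a prescribed data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class LoadStatus(Enum):
    DELIVERED = "delivered"        # package placed by the external tool
    TRANSFORMING = "transforming"  # subledger integration picked it up
    READY_FOR_LOAD = "ready"       # validated and fully enriched
    LOADED = "loaded"              # successfully loaded into FPSL
    REJECTED = "rejected"          # failed validation, returned to sender

@dataclass
class PackageControlRecord:
    batch_id: str
    record_count: int
    total_debit: float
    total_credit: float
    status: LoadStatus = LoadStatus.DELIVERED
    status_history: list = field(default_factory=list)

    def set_status(self, new_status: LoadStatus):
        """Track each stage of the integration process with a timestamp."""
        self.status_history.append((self.status, datetime.now(timezone.utc)))
        self.status = new_status
```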

Establishing a control framework is highly recommended to effectively manage the data load process.

Reconciliation Framework

As part of any integration of financial systems, a technical reconciliation procedure must be established that enables reconciliation between the sending and receiving systems. This includes reconciliation from the source to the ETL layer and from the ETL layer to FPSL. Since there are typically complex transformations involved between source and FPSL, a direct reconciliation between source and FPSL is usually not possible.

We recommend this reconciliation be handled by the ETL or Data Transfer solution, which means, for the data transfer between ETL and FPSL, the solution would receive control totals (record count, total debit, total credit) from ETL, write the data to staging tables, establish the control totals from staging after the data has been written, and make a reconciliation report available.
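
The following sketch illustrates the comparison of control totals between ETL and staging; the field names and tolerance are assumptions for illustration.

```python
def reconcile_control_totals(batch_id, etl_totals, staging_rows, tolerance=0.005):
    """Compare the control totals delivered by the ETL tool with the totals
    recalculated from the staging tables after the data has been written.

    `etl_totals` is expected as {"record_count": ..., "total_debit": ...,
    "total_credit": ...}; each staging row carries a signed `amount`.
    """
    debits = sum(r["amount"] for r in staging_rows if r["amount"] > 0)
    credits = sum(-r["amount"] for r in staging_rows if r["amount"] < 0)
    staging_totals = {
        "record_count": len(staging_rows),
        "total_debit": round(debits, 2),
        "total_credit": round(credits, 2),
    }
    differences = {
        key: (etl_totals[key], staging_totals[key])
        for key in staging_totals
        if abs(etl_totals[key] - staging_totals[key]) > tolerance
    }
    return {"batch_id": batch_id, "in_balance": not differences,
            "differences": differences}
```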

As part of the reconciliation framework, we recommend that the source system or ETL also make reconciliation IDs available to better support reconciliation:

  • Batch ID: An ID that identifies a single data load/package from ETL/source; this could be tens of millions of transactions. The batch ID ties together all records received for a single data load across all interfaces (contracts, coverage, portfolio, business transactions, preliminary subledger documents, etc.). For transformation processes, the batch ID allows the system to reject an entire batch if a single record fails validation. Depending on the interface, the batch ID may be traced all the way to the general ledger.
  • Transaction/Business Case ID: A group of transactions that belongs together (e.g., external document number). Depending on the interface, a Transaction ID may be traced all the way to the FPSL subledger document.
  • Record ID: A single record identifier within a given batch. Depending on the interface, a record ID may be traced all the way to the FPSL subledger document.

While we do recommend that the end-to-end reconciliation be handled by the ETL/data transfer application, we also recommend establishing a batch control table and having the ETL/data transfer tool populate it once a batch has been successfully transferred and is ready for consumption within FPSL. This batch control table can then be used to automate the handover and trigger transformation and data load into FPSL. During transformation, additional status information may then be added to the control table to handle acceptance or rejection in case of data validation issues and to prevent the load of partial or incomplete data into FPSL.

msg global solutions

The contents discussed in this technical overview highlight the need for companies to choose highly experienced system integrators. Integrating Financial Products Subledger into any complex enterprise landscape relies on proper setup and configuration.

msg global solutions is a preferred System Integrator (SI) for SAP based Finance Solutions in Banking and Insurance with hundreds of practitioners across the globe and a proven track record of successful implementations with some of the largest financial services providers in the world.

You can find more information on our implementation services here.

About the Author

Jan Bornholdt | Director, msg global solutions

Jan Bornholdt is a director at msg global solutions, Inc. As an insurance subledger technology expert, he has worked with various SAP-based subledger technologies for over 20 years. In his career, Jan has helped dozens of clients successfully integrate SAP subledger technologies into their overall financial system landscape.

© 2024 msg global solutions ag