Tuesday, April 1, 2014

LSA and LSA++

LSA (Layered Scalable Architecture) was the hot BW jargon about three years ago, when most companies were striving to standardize their data modelling design. Of course, I personally believe not every dataflow requires LSA, for example straightforward data loaded from an Excel file. Nevertheless, the beauty of LSA is that we minimize the risk of losing historical data (the PSA was supposed to be housekept regularly, not to serve as historical data storage the way it does in LSA++), and changes in the transformation layer can be reloaded safely from the propagation layer without the need to re-initialize. Also, if we manipulate data in a start routine between the propagation DSO and the transformation DSO, the key figures are not cumulated while looping over the source package, unlike a start routine between a DSO and an InfoCube. Finally, the aggregated data flows up to an InfoCube for reporting. There is also a Corporate Memory layer, built on write-optimized DSOs, to retain the full history of loaded data.
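To make the start-routine point a bit more concrete, here is a minimal sketch of what such a routine body might look like. This is my own illustration, not from the original post: in BW 7.x the enclosing class and method signature are generated by the system, and all the /BIC/ field names below are invented.

* Sketch of a BW 7.x start routine body (propagation DSO -> transformation DSO).
* The surrounding class/method and the SOURCE_PACKAGE type are generated by
* the system; the /BIC/ field names here are hypothetical.
FIELD-SYMBOLS <ls_source> TYPE _ty_s_sc_1.   " generated source structure (name may differ)

LOOP AT SOURCE_PACKAGE ASSIGNING <ls_source>.
  " Example manipulation: scale up amounts that were delivered in thousands
  IF <ls_source>-/bic/zunit = 'TSD'.
    <ls_source>-/bic/zamount = <ls_source>-/bic/zamount * 1000.
  ENDIF.
ENDLOOP.

* Records we do not want to pass on can simply be dropped from the package
DELETE SOURCE_PACKAGE WHERE /bic/zstatus = 'X'.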

OK, with in-memory technology, famously known as HANA, the snowflake schema gets flattened out. With no more SIDs, reporting no longer needs an InfoCube; in-memory-optimized DataStore objects can be used for reporting instead. So we now have LSA++: from four layers we are down to three. We can also see that the InfoSource is back in the picture, and I have always thought the InfoSource is useful when it comes to two-step transformations.

The data is acquired in the open operational data store layer, where the PSA serves as the historical data foundation; no transformations or aggregations are defined in this layer. The data is then harmonized and transformed into the core EDW layer, where the DataSource and the DSOs are connected by an InfoSource. A virtual data mart layer is used for reporting: the InfoProviders that reside in this layer do not contain any data. Instead, they describe which data is accessed and how it is displayed semantically to the end user. MultiProviders usually access the data from the DataStore objects.
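As a rough illustration (my own sketch, not from SAP documentation) of why the cube layer becomes optional: with HANA the aggregation can be pushed down to the database at query time, straight against a DSO's active table. The table /BIC/AZSALES00 and its fields are invented for this example.

* Illustrative only: aggregate directly on a (hypothetical) DSO active table
* /BIC/AZSALES00 at query time, instead of persisting aggregates in an InfoCube.
TYPES: BEGIN OF ty_result,
         matl_group TYPE c LENGTH 9,
         amount     TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_result.

DATA lt_result TYPE STANDARD TABLE OF ty_result WITH DEFAULT KEY.

* HANA performs the GROUP BY / SUM in memory; the result table is filled
* positionally (material group, then summed amount).
SELECT /bic/zmat_grp SUM( /bic/zamount )
  INTO TABLE lt_result
  FROM /bic/azsales00
  GROUP BY /bic/zmat_grp.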


1 comment:

  1. what about corporate memory, how and where do you see that?
