The reality is:
1) The adoption level and usage of the reports drive the BI business and the need for services from the vendor and shared service
2) The business pays for enhancements and new reports
3) Data accuracy is most important (followed by the availability of reports), and many build defects or missing/incorrect business rules are only discovered at a later stage. In an environment with multiple SAP R/3 systems, data accuracy and the feasibility of consolidating data from multiple systems rely on the data standards and business rules. Data standards are usually adopted in the source system and extracted to BI, but in some cases there will be a requirement to standardize the master data in the BI layer due to the complexity of getting it done in R/3, e.g. to measure the consolidated sales of the same product that is named differently in different systems.
Monday, February 28, 2011
Wednesday, February 23, 2011
Global and Regional Authorization Concept
Concept:
- Global AA role (A)
- Global role (B)
- Global composite role (A+B+E)
- Regional AA role (C)
- Regional role (D)
- Regional composite role (C+D)(E)
Authorizations can be inserted into roles, which are used to determine what type of content is available to specific user groups.
Authorization Objects
Authorization objects enable you to define complex authorizations by grouping up to 10 authorization fields in an AND relationship to check whether a user is allowed to perform a certain action. To pass an authorization check for an object, the user must satisfy the check for each field in the object.
Use transaction SU21 to maintain authorization objects. The major ones start with RS.
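The AND relationship across fields can be sketched as follows. This is an illustrative Python model, not SAP code; the object and field names (RSINFOAREA, RSINFOCUBE, ACTVT) are chosen for the example.

```python
# Sketch: an authorization object groups up to 10 fields, and the user
# must pass the check on EVERY field (AND relationship) to be authorized.

def check_authorization(auth_object: dict, user_auth: dict) -> bool:
    """Return True only if the user's values satisfy every field."""
    for field, allowed_values in auth_object.items():
        if user_auth.get(field) not in allowed_values:
            return False  # one failed field fails the whole object
    return True

s_rs_comp = {  # hypothetical object with three fields
    "RSINFOAREA": {"FI_AREA"},
    "RSINFOCUBE": {"FI_CUBE1", "FI_CUBE2"},
    "ACTVT": {"03"},  # 03 = display
}

user = {"RSINFOAREA": "FI_AREA", "RSINFOCUBE": "FI_CUBE1", "ACTVT": "03"}
print(check_authorization(s_rs_comp, user))  # True: all fields pass
user["ACTVT"] = "02"  # change activity to one not granted
print(check_authorization(s_rs_comp, user))  # False: ACTVT fails
```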
Analysis Authorization (AA)
An AA defines the semantic data slices a user is allowed to see in reporting, e.g. all data belonging to a company code variable xxx that goes through a user exit during query runtime. The InfoObjects involved have to be defined as authorization relevant.
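The idea of a semantic data slice can be sketched like this: the user is granted a set of allowed company codes, and query results are restricted to that slice at runtime. The records and field names are made up for illustration.

```python
# Sketch of the effect of an analysis authorization: restrict query
# results to the data slice (here: company codes) granted to the user.

records = [
    {"comp_code": "1000", "sales": 500},
    {"comp_code": "2000", "sales": 300},
    {"comp_code": "1000", "sales": 200},
]

def apply_analysis_auth(rows, allowed_comp_codes):
    """Keep only rows whose company code the user may see."""
    return [r for r in rows if r["comp_code"] in allowed_comp_codes]

visible = apply_analysis_auth(records, {"1000"})
print(sum(r["sales"] for r in visible))  # 700: only company code 1000
```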
Demand IT in BI
Ensures the reports are utilized
Acts as a bridge between end users and the BAU organization to obtain funding for valid/required changes, especially master data changes and standardization
Ensures business users' availability for UAT
Business Release in BI
In a big organization that includes several regional instances, changes promoted to the production system of any IT system have to adhere to the business release timeline.
This is to ensure the impacts are assessed and minimal risk is introduced to the production environment. There is also a need to cross-check dependent changes to ensure the correct sequence of importing the changes into the different systems is followed.
This is very important because it:
- Prevents new changes from being overwritten by old transports (watch especially for orphan transports and transports not in the build list)
- Ensures a DataSource is not imported too late or replicated afterwards, e.g. shared DataSources like 2LIS_02_SCL (Purchase Order History), which is shared between SRM and BI (procurement solution)
- Enforces the correct data loading and transport sequence (top-down dependency and cross-solution dependency)
- Avoids inactive objects being discovered later, after the development system has been reopened for new changes, when re-transport is impossible
- Ensures shared DataSources and dataflows are not impacted, e.g. OTIF (sales forecast) and COPA (sales volume)
- Ensures the conversion of the logical system name is done in parallel (in the same business release) for all 'target' systems like BI and APO that feed on the same ERP system when the feeding system changes its logical system name
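The import-sequence concern above is essentially a dependency-ordering problem: a DataSource transport must land before the dataflow that uses it, which must land before the query on top. A minimal sketch using Python's standard topological sorter, with hypothetical transport names:

```python
# Sketch: derive a safe import order for transports across systems so
# that dependencies always go in first (Python 3.9+ for graphlib).
from graphlib import TopologicalSorter

# maps each transport to the transports it depends on
deps = {
    "BI_DATAFLOW_01": {"R3_DATASOURCE_01"},  # needs the DataSource first
    "BI_QUERY_01": {"BI_DATAFLOW_01"},       # query built on the dataflow
    "R3_DATASOURCE_01": set(),
}

import_order = list(TopologicalSorter(deps).static_order())
print(import_order)  # DataSource before dataflow, dataflow before query
```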
Change Management in BI
The key to success for a global platform is strong governance. To achieve strong governance in a big organization, a robust and effective process has to be in place. This includes a deep understanding of the business and technical changes that impact BI. One rule of thumb of change management in BI is that changes in the target system won't impact the feed system, but changes in the source system might impact the target system. For example: a COPA realignment is done in R/3, which changes the historical master data, so BI must reinitialize the delta. Or new characteristics are created under a new class for a new/existing material group; in order to report on the new attributes, BI needs to regenerate its material classification DataSource in R/3. A criterion for successful change management is the ability to understand the root cause of issues both technically and functionally and to assess their risk and effort level accurately. Knowing how to identify and fill the gaps between in-source responsibility and outsourced capability is equally important to drive an efficient BAU process and avoid redundant processes.
The objective of a change process is to ensure there is a standard and control for changes that land in the system, in order to mitigate the risk of defects and impacts. But the control must be flexible enough to handle different types of scenarios, ranging from project mode, project-to-BAU transitioning, shadow support, post go-live, warranty fix, technical go-live, business go-live and so on. For example, during the warranty or post go-live period, it is impossible to demand a CR for each fix, as it is not uncommon to spot a handful of bugs and breaks in that period. In some cases, when the project and Demand IT teams do not agree on the project release timeline, there will be a release for technical go-live, and only once the BI reports have been adopted by business users is the project moved to the business go-live stage. Such a scenario is very complex to handle in BAU management and project resource management, especially when the technical warranty period has lapsed but more issues are encountered during business go-live. As such, the change management process should be flexible enough to apply different levels of control at different stages.
Friday, February 18, 2011
BI Readiness in the Global Arena
- The adoption level of regional business users in utilizing BI reports as a reliable source for decision making
- A BI roadmap that ensures strategic implementation, maintainability and governance adhering to a tactical operational model
- Standard business rules that might impact data mapping and conversion factors, agreed by all regional stakeholders
- Readiness of standard master data
- Established interdependency between BI and the ERP/APO/source systems
- Existence of Demand IT and a body to govern functional changes
- Existence of an Information Office to bridge the business users, Demand IT, Solution Delivery and the Solution Center
- A solid and efficient business release management process
- Strong governance and an efficient change management process
- A good partnership between the BAU organization and the project and support teams
Thursday, October 14, 2010
Material Classification
Any new/replacement/consolidation of a material classification, or a change to the R/3 material classification in terms of format, requires regeneration of the BI DataSource. The characteristic is identified from the relevant material class and added to the DataSource. The DataSource is then activated and replicated on the BI side, and a 1:1 transformation or routine is created to fetch the changed/new attribute values.
If it is a new attribute, a new global InfoObject needs to be created from the template box upon approval from the InfoObject forum (which agrees on the type, length and naming) and included in the regional material InfoObject as either a navigational or a display attribute. The process chain is enhanced to automate the load of the new attribute.
If the new attribute is created as a navigational attribute, the InfoCube containing the data for reporting needs to have the navigational attribute activated. If it is used as a characteristic in a cube dimension, then the cube needs to be remodeled to include the new InfoObject. The query is also changed to pull the new attribute from the cube into the reporting column.
One point to note when deciding between a dimension characteristic and a navigational attribute: if a dimension characteristic is used, the transactional data has to be deleted before the master data can be deleted, whereas with a navigational attribute you can delete the 'unclean' master data directly.
Transport of the material classification to the test, regression and production systems ensures consistency, but if the client is different, the DataSource needs to be regenerated directly in that system via CTBW.
A lot of the time, there is a need to view historical snapshots: users might want to see the master data as it was at a given period. The most significant case is the material master, where attributes are added and become obsolete over time. The process of master data standardization across regional R/3 systems requires changes to the material master data attributes and to the way they are reported.
E.g. Material 123:
Jan 2008: Attr1 - Yes, Attr2 - No, Attr3 - NA, Attr4 - Yes
July 2008: Attr1 - Yes, Attr2 - Yes, Attr3 - No, Attr4 - Yes
Jan 2009: Attr1 - No, Attr2 - Yes, Attr3 - No, Attr4 - Yes
Options:
1) Include those characteristics in the cube
2) Use time-dependent navigational attributes and introduce key dates in the query
3) Use a version-dependent hierarchy
Option 1
Shows and sums up figures according to the historical master data, even when the user wants to see the latest version.
Option 2
A query can't have two key dates, so the user cannot compare two previous snapshots, e.g. if one master data set was loaded on 15.12.2007 and the second on 11.12.2008.
Option 3
Does not cater for many characteristics, and not all characteristics are parent-child related.
In scenarios like this, the most common approach is Option 1, but with an additional navigational attribute representing the latest correct master data in the reporting InfoCube.
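The mechanics behind Option 2 can be sketched as follows: a time-dependent attribute stores validity intervals, and the query's key date picks the value valid on that date. The intervals loosely mirror Attr1 of the Material 123 example above; the exact validity dates are assumed for illustration.

```python
# Sketch: key-date lookup against a time-dependent attribute's
# validity intervals (the way a time-dependent navigational
# attribute behaves at query runtime).
from datetime import date

# (valid_from, valid_to, value) intervals for Attr1 of material 123
attr1_history = [
    (date(2008, 1, 1), date(2008, 12, 31), "Yes"),
    (date(2009, 1, 1), date(9999, 12, 31), "No"),
]

def value_at_key_date(intervals, key_date):
    """Return the attribute value whose validity covers the key date."""
    for valid_from, valid_to, value in intervals:
        if valid_from <= key_date <= valid_to:
            return value
    return None  # no interval covers the key date

print(value_at_key_date(attr1_history, date(2008, 6, 15)))  # Yes
print(value_at_key_date(attr1_history, date(2009, 2, 1)))   # No
```

Because only one key date can drive the query, comparing two historical snapshots side by side is exactly what this model cannot do, which is the limitation noted for Option 2.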
Most of the time when a product evolves, the R/3 system needs to create a new class for certain material groups with new characteristics assigned to it. These changes require changes in BI as well. The main material InfoObject has to be extended to take the new characteristics as attributes, populated in new InfoObjects. The new material group also has to be included in any hardcoded product category or UOM derivation.
Thursday, June 3, 2010
Financial report columns
If you are a report designer, these definitely sound familiar to you:
1) YTD Actual
2) Full Year Forecast
3) SPLY (Same Period Last Year)
4) vs SPLY
5) vs Previous QPR
But what do these columns actually mean in general?
Important factors:
i) The matrix for the calculation
ii) The definition of the forecast version and how many versions there are in a year
iii) The same rules apply in the manual entry layout and the actual report queries
iv) Use structures to have a single version of truth for formulas and calculations
v) Cross-year rules & balance carry-forward (controlled at the backend)
vi) Conversion type and exchange rate type
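A back-of-envelope sketch of what the columns above compute, using made-up monthly actuals and forecast figures (period number mapped to an amount):

```python
# Sketch: YTD Actual, Full Year Forecast, SPLY and vs SPLY from
# monthly figures. All numbers are invented for illustration.

actual_2010 = {1: 100, 2: 110, 3: 90}           # last year (SPLY base)
actual_2011 = {1: 120, 2: 130, 3: 95}           # current year to date
forecast_2011 = {m: 100 for m in range(4, 13)}  # remaining months forecast

current_period = 3

# YTD Actual: actuals up to and including the current period
ytd_actual = sum(v for p, v in actual_2011.items() if p <= current_period)

# Full Year Forecast: actuals to date plus forecast for the rest of the year
full_year_forecast = ytd_actual + sum(forecast_2011.values())

# SPLY: the same periods, one year earlier
sply = sum(v for p, v in actual_2010.items() if p <= current_period)
vs_sply = ytd_actual - sply

print(ytd_actual)          # 345
print(full_year_forecast)  # 1245
print(vs_sply)             # 45
```

Defining these once in a reusable structure, as factor iv) suggests, keeps the manual entry layout and the actual report queries on the same formulas.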
Global EDWH
What does a global BI landscape mean for a big organization that has its own regional BI and R/3 instances? The initiative of cost cutting through deployment of a global template to the regional layer seems like a strategic decision, but it has its own challenges: complex regional business rules, standardization of master data, buy-in of regional stakeholders, and management of regional-global deployment as BAU.
SAP BI Administration & Monitoring

What are the responsibilities of an SAP BW Administrator?
Most companies have BW administrators who are responsible for R/3 administration as well. Depending on the SAP landscape and the version of the BW system the responsibilities can vary, but the most common ones include installation and upgrade of the BW system, backup and recovery, performance tuning, setting up and supporting the transport system, applying patches and so on.
Additional BW system administrator responsibilities could include troubleshooting R/3, handling database and UNIX system problems, and copying and renaming existing R/3 systems.
But this looks like Basis stuff to me...
Hey, don't be limited to that space. This book from SAP Press tells you things about SAP BI administration that even a BI expert may never have known. Check it out.
Here is a link where you can download the common Basis transaction codes, together with explanations and screenshots of each transaction.
Global Transport Strategy
Imagine having multiple projects working on your BI system concurrently. Then consider the business release cycle that allows transports to be imported into the regression and production servers along a strict timeline. Now, what happens when shared objects (especially InfoObjects and DataSources) are overwritten, or manual steps are missed before a transport goes in? Surely some thought has to go into strategizing the transports, and that is what this post is about.
Actions for pre-cutover:
1) Engagement with Basis on the system refresh for the regression server, logical system name mapping and RFC connections
2) Engagement with the different projects on the manual steps between transports and the data loading sequence (two projects working on different solutions may share the same DataSource, e.g. Marketing and Finance both retrieve data from COPA)
3) Preparation of the transport build recipe
4) Communication to the respective stakeholders on dates, system lockout and the cutoff date for the last transport to be included in the build recipe
5) List of users to remain unlocked and process chains to be 'paused' during cutover
6) Manual steps during cutover and the parties involved, e.g. replication from R/3
Actions for post-cutover:
1) Compare inactive objects before and after cutover
2) Ensure all process chains run successfully
3) Verify reports can be executed successfully
4) Document lessons learned, e.g. cube content should be truncated prior to sending in a change that adds a navigational attribute, in order to shorten the transport time
5) Raise awareness with the respective parties that changes in their systems can impact BI, e.g. changing the logical system name in a source system might result in delta extraction failure in BI; another example is that a COPA realignment in R/3 will break the delta extraction in BI
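Post-cutover check 1 is a simple set comparison: snapshot the inactive objects before cutover, snapshot again after, and anything newly inactive was left that way by the imports. A sketch with illustrative object names:

```python
# Sketch: diff the inactive-object lists captured before and after
# cutover to find objects the transports left inactive.

inactive_before = {"ZTRANSF_OLD"}               # already inactive pre-cutover
inactive_after = {"ZTRANSF_OLD", "ZDSO_SALES"}  # snapshot after import

newly_inactive = inactive_after - inactive_before
print(sorted(newly_inactive))  # ['ZDSO_SALES'] needs reactivation
```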
Can't have a bug-free BI system
In a BI reporting environment, especially one that reports at global and regional level, a number of report design and modeling techniques need to be considered to prevent the future pain of spending many man-days fixing them. Most issues arise from the frontend, such as data binding in web templates and incorrect variables applied in BEx queries. The bigger issues lean towards data standardization and the cleanliness of master data across all the SAP and non-SAP feed systems. In order to consolidate the figures for global reporting, the master data has to be compounded to its source system, or a single set of master data has to be agreed upon by the different regional master data management teams as the global standard. BI standards and governance play a major role here, all the way from standardizing InfoObjects (and their attributes) and hierarchies to mapping DSO/table contents, as they are the baseline for the accuracy of the data churned out from the ETL layer into the reporting display. There is no escape from having to perform multiple data reloads or self-transformations whenever one of those objects changes. Ever-changing business processes require snapshot reporting, and this introduces complexity in terms of the time dependence and versioning of master data and hierarchies.
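The compounding idea can be sketched as follows: compounding to the source system keeps the same material number from two regional systems distinct, while an agreed global identifier lets the figures consolidate. The system names, material numbers and global ids are all hypothetical.

```python
# Sketch: compounded regional keys vs. an agreed global material id
# for consolidating figures across multiple feed systems.

sales = [
    {"source": "R3_EU", "material": "123", "global_mat": "G-PROD-A", "qty": 10},
    {"source": "R3_AP", "material": "999", "global_mat": "G-PROD-A", "qty": 5},
    {"source": "R3_AP", "material": "123", "global_mat": "G-PROD-B", "qty": 7},
]

# the compounded key (source system, material) keeps regional records apart
by_compound = {}
for row in sales:
    key = (row["source"], row["material"])
    by_compound[key] = by_compound.get(key, 0) + row["qty"]

# the agreed global id consolidates the same product across systems
by_global = {}
for row in sales:
    by_global[row["global_mat"]] = by_global.get(row["global_mat"], 0) + row["qty"]

print(by_global["G-PROD-A"])  # 15: same product, two systems, one figure
```

Note that material "123" exists in both R3_EU and R3_AP but refers to different products; the compounded key is what prevents them from being merged incorrectly.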