Thursday, October 14, 2010

Material Classification

Any new, replacement or consolidation of a material classification, or a change to the format of the R/3 material classification, requires the regeneration of the BI datasource. The characteristic is identified from its material class and added to the datasource. The datasource is then activated and replicated on the BI side, and a 1:1 transformation or routine is created to fetch the changed/new attribute values.
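If a straight 1:1 mapping is not enough, the fetch is typically done in a field routine of the transformation. Below is a minimal sketch of such a routine; the object names (ZNEWATTR and its generated data element) are placeholders, not the actual objects from this scenario.

* Sketch of a BW 7.x transformation field routine for the new
* classification attribute. ZNEWATTR is a placeholder infoobject;
* replace the names with your own generated objects.
    DATA: lv_attr TYPE /bic/oiznewattr.     " data element generated for ZNEWATTR

    lv_attr = SOURCE_FIELDS-/bic/znewattr.  " value delivered by the regenerated datasource

    IF lv_attr IS INITIAL.
      lv_attr = 'NA'.                       " default agreed with the Infoobject forum
    ENDIF.

    RESULT = lv_attr.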

If it is a new attribute, a new global infoobject needs to be created from the template box upon approval from the Infoobject forum (which agrees on the type, length and naming) and included in the regional material infoobject as either a navigational or a display attribute. The process chain is then enhanced to automate the load of the new attribute.

If the new attribute is created as a navigational attribute, the infocubes which contain the data for reporting need to have the navigational attribute activated. If it is used as a characteristic in a cube dimension, then the cube needs to be remodeled to include the new infoobject. The query is also changed to pull the new attribute from the cube into the reporting columns.

One point to note when deciding between a dimension characteristic and a navigational attribute:
if a dimension characteristic is used, the transactional data has to be deleted before the master data can be deleted, whereas if a navigational attribute is used, you can delete the 'unclean' master data directly.

Transporting the material classification to test, regression and production ensures consistency, but if the client is different, the datasource needs to be regenerated directly in that system via CTBW.

Quite often there is a need to view historical snapshots. Users might want to see the master data as it was at that period of time. The most significant case is the material master, where attributes are introduced and become obsolete over a period of time. The process of master data standardization across regional R/3 systems requires changes to the material master data attributes and to the way they are reported as well.

Eg:
Material 123

Jan 2008

Attr1 - Yes
Attr2 - No
Attr3 - NA
Attr4 - Yes

Jul 2008

Attr1 - Yes
Attr2 - Yes
Attr3 - No
Attr4 - Yes

Jan 2009

Attr1 - No
Attr2 - Yes
Attr3 - No
Attr4 - Yes

Options:
1) Include those characteristics in the cube
2) Time-dependent navigational attributes, with key dates introduced in the query
3) Version-dependent hierarchy

Option 1
Shows and sums up figures according to the historical master data even when the user wants to see the latest view.

Option 2
Can't have two key dates in the query, so the user cannot compare two previous snapshots, e.g. if one master data set was brought in on 15.12.2007 and the second on 11.12.2008.

Option 3
Does not cater for a large number of characteristics, and not all characteristics are related as parent-child nodes.


In a scenario like this, the most common approach is Option 1, combined with another navigational attribute that represents the latest correct master data in the reporting infocube.


Most of the time when a product evolves, the R/3 system will need to create a new class for a certain material group with new characteristics assigned to it. These changes require changes in BI as well. The main material infoobject has to be extended to take the new characteristics as attributes, which are populated in new infoobjects. The new material group also has to be included in any hardcoded product category or UOM derivation.
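As an illustration of the last point, a hardcoded derivation usually boils down to a CASE statement in a transformation routine that has to be extended for every new material group. The material group and category values below are purely hypothetical.

* Sketch of a hardcoded product category derivation in a transformation
* field routine. The material groups and categories are examples only;
* each new material group has to be added here or it falls into the default.
    CASE SOURCE_FIELDS-matl_group.
      WHEN 'FG01' OR 'FG02'.
        RESULT = 'FINISHED'.
      WHEN 'SF01'.
        RESULT = 'SEMI_FIN'.
      WHEN 'ZMG_NEW'.               " new material group created for the new class
        RESULT = 'FINISHED'.
      WHEN OTHERS.
        RESULT = 'UNASSIGNED'.      " unmapped groups surface here for follow-up
    ENDCASE.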

Thursday, June 3, 2010

Financial reports columns

If you are a report designer, these definitely sound familiar to you:
1) YTD Actual
2) Full Year Forecast
3) SPLY (Same Period Last Year)
4) vs SPLY
5) vs Previous QPR

But what do these columns actually mean in general?

Important factors:
i) The matrix for calculation
ii) The definition of the forecast version and how many versions there are in a year
iii) The same rules apply to the manual entry layout and the actual report queries
iv) Use structures to keep a single version of truth for the formulas and calculations
v) Cross-year rules & balance carry forward (controlled at the backend)
vi) Conversion type and exchange rate type

Global EDWH

What does a global BI landscape mean for a big organization that has its own regional BI and R/3 instances? The cost-cutting initiative of deploying a global template to the regional layer looks like a strategic decision, but it comes with its own challenges: complex regional business rules, standardization of master data, buy-in of regional stakeholders, and management of regional-global deployment as BAU.

SAP BI Administration & Monitoring


What are the responsibilities of an SAP BW Administrator?

Most companies have BW administrators responsible for R/3 administration as well. Depending on the SAP landscape and the version of the BW system the responsibilities can vary, but the most common ones include installation and upgrade of the BW system, backup and recovery, performance tuning, setup and support of the transport system, applying patches and so on.

Additional BW system administrator responsibilities could be troubleshooting R/3, handling database and UNIX system problems, and copying and renaming of existing R3 systems.

But this looks like Basis stuff to me...

Hey, don't be limited to that space. This book from SAP Press tells you things about SAP BI administration that even a BI expert may never have known about. Check it out.

Here is a link where you can download the common Basis tcodes together with an explanation and screenshots of each transaction.

Global Transport Strategy

Imagine having multiple projects working on your BI system concurrently, and consider a business release cycle that allows transports to be imported to the regression and production servers along a fixed timeline. Now, what happens when shared objects (especially infoobjects and datasources) get overwritten, or manual steps are missed before a transport goes in? Surely some thought has to go into a transport strategy, and that is what this post is about.

Actions for pre-cutover:
1) Engagement with Basis on the system refresh for the regression server, logical system name mapping and RFC connections
2) Engagement with the different projects on the manual steps in between transports and the data loading sequence (two projects working on different solutions may share the same datasource, e.g. Marketing and Finance both retrieve data from CO-PA)
3) Preparation of the transport build recipe
4) Communication to the respective stakeholders on dates, system lockout and the date for the last transport to be included in the build recipe
5) List of users to remain unlocked and process chains to be 'paused' during cutover
6) Manual steps in between cutover and the parties involved, e.g. replication from R/3


Actions for post-cutover:
1) Compare inactive objects before and after cutover
2) Ensure all process chains run successfully
3) Ensure reports can be executed successfully
4) Document lessons learned, e.g. cube content to be truncated prior to sending in a change to add a navigational attribute in order to shorten the transport time
5) Raise awareness with the respective parties of changes in their systems that impact BI, e.g. changing the logical system name in a source system might result in delta extraction failure in BI; another example is that a CO-PA realignment in R/3 will break the delta extraction in BI.

Can't have a bug-free BI system

In a BI reporting environment, especially one that reports at global and regional level, there are a number of report designs and modeling techniques that need to be considered to prevent the future pain of spending many man-days fixing them. Most issues arise from the frontend, such as data binding in web templates and incorrect variables applied in the BEx queries. The bigger issues lean towards data standardization and the cleanliness of master data across all the SAP and non-SAP feed systems. In order to consolidate figures for global reporting, the master data has to be compounded to its source system, or a single set of master data has to be agreed upon by the different regional master data management teams as the global standard. BI standards and governance play a major role here, all the way from standardizing the infoobjects (and their attributes) and hierarchies to the mapping DSO/table contents, as these are the baseline for the accuracy of the data churned out from the ETL layer into the reporting display. There is no escape from having to perform multiple data reloads or self-transformations whenever one of those objects changes. Ever-changing business processes require snapshot reporting, and this introduces complexity in terms of time dependence and versioning of master data and hierarchies.

Friday, May 28, 2010

Report Design Rules of thumb

Stating the purpose
What is the overall purpose of the report?
Who is going to read the report?

Determining the layout of the report
What is the report title going to be?
What identifying information is needed in the header and footer?

Finding the data
What data do you want to use in the report?
What specific data should appear in the body of the report?
Does the data exist or does it need to be calculated?
What types of fields contain data?

Manipulating the data
Do you want the data organized into groups?
Do you want the data sorted based on record or group values?
Do you want the report to contain only specific records or groups?
Do you want to summarize the data?
What information should be flagged on the report?
How do you want information flagged?

Determining printing area characteristics
In what order will the areas print on the report?
How often do report objects print?

Developing a prototype on paper


ITIL in BI

When I attended the ITIL Foundation V3 course, the first question I asked was how practical ITIL is in practice for Business Intelligence in a corporate organization.

In my opinion, in BI we can first zoom into these 3 areas - Change Management, Problem Management and Project Delivery. When we start to identify the essence of these areas, we are nominating ownership for each employee in the division. Without a sense of ownership, an organization may end up creating an unhealthy culture where either one or two 'star employees' hold all the important information without ever sharing it with their peers, or everyone does everything in a messy way based on ad hoc instructions.

What makes me want to look into these 3 areas is based on the ideology below.

Change is the only thing that is constant. Hence change is our business as usual. BI change management has to take place in parallel with the business change. The execution of change management in BI is the result of impact assessment, and this exercise takes place at 2 main trigger points:

1) when there is a new initiative. This can be:
- new products or service line
- groupwide consolidation
- new kpi or division

2) when there is a change in core functions. This can be:
- restructuring of organization
- change in service line and product offerings
- change in business operations

A good system and design has to take potential changes into account. Project Delivery should also balance delivering on time based on the current requirements and scope against the requirements and effort needed to ensure system scalability as the business changes. A smart way to control the project from the budget and business-needs perspectives is to manage the latter point through new phases or enhancements.

Problems are where we see the things we failed to see at the initial stage. BI problem management, commonly known as Support, is the execution of the results of root cause analysis. Root cause analysis reveals a lot about the reasons for data discrepancies or system flaws, going all the way back to business requirements, data cleanliness, governance practices and design best practices.

Both of these fundamental ITIL processes - Change Management and Problem Management - tie back to the solution proposal stage in Project Delivery, because from the root cause we can propose better solutions based on lessons learnt, and with impact assessment we see what needs to be taken into account to avoid impacts on other areas, such as data accuracy in reports or system downtime, when new changes are introduced. In layman's terms, Problem Management takes place after bad things have happened, while Change Management takes place as a precautionary measure before bad things happen.

Hence Project Delivery, Change Management and Problem Management are tightly coupled functions that need to co-exist and integrate well to ensure the success of any Analytical or BI operations in a large organization.


Tuesday, May 4, 2010

Tackling Master Data for New & Old Infoobjects

Master data in a global environment is controlled by one source to ensure a single version of truth. The BI and ERP systems have to share the same master data, so it is only appropriate that the master data comes from the R/3 system and BI extracts it from there. In cases where the master data is not in the R/3 system, BI extracts it from the global master data database.

Scenario 1
In BI reporting, the version of master data reported depends on the period selection. If a material group is reported as Semi-Finished Good in year 2009 month 12 and as Finished Good in year 2010 month 1, how does BI actually cater for this scenario?
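When the attribute is modeled as time-dependent, the answer usually involves the Q table of the material infoobject plus a key date. The sketch below shows how the value for a given key date could be read; the table and field names (/BIC/QZR_MATL, /BIC/ZMATGRP, ZR_MATL) are assumptions - check the generated master data tables of your own infoobject.

REPORT zsketch_timedep_attr.
* Read the material group valid on a given key date from the
* time-dependent master data (Q) table of a custom material infoobject.
DATA: lv_matgrp  TYPE /bic/oizmatgrp,
      lv_keydate TYPE sy-datum VALUE '20091215'.

SELECT SINGLE /bic/zmatgrp
  FROM /bic/qzr_matl
  INTO lv_matgrp
  WHERE /bic/zr_matl = '123'
    AND objvers       = 'A'
    AND datefrom     <= lv_keydate
    AND dateto       >= lv_keydate.

IF sy-subrc = 0.
  WRITE: / 'Material group on', lv_keydate, ':', lv_matgrp.
ENDIF.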

Scenario 2
If the material classification in R/3 is taken from GS_XXX and it is now replaced by GS_YYY, how does BI change the material attribute extraction accordingly?

In a scenario across BI landscapes, e.g. different regions and one global BI instance into which all the regional master data flows up, the master data has to be compounded with the source system, as objects might have the same name: the same material code may exist in two different regions, and an error stack may have the same technical name in two or more BI instances.

Friday, April 16, 2010

Impact Assessment

Impact assessment (IA) is the process of identifying the future consequences of a current or proposed action. A Technical Design Authority needs to carry out an IA every time there is a change or a new project landing in the BI system. It is a very challenging job as it really tests your level of understanding, experience and thought. It is paperwork, but when we actually put our mind into the piece of work, a good result is achieved in terms of notifying the project team of possible bugs resulting from the solutioning, or simply pinpointing mistakes in the design, while at the same time ensuring the design adheres to the company standard guidelines such as infoobjects, naming conventions and architecture.

This document on impact assessment based on ITIL is worth a read.

Tips on IA for a BI environment:
Give emphasis to the following when reviewing the blueprint:
1.Shared Objects
  • Infoobjects
  • Datasource
  • Hierarchy
  • Authorization objects
  • Unit of measure conversion
  • Currency exchange rate
2. Authorization
  • Authorization objects
  • Authorization DSO
3. Standards
  • Global/regional infoobjects
  • Projects/System infoarea
  • Naming convention
4. Batch Schedule
  • Master data and transactional data loads that do not already exist in the current system
  • Availability of windows from R/3 for extraction and work process slots for extraction in BI
5. Housekeeping
  • Data retention for PSA
  • Data retention in reporting cube (consider factors like the needs of backposting and reload)
  • Is CML (Corporate Memory Layer) used
  • Compression and aggregates
6. Flat files
  • Define ownership and frequency of load
  • Define whether the upload of data is via an FTP/Web Dynpro interface or IP
  • If FTP, define the standard SAP directory and an FTP method that does not violate the security policy

7. Any potential data reloads and the methods to reload wrongly defined (mapped) data, e.g. deletion by request or selective deletion, depending on whether the key figure mode is overwrite or summation.

Wednesday, April 14, 2010

Housekeeping & Database Sizing

In a BI environment, 3 items need to be considered for housekeeping and data retention periods:
1) DSO change log
2) PSA change log
3) Logs of error stack

Some useful programs:
RSPC_LOG_DELETE
RSPC_INSTANCE_CLEANUP
RSB_ANALYZE_ERRORLOG
RSBM_ERRORLOG_DELETE

This is a useful document to explain the deletion at database level.

Tuesday, April 13, 2010

Where-used list of an attribute (either display or navigational)

This program lists all the queries and cubes that contain an infoobject. It is very useful, especially if the object is a shared infoobject, as we would know which reports and cubes are impacted.

Code download

P/S: Tables that store navigational attribute info: RSDCUBEIOBJ (infocube), RSDATRNAV (characteristic).
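For the infocube side, a minimal where-used check can be sketched directly against the table mentioned above. This is only an illustration, not the downloadable program itself, and the field names should be verified in SE11.

REPORT zsketch_iobj_whereused.
* List the active infocubes that contain a given infoobject, based on
* RSDCUBEIOBJ. ZG_BUSUNT is just an example default.
PARAMETERS: p_iobj TYPE rsiobjnm DEFAULT 'ZG_BUSUNT'.

DATA: lt_cubes TYPE TABLE OF rsdcubeiobj,
      ls_cube  TYPE rsdcubeiobj.

SELECT * FROM rsdcubeiobj
  INTO TABLE lt_cubes
  WHERE iobjnm  = p_iobj
    AND objvers = 'A'.

LOOP AT lt_cubes INTO ls_cube.
  WRITE: / ls_cube-infocube.
ENDLOOP.
* The navigational attributes defined on a characteristic can be read
* from RSDATRNAV in the same way.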

Shared Infoobjects

It is of utmost importance that a governance body is able to identify the shared infoobjects and any assignment of a shared infoobject to the same business rules definition as required by new projects or enhancements. This is important to keep a single version of truth for master data with the same business definition. One example of this is location. If the location refers to a general geographical place, then it can fit any geographical definition like plant, end market or factory. But certain reporting drilldowns or mappings require a specific master data set, hence plant is different from end market and they are different sets of master data with their respective infoobjects. This is important when assigning attributes, as attributes are technically correct for 1:m relationships but not m:n. E.g. Malaysia is an attribute of Selangor, but Malaysia cannot be an attribute of Selangor if the business definition refers not to the national country but to the export country, as Malaysia, India and the UK could all be export countries for Selangor, and vice versa Malaysia can be the export country for Selangor, Perak and Melaka.

Some of the important infoobjects to watch out for during a request for creation or change in infoobjects are:
1) Geographical infoobject such as region,area,cluster,endmarket and business unit
2) Material infoobject that has a lot of attributes
3) Regional infoobjects that push master data to global level have to be compounded with the source system, as the same master data can mean different things in different regions, e.g. material ABC is a Finished Good in Asia but a Raw Material in Europe.

Inconsistent business unit (lowest granularity of geographical pointer)

Changes to the business unit will result in data issues for both master data and transactional data. They also impact authorization for users whose access refers to the business unit in the centralized authorization DSO. From my understanding, there are a couple of scenarios:

1) wrong business unit -> correct business unit (happens when alignment is required, especially between the Marketing and Finance business unit hierarchies)
2) old business unit -> new business unit (happens when there is a change in business process, e.g. at end market level when China includes HK & Macau)
3) obsolete business unit (happens when there is a change in business process)


Scenario (1) is needed for all reports, and remapping/reloading is required. The master data of ZG_BUSUNT (infoobject for business unit) should be populated with the correct business unit. ZG_LOC (infoobject for location, used in APO) should point to the correct default business unit. Both transactional and master data should be in sync between global and regional.

Eg:1338_Summerset to 1338_Sommerset

Transactional data (APO Demand Planning)

A mix of entries for both the wrong and the correct business unit, which does not make sense. For planning year period 201401, forecast version 201103, the data is planned under 1338_Summerset, but it was planned under 1338_Sommerset for 201102.

If the report needs to reflect the correct business unit, then there may be a need to perform a self-transformation (mapping to the correct value, possibly with some logic included) at the DSO level to offset the records that were mapped to the wrong business unit and reload the impacted data with the correct business unit. The new set of data will be reflected at the cube level as an after image. This can potentially become a repeatable action at regional level whenever there are business unit inconsistencies, especially between APO, Marketing and Finance. A full reload on the impacted generated datasource is required at global level.
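A minimal sketch of what the remapping piece of such a self-transformation end routine could look like is shown below. The field name and the wrong/correct values are taken only from the example in this post, and in practice the pairs would normally come from a mapping DSO rather than being hardcoded.

* End routine sketch in a DSO self-transformation: remap a wrong
* business unit to the correct one before the data is written back.
    FIELD-SYMBOLS: <ls_result> TYPE _ty_s_tg_1.

    LOOP AT RESULT_PACKAGE ASSIGNING <ls_result>.
      CASE <ls_result>-/bic/zg_busunt.
        WHEN '1338_Summerset'.
          <ls_result>-/bic/zg_busunt = '1338_Sommerset'.
        " further wrong -> correct pairs, or a lookup against a mapping DSO
      ENDCASE.
    ENDLOOP.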


Scenario (2) The end users have to decide whether they require the new business unit view to reflect historical data or not.

The challenge is to get consistent data from regional to global as well as an agreed way of viewing the APO report: whether to view total volume only by the new business unit, or to cater for the historical snapshot view (view by both old and new business units).

Finance reports may need to reflect data that requires both the old and the new business unit mapping, and it is important that the derivation of the business unit in the mapping table is correct and in sync with the business unit master data. There are potential issues for some Finance reports that have SPLY (Same Period Last Year). E.g. business unit A01 exists in hierarchy version A, but when it becomes obsolete or is replaced by the new business unit A02, hierarchy version B is created. When users view SPLY on the current hierarchy version (version B), they won't have the comparison figure for A01.


Scenario (3) The business unit will appear in the hierarchy version it is tied to (in other solutions, this can also be controlled by time-dependent master data and hierarchies). The master data should remain intact and should not be blanked out.

Friday, April 9, 2010

UOM

Often, in a complex environment with multiple regional SAP instances that does not have a standard UOM defined (or is in the midst of defining one), BI is required to 'define' the standard UOM, which is governed by an existing or new data standardization team. The standard UOM is specific to the company's business rules and does not necessarily refer to the international ISO code. In order to convert the transactional UOM to the standard UOM, the logic is derived from the R/3 unit conversion tables such as the material unit of measure table (MATUNIT) and T006. The standard UOM and base UOM also need to be captured in the material group or material master data so that there is a base and target UOM that can be referred to for conversion.
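On the R/3 side, the material-specific conversion factors can be read with the standard conversion function module, roughly as sketched below. The material number and units are examples only, and the interface should be double-checked in SE37 on your release.

REPORT zsketch_uom_convert.
* Convert a transactional quantity into the standard/base UOM using the
* material's unit-of-measure conversion factors.
DATA: lv_qty_in  TYPE menge_d VALUE '100.000',
      lv_qty_out TYPE menge_d.

CALL FUNCTION 'MD_CONVERT_MATERIAL_UNIT'
  EXPORTING
    i_matnr  = 'MATERIAL123'      " example material number
    i_in_me  = 'CAR'              " transactional UOM, e.g. carton
    i_out_me = 'EA'               " standard/base UOM
    i_menge  = lv_qty_in
  IMPORTING
    e_menge  = lv_qty_out
  EXCEPTIONS
    OTHERS   = 1.

IF sy-subrc = 0.
  WRITE: / 'Converted quantity:', lv_qty_out.
ELSE.
  WRITE: / 'No conversion factor maintained for this material/unit pair'.
ENDIF.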

Wednesday, April 7, 2010

Step 1,2,3 & Validation check + message class

The enhancement RSR00001 (BW: Enhancements for Global Variables in Reporting) is called up several times during execution of the report. Here, the parameter I_STEP specifies when the enhancement is called.

I_STEP = 1
Call takes place directly before variable entry. Can be used to pre-populate selection variables.

I_STEP = 2
Call takes place directly after variable entry. This step is only started up when the same variable is not input ready and could not be filled at I_STEP=1.

I_STEP = 3 In this call, you can check the values of the variables. Triggering an exception (RAISE) causes the variable screen to appear once more. Afterwards, I_STEP=2 is also called again.

I_STEP = 0
The enhancement is not called from the variable screen. The call can come from the authorization check or from the Monitor. This is where you want to put the mod for populating the authorization object.

Full text available here.
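A minimal sketch of what the corresponding coding in the customer exit include ZXRSRU01 (behind function exit EXIT_SAPLRRS0_001) could look like is shown below. The variable name ZVAR_KEYDATE is hypothetical, and the RAISE at I_STEP = 3 follows the behaviour described above, i.e. it brings the variable screen back up.

* Sketch for include ZXRSRU01: default a key date variable at I_STEP = 1
* and validate the user entry at I_STEP = 3. ZVAR_KEYDATE is a
* hypothetical customer-exit variable on a date characteristic.
  DATA: l_s_range LIKE LINE OF e_t_range.
  DATA: l_s_var   LIKE LINE OF i_t_var_range.

  CASE i_step.
    WHEN 1.                          " before the variable screen: propose a default
      IF i_vnam = 'ZVAR_KEYDATE'.
        CLEAR l_s_range.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        l_s_range-low  = sy-datum.
        APPEND l_s_range TO e_t_range.
      ENDIF.

    WHEN 3.                          " after the variable screen: validate the entries
      READ TABLE i_t_var_range INTO l_s_var
           WITH KEY vnam = 'ZVAR_KEYDATE'.
      IF sy-subrc = 0 AND l_s_var-low > sy-datum.
        " a future key date is not allowed - redisplay the variable screen
        RAISE again.
      ENDIF.
  ENDCASE.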

Global Exchange Rate Program

The exchange rate is shared by financial reports from different projects, such as Management Information (MI) and Product Costing, so it is crucial to ensure that both take and define the same rate. Rounding adjustments and the inverse flag are some of the items that should be factored in when designing the reports.

In a global environment, the exchange rate is retrieved from a single point, e.g. from ft.com.
The exchange rate is usually downloaded to R/3 and transferred globally to BI. Alternatively, BI can have its own exchange rate program that downloads the latest rates for Budget, CO-PA, CO-PC etc.

Global Authorization

In a global and regional BI system environment, it is crucial to have business access segregation through a set of controlled and standardized roles and analysis authorizations. Hence the BI developer/gatekeeper and the GRC team have to work closely to ensure the roles are used correctly and that new menu roles and analysis authorization objects are introduced whenever a new set of reports is developed. The Portal team is involved in creating the menu links in the portal as well.

One approach is to introduce a centralized authorization DSO in which the users and their report access privileges are maintained, with the access check executed through CMOD whenever a report is run. The check aims to identify the type of BI report/solution and the analysis authorization object the report is based on. The regional authorization DSO is also replicated to the global centralized DSO, which ensures that users have similar report access across regional and global level. The standard forms for users to request new roles have to be in place first, and existing old roles have to go through a cleanup to reflect the new set of standard authorizations.
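As a rough illustration of the CMOD piece, such a check typically sits in the variable exit (include ZXRSRU01) and fills an authorization-relevant variable at I_STEP = 0 from the active table of the authorization DSO. All names below (the variable ZAUTH_BUSUNT, the DSO table /BIC/AZAUTH_D00 and its fields) are hypothetical placeholders.

* Hypothetical sketch for ZXRSRU01: at I_STEP = 0 the call comes from
* the authorization check, so fill the variable from the active table
* of the centralized authorization DSO maintained for the user.
  DATA: l_s_range LIKE LINE OF e_t_range.
  DATA: lt_busunt TYPE TABLE OF char10,
        lv_busunt TYPE char10.

  IF i_step = 0 AND i_vnam = 'ZAUTH_BUSUNT'.
    SELECT /bic/zg_busunt FROM /bic/azauth_d00
      INTO TABLE lt_busunt
      WHERE uname = sy-uname.

    LOOP AT lt_busunt INTO lv_busunt.
      CLEAR l_s_range.
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'EQ'.
      l_s_range-low  = lv_busunt.
      APPEND l_s_range TO e_t_range.
    ENDLOOP.
  ENDIF.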

Integration Party

A regional BI business release requires alignment with multiple parties, such as:
1) ERP (dependency on datasources)
2) APO (dependency on datasources)
3) Portal (dependency of report published in portal)
4) Global BI (it depends on the readiness of regional cutover as the data flows
bottom to top or vice-versa)
5) Any other feed system that connects to BI

Three Top Questions To Ask a BI Vendor

I stumbled upon a very good article by Boris Evelson at Forrester Blog that reveals the unpopular fact about the essence of BI in a big organization. The first point really hits the bull's eye as eventually what controls the changes in BI matters the most in terms of minimizing risk and cutting cost.

Q1: What are the capabilities of your services organization to help clients not just with implementing your BI tool, but with their overall BI strategy?

Most BI vendors these days have modern, scalable, function rich, robust BI tools. So a real challenge today is not with the tools, but with governance, integration, support, organizational structures, processes etc – something that only experienced consultants can help with.

Q2: Do you provide all components necessary for an end to end BI environment (data integration, data cleansing, data warehousing, performance management, portals, etc in addition to reports, queries, OLAP and dashboards)?

If a vendor does not you'll have to integrate these components from multiple vendors.

Q3. Within the top layer of BI, do you provide all components necessary for reporting, querying and analysis, such as a report writer, query builder, OLAP engine, dashboard/data visualization tool, real-time reporting/analysis, text analytics, BI workspace/sandbox, advanced analytics, and the ability to analyze data without a data model (usually associated with in-memory engines)?

If a vendor does not, believe me the users will ask for them sooner or later, so you'll have to integrate these components from multiple vendors.

I would also strongly recommend discounting questions that vendors and other analysts may suggest, like:
  • do you have modern architecture
  • do you use SOA
  • can you enable end user self service
  • is your BI app user friendly
because these are all mostly a commodity these days.

CO-PA vs Billing

There are 2 types of datasources in SAP R/3 that can be extracted to BI for the sales volume figure:
1) CO-PA
2) Billing (2LIS_XXX - PO and SD)

The difference between these two extractors is the point in time at which the data is updated to either the logistics or the accounting tables.

In the order-to-cash business scenario, if the sales volume needs to be measured at the initial stage where the sales order is keyed into the system, then CO-PA can't be used, since at that stage no accounting document has been created yet. The financial impact only occurs at the goods issue stage, where tables BKPF and BSEG are updated. The final stage of this process in logistics is invoice creation (updating tables VBRK and VBRP). In accounting, the final stage would be when the receivable is cleared and payment is received: another accounting document is created in BKPF and BSEG, the open receivable is cleared and BSAD is updated.

In the procure-to-pay scenario, the early stages (PR to PO) all the way to goods receipt do not involve any financial process until the invoice is received from the other party and the financial tables BKPF and BSEG are updated. BSAK is updated when the invoice is paid via the payment run and the payable is cleared.
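The timing difference can be made visible with a small check like the sketch below: a billing document exists in VBRK as soon as it is created, but the corresponding accounting document only appears in BKPF once it has been released to accounting (BKPF references the billing document via AWTYP/AWKEY). This is an illustration only.

REPORT zsketch_billing_vs_fi.
* Check whether a billing document has already produced an accounting
* document, i.e. whether the financial tables are updated yet.
PARAMETERS: p_vbeln TYPE vbrk-vbeln.

DATA: lv_belnr TYPE bkpf-belnr.

SELECT SINGLE belnr FROM bkpf
  INTO lv_belnr
  WHERE awtyp = 'VBRK'
    AND awkey = p_vbeln.

IF sy-subrc = 0.
  WRITE: / 'Billing doc', p_vbeln, 'released to FI, accounting doc', lv_belnr.
ELSE.
  WRITE: / 'Billing doc', p_vbeln, 'not yet released to accounting'.
ENDIF.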

Saturday, April 3, 2010

CO-PA Profitability Analysis

CO-PA Profitability Analysis provides drilldown reporting which allows slice-and-dice, multi-dimensional sales profitability analysis by a variety of analytical segments (called characteristics) such as market/region, product group, customer group, customer hierarchy, product hierarchy, profit center etc. Most of those characteristics can be filled in by standard SAP functionality, i.e. from the customer master or material master.
By making Distribution Channel a characteristic, CO-PA enables more flexible analysis. The distribution channel is filled in when the sales order is created. Users can differentiate distribution channels if they need separate analysis (such as wholesale / retail, goods sale / commission sale / service sale etc.) without maintaining master data. (The customer & material master maintenance hassle caused by multiple distribution channels can be solved by VOR1 (IMG Sales and Distribution > Master Data > Define Common Distribution Channels).)
CO-PA allows you to take in non-SAP data by using External Data Transfer, but the major benefit of CO-PA is its close relation with the SAP SD module. SD profitability data is automatically sent forward and stored within the same system. In fact, the SAP SD module without CO-PA Profitability Analysis is like air without oxygen; it is pointless using SAP without it.
Overhead cost allocation based on sales is possible with CO-PA, which is not possible in cost center accounting.
A Gross Profit report is possible on a sales order basis (as a forecast, in addition to billing-based actuals) by using cost-based CO-PA. CO-PA is especially important since it is the only module that shows financial figures which are appropriate from a cost-revenue perspective. For this reason, BW (BI) takes data from CO-PA in many projects. BW (BI) consultants sometimes set up CO-PA without the FI/CO consultants' knowledge. CO-PA captures cost of goods sold (COGS) and revenue at billing (FI release) at the same time (cost-based CO-PA). This becomes important when there is a timing difference between shipment and customer acceptance. COGS should not be recognized yet, but the FI module automatically creates the COGS entry at shipment, while the revenue entry will not be created until billing (or FI release). Such cases happen, for example, when the customer will not accept payment until they finish quality inspection, or when goods delivery takes months because goods are sent by ocean freight. It is especially delicate to customize the CO-PA InfoSource to map to your specific operating concern; this seems to be easier since ECC 6.0 Enhancement Pack 4 (auto-generated).
On this point, CO-PA (cost-based) does not reconcile with FI. But this is the whole point of CO-PA, and this makes CO-PA essential. Account-based CO-PA is closer to the FI module in this respect. Account-based CO-PA was added later on, and it could be that it exists simply for comparison purposes with the cost-based version. Cost-based CO-PA is used more often.
When CO-PA is used in conjunction with CO-PC Product Costing, it is even more outstanding. If fixed cost and variable cost in the CO-PC cost component are appropriately assigned to CO-PA value fields, Break-Even-Point analysis is possible, not to mention contribution margin or Gross Profit analyses. Consider that BEP or GP analyses are possible at a detailed level such as market/region, product group, customer group etc.
This is no wonder. CO-PC is for COGM and inventory; CO-PA is for COGS. What do you do without CO-PA when you use CO-PC? They are a set of functionality that belongs together. This doesn't necessarily mean CO-PA is only for manufacturing businesses, though.
Conservative finance/sales managers are sometimes reluctant to implement SAP R/3 because they are frightened to expose the financial figures of a harsh reality. Needless to say, it helps boost agile corporate decision-making, and this is where a top-down decision to implement SAP R/3 is necessary. Those managers will never encourage R/3. SAP R/3 realizes this BEP analysis even for a manufacturing company. No other ERP software has realized such functionality yet. Even today R/3 is this revolutionary if CO-PA is properly used.
The role of capturing COGS-sales figures is even more eminent in the cases of sales order costing or Resource Related Billing, and variant configurable materials with the PS module or PM/CS module. (The equipment master of PM/CS is also a must to learn.) After determining WIP via Results Analysis, CO-PA is the only module that displays cost-revenue-wise correct financial figures. PS is necessary for heavy industry or large organizations, and variant configurable materials are also handy for a large manufacturer or sales company. RRB is usable in non-manufacturing industries and is indispensable for IT or consulting companies. The importance of CO-PA is proven when used with these.
Production cost variance analysis is possible by assigning variance categories to different CO-PA value fields in customizing. There are projects which had to develop production variance reports because they kicked CO-PA out of scope without ever considering the SAP standard functionality. Why cripple standard SAP functionality simply because you are ignorant of anything beyond CCA Cost Center Accounting or PCA Profit Center Accounting? Naturally it takes time to apprehend the overall SAP functionality. This is where experience makes the difference, which is no wonder.
Settling production variances to CO-PA raises one issue. Variances originating from WIP or finished goods at month end all go to CO-PA, i.e. COGS. Actual Costing using the ML Material Ledger solves this issue for the most part. Variance reallocation whose origin is unknown is only made to COGS and FG, not to WIP. This is something SAP should have rectified a long time ago. They made excuses that they didn't have enough resources to do that, while developing BW, SEM-BCS or New G/L on the other hand. Realtime consolidation became impossible in SEM-BCS, and New G/L isn't adding much new functionality other than parallel fiscal year variants, in a practical sense. What SAP did was spend all their resources and effort in revising the same functionality using new technology, but nothing much was made possible from an accounting point of view.
ML can also be used to reflect transfer pricing or group valuations with specific buckets into CO-PA value fields. An actual cost component split is also possible with ML, but you have to plan well in advance lest you use up value fields. CO-PA can also handle planning and actual/plan reporting; it has a built-in forecasting engine (planning framework) and can handle top-down or bottom-up planning (using different versions, as well as plan allocations and distributions from cost centers). A FI/MM interface allows you to post to specific CO-PA value fields for FI- or MM-based transactions (overheads or non-trade specific charges that impact product profitability, such as trade show expenses etc.).
Configuration of CO-PA can sometimes be a bit of a hassle. But it is far easier, cheaper and quicker than building an infocube in BW (BI) from scratch. If you know what you are doing with BW (BI), in many projects you take the data from the CO-PA tables anyway. Then why bother creating the additional work of building it again in BW (BI)?
The training course for CO-PA is just 5 days. A competent consultant should not spare such a small investment. You will see there is more to learn about it in addition to that.
SAP is whimsical and sometimes excludes CO-PA and CO-PC from the academy curriculum. They are not eager to give trainees the right understanding of how to use their product. This is why many FI/CO consultants are ignorant of CO-PA and CO-PC, and only an experienced consultant knows their necessity.
CO-PA is a must for an experienced FI/CO consultant.
One point which has to be added is: keep segment-level characteristics to as few as possible.
I sometimes hear of users who linked too many characteristics and completely ruined the CO-PA database. If you look in their config, they link 5 customer hierarchies, 6 user-defined product hierarchies on top of the standard product hierarchy, material code, sales rep at the sales order line level, and 4 other user-defined derivation segments as segment-level characteristics. Now their CO-PA table doesn't respond with anything other than a short dump after 3 years of usage.
SAP clearly explains this and dissuades you from just adding the sales order as a segment level, yet it is always a struggle. Whatever were they thinking in adding 45 segment levels?
It was once a controversy: data segregation from program logic, data normalization and the elimination of duplicate entries. That gave birth to the SQL database which SAP runs on. Now SAP users don't know that history and repeat the same failure.
They have a corporate reshuffle and need to revise the product lineup, then maintain the product hierarchy. Why add segment levels every time you have a reshuffle?
No matter how new the tool, there is no end to this. It's not the tool itself; it's the people who are twisting the case.
A successful usage would be product hierarchy 1, and maybe 2. If a characteristic is configured, segment data is stored at that level. You can download the segment data, which may be a remedy. Data feeding and presentation in BW may be another way.

*Article plucked from it.toolbox.com

Flows to CO-PA

From FI
Value fields are the building blocks of CO-PA.
The updates flow to FI and CO from different processes:
- during the Supply Chain process
Supply chain processing is linked to the SD (Sales and Distribution) and MM (Materials Management) modules
- during the Material Ledger process
At month end, after completing the material ledger close, the actual periodic price is generated and the cost of sales is updated at its actual cost for FI and CO-PA
- during Project System and Cost Center Accounting
CO-PA can be reconciled to PS after settlement
CO-PA can be reconciled to CCA after the assessment cycle

From SD
http://learnmysap.com/sales-distribution/264-ke4s-post-billing-documents-to-co-pa-manually.html

Friday, April 2, 2010

LIS vs LO

LIS & LO extractors - LIS is the old technique through which we can load the data. Here we use the typical delta and full upload techniques to get data from a logistics application. It uses the V1 and V2 updates, meaning a synchronous or asynchronous update from the application documents to the LIS structures at the time the application document is posted.

The LO Cockpit is the new technique, which uses the V3 update, an update that you can schedule, so it does not update at the time you process the application documents but posts at a later stage. Another big difference between LO and LIS is that in LO you have separate datasources available for header level, item level and schedule line level; you can choose at which level you want to extract your data and switch off the others, which results in reduced data volumes. When you configure an LIS structure in ECC, the system environment needs to be opened for changes, whereas this is not required for LO.

Friday, March 26, 2010

Lesson Learnt from Business Release

Part of the data warehouse governance in a corporate company is to have a quarterly business release (BR) for enhancements and new projects to be transported into the production system. Thus there is a need to set up a regression environment to ensure thorough testing is done before the cutover takes place. The regression environment has to be an exact copy of the production system in terms of data and configuration.

During this cutover of BR1 to the regression and production environments, I encountered numerous issues which I can relate to the nature of the change requests, shared objects, transports, data loading and design.

Change Request
There were two particular change requests that caused impact to multiple objects (including queries, web templates, infoobjects, function modules, classes and IP aggregation levels) in the system. The first one was the sign reversal for data which is supposed to be reported both in P&L and Overhead. The data is entered in the Overhead manual entry layout with a positive sign, but the saved data needs to have the sign reversed, as the P&L report for these particular accounts is supposed to show negative values. We can't make the change in the frontend P&L report, as a sign reversal in the frontend would reverse all the other non-Overhead accounts as well. So the only way is to change the layout for the Overhead report to flip the sign and ensure the backend data is saved as a negative value. This change involves changes in all the Overhead and P&L manual entry and actual reports; both queries and web templates.

The second one refers to the inclusion of an additional level in the cost center hierarchy for manual entry input and reporting. If the relationship of the data is defined through navigational/display attributes, then it is important to ensure the master data is correctly updated. If the new level needs to be open for manual entries, a new infoobject for that level has to be created and included in all the aggregation levels and cubes. Thus there is a need to create a new aggregation level and perform cube remodeling. A new aggregation level means a new manual entry layout. So we can see the changes involve everything from infoobject attributes and aggregation levels to queries (both manual entry and output) and web templates.

Shared Objects
Removal of an obsolete attribute from an infoobject can cause some of the transformations to fail, as those infoobjects are still in use. Thus the governance of infoobjects is very important to ensure changes to any infoobject (especially those shared between APO and Finance, like material and material group) are carefully assessed for the impact of the change.

Transport
Transport order and prerequisites are important so that no old change overwrites a newer one. It is good practice to keep one change in one transport request, or to collect only the necessary objects. There are bound to be incidents where objects not related to the change or fix are collected in the transport request and cause objects to become inactive when moved to the production environment.

Data loading
Whenever there is a logic change at the transformation level that requires the historical data to be transformed again, data has to be reloaded to the reporting level cube. There are different ways to approach this scenario, but previous actions on the reporting level data, such as selective deletion, have to be considered. Below are some steps that can be taken (see the sketch after this list for the third option):
1) Selective deletion (in this case we can see the importance of including the source module characteristic at that level as well, even though the objective at reporting level is to minimize the data granularity for performance purposes)

2) Deletion by request (in this case we can see the importance of loading data by request to the reporting layer). The only setback is that the load can take a long time, as the previous requests were loaded in on a daily basis.

3) Offset the data through a self-loop. This step is quite safe as offset data is added (nothing is deleted).
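A minimal sketch of the offsetting idea: an end routine in the self-loop transformation re-posts every record with negated key figures, so that once the corrected data is reloaded the wrong records net out to zero. The key figure field /BIC/ZVOLUME is a placeholder.

* End routine sketch for a self-loop that generates offset records by
* negating the key figures of the existing (wrong) records.
    FIELD-SYMBOLS: <ls_result> TYPE _ty_s_tg_1.

    LOOP AT RESULT_PACKAGE ASSIGNING <ls_result>.
      <ls_result>-/bic/zvolume = <ls_result>-/bic/zvolume * -1.
    ENDLOOP.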

Design
Usually there is a need to report from the reconciliation level (data that has gone through all the transformations) for data checking purposes. This means the data at the transformation level and the reporting level has to be the same. In order to ensure the data is the same in the reconciliation reports and the actual reports, the best practice is not to allow any transformation to happen between the transformation level and the reporting level.

Monday, March 15, 2010

SAP Authorization Tables


Authorization Objects Tables

Table Name Description
TOBJ Authorization Objects
TACT Activities which can be Protected (Standard activities authorization fields in the system)
TACTZ Valid activities for each authorization object
TDDAT Maintenance Areas for Tables
TSTC SAP Transaction Codes
TPGP ABAP/4 Authorization Groups
USOBT Relation transaction > authorization object
USOBX Check table for table USOBT
USOBT_C Relation Transaction > Auth. Object (Customer)
USOBX_C Check Table for Table USOBT_C

User Tables
Table Description
USR01 User master record (runtime data)
USR02 Logon data
USR03 User address data
USR04 User master authorizations
USR05 User Master Parameter ID
USR06 Additional Data per User
USR07 Object/values of last authorization check that failed
USR08 Table for user menu entries
USR09 Entries for user menus (work areas)
USR10 User master authorization profiles
USR11 User Master Texts for Profiles (USR10)
USR12 User master authorization values
USR13 Short Texts for Authorizations
USR14 Surchargeable Language Versions per User
USR30 Additional Information for User Menu
USH02 Change history for logon data
USH04 Change history for authorizations
USH10 Change history for authorization profiles
USH12 Change history for authorization values
UST04 User masters
UST10C User master: Composite profiles
UST10S User master: Single profiles
UST12 User master: Authorizations

Sunday, February 14, 2010

Perspective of Basis in BI...

Background & Dialog Processes
A process that runs with dialogs, meaning with screen processing, is called dialog processing and is foreground processing. A process which runs without any dialog involvement and without manual interference is background processing; it is managed by background work processes. A background job, as the name suggests, completes the job in the background, therefore allowing the user to continue accessing the application. A batch process comes into the picture when two or more background jobs run as a batch, e.g. when you trigger a load, the background process runs as a batch process.
Background users have the privileges to execute background jobs (reference users).
Dialog users are normal users who execute interactive tasks.

RZ10 - If a job is run in dialog it takes up an SAP session, so as a user you will be unable to do any other work. There is also a time limit when you run a job in dialog which, if reached, will stop the job from completing. This can be changed in RZ10.

SM50 - Allows you to identify how many processes you have set up for dialog (DIA) or background (BGD). If either limit is reached it can cause performance issues.

RSBATCH - Setting Parallel Processing of BI Processes
BI background management enables you to process the BI load and administration processes in parallel. You can configure the maximum number of work processes for BI processes. In parallel processing, additional work processes for processing the BI processes are split off from the main work process. The parallel processes are usually executed in the background, even if the main process is executed in dialog. BI background management is instrumental in attaining a performance-optimized processing of the BI processes. In addition to the degree of parallelism, you can specify the servers on which the processes are to run and with which priority (job class). BI background management thus supports an efficient distribution of the work processes in order to reduce the system load when processing the BI processes. Another advantage is that logs are created for monitoring the BI processes.

Suggestions:

1) First check the resource availability.
2) Check the extraction process: how fast the records are generated in the source and how much time the background job takes to finish. If that is fine, then check how the IDocs are moving from the source to BI.
3) If the extraction job finishes in time, check the dialog processes available at that time.
4) Check and compare the behaviour at peak and free times by running at different times (just for comparison).
5) Check the data packet size at BI level and source level.
6) Check whether there are critical transformations/routines at BI level.

Reference

BI Jargon

There is countless 'BI jargon' used in communicating the technical aspects of data warehousing, and these terms are not to be taken for granted if we want to truly appreciate the art and value of a good business intelligence architecture.

Some of the terms I want to list down are:
LSA - Also known as the next generation of data warehousing, this stands for Layered Scalable Architecture. Most organizations with a global footprint are considering this to be the way a data warehouse architecture should be built. LSA consists of 5 main layers, namely the Data Acquisition Layer, Corporate Memory Layer, Propagation Layer (containing the original 'unflavored' data), Transformation Layer and Reporting Layer. This article explains everything about LSA.

Timestamped record - Discrete vs Continuous

Nonvolatile records - Data in datawarehouse is not subject to change in contrast with OLTP

Corporate Direction in BI

Data is an asset of an organization, and making sense of the data is invaluable. Each organization will decide its own BI strategy, aligned with the business strategy and goals. The foundation of a successful BI environment is the ability to maintain tight management over the data warehouse architecture while making the best use of the information.

I stumbled upon a good article written by Prashant Pant, a senior Deloitte BI consultant, which points out the 10 essential components of a successful BI strategy. I found a particularly interesting point he highlighted, called the metadata roadmap. This resonates with the nature of my work, which is to address any potential impacts of a new project or a new enhancement on the existing system. It states that metadata explains how, why and where the data can be found, retrieved, stored and used in an information management system. Metadata management, like the one my company has for infoobjects, is crucial, especially when objects are shared across different BI systems. The attributes have to be correctly set against the master data and be able to differentiate the disparate systems. If not, there won't be a single version of truth, and users will be confused by the reported numbers when they examine granular data in different ways.

Check out Prashant's article

Saturday, February 13, 2010

My Sap BI Blog

I started this blog because I needed a place to act as a 'repository' to the BI information I can retrieve from my day to day experience as a SAP BI Consultant. Plus I thought this would be ideal for me to churn out what I understood and to shed some lights to some of the work related issues.