Friday, April 16, 2010

Impact Assessment

Impact assessment (IA) is the process of identifying the future consequences of a current or proposed action. A Technical Design Authority needs to carry out an IA every time a change or a new project lands in the BI system. It is a very challenging job as it really tests your level of understanding, experience and thought. It is paperwork, but when we actually put our minds into the piece of work, good results are achieved: the project team is notified of possible bugs arising from the solutioning, mistakes in the design are pinpointed, and at the same time the design is checked for adherence to the company's standard guidelines such as infoobjects, naming conventions and architecture.

This document on impact assessment based on ITIL is worth a read.

Tips on IA for a BI environment:
Give emphasis to the following when reviewing the blueprint:
1. Shared Objects
  • Infoobjects
  • Datasource
  • Hierarchy
  • Authorization objects
  • Unit of measure conversion
  • Currency exchange rate
2. Authorization
  • Authorization objects
  • Authorization DSO
3. Standards
  • Global/regional infoobjects
  • Projects/System infoarea
  • Naming convention
4. Batch Schedule
  • Master data and transactional data loads that do not already exist in the current system
  • Availability of extraction windows from R/3 and of work process slots for extraction in BI
5. Housekeeping
  • Data retention for PSA
  • Data retention in reporting cube (consider factors like the needs of backposting and reload)
  • Is CML (Corporate Memory Layer) used
  • Compression and aggregates
6. Flat files
  • Define ownership and frequency of load
  • Define whether the upload of data is via the FTP/Webdynpro interface or IP
  • If FTP, define the standard SAP directory and an FTP method that does not violate the security policy

7. Any potential data reload and the method to reload wrongly defined (mapped) data, e.g. deletion by request or selective deletion, depending on whether the key figure mode is overwrite or summation.

Wednesday, April 14, 2010

Housekeeping & Database Sizing

In a BI environment, 3 items need to be considered for housekeeping and data retention period:
1) DSO change log
2) PSA change log
3) Logs of error stack

Some useful programs:
RSPC_LOG_DELETE
RSPC_INSTANCE_CLEANUP
RSB_ANALYZE_ERRORLOG
RSBM_ERRORLOG_DELETE

This is a useful document to explain the deletion at database level.
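As a rough sketch, the programs above can be bundled into a single scheduled background job with a small wrapper report. The wrapper report and the variant names below are assumptions; each standard program would need its own retention variant maintained beforehand.

REPORT z_bw_housekeeping.

* Delete old process chain logs (variant Z_RETAIN30 is hypothetical and
* would hold the retention selection of the standard program).
SUBMIT rspc_log_delete       USING SELECTION-SET 'Z_RETAIN30' AND RETURN.

* Clean up hanging process chain instances.
SUBMIT rspc_instance_cleanup USING SELECTION-SET 'Z_RETAIN30' AND RETURN.

* Analyze, then delete, old DTP error stack entries.
SUBMIT rsb_analyze_errorlog  USING SELECTION-SET 'Z_RETAIN30' AND RETURN.
SUBMIT rsbm_errorlog_delete  USING SELECTION-SET 'Z_RETAIN30' AND RETURN.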

Tuesday, April 13, 2010

Where-used list of an attribute (either display or navigational)

This program lists all the queries and cubes that contain a given infoobject. It is very useful, especially if the object is a shared infoobject, as we would know which reports and cubes are impacted.

Code download

P/S: tables that store navigation attribute info: RSDCUBEIOBJ (InfoCube), RSDATRNAV (characteristic).
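For illustration only (this is not the downloadable program above), a minimal sketch of the InfoCube half of such a where-used check, reading the RSDCUBEIOBJ table mentioned above; extending it to queries would involve the RSZ* query element tables.

REPORT z_iobj_where_used.

PARAMETERS: p_iobj TYPE rsdcubeiobj-iobjnm OBLIGATORY.

DATA: lt_cube TYPE STANDARD TABLE OF rsdcubeiobj,
      ls_cube TYPE rsdcubeiobj.

* Active versions only (OBJVERS = 'A'); the entry may be a dimension
* characteristic or a navigational attribute of the cube.
SELECT * FROM rsdcubeiobj
  INTO TABLE lt_cube
  WHERE iobjnm  = p_iobj
    AND objvers = 'A'.

LOOP AT lt_cube INTO ls_cube.
  WRITE: / ls_cube-infocube.
ENDLOOP.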

Shared Infoobjects

It is of utmost importance that a governance body is able to identify the shared infoobjects, or any assignment of a shared infoobject to the same business rule definition as required by new projects or enhancements. This is important to keep a single version of the truth for master data with the same business definition. One example of this is location. If a location refers to a general geographical place, then it can fit any geographical definition such as plant, endmarket or factory. But certain reporting drilldowns or mappings require a specific master data set, hence plant is different from endmarket and they are different sets of master data with their respective infoobjects. This matters when assigning attributes, as attributes are technically correct for a 1:m relationship but not for m:n. E.g. Malaysia can be an attribute of Selangor when the business definition refers to the national country, but not when it refers to the export country, as Malaysia, India or the UK could each be an export country for Selangor, and conversely Malaysia can be an export country for Selangor, Perak and Melaka.

Some of the important infoobjects to watch out for during a request for creation of or change to infoobjects are:
1) Geographical infoobjects such as region, area, cluster, endmarket and business unit
2) Material infoobjects that have a lot of attributes
3) Regional infoobjects that push master data to the global level; these have to be compounded with the source system, as the same master data can mean different things in different regions, e.g. material ABC is a finished good in Asia but a raw material in Europe.

Inconsistent business unit (lowest granularity of geographical pointer)

Changes to the business unit will result in data issues for both master data and transactional data. They also impact authorization for users whose access refers to the business unit in the centralized authorization DSO. From my understanding, there are a couple of scenarios:

1) wrong business unit -> correct business unit (happens when alignment is required, especially between the Marketing and Finance business unit hierarchies)
2) old business unit -> new business unit (happens when there is a change in business process, e.g. at endmarket level when China includes HK & Macau)
3) obsolete business unit (happens when there is a change in business process)


Scenario (1): remapping/reload is needed for all reports. Master data for ZG_BUSUNT (the infoobject for business unit) should be populated with the correct business unit. ZG_LOC (the infoobject for location, used in APO) should point to the correct default business unit. Both transactional and master data should be in sync at global and regional level.

E.g. 1338_Summerset to 1338_Sommerset

Transactional data (APO Demand Planning)

A mix of entries for both the wrong and the correct business unit does not make sense. For planning year period 201401, the data in forecast version 201103 is planned under 1338_Summerset, but in forecast version 201102 it was planned under 1338_Sommerset.

If the report needs to reflect the correct business unit, then there may be a need to perform a self-transformation (mapping to the correct value, or with some logic included) at the DSO level to offset the records that were mapped to the wrong business unit and reload the impacted data with the correct business unit. The new set of data will be reflected at the cube level as an after image. This can potentially be a repeatable action at regional level, as and when inconsistencies in the business unit arise, especially between APO, Marketing and Finance. A full reload of the impacted generated datasource is required at global level.
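A minimal sketch of such a field routine in the DSO self-transformation, using the example values from this post. In practice the mapping would come from a lookup table rather than hard-coded literals, and offsetting the already-loaded records still has to be handled via the delta/after-image mechanism.

* Field routine for target field /BIC/ZG_BUSUNT in the self-transformation.
* SOURCE_FIELDS and RESULT are the generated routine parameters.
IF source_fields-/bic/zg_busunt = '1338_Summerset'.
  result = '1338_Sommerset'.   " remap the wrongly defined business unit
ELSE.
  result = source_fields-/bic/zg_busunt.
ENDIF.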


Scenario (2): the end users have to decide whether or not they require the new business unit view to reflect historical data.

It is a challenge to get the same consistent data from regional to global, as well as an agreed way of viewing the APO report: whether to view total volume only by the new business unit, or to cater for the historical snapshot view (viewing by both the old and new business units).

Finance reports may need to reflect data that requires both the old and new business unit mappings, and it is important that the derivation of the business unit in the mapping table is correct and in sync with the business unit master data. There are potential issues for some Finance reports that have SPLY (Same Period Last Year). E.g. business unit A01 exists on hierarchy version A, but when it becomes obsolete or is replaced by new business unit A02, hierarchy version B is created. When users view SPLY on the current hierarchy version (version B), they won't have the comparison figure for A01.


Scenario (3): the business unit will appear on the hierarchy version it is tied to (in other solutions, this can also be controlled by time-dependent master data and hierarchies). The master data should remain intact and should not be blanked out.

Friday, April 9, 2010

UOM

Often in a complex environment with multiple regional SAP instances that does not have a standard UOM defined, or is in the midst of defining one, BI is required to 'define' the standard UOM, governed by an existing or new data standardization team. The standard UOM is specific to the company's business rules and does not necessarily refer to the international ISO code. In order to convert the transactional UOM to the standard UOM, the logic is derived from the R/3 unit conversion tables such as MATUNIT and T006. The standard UOM and the base UOM also need to be captured in the material group or material master data so that there is a base and a target UOM that can be referred to for conversion.
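As a sketch of the conversion step on the R/3 side: the function module below is one commonly used option that reads the material-specific conversion factors from the material master (MARM); the material number, quantities and the standard UOM 'CAR' are made-up values.

DATA: lv_matnr     TYPE matnr   VALUE 'MATERIAL_ABC',
      lv_uom_trans TYPE meins   VALUE 'EA',
      lv_qty_trans TYPE menge_d VALUE '100',
      lv_qty_std   TYPE menge_d.

* Convert the transactional quantity into the company standard UOM
* using the material master conversion factors.
CALL FUNCTION 'MD_CONVERT_MATERIAL_UNIT'
  EXPORTING
    i_matnr  = lv_matnr
    i_in_me  = lv_uom_trans
    i_out_me = 'CAR'            " assumed company standard UOM
    i_menge  = lv_qty_trans
  IMPORTING
    e_menge  = lv_qty_std
  EXCEPTIONS
    error_in_application = 1
    error                = 2
    OTHERS               = 3.
IF sy-subrc <> 0.
* fall back to a T006-based conversion or raise a monitor message
ENDIF.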

Wednesday, April 7, 2010

Step 1,2,3 & Validation check + message class

The enhancement RSR00001 (BW: Enhancements for Global Variables in Reporting) is called up several times during execution of the report. Here, the parameter I_STEP specifies when the enhancement is called.

I_STEP = 1
Call takes place directly before variable entry. It can be used to pre-populate selection variables.

I_STEP = 2
Call takes place directly after variable entry. This step is only started up when the same variable is not input ready and could not be filled at I_STEP=1.

I_STEP = 3
In this call, you can check the values of the variables. Triggering an exception (RAISE) causes the variable screen to appear once more. Afterwards, I_STEP=2 is also called again.

I_STEP = 0
The enhancement is not called from the variable screen. The call can come from the authorization check or from the Monitor. This is where you want to put the modification for populating the authorization object.

Full text available here.
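A minimal sketch of how these steps are typically handled in the exit include (ZXRSRU01 for enhancement RSR00001 / EXIT_SAPLRRS0_001); the variable name ZVAR_CALDAY is an assumption for illustration.

DATA: ls_range TYPE rrrangesid.

CASE i_step.
  WHEN 1.                             " before the variable screen
    IF i_vnam = 'ZVAR_CALDAY'.
      CLEAR ls_range.
      ls_range-sign = 'I'.
      ls_range-opt  = 'EQ'.
      ls_range-low  = sy-datum.       " propose today's date as a default
      APPEND ls_range TO e_t_range.
    ENDIF.
  WHEN 3.                             " after the variable screen: validate
    READ TABLE i_t_var_range TRANSPORTING NO FIELDS
         WITH KEY vnam = 'ZVAR_CALDAY'.
    IF sy-subrc <> 0.
*     issue a message (e.g. via RRMS_MESSAGE_HANDLING) and trigger the
*     exception (RAISE) so that the variable screen is shown again
    ENDIF.
  WHEN 0.                             " call from authorization check/monitor
*   typically used to derive values for authorization variables
ENDCASE.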

Global Exchange Rate Program

The exchange rate is shared by financial reports from different projects such as Management Information (MI) and Product Costing, so ensuring both define and use the same rate is crucial. Rounding adjustments and the inverse flag are some of the items that should be factored in when designing the reports.

In a global environment, the exchange rate is retrieved from a single point, e.g. from ft.com.
The exchange rate is usually downloaded to R/3 and transferred globally to BI. Alternatively, BI can have its own exchange rate program that downloads the latest rates for Budget, COPA, COPC, etc.
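A quick way to verify that two solutions read the same centrally maintained rate is to fetch it via the standard BAPI; the rate type 'M' and the currency pair below are assumptions, and the inverse handling mentioned above still has to be respected when interpreting the result.

DATA: ls_rate TYPE bapi1093_0,
      ls_ret  TYPE bapiret2.

* Read the exchange rate maintained centrally in TCURR.
CALL FUNCTION 'BAPI_EXCHANGERATE_GETDETAIL'
  EXPORTING
    rate_type  = 'M'
    from_curr  = 'USD'
    to_currncy = 'EUR'
    date       = sy-datum
  IMPORTING
    exch_rate  = ls_rate
    return     = ls_ret.

* ls_rate-exch_rate holds the rate; ls_rate-from_factor and
* ls_rate-to_factor hold the ratio units relevant for rounding.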

Global Authorization

In a global and regional BI system environment, it is crucial to have business access segregation through a set of controlled and standardized roles and analysis authorizations. Hence the BI developer/gatekeeper and the GRC team have to work closely to ensure the roles are used correctly and that new menu roles and AA objects are introduced whenever a new set of reports is developed. The Portal team is also involved in creating the menu links in the portal.

One approach is to introduce a centralized authorization DSO in which the users and their report access privileges are maintained, with the access check executed through CMOD whenever a report is run. The check aims to identify the type of BI report/solution and the analysis authorization object the report is based on. The regional authorization DSO is also replicated to the global centralized DSO, which ensures users have similar report access at regional and global level. The standard forms for users to request new roles have to be in place first, and existing old roles have to go through a cleanup to reflect the new set of standard authorizations.
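A minimal sketch of the CMOD lookup described above, assuming it sits in the same variable exit at I_STEP = 0 for an authorization variable; the DSO active table /BIC/AZG_AUTH00, its field names and the variable name are all assumptions.

TYPES: ty_busunt TYPE c LENGTH 10.

DATA: lt_busunt TYPE STANDARD TABLE OF ty_busunt,
      lv_busunt TYPE ty_busunt,
      ls_range  TYPE rrrangesid.

IF i_step = 0 AND i_vnam = 'ZAUTH_BUSUNT'.
* read the user's permitted business units from the central auth DSO
  SELECT /bic/zg_busunt FROM /bic/azg_auth00
    INTO TABLE lt_busunt
    WHERE /bic/zg_user = sy-uname.

  LOOP AT lt_busunt INTO lv_busunt.
    CLEAR ls_range.
    ls_range-sign = 'I'.
    ls_range-opt  = 'EQ'.
    ls_range-low  = lv_busunt.
    APPEND ls_range TO e_t_range.
  ENDLOOP.
ENDIF.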

Integration Party

A regional BI business release requires alignment with multiple parties, such as:
1) ERP (dependency on datasources)
2) APO (dependency on datasources)
3) Portal (dependency on reports published in the portal)
4) Global BI (depends on the readiness of the regional cutover, as the data flows bottom to top or vice versa)
5) Any other feed system that connects to BI

Three Top Questions To Ask a BI Vendor

I stumbled upon a very good article by Boris Evelson on the Forrester blog that reveals some unpopular facts about the essence of BI in a big organization. The first point really hits the bull's eye: eventually, what controls change in BI matters most in terms of minimizing risk and cutting cost.

Q1: What are the capabilities of your services organization to help clients not just with implementing your BI tool, but with their overall BI strategy?

Most BI vendors these days have modern, scalable, function-rich, robust BI tools. So the real challenge today is not with the tools but with governance, integration, support, organizational structures, processes, etc. – something that only experienced consultants can help with.

Q2: Do you provide all components necessary for an end to end BI environment (data integration, data cleansing, data warehousing, performance management, portals, etc in addition to reports, queries, OLAP and dashboards)?

If a vendor does not, you'll have to integrate these components from multiple vendors.

Q3: Within the top layer of BI, do you provide all components necessary for reporting, querying and analysis, such as a report writer, query builder, OLAP engine, dashboard/data visualization tool, real-time reporting/analysis, text analytics, BI workspace/sandbox, advanced analytics, and the ability to analyze data without a data model (usually associated with in-memory engines)?

If a vendor does not, believe me, the users will ask for them sooner or later, so you'll have to integrate these components from multiple vendors.

I also strongly recommended that the editor discount questions that vendors and other analysts may provide, like:
  • do you have modern architecture
  • do you use SOA
  • can you enable end user self service
  • is your BI app user friendly
because these are all mostly a commodity these days.

CO-PA vs Billing

There are 2 types of datasources in SAP R/3 that can be extracted to BI for the sales volume figure:
1) CO-PA
2) Billing (2LIS_XXX - PO and SD)

The difference between these two extractors is the point in time at which the data is updated to either the logistics or the accounting tables.

In an order-to-cash business scenario, if the sales volume is required to be measured at the initial stage where the sales order is keyed into the system, then CO-PA can't be used, as at that stage no accounting document has been created yet. The financial impact only occurs at the goods issue stage, where tables BKPF and BSEG are updated. The final stage of this process in logistics is invoice creation (updating tables VBRK and VBRP). In accounting, the final stage would be when the receivable is cleared and payment is received: another accounting document is created in BKPF and BSEG, the open receivable is cleared and BSAD is updated.

In a procure-to-pay scenario, the early stages (PR to PO) all the way to goods receipt do not involve any financial process until the invoice is received from the other party and the financial tables BKPF and BSEG are updated. BSAK is updated when the invoice is paid via a payment run and the payable is cleared.
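To illustrate the timing point, a billing document only gets a corresponding accounting document once it is released to accounting. A rough check might look like the sketch below; the billing document number is a dummy value.

REPORT z_check_billing_acct.

DATA: lv_vbeln TYPE vbrk-vbeln VALUE '0090000001',
      lv_awkey TYPE bkpf-awkey,
      lv_belnr TYPE bkpf-belnr.

lv_awkey = lv_vbeln.

* Billing documents post to accounting with reference type 'VBRK' in
* BKPF-AWTYP; at sales order entry no accounting document exists yet.
SELECT SINGLE belnr FROM bkpf
  INTO lv_belnr
  WHERE awtyp = 'VBRK'
    AND awkey = lv_awkey.

IF sy-subrc = 0.
  WRITE: / 'Accounting document', lv_belnr, 'exists for billing doc', lv_vbeln.
ELSE.
  WRITE: / 'No accounting document yet for billing doc', lv_vbeln.
ENDIF.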

Saturday, April 3, 2010

CO-PA Profitability Analysis

CO-PA Profitability Analysis is a drilldown report which allows slice-and-dice, multi-dimensional sales profitability analysis by a variety of analytical segments (called characteristics) such as market/region, product group, customer group, customer hierarchy, product hierarchy, profit center, etc. Most of those characteristics can be filled in by standard SAP functionality, i.e. from the customer master or material master.
By making distribution channel a characteristic, CO-PA enables more flexible analysis. The distribution channel is filled in when the sales order is created. Users can differentiate distribution channels if they need separate analyses (such as wholesale / retail, or goods sale / commission sale / service sale) without maintaining master data. (The customer and material master maintenance hassle caused by multiple distribution channels can be solved by VOR1 (IMG: Sales and Distribution > Master Data > Define Common Distribution Channels).)
CO-PA allows non-SAP data to be taken in by using External Data Transfer, but the major benefit of CO-PA is its close relation with the SAP SD module. SD profitability data is automatically sent forward and stored within the same system. In fact, the SAP SD module without CO-PA Profitability Analysis is like air without oxygen. It is pointless using SAP without it.
Overhead cost allocation based on sales is possible with CO-PA, which is not possible in Cost Center Accounting.
A Gross Profit report is possible on a sales order basis (as a forecast, in addition to billing-based actuals) by using cost-based CO-PA. CO-PA is especially important since it is the only module that shows financial figures which are appropriate from a cost-revenue perspective. For this reason, BW(BI) takes data from CO-PA in many projects. BW(BI) consultants sometimes set up CO-PA without FI/CO consultants knowing. CO-PA captures cost of goods sold (COGS) and revenue at billing (FI release) at the same time (cost-based CO-PA). This becomes important when there is a timing difference between shipment and customer acceptance: COGS should not be recognized yet, but the FI module automatically creates the COGS entry at shipment, while the revenue entry will not be created until billing (or FI release). Such cases happen, for example, when the customer will not accept payment until they finish quality inspection, or when goods delivery takes months because goods are sent by ocean. It is especially delicate to customize the CO-PA InfoSource to map to your specific operating concern. This seems to be easier since ECC 6.0 Enhancement Pack 4 (auto-generated).
On this point, CO-PA (cost-based) does not reconcile with FI. But this is the whole point of CO-PA, and this makes CO-PA essential. Account-based CO-PA is closer to the FI module in this respect. Account-based CO-PA was added later on, and it could be that it is simply for comparison purposes with the cost-based form. Cost-based CO-PA is used more often.
When CO-PA is used in conjunction with CO-PC Product Cost, it is even more outstanding. If fixed cost and variable cost in the CO-PC cost component are appropriately assigned to CO-PA value fields, Break-Even-Point analysis is possible, not to mention contribution margin or Gross Profit analyses. Consider that BEP analysis or GP analysis is possible at a detailed level such as market/region, product group, customer group, etc.
This is no wonder. CO-PC is for COGM and inventory; CO-PA is for COGS. What do you do without CO-PA when you use CO-PC? It is a set functionality. This doesn't necessarily mean CO-PA is only for manufacturing business, though.
Conservative finance/sales managers are sometimes reluctant to implement SAP R/3 because they are frightened of exposing the financial figures of harsh reality. Needless to say, it helps boost agile corporate decision-making, and this is where a top-down decision to implement SAP R/3 is necessary. Those managers will never encourage R/3. SAP R/3 realizes this BEP analysis even for a manufacturing company. No other ERP software has realized such functionality yet. Even today R/3 is this revolutionary if CO-PA is properly used.
The role of capturing COGS-sales figures is even more prominent in the cases of sales order costing or Resource Related Billing, and variant configurable materials with the PS module or PM/CS module. (The equipment master of PM/CS is also a must to learn.) After determining WIP by Results Analysis, CO-PA is the only module that displays cost-revenue-wise correct financial figures. PS is necessary for heavy industry or large organizations, and variant configurable materials are also handy for a large manufacturer or sales company. RRB is usable in non-manufacturing industries and is indispensable for IT or consulting companies. The importance of CO-PA will be proven if used with these.
Production cost variance analysis is possible by assigning variance categories to different COPA value fields in the customizing. There are projects that had to develop production variance reports because they kicked COPA out of scope without ever considering the SAP standard functionality. Why cripple standard SAP functionality simply because you are ignorant of anything more than CCA Cost Center Accounting or PCA Profit Center Accounting? Naturally it takes time to apprehend the overall SAP functionality. This is where experience makes a difference.
Settling production variances to COPA raises one issue: variances originating from WIP or finished goods at month end all go to COPA, i.e. COGS. Actual Costing using the ML Material Ledger solves this issue for the most part. Variance reallocation whose origin is unknown is only made to COGS and FG, not to WIP. This is something SAP should have rectified a long time ago. They made excuses that they didn't have enough resources to do so, while developing BW, SEM-BCS or New G/L on the other hand. Real-time consolidation became impossible in SEM-BCS, and New G/L doesn't add much new functionality other than parallel fiscal year variants, in a practical sense. What SAP did was spend all their resources and effort on revising the same functionality using new technology, while nothing much was made possible from an accounting point of view.
ML can also be used to reflect transfer pricing or group valuations with specific buckets into COPA value fields. An actual cost component split is also possible with ML, but you have to plan well in advance lest you use up value fields. COPA can also handle planning and actual/plan reporting: it has a built-in forecasting engine (the planning framework) and can handle top-down or bottom-up planning (using different versions, as well as plan allocations and distributions from cost centers). An FI/MM interface allows you to post to specific COPA value fields for FI- or MM-based transactions (overheads or non-trade-specific charges that impact product profitability, such as trade show expenses).
Configuration of CO-PA can sometimes be a bit of a hassle, but it is far easier, cheaper and quicker than building an infocube in BW(BI) from scratch. If you know what you are doing with BW(BI), in many projects you take data from the CO-PA tables anyway. Then why bother creating the additional work of building BW(BI)?
The CO-PA training course is just 5 days. A competent consultant should not spare such a small investment. You will see there is more to learn about it beyond that.
SAP is whimsical and sometimes excludes CO-PA and CO-PC from the academy curriculum. They are not eager for trainees to have the right understanding of how to use their product. This is why many FI/CO consultants are ignorant of CO-PA and CO-PC, and only an experienced consultant knows their necessity.
CO-PA is a must for an experienced FI/CO consultant.
One point that has to be added is: keep segment-level characteristics to as few as possible.
I sometimes hear of users who linked too many characteristics and completely ruined the CO-PA database. If you look at their config, they link 5 customer hierarchies, 6 user-defined product hierarchies on top of the standard product hierarchy, material code, sales rep at the sales order line level, and 4 other user-defined derivation segments as segment-level characteristics. Now, after 3 years of usage, their CO-PA table does not respond with anything other than a short dump.
SAP clearly explains and dissuades against simply adding the sales order as a segment level, yet it is always a struggle. Whatever were they thinking in adding 45 segment levels?
It was once a controversy: data segregation from program logic, data normalization and elimination of duplicate entries. That gave birth to the SQL databases which SAP runs on. Now SAP users don't know this history and repeat the same failure.
They have a corporate reshuffle and need to revise the product lineup, so they maintain the product hierarchy. Why add segment levels every time there is a reshuffle?
No matter how new the tool, there is no end to it. It is not the tool itself; it is the people who are twisting the case.
A successful usage would be product hierarchy 1, and maybe 2. If a characteristic is configured, segment data is stored at that level. You can download the segment data; this may be a remedy. Data feeding and presentation in BW may be another way.

*Article plucked from it.toolbox.com

Flows to CO-PA

From FI
Value fields are the building blocks of CO-PA.
The updates flow to FI and CO-PA from different processes:
- during the Supply Chain process
Supply chain processing is linked to the SD (Sales and Distribution) and MM (Materials Management) modules
- during the Material Ledger process
At month end, after completing the material ledger close, the actual periodic price is generated and the cost of sales is updated at its actual cost for FI and CO-PA
- during Project System and Cost Center Accounting
CO-PA can be reconciled to PS after settlement
CO-PA can be reconciled to CCA after the assessment cycle

From SD
http://learnmysap.com/sales-distribution/264-ke4s-post-billing-documents-to-co-pa-manually.html

Friday, April 2, 2010

LIS vs LO

LIS & LO extractors: LIS is the older technique through which we can load the data. Here we use typical delta and full upload techniques to get data from a logistics application. It uses the V1 and V2 updates, meaning a synchronous or asynchronous update from the application documents to the LIS structures at the time the application document is posted.

The LO Cockpit is the newer technique, which uses the V3 update that you can schedule, so it does not update at the time you process the application documents but is posted at a later stage. Another big difference between LO and LIS is that in LO you have separate datasources available at header, item and schedule line level; you can choose at which level you want to extract your data and switch off the others, which results in reduced data volume. When you configure an LIS structure in ECC, the system environment needs to be opened for changes, whereas this is not required for LO.