A Write-Optimized DSO is used when a DataStore object is needed to store records at the lowest level of granularity, such as address data, and overwrite functionality is not required. It consists of the active data table only, so there is no separate data activation step, which speeds up data processing. Loaded data is available immediately for further processing, and the object is typically used as a temporary staging area for large sets of data.
The Write-Optimized DSO is primarily designed as the initial staging area for source system data, from where the data can be transferred to a Standard DSO or an InfoCube.
- The PSA receives the data unchanged from the source system.
- Data is posted at document level; after it has been loaded into Standard DSOs, the data is deleted.
- Data is posted from the pass-through Write-Optimized DSO to the corporate memory Write-Optimized DSO.
- Data is distributed from the Write-Optimized "pass-through" DSO to Standard DSOs as per business requirements (a conceptual sketch of this flow follows the list).
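To make the layered flow above concrete, here is a minimal conceptual sketch in Python. Every function and layer name is an illustrative assumption, not an SAP API; it only mirrors the staging steps listed above.

```python
# Conceptual sketch of the layered staging flow described above.
# All names are illustrative; this is not an SAP API.

def load_to_psa(source_records):
    """The PSA stores source data unchanged."""
    return list(source_records)

def load_to_pass_thru_wodso(psa_records):
    """Pass-through Write-Optimized DSO: document-level records,
    inserted as-is with no activation step."""
    return list(psa_records)

def archive_to_corporate_memory(pass_thru_records):
    """Corporate-memory Write-Optimized DSO keeps a complete history."""
    return list(pass_thru_records)

def distribute_to_standard_dsos(pass_thru_records, routing_rule):
    """Distribute records to Standard DSOs per business requirement;
    afterwards the pass-through data can be deleted."""
    targets = {}
    for rec in pass_thru_records:
        targets.setdefault(routing_rule(rec), []).append(rec)
    return targets

source = [{"doc": 1, "region": "EU"}, {"doc": 2, "region": "US"}]
psa = load_to_psa(source)
pass_thru = load_to_pass_thru_wodso(psa)
corporate_memory = archive_to_corporate_memory(pass_thru)
standard_dsos = distribute_to_standard_dsos(pass_thru, lambda r: r["region"])
pass_thru.clear()  # data deleted after loading into Standard DSOs
print(standard_dsos)
```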
Write-Optimized DSO Properties:
- It is used for initial staging of source system data.
- The data stored is of the lowest granularity.
- Data loads are faster since there is no separate activation step.
- Every record has a technical key, so aggregation (overwriting) of records is not possible; new records are inserted every time, as sketched below.
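The insert-only behavior can be pictured with a minimal Python sketch. The field names (request_id, data_package, record_number) are illustrative assumptions modeled on the generated technical key; this is not an SAP API.

```python
# Minimal sketch: insert-only loading with a generated technical key.
# Field names mirror the technical key described in this article.
from itertools import count

active_table = []        # a Write-Optimized DSO has only the active table
record_counter = count(1)

def load(request_id, data_package, records):
    """Every incoming record is inserted with a unique technical key,
    so two loads of the same business record yield two rows."""
    for rec in records:
        row = {
            "request_id": request_id,
            "data_package": data_package,
            "record_number": next(record_counter),
            **rec,
        }
        active_table.append(row)  # always insert, never overwrite

load("REQ1", 1, [{"customer": "C100", "amount": 50}])
load("REQ2", 1, [{"customer": "C100", "amount": 50}])  # same business record
print(len(active_table))  # 2 -> no aggregation/overwrite takes place
```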
Creation of a Write-Optimized DSO:
Step 1)
- Go to transaction code RSA1
- Click the OK button.
Step 2)
- Navigate to the Modelling tab -> InfoProvider.
- Right-click on the InfoArea.
- Click on "Create DataStore Object" from the context menu.
Step 3)
- Enter the Technical Name.
- Enter the Description.
- Click on the “Create” button.
Step 4)
Click on the Edit button for "Type of DataStore Object".
Step 5)
Choose the Type “Write-Optimized”.
The technical key consists of the Request ID, Data Package, and Record Number. No additional objects can be included in it.
Semantic keys are similar to key fields; however, their uniqueness is not used for overwrite functionality. They are instead used in conjunction with the setting "Do Not Check Uniqueness of Data".
The purpose of the semantic key is to identify erroneous or duplicate incoming records.
Duplicate records are written to the error stack in the order they arrive. The records in the error stack can be handled or reloaded by defining a semantic group in the DTP.
Semantic groups need not be defined if duplicate or erroneous records are not expected.
If the checkbox "Allow Duplicate Data Records" is not selected, the incoming data is checked for duplicates, i.e., if a record with the same semantic key already exists in the DSO, the current load is terminated.
If the checkbox is selected, duplicate records are loaded as new records; semantic keys have no relevance in this case. A sketch of this duplicate check follows.
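The interaction between the semantic key, the duplicate check, and the "Allow Duplicate Data Records" setting can be sketched in Python as follows. The function, exception, and field names are illustrative assumptions, and the behavior is simplified; it is not the exact SAP implementation.

```python
# Sketch of the duplicate check driven by the semantic key.
# All names are illustrative; behavior is simplified.

class LoadTerminated(Exception):
    """Raised when a duplicate semantic key is found and duplicates
    are not allowed (the load would terminate in BW)."""

def load_with_semantic_check(dso_rows, incoming, semantic_key,
                             allow_duplicates, error_stack):
    existing_keys = {tuple(r[f] for f in semantic_key) for r in dso_rows}
    for rec in incoming:
        key = tuple(rec[f] for f in semantic_key)
        if key in existing_keys and not allow_duplicates:
            error_stack.append(rec)   # duplicate goes to the error stack
            raise LoadTerminated(f"duplicate semantic key {key}")
        dso_rows.append(rec)          # otherwise inserted as a new record
        existing_keys.add(key)

dso = [{"order": "4711", "item": "10"}]
errors = []
try:
    load_with_semantic_check(
        dso,
        [{"order": "4711", "item": "10"}],  # same semantic key again
        semantic_key=("order", "item"),
        allow_duplicates=False,
        error_stack=errors,
    )
except LoadTerminated as e:
    print("load terminated:", e)
print("error stack:", errors)
```

With allow_duplicates=True, the same call simply inserts the second record as a new row, matching the checkbox behavior described above.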
Step 6)
Activate the DSO.