SAP BW Interview Questions and Answers

This section contains curated interview questions with answers to sharpen your knowledge and support exam preparation.

101. What Is the Transfer Method and What Are the Types of Transfer Methods?

Answer» The transfer method determines only how the data is transferred.

IDoc transfer method: A data IDoc consists of a control record, a data record, and a status record. The control record contains administration information such as receiver, sender, and client. The status record describes the status of the IDoc, for example "modified". The data stores in the ALE inbox and outbox have to be emptied or reorganized.

PSA (tRFC) transfer method: A transactional Remote Function Call is used to transfer the data directly from the source system to SAP BW, with the option of storing the data in the PSA (whose tables have the same structure as the transfer structure). This is the preferred transfer method because it performs better than the IDoc method. When you use tRFCs to transfer data, the maximum number of fields is restricted to 255 and the length of a data record is restricted to 1962 bytes (IDoc: 1000 bytes).

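The tRFC limits quoted above (255 fields, 1962 bytes per data record) can be expressed as a simple pre-flight check. This is an illustrative helper with invented names, not an SAP API:

```python
# Hypothetical helper: validates a transfer-structure layout against the
# tRFC limits stated above (255 fields, 1962 bytes per data record).
MAX_FIELDS_TRFC = 255
MAX_RECORD_BYTES_TRFC = 1962

def fits_trfc(field_lengths):
    """field_lengths: byte length of each field in the transfer structure."""
    return (len(field_lengths) <= MAX_FIELDS_TRFC
            and sum(field_lengths) <= MAX_RECORD_BYTES_TRFC)

ok = fits_trfc([60] * 30)        # 30 fields, 1800 bytes -> within both limits
too_wide = fits_trfc([8] * 300)  # 300 fields -> exceeds the field limit
```
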
102. What Are the 6 Types of Connections Between the Source Systems and BW?

Answer»

103. What Are the 3 Types of Transfer Rules?

Answer»

104. What Are the 2 Types of InfoSources?

Answer»

105. What Are the 4 Different Types of DataSources?

Answer»

106. What Are the 5 Different Types of Source Systems?

Answer»

107. What Are Serialized and Unserialized V3 Updates?

Answer» In serialized V3 update, data is transferred from the LIS communication structure, using extract structures (e.g. MC02M_0HDR for the header purchase documents), into a central delta management area. With unserialized V3 update mode, the extraction data continues to be written to the update tables using a V3 update module and is then read and processed by a collective update run (through LBWE).

108. What Is the Extraction Queue? What Does It Contain?

Answer» Newly generated records are stored in the extraction queue, and from there a scheduled job pushes them to the delta queue.

109. What Is the Delta Queue (RSA7)? When Will the Data Queue Here and from Where?

Answer» The delta queue stores records that have been generated since the last delta upload and are yet to be sent to BW. The queued data is sent to BW from here. Depending on the update method selected, generated records come to this queue either directly or through the extraction queue.

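The flow described in questions 108 and 109 (extraction queue, scheduled collective job, delta queue, delta upload) can be sketched as a simplified model. Class and field names here are invented for illustration and do not correspond to SAP internals:

```python
from collections import deque

class ExtractionQueue:
    """Simplified stand-in for the LO extraction queue."""
    def __init__(self):
        self.records = deque()

    def post(self, record):
        # A document posting writes a record into the extraction queue.
        self.records.append(record)

class DeltaQueue:
    """Simplified stand-in for the BW delta queue (RSA7)."""
    def __init__(self):
        self.records = []

    def push_from(self, extraction_queue):
        # The scheduled collective job moves records into the delta queue.
        while extraction_queue.records:
            self.records.append(extraction_queue.records.popleft())

    def send_to_bw(self):
        # A delta load drains the queue; sent records are no longer pending.
        sent, self.records = self.records, []
        return sent

eq, dq = ExtractionQueue(), DeltaQueue()
eq.post({"doc": 4711, "qty": 10})
eq.post({"doc": 4712, "qty": 5})
dq.push_from(eq)          # scheduled job: extraction queue -> delta queue
delta = dq.send_to_bw()   # delta upload: delta queue -> BW
```

The point of the two-stage design is that document postings only append locally, while the transfer to the delta queue happens asynchronously in the collective run.
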
110. What Is the Sales Flow?

Answer» Inquiry → Quotation → Sales order → Delivery → Post goods issue → Invoice → Accounting document

111. What Are the Places We Use ABAP Code in BW?

Answer»

112. There Are 5 Characteristics in an InfoCube, and We Have to Assign These Characteristics to Dimensions. Based on What Do We Assign Characteristics to a Dimension?

Answer»

113. Delta Has Been Done Successfully in LO. Later Some Fields Were Added to That Particular DataSource. Will There Be Any Effect on the Previous Data Records?

Answer» No. If there is data in the DataSource we can only append the fields; no data will be lost. But you need a separate mechanism to fill in the historical data for the newly added fields.

114. I Want to Create an InfoObject That Is a Dependent InfoObject. How to Do It?

Answer» In the Administrator Workbench, go to the first InfoObject's maintenance screen, open the Compounding tab, enter the InfoObject that is dependent on the former InfoObject, and activate.

115. There Is an InfoObject Called 0PLANT. I Activated It and Am Using It. Some Days Later Another Person Activated It Again. What Will Happen: An Effect, a Merge, or No Effect?

Answer» Reactivating the InfoObject shouldn't have any effect unless the other person made changes to it before reactivating it.

116. How Did You Do Data Modeling in Your Project? Explain.

Answer» We collected requirements from the users, created an HLD (High-Level Design document), and analyzed it to find the source for the data. Then data models were drawn up indicating the data flow and lookups. While designing the data model, consideration was given to reusing existing objects (like ODS objects and InfoCubes), not storing redundant data, the volume of data, and batch dependency.

117. Differences Between a MultiCube and a RemoteCube?

Answer» A MultiCube is a type of InfoProvider that combines data from a number of InfoCubes and makes them available as a whole for reporting. A RemoteCube is an InfoCube whose transaction data is not managed in BW but externally; only the structure of the RemoteCube is defined in BW, and the data is read for reporting from another system using a BAPI.

118. Tell About a Situation When You Implemented a RemoteCube.

Answer» A RemoteCube is used when we want to report on transactional data without storing it on the BW side. It is ideally used when detailed data is required and we want to bypass loading the data into BW.

119. What Is a RemoteCube and How Is It Accessed and Used?

Answer» A RemoteCube is an InfoCube whose data is not managed in BW but externally. Only the structure of the RemoteCube is defined in BW; the data is read for reporting from another system using a BAPI. With a RemoteCube we can report on data in external systems without having to physically store transaction data in BW. For example, we can include an external system from a market data provider. This is best used only for small volumes of data and when few users access the query.

120. What Is the F Table?

Answer» The fact table.

122. When Are Tables Created in BW?

Answer» The tables are created when the objects are activated. The location depends on the Basis installation.

123. What Are the Major Errors in BW and R/3 Pertaining to BW?

Answer»

124. Differences Between the Star and Extended Star Schema?

Answer» Star schema: only characteristics of the dimension tables can be used to access facts; no structured drill-downs can be created; support for many languages is difficult. Extended star schema: master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions are supported using SIDs.

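The SID indirection of the extended star schema can be sketched as a simplified query-time lookup. All table contents and field names below are made up for illustration; this is not SAP's actual storage layout:

```python
# Simplified extended star schema: the dimension table holds SIDs
# (surrogate IDs), and SID-keyed tables map back to characteristic
# values, attributes, and language-dependent texts.
sid_table = {1: "PLANT_01", 2: "PLANT_02"}            # SID -> characteristic value
master_data = {"PLANT_01": {"region": "EMEA"},        # attribute table
               "PLANT_02": {"region": "APAC"}}
text_table = {("PLANT_01", "EN"): "Hamburg plant",    # multilingual text table
              ("PLANT_01", "DE"): "Werk Hamburg"}

dimension_table = {100: {"plant_sid": 1}}             # DIM ID -> SIDs
fact_table = [{"dim_id": 100, "revenue": 2500.0}]     # fact rows reference DIM IDs

def report_row(fact, lang="EN"):
    # Query-time navigation: fact -> dimension -> SID -> master data / text.
    sid = dimension_table[fact["dim_id"]]["plant_sid"]
    plant = sid_table[sid]
    return {"plant": plant,
            "text": text_table.get((plant, lang), plant),
            "region": master_data[plant]["region"],
            "revenue": fact["revenue"]}

row = report_row(fact_table[0], lang="DE")
```

Because the fact and dimension tables carry only surrogate IDs, master data and texts can be maintained (and translated) independently of the cube, which is exactly the advantage over the classic star schema.
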
125. What Is the Importance of the ODS Object?

Answer» The ODS is mainly used as a staging area.

126. Where Does BW Extract Data from During Generic Extraction and LO Extraction?

Answer» All deltas are taken from the delta queue. Only the way of populating the delta queue differs between LO and other DataSources.

127. You Get New Status or Additive Delta. If I Set This on R/3, What Is the Need of Setting It in BW?

Answer» In R/3 the record mode, as seen in the RODELTAM table, determines whether the respective DataSource delivers a new status or an additive delta. Based on this, you need to select the appropriate update type for the data target in BW. For example, an ODS supports both additive and overwrite updates; depending on which DataSource is updating the ODS and the record mode it supports, you need to make the right selection in BW.

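The difference between the two update types can be sketched as follows. This is a simplified model with invented field names, not SAP's record-mode handling:

```python
def apply_delta(target, deltas, mode):
    """Apply delta records keyed by 'doc' to a target dict.

    mode='overwrite' models a new-status delta (ODS overwrite):
    the latest after-image replaces the stored record.
    mode='additive' models an additive delta: key figures are summed.
    """
    for rec in deltas:
        key = rec["doc"]
        if mode == "overwrite" or key not in target:
            target[key] = {"qty": rec["qty"]}
        else:  # additive: accumulate the change
            target[key]["qty"] += rec["qty"]
    return target

# Two delta records for the same document: status 10, then a change of -3.
deltas = [{"doc": 1, "qty": 10}, {"doc": 1, "qty": -3}]

ods = apply_delta({}, deltas, mode="overwrite")   # final status wins
cube = apply_delta({}, deltas, mode="additive")   # changes are summed
```

Note that the two modes give different results for the same delta stream, which is why the update type chosen in BW must match the record mode the DataSource actually delivers.
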
128. What Is Direct Update of an InfoObject?

Answer» This is updating an InfoObject without using Update Rules, using only the Transfer Rules.

129. What Is Content Extraction?

Answer» These are extractors supplied by SAP for specific business modules, e.g. 2FI_AR_4 (Customers: Line Items with Delta Extraction) or 2FI_GL_6 (General Ledger: Sales Figures via Delta Extraction).

130. What Exactly Happens in the Background When We Deactivate/Activate the Extract Structure for the LO Cockpit?

Answer» If the extract structure is activated, then on any online transaction, or on the compilation of the setup tables, the data is posted to the extract structures depending on the update method selected. Activation marks the DataSource green; otherwise it is yellow. Activation/deactivation makes entries in the TMC EXACT table.

131. What Is the Difference Between the Transactions LBWF and RSA7?

Answer» RSA7 is used to view the BW delta queue. LBWF is the log for LO extract structures; it is populated only when the user parameter MCL is set, and is recommended only for testing purposes.

132. How Is the Delta Load Different for an InfoCube and an ODS?

Answer» An InfoCube has an additive delta, but you will still be able to see all individual records in the InfoCube contents. This is because if you choose to delete the current request, the records have to be rolled back to the prior status. If you build a query on the InfoCube, you will find that the data is actually summed up at query time. The ODS, in contrast, will not have duplicate records: you will have only one record per key.

133. What Are MC EKKO and MC EKPO in the Maintenance of a DataSource?

Answer» These are purchasing-related communication structures.

134. What Is the Maintenance of the Extract Structure?

Answer» Extract structures are maintained in the case of LO DataSources. There are multiple extract structures for the DataSources in LO, for the different applications. Any enhancement of a DataSource in LO is done via maintenance of its extract structure.

135. What Is the Maintenance of a DataSource?

Answer» It is the maintenance of the required fields in a particular DataSource for which there are reporting requirements in BW and for which data needs to be extracted.

136. What Is the InfoCube for Inventory?

Answer» InfoCube 0IC_C03.

137. Suppose One Million Records Are Uploaded to an InfoCube. Now I Want to Delete 20 Records in the InfoCube. How Can We Delete Those 20 Records?

Answer» This can be done with selective deletion.

138. I Replicated the DataSource to the BW System. I Want to Add One More Field to the DataSource. How Do I Do It?

Answer» Add the field to the extract structure and replicate the DataSource again into BW; the field will then appear in BW as well.

139. Master Data Is Stored in Master Data Tables. Then What Is the Importance of Dimensions?

Answer» Dimension tables link the master data tables with the fact table through SIDs.

140. How Does Master Data Delta Work?

Answer» We always do a full load for master data. It always overwrites the previous entries.

141. Will There Be Any Data in the Application Tables After Sending Data to the Setup Tables?

Answer» There will still be data in the application tables even after the setup tables are filled. Setup tables are just temporary tables filled from the application tables for setting up init/full loads for BW.

142. With What Data Is the Setup Table Filled (Is It R/3 Data)?

Answer» The init loads in BW pull data from the setup tables. The setup tables are only used in the case of first init/full loads.

143. Why Do We Need to Delete the Setup Tables First Before Filling Them?

Answer» During the setup run, these setup tables are filled. It is good practice to delete the existing setup tables before executing the setup runs, so as to avoid duplicate records for the same selections.

144. When Filling the Setup Tables, Is There Any Need to Delete the Setup Tables?

Answer» Yes. By deleting the setup tables we delete the data left over from the previous update. This avoids updating the same records twice into BW.

145. What Are the Setup Tables? Why Use Setup Tables?

Answer» In the LO extraction mechanism, when we fill the setup tables the extract structure is filled with the data. When we schedule an InfoPackage using full or init delta from BW, the data is picked from the setup tables.

146. What Are Setup Tables and Why Should We Delete the Setup Tables First Before Extraction?

Answer» Setup tables are filled with data from the application tables (the OLTP tables storing transaction records). They form the interface between the application tables and the extractor: the LO extractor takes data from the setup tables during initialization and full upload, so the application tables do not need to be accessed for data selection. As setup tables are required only for full and init loads, we can delete the data after loading in order to avoid duplicate data.

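The delete-before-fill practice from questions 143 to 146 can be sketched as a toy model. Table contents and function names are invented for illustration:

```python
application_table = [      # OLTP documents (made-up data); never touched
    {"doc": 1, "qty": 10},
    {"doc": 2, "qty": 5},
]

setup_table = []

def fill_setup_tables(selection=None):
    # A setup run copies matching application records into the setup table.
    # Without a prior delete, a rerun appends the same records again.
    for rec in application_table:
        if selection is None or selection(rec):
            setup_table.append(dict(rec))

def delete_setup_tables():
    # Corresponds to deleting the setup-table contents before a new run.
    setup_table.clear()

fill_setup_tables()
fill_setup_tables()                # rerun without deletion -> duplicates
duplicated = len(setup_table)      # 4 entries for only 2 documents

delete_setup_tables()
fill_setup_tables()                # delete first -> clean snapshot
clean = len(setup_table)           # 2 entries
```

The application table is left intact throughout, mirroring the point of question 141 that filling setup tables does not remove OLTP data.
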
147. What Are the Inverted Fields in a DataSource?

Answer» They allow reverse posting: the field value is multiplied by -1.

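Field inversion amounts to negating the marked key figures on a reversal record. A minimal sketch with invented field names:

```python
def invert_record(record, inverted_fields):
    """Return a copy of the record with the marked key figures negated,
    as done for a reverse posting."""
    out = dict(record)
    for field in inverted_fields:
        out[field] = -out[field]
    return out

posting = {"doc": 42, "amount": 150.0, "qty": 3}
reversal = invert_record(posting, inverted_fields=["amount", "qty"])
```
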
148. When I Run the Initial Load and It Fails, What Should I Do?

Answer» Deletion of an initial load can be done in the InfoPackage. First set the QM status of the request to red if that has not yet been done, then delete it from all data targets. After that, go to the InfoPackage and choose Scheduler → "Initialization options for the source system" from the menu. There you should see your red request: mark it and delete it, accept the deletion question, and accept the post-information message. The request should now be deleted from the initialization options, and you can run a new init. You can also run a repair request, which is a full request with which you correct the data in the data target after failed deltas or wrong inits. You do this in the InfoPackage too, via Scheduler → Repair Full Request. But if you want to use init/delta loads, you have to make a successful init first.

149. What Are Aggregates and When Are They Used?

Answer» An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form. Aggregates make it possible to access InfoCube data quickly in reporting. Aggregates can be used in the following cases:

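The idea of an aggregate as a pre-summarized, redundantly stored copy of the fact data can be sketched as follows. The data and grouping are invented for illustration, not SAP internals:

```python
from collections import defaultdict

fact_rows = [  # line-level facts (made-up data)
    {"plant": "P1", "material": "M1", "revenue": 100.0},
    {"plant": "P1", "material": "M2", "revenue": 50.0},
    {"plant": "P2", "material": "M1", "revenue": 75.0},
]

def build_aggregate(rows, group_by):
    """Materialize a consolidated view: sum revenue per grouping key."""
    agg = defaultdict(float)
    for row in rows:
        key = tuple(row[c] for c in group_by)
        agg[key] += row["revenue"]
    return dict(agg)

# Aggregate by plant only: a plant-level query now reads 2 rows instead of 3.
plant_agg = build_aggregate(fact_rows, group_by=["plant"])
```

The speed-up in reporting comes from exactly this trade: the aggregate stores the data redundantly, but queries that match its grouping read far fewer rows than the full fact table.
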