151.

While Creating Aggregates System Gives Manual Or Automatic Option. What Are These?

Answer»

If we select the automatic option, the system will propose aggregates based on the BW statistics (i.e., how many times the InfoCube is used to fetch data, etc.). Otherwise, we can manually select the dataset which should form the aggregate.

152.

What Is Switching On And Off Of Aggregates? How Do We Do That?

Answer»

When we switch off an aggregate, it is not available to supply data to queries, but the data remains in the aggregate, so if required we can turn it on and update the data instead of re-aggregating all the data. However, if we deactivate an aggregate, it is not available for reporting and we also lose the aggregated data, so when you activate it, the aggregation starts anew. To do this, select the relevant aggregate and choose Switch On/Off (red and green button). An aggregate that is switched off is marked in the Filled/Switched off column with a grey button.

153.

Which Object Connects Aggregates And Infocube?

Answer»

The read pointer connects aggregates and the InfoCube. We can view the read pointer in table RSDDAGGRDIR; the field name is RN_SID. Whenever we roll up the data, it contains the request number, and it checks against the next request for the second roll-up. Just follow the table for a particular InfoCube and roll up the data.

154.

What Are The Different Ways Of Data Transfer?

Answer»

Full Update: All the data from the InfoStructure is transferred according to the selection criteria defined in the scheduler in the SAP BW.

Delta Update: Only the data that has been changed or is new since the last update is transferred.

155.

What Are The Different Update Modes?

Answer»

Direct Delta: In this method, extraction data from document postings is transferred directly to BW delta queue.

Queued Delta: In this method, extraction data from document postings is collected in an extraction queue, from which a periodic collective run is used to transfer the data to the BW delta queue. The transfer sequence and the order in which the data was created are the same in both Direct and Queued Delta.

Unserialized V3 Update: In this method, the extraction data is written to the update tables and then is transferred to the BW delta queues without taking the sequence into account.
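The three modes above can be sketched with a toy model. Names such as `DeltaQueue`, `direct_delta` and `queued_delta` are invented for this illustration and are not SAP APIs; the point of the sketch is the property the answer states, namely that Direct and Queued Delta both preserve the creation order of the postings.

```python
# Toy model of the delta transfer modes (illustration only).

class DeltaQueue:
    """Stands in for the BW delta queue."""
    def __init__(self):
        self.records = []

    def push(self, rec):
        self.records.append(rec)


def direct_delta(postings, bw_queue):
    # Each document posting is written straight to the BW delta queue.
    for rec in postings:
        bw_queue.push(rec)


def queued_delta(postings, bw_queue):
    # Postings are first collected in an extraction queue; a periodic
    # collective run then moves them to the BW delta queue in order.
    extraction_queue = list(postings)   # collection step
    for rec in extraction_queue:        # collective run
        bw_queue.push(rec)


postings = ["doc1", "doc2", "doc3"]
q_direct, q_queued = DeltaQueue(), DeltaQueue()
direct_delta(postings, q_direct)
queued_delta(postings, q_queued)
# Both modes preserve the creation order, as stated above.
assert q_direct.records == q_queued.records == ["doc1", "doc2", "doc3"]
```

Unserialized V3 Update would differ precisely in that last guarantee: the records may arrive in the delta queue out of creation order.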

156.

What Is I_t_select?

Answer»

Table with the selection criteria stored in the Scheduler of the SAP BW. This is not normally required.

157.

What Is I_updmode?

Answer»

Transfer mode as requested in the Scheduler of the BW. Not normally required.

158.

What Is C_t_data?

Answer»

Table with the data received from the API in the format of source structure entered in table ROIS (field ROIS-STRUCTURE).

159.

What Is I_t_fields?

Answer»

List of the transfer structure fields. Only these fields are actually filled in the data table and can be sensibly addressed in the program.

160.

What Is I_isource?

Answer»

Name of the InfoSource.

161.

How Does The Time Dependent Work For Bw Objects?

Answer»

Time Dependent attributes have values that are valid for a specific range of dates (i.e., valid period).
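As a rough illustration, a time-dependent attribute can be modeled as a list of validity intervals. The records and field names below are hypothetical, loosely mirroring the date-from/date-to columns of a time-dependent master-data table:

```python
from datetime import date

# Hypothetical master-data records for a time-dependent attribute:
# each value carries a validity interval.
sales_region = [
    {"from": date(2000, 1, 1), "to": date(2003, 12, 31), "value": "NORTH"},
    {"from": date(2004, 1, 1), "to": date(9999, 12, 31), "value": "SOUTH"},
]

def attribute_on(key_date, intervals):
    """Return the attribute value whose validity interval covers key_date."""
    for rec in intervals:
        if rec["from"] <= key_date <= rec["to"]:
            return rec["value"]
    return None

assert attribute_on(date(2002, 6, 1), sales_region) == "NORTH"
assert attribute_on(date(2010, 6, 1), sales_region) == "SOUTH"
```

In BW the key date chosen in the query plays the role of `key_date` here: the same customer can report under different attribute values depending on the date.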

162.

What Is Datamining Concept?

Answer»

Process of finding hidden patterns and relationships in the data.

With typical data analysis requirements fulfilled by data warehouses, business users have an idea of what information they want to see.

Some opportunities embody data discovery requirements, where the business user wants to correlate sets of data to determine anomalies or patterns in the data.

163.

What Is Entity Relationship Model In Data Modeling?

Answer»

An ERD (Entity Relationship Diagram) can be used to generate a physical database.

It is a high level data model.

It is a schematic that shows all the entities within the scope of integration and the direct relationship between the entities.

164.

What Is The Difference Between Extract Structure And Datasource?

Answer»

A DataSource defines the data from a source system, while an extract structure contains the replicated data of the DataSource and is where we define extract rules and transfer rules.

  • Extract Structure is a record layout of InfoObjects.
  • Extract Structure is created in the source system.

165.

How To Load Data From One Infocube To Another Infocube?

Answer»

Through DataMarts data can be loaded from one InfoCube to another InfoCube.

166.

What Is The Difference Between Table View And Infoset Query?

Answer»

An InfoSet Query is a query using flat tables while a view table is a view of one or more existing tables. Parts of these tables are hidden, and others remain visible.

167.

What Is The Common Method Of Finding The Tables Used In Any R/3 Extraction?

Answer»

By using the transaction LISTSCHEMA we can navigate the tables of an InfoCube.

168.

What Is Db Connect And Where Is It Used?

Answer»

DB Connect is a database connection facility. It is used to connect an external database system directly to BW so that data from that database can be extracted into BW.

169.

What Are Secondary Indexes With Respect To Infocubes?

Answer»

It is an Index created in addition to the primary index of the InfoCube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

170.

What Is The Function Of 'reconstruction' Tab In An Infocube?

Answer»

It reconstructs the deleted requests from the InfoCube. If a request has been deleted and we want the data records of that request to be added to the InfoCube, we can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the InfoCube.

171.

How Many Hierarchy Levels Can Be Created For A Characteristic Infoobject?

Answer»

Maximum of 98 levels.

172.

When Given A Choice Between Using An Infocube And A Multiprovider, What Factors To Consider Before Making A Decision?

Answer»

One would have to see if the InfoCubes are used individually. If these InfoCubes are often used individually, then it is better to go for a MultiProvider with many InfoCubes, since the reporting would be faster for an individual InfoCube query rather than for a big InfoCube with a lot of data.

173.

What Happens When You Load Transaction Data Without Loading Master Data?

Answer»

The transaction data gets loaded and the master data fields remain blank.

174.

When We Collapse An Infocube, Is The Consolidated Data Stored In The Same Infocube Or Is It Stored In The New Infocube?

Answer»

When the cube is collapsed the data is stored in the same cube: the data is stored in the F table before compression and in the E table after compression. These two tables belong to the same cube.
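A minimal sketch of this F-to-E movement, using a toy fact table rather than actual BW internals: compression drops the request ID and aggregates the key figures over the remaining keys.

```python
from collections import defaultdict

# Toy F-table rows: each carries a request ID alongside the dimension
# key and key figure. (Invented fields, for illustration only.)
f_table = [
    {"request": 1, "material": "M1", "qty": 10},
    {"request": 2, "material": "M1", "qty": 5},
    {"request": 2, "material": "M2", "qty": 7},
]

def compress(rows):
    """Build the E table: discard the request ID, aggregate key figures."""
    e_table = defaultdict(int)
    for row in rows:
        e_table[row["material"]] += row["qty"]   # request ID is dropped
    return dict(e_table)

assert compress(f_table) == {"M1": 15, "M2": 7}
```

Note how two F-table rows for M1 (from requests 1 and 2) collapse into one E-table row, which is why compressed requests can no longer be deleted individually.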

175.

What Is The Function Of 'selective Deletion' Tab In The Manage Contents Of An Infocube?

Answer»

It allows us to select a particular value of a particular field and delete its contents.

176.

When An Ods Is In 'overwrite' Mode, Does Uploading The Same Data Again And Again Create New Entries In The Change Log Each Time Data Is Uploaded?

Answer»

No.

177.

When Is Idoc Data Transfer Used?

Answer»

IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems or between SAP systems and external systems based on an EDI interface. IDocs support a limited file size of 1000 bytes, so IDocs are not used when loading data into the PSA, since data there is more detailed. IDoc transfer is used when the file size is less than 1000 bytes.

178.

What Is The Importance Of The Table Roidocprms?

Answer»

This table contains the IDoc parameters for a source system: details of the data transfer such as the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in transaction SBIW, i.e., the contents of this table can be changed.

179.

What Does A Data Idoc Contain?

Answer»

A data IDoc contains:

  • Control record – contains administrator information such as receiver, sender and client.
  • Data record
  • Status record – describes the status of the record, e.g., modified.

180.

Can You Add Programs In The Scheduler?

Answer»

Yes. Through event handling.

181.

What Is The Importance Of 0requid?

Answer»

It is the InfoObject for Request ID. 0REQUID enables BW to distinguish between data from different load requests.

182.

What Is The Maximum Number Of Key Fields That You Can Have In An Ods Object?

Answer»

16

183.

What Internally Happens When Bw Objects Like Infoobject, Infocube Or Ods Are Created And Activated?

Answer»

When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

184.

What Are The Inputs For An Infoset?

Answer»

The inputs for an InfoSet are ODS objects and InfoObjects (with master data or text).

185.

What Are The Delta Options Available When You Load From Flat File?

Answer»

The three options for delta management with flat files:

  • Full Upload
  • New Status for Changed Records (ODS object only)
  • Additive Delta (ODS object and InfoCube)

186.

What Are The Steps To Extract Data From R/3?

Answer»

187.

What Are Bw Statistics And What Is Its Use?

Answer»

They are a group of Business Content InfoCubes which are used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP, and warehouse management.

188.

How Do You Transform Open Hub Data?

Answer»

Using a BAdI we can transform Open Hub data according to the destination requirement.

189.

What Is Open Hub Service?

Answer»

The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.

190.

Can An Infoobject Be An Infoprovider, How And Why?

Answer»

Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.

191.

What Are Conversion Routines For Units And Currencies In The Update Rule?

Answer»

Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units from pounds to kilos.

192.

How Would You Optimize The Dimensions?

Answer»

We should define as many dimensions as possible, taking care that no single dimension exceeds 20% of the fact table size.
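The 20% guideline can be checked with a simple row-count ratio. A minimal sketch, with invented dimension names and row counts:

```python
def oversized_dimensions(dim_rows, fact_rows, threshold=0.20):
    """Flag dimensions whose row count exceeds `threshold` of the fact table."""
    return [name for name, rows in dim_rows.items()
            if rows > threshold * fact_rows]

# Hypothetical row counts for three dimension tables of one InfoCube.
dims = {"customer": 250_000, "material": 40_000, "time": 120}

# Only the customer dimension crosses 20% of a 1,000,000-row fact table.
assert oversized_dimensions(dims, fact_rows=1_000_000) == ["customer"]
```

In practice a dimension that grows this large relative to the fact table is a candidate for the line-item (degenerate) dimension setting.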

193.

What Are The Options Available In Transfer Rule?

Answer»

194.

How Many Extra Partitions Are Created And Why?

Answer»

Two partitions are created: one for dates before the begin date and one for dates after the end date.

195.

What Is Table Partitioning And What Are The Benefits Of Partitioning In An Infocube?

Answer»

It is the method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only at 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster, as data is stored in the relevant partitions; table maintenance also becomes easier. Oracle, Informix, and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server, and IBM DB2/400 do not.
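A rough sketch of partition assignment by 0CALMONTH, which also shows the two extra boundary partitions from the previous answer. The range values are invented for the illustration:

```python
def partition_for(calmonth, begin="200401", end="200412"):
    """Assign a row to a partition by its 0CALMONTH value (YYYYMM string).

    Rows outside the defined range fall into the two extra boundary
    partitions that the system creates automatically.
    """
    if calmonth < begin:
        return "BEFORE_RANGE"
    if calmonth > end:
        return "AFTER_RANGE"
    return f"P_{calmonth}"

assert partition_for("200312") == "BEFORE_RANGE"
assert partition_for("200406") == "P_200406"
assert partition_for("200501") == "AFTER_RANGE"
```

A query restricted to one month then only has to read the matching partition, which is the performance benefit the answer describes.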

196.

What Is Rollup?

Answer»

This is used to load new Data Packages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.

197.

What Is Compression?

Answer»

It is a process used to delete the request IDs, which saves space; during compression the data moves from the F table to the E table.

198.

How Do Start Routine And Return Table Synchronize With Each Other?

Answer»

The return table is used to return the values following the execution of the start routine.

199.

What Are Return Tables?

Answer»

When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can get the expense per employee.
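The telephone-expense example can be sketched as follows. The function and field names are invented, standing in for an update routine that fills a return table instead of producing one result value:

```python
def update_with_return_table(record, headcount):
    """Split one cost-center expense record into one record per employee,
    the way an update routine with a return table expands a single input row."""
    n = headcount[record["cost_center"]]
    per_employee = record["expense"] / n
    return [
        {"cost_center": record["cost_center"],
         "employee": emp_no,
         "expense": per_employee}
        for emp_no in range(1, n + 1)
    ]

# One input record becomes four output records, 100.0 each.
rows = update_with_return_table(
    {"cost_center": "CC01", "expense": 400.0}, {"CC01": 4})
assert len(rows) == 4
assert all(r["expense"] == 100.0 for r in rows)
```

The key contrast with a normal update routine is the return type: a list of records rather than a single value.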

200.

Explain How You Used Start Routines In Your Project?

Answer»

Start routines are used for mass processing of records. In a start routine, all the records of the data package are available for processing, so we can process all these records together. In one scenario, we wanted to apply a size percentage to the forecast data. For example, if material M1 is forecasted at, say, 100 in May, then after applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%), we wanted to have 4 records against the one single record coming in the InfoPackage. This is achieved in the start routine.
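A sketch of that size-percentage split, with invented field names; in a real BW start routine this would be ABAP looping over the DATA_PACKAGE internal table:

```python
# Size percentages taken from the scenario above.
SIZE_SPLIT = {"S": 0.20, "M": 0.40, "L": 0.20, "XL": 0.20}

def start_routine(data_package):
    """Mass-process the whole data package: expand each forecast record
    into one record per size, applying the size percentages."""
    result = []
    for rec in data_package:
        for size, pct in SIZE_SPLIT.items():
            result.append({"material": rec["material"],
                           "month": rec["month"],
                           "size": size,
                           "forecast": rec["forecast"] * pct})
    return result

out = start_routine([{"material": "M1", "month": "MAY", "forecast": 100}])
assert len(out) == 4                                   # 1 record became 4
assert round(sum(r["forecast"] for r in out), 6) == 100  # total preserved
```

Because the start routine sees the whole package at once, this expansion happens in one pass rather than record by record.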