Interview Solutions
This section offers curated SAP Data Services interview questions and answers to sharpen your knowledge and support exam preparation. Choose a topic below to get started.
| 1. |
Is File Format In Data Services Type Of A Data Store? |
|
Answer» No, file format is not a datastore type. |
|
| 2. |
How Do You Manage Slowly Changing Dimensions? What Are The Fields Required In Managing Different Types Of SCD? |
Answer»
|
|
| 3. |
What Is Slowly Changing Dimension? |
|
Answer» SCDs are dimensions that have data that changes over time. |
|
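For instance, a Type 2 SCD keeps history by closing the current row and inserting a new version. A minimal sketch in Python (the column names and the in-memory dimension table are hypothetical illustrations, not Data Services objects):

```python
from datetime import date

# A Type 2 SCD: each change inserts a new row and closes the old one.
dim_customer = [
    {"id": 1, "city": "Berlin", "valid_from": date(2020, 1, 1),
     "valid_to": None, "current": True},
]

def apply_scd2(dim, key, new_city, change_date):
    """Close the current row for `key` and append the new version."""
    for row in dim:
        if row["id"] == key and row["current"]:
            row["valid_to"] = change_date
            row["current"] = False
    dim.append({"id": key, "city": new_city, "valid_from": change_date,
                "valid_to": None, "current": True})

apply_scd2(dim_customer, 1, "Munich", date(2024, 6, 1))
print([r["city"] for r in dim_customer if r["current"]])  # ['Munich']
```

This is why Type 2 tables typically carry valid-from/valid-to dates and a current-row flag: old facts can still join to the version of the dimension row that was valid at the time.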
| 4. |
Suppose You Have Updated The Version Of Data Services Software. Is It Required To Update The Repository Version? |
|
Answer» If you update the version of SAP Data Services, you also need to update the version of the repository. The following points should be considered when migrating a central repository to an upgraded version: Point 1: Take a backup of all central repository tables and objects. Point 2: To maintain versions of objects in Data Services, maintain a central repository for each version. Create a new central repository with the new version of the Data Services software and copy all objects to this repository. Point 3: It is always recommended that when you install a new version of Data Services, you upgrade your central repository to the new version of objects. Point 4: Also upgrade your local repository to the same version, as different versions of the central and local repository may not work together. Point 5: Before migrating the central repository, check in all the objects. Since you don't upgrade the central and local repository simultaneously, all objects need to be checked in first; once your central repository is upgraded to the new version, you will not be able to check in objects from a local repository that has an older version of Data Services. |
|
| 5. |
What Are The Different Types Of Embedded Data Flow? |
|
Answer» One Input: Embedded data flow is added at the end of a data flow. One Output: Embedded data flow is added at the beginning of a data flow. No input or output: Replicate an existing data flow. |
|
| 6. |
What Is An Embedded Data Flow? |
|
Answer» Embedded data flows are data flows that are called from another data flow in the design. An embedded data flow can contain multiple sources and targets, but only one input or output passes data to the main data flow. |
|
| 7. |
What Is The Use Of Query Transformation? |
|
Answer» This is the most common transformation used in Data Services, and you can perform the below functions: |
|
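The Query transform plays roughly the role of a SQL SELECT inside a data flow: mapping columns, filtering rows, and joining sources. A rough analogy in plain Python (the field names and the filter condition are hypothetical examples):

```python
# Input rows arriving at a Query-style transform.
rows = [
    {"id": 1, "name": "alice", "amount": 50},
    {"id": 2, "name": "bob", "amount": 200},
]

# Column mapping plus a WHERE-style filter, like a Query transform's
# output schema and filter tab.
output = [
    {"customer": r["name"].title(), "amount": r["amount"]}
    for r in rows
    if r["amount"] > 100
]
print(output)  # [{'customer': 'Bob', 'amount': 200}]
```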
| 8. |
What Are The Common Transformations That Are Available In Data Services? |
Answer»
|
|
| 9. |
What Is A Transformation In Data Services? |
|
Answer» Transforms are used to manipulate data sets, taking them as inputs and creating one or multiple outputs. There are various transforms that can be used in Data Services. |
|
| 10. |
What Is The Use Of Conditionals? |
|
Answer» You can also add conditionals to a workflow. This allows you to implement If/Then/Else logic in workflows. |
|
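The branching behaviour of a conditional can be sketched in plain Python (the boolean expression and the work flow names are hypothetical placeholders, not Data Services syntax):

```python
def run_conditional(recovery_needed: bool) -> str:
    """Mimics a Data Services conditional: one boolean expression,
    a Then branch, and an optional Else branch."""
    if recovery_needed:
        return "run recovery work flow"      # Then branch
    return "run normal load work flow"       # Else branch

print(run_conditional(False))  # run normal load work flow
```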
| 11. |
Give An Example Of Work Flow In Production? |
|
Answer» Suppose there is a fact table that you want to update, and you have created a data flow with the transformation. If you want to move the data from the source system, you have to check the last modification date for the fact table so that you extract only the rows added after the last update. To achieve this, you create a script that determines the last update date and then passes it as an input parameter to the data flow. You also have to check whether the data connection to the particular fact table is active. If it is not active, you need to set up a catch block that automatically sends an email to the administrator to notify them about the problem. |
|
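The incremental-load logic above can be sketched with Python's built-in `sqlite3` (the table name, sample rows, and hard-coded last-update date are hypothetical; in a real job the date would come from a control table and the catch block would send the email):

```python
import sqlite3

# A stand-in "source system" fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (id INTEGER, amount REAL, updated TEXT)")
conn.executemany(
    "INSERT INTO fact_sales VALUES (?, ?, ?)",
    [(1, 10.0, "2024-01-01"), (2, 20.0, "2024-02-01"), (3, 30.0, "2024-03-01")],
)

last_update = "2024-01-15"  # determined by a script, passed as a parameter

try:
    # Extract only the rows added or changed after the last load.
    rows = conn.execute(
        "SELECT id, amount FROM fact_sales WHERE updated > ?", (last_update,)
    ).fetchall()
    print(rows)  # [(2, 20.0), (3, 30.0)]
except sqlite3.Error:
    # A real catch block would email the administrator here.
    print("connection failed: notify administrator")
```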
| 12. |
Is It Possible That A Workflow Calls Itself In A Data Services Job? |
|
Answer» Yes. |
|
| 13. |
What Is The Use Of Data Flow In Ds? |
|
Answer» A data flow is used to extract, transform and load data from the source to the target system. All the transformations, loading and formatting occur in a data flow. |
|
| 14. |
You Want To Extract Data From An Excel Workbook. How Can You Do This? |
|
Answer» You can use a Microsoft Excel workbook as a data source using file formats in Data Services. The Excel workbook should be available on a Windows or Unix file system. |
|
| 15. |
What Are The Different Types Of Files Can Be Used As Source And Target File Format? |
| Answer» | |
| 16. |
You Want To Import Application Metadata Into The Repository. How Can You Do This? |
|
Answer» An Adapter Datastore allows you to import application metadata into the repository. You can also access application metadata and move batch and real-time data between different applications and software. |
|
| 17. |
What Is Linked Data Store? Explain With An Example? |
|
Answer» Various database vendors provide only a one-way communication path from one database to another. These paths are known as database links. In SQL Server, a linked server allows a one-way communication path from one database to another. Example: Consider a local database server named “Product” that stores a database link to access information on a remote database server called “Customer”. Users connected to the remote database server Customer can’t use the same link to access data in the database server Product; users connected to “Customer” need a separate link in the data dictionary of their server to access the data in the Product database server. This communication path between two databases is called a database link, and Datastores created between these linked database relationships are known as linked Datastores. It is also possible to connect a Datastore to another Datastore and import an external database link as an option of the Datastore. |
|
| 18. |
How Do You Improve The Performance Of Data Flows Using Memory Datastore? |
|
Answer» You can create a Datastore using memory as the database type. Memory Datastores are used to improve the performance of data flows in real-time jobs, as they store the data in memory to facilitate quick access and don’t require going back to the original data source. A memory Datastore is used to store memory table schemas in the repository. These memory tables get data from tables in a relational database or from hierarchical data files like XML messages and IDocs. The memory tables remain alive only while the job executes, and data in memory tables can’t be shared between different real-time jobs. |
|
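The performance idea behind memory tables can be sketched in Python: load a small lookup table from the source once, then serve every subsequent lookup from memory (the table and column names here are hypothetical):

```python
import sqlite3

# A stand-in "original data source" holding a small dimension table.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE country (code TEXT, name TEXT)")
source.executemany("INSERT INTO country VALUES (?, ?)",
                   [("DE", "Germany"), ("FR", "France")])

# "Memory table": a plain dict that lives for the duration of the job,
# so lookups avoid a round-trip to the source for every record.
memory_table = dict(source.execute("SELECT code, name FROM country"))

print(memory_table["DE"])  # Germany
```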
| 19. |
How Do You Check Existing Objects In Ds Repository? |
|
Answer» In the Object Library in DS Designer. |
|
| 20. |
What Is Sap Data Services Designer? What Are Main Etl Functions That Can Be Performed In Designer Tool? |
|
Answer» It is a developer tool used to create objects consisting of data mappings, transformations and logic. It is GUI-based and works as the designer for Data Services. You can create various objects using Data Services Designer, such as projects, jobs, work flows, data flows, mappings, transformations, etc. |
|
| 21. |
How Do You Check The Execution History Of A Job Or A Data Flow? |
|
Answer» DS Management Console → Job Execution History |
|
| 22. |
What Is The Template Table? |
|
Answer» In Data Services, you can create a template table to move to the target system that has the same structure and data types as the source table. |
|
| 23. |
You Want To Generate Quality Reports, Data Validation, And Documentation In The DS System. Where Can You See This? |
|
Answer» Data Services Management Console |
|
| 24. |
How Do You Manage Object Versions In Bods? |
|
Answer» The central repository is used to control version management of objects and is used for multi-user development. The central repository stores all the versions of an application object, so it allows you to move back to previous versions. |
|
| 25. |
You Want To Set Up A New Repository In Bods. How Do You Create It? |
|
Answer» To create a BODS repository you need a database installed. You can use SQL Server, Oracle, MySQL, SAP HANA, Sybase, etc. You have to create the required users in the database while installing BODS and creating repositories; these users are needed to log in to the different servers, such as the CMS server and the Audit server. To create a new repository, you have to log in to the Repository Manager. |
|
| 26. |
What Is Single Object And Reusable Objects In Data Services? |
|
Answer» Reusable Objects: Most of the objects stored in the repository can be reused. When a reusable object is defined and saved in the local repository, you can reuse it by creating calls to the definition. Each reusable object has only one definition, and all calls to that object refer to that definition, so if the definition is changed in one place, you are changing the object definition in all the places where that object appears. An object library contains the object definitions, and when an object is dragged and dropped from the library, a new reference to the existing object is created. Single Use Objects: All objects defined specifically for one job or data flow are called single use objects, for example a transformation specific to one data load. |
|
| 27. |
What Is A Repository In Bods? What Are The Different Types Of Repositories In Bods? |
|
Answer» A repository is used to store the metadata of objects used in BO Data Services. Each repository should be registered in the Central Management Console (CMC) and is linked with one or many job servers, which are responsible for executing the jobs you create. There are three types of repositories: Local Repository: used to store the metadata of all objects created in Data Services Designer, like projects, jobs, data flows, work flows, etc. Central Repository: used to control version management of objects and for multi-user development; it stores all versions of an application object, so it allows you to move back to previous versions. Profiler Repository: used to manage all the metadata related to profiler tasks performed in SAP BODS Designer. In addition, the CMS repository stores metadata of all the tasks performed in the CMC on the BI platform, and the Information Steward repository stores all the metadata of profiling tasks and objects created in Information Steward. |
|
| 28. |
What Is Sap Data Services? |
|
Answer» SAP BO Data Services is an ETL tool used for data integration, data quality, data profiling and data processing, and it allows you to integrate and transform trusted data into a data warehouse system for analytical reporting. BO Data Services consists of a UI development interface, a metadata repository, data connectivity to source and target systems, and a management console for scheduling jobs. |
|
| 29. |
Why Do We Need A Staging Area In An Etl Process? |
|
Answer» A staging area is required during an ETL load for various reasons: Source systems are only available for a specific period of time to extract data, and this window may be shorter than the total data load time, so a staging area allows you to extract the data from the source system and hold it before the time slot ends. A staging area is also required when you want to bring data from multiple data sources together, or join two or more systems; for example, you cannot run a single SQL query joining two tables from two physically different databases. Data extraction time slots for different systems vary by time zone and operational hours. Data extracted from source systems can be reused in multiple data warehouse systems, operational data stores, etc. Finally, during ETL you can perform complex transformations, which require extra area to store the data. |
|
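The "join two physically different databases" point can be sketched with `sqlite3`: land both extracts in one staging database, then join locally (the system names, tables and rows are hypothetical):

```python
import sqlite3

# Two physically separate "source systems".
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme')")

erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
erp.execute("INSERT INTO orders VALUES (1, 99.5)")

# Staging database: land both extracts, then the join becomes a local query.
staging = sqlite3.connect(":memory:")
staging.execute("CREATE TABLE stg_customers (id INTEGER, name TEXT)")
staging.execute("CREATE TABLE stg_orders (customer_id INTEGER, amount REAL)")
staging.executemany("INSERT INTO stg_customers VALUES (?, ?)",
                    crm.execute("SELECT id, name FROM customers"))
staging.executemany("INSERT INTO stg_orders VALUES (?, ?)",
                    erp.execute("SELECT customer_id, amount FROM orders"))

row = staging.execute(
    "SELECT c.name, o.amount FROM stg_customers c "
    "JOIN stg_orders o ON o.customer_id = c.id").fetchone()
print(row)  # ('Acme', 99.5)
```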
| 30. |
What Is The Difference Between Oltp And A Data Warehouse? |
|
Answer» Indexes: An OLTP system has only a few indexes, while an OLAP system has many indexes for performance optimization. Joins: In an OLTP system there are a large number of joins and the data is normalized, whereas in an OLAP system there are fewer joins and the data is de-normalized. Aggregation: In an OLTP system data is not aggregated, while in an OLAP database more aggregations are used. |
|
| 31. |
What Are The Different Strategies You Can Use To Avoid Duplicate Rows Of Data When Re-loading A Job? |
| Answer» | |
| 32. |
Give Two Examples Of How The Data Cleanse Transform Can Enhance (append) Data? |
|
Answer» The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs. |
|
| 33. |
Describe When To Use The Usa Regulatory And Global Address Cleanse Transforms? |
|
Answer» Use the USA Regulatory transform if USPS certification and/or additional options such as DPV and Geocode are required. Global Address Cleanse should be utilized when processing multi-country data. |
|
| 34. |
A Project Requires The Parsing Of Names Into Given And Family, Validating Address Information, And Finding Duplicates Across Several Systems. Name The Transforms Needed And The Task They Will Perform? |
| Answer» | |
| 35. |
Give Some Examples Of How Data Can Be Enhanced Through The Data Cleanse Transform, And Describe The Benefit Of Those Enhancements? |
Answer»
|
|
| 36. |
List The Data Quality Transforms? |
| Answer» | |
| 37. |
List The Data Integrator Transforms? |
| Answer» | |
| 38. |
Name The Transform That You Would Use To Combine Incoming Data Sets To Produce A Single Output Data Set With The Same Schema As The Input Data Sets? |
|
Answer» The Merge transform. |
|
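The Merge transform behaves like a SQL UNION ALL: it stacks input data sets that share one schema into a single output. A minimal analogy in Python (the record layout is hypothetical):

```python
from itertools import chain

# Two input data sets with the same schema (id, name).
ds_east = [(1, "Alice"), (2, "Bob")]
ds_west = [(3, "Carol")]

# Merge: concatenate the rows into one output data set, schema unchanged.
merged = list(chain(ds_east, ds_west))
print(merged)  # [(1, 'Alice'), (2, 'Bob'), (3, 'Carol')]
```

Note that, like UNION ALL, this does not remove duplicate rows; that would be a separate de-duplication step.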
| 39. |
List The Three Types Of Input Formats Accepted By The Address Cleanse Transform? |
|
Answer» Discrete, multiline, and hybrid. |
|
| 40. |
What Is Repository? List The Types Of Repositories? |
|
Answer» The Data Services repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are three types of repositories. |
|
| 42. |
What Is The Use Of Compact Repository? |
|
Answer» It removes redundant and obsolete objects from the repository tables. |
|
| 43. |
How Many Types Of Data Stores Are Present In Data Services? |
|
Answer» Three.
|
| 44. |
Arrange These Objects In Order By Their Hierarchy: Dataflow, Job, Project, And Workflow? |
|
Answer» Project, Job, Workflow, Dataflow. |
|
| 45. |
Define The Terms Job, Workflow, And Dataflow? |
Answer»
|
|
| 46. |
Define Data Services Components? |
|
Answer» Data Services includes the following standard components: |
|
| 47. |
What Is The Use Of Businessobjects Data Services? |
|
Answer» BusinessObjects Data Services provides a graphical interface that allows you to easily create jobs that extract data from heterogeneous sources, transform that data to meet the business requirements of your organization, and load the data into a single location. |
|