This section presents a set of ETL and Informatica interview questions with brief answers to sharpen your knowledge and support exam and interview preparation.
| 1. |
What Are The Various Tools? - Name A Few. |
|
Answer» - Abinitio |
|
| 2. |
Where Do We Use Connected And Unconnected Lookups? |
|
Answer» If only one return port is needed, we can go for an unconnected lookup; more than one return port is not possible with an unconnected lookup. If more than one return port is needed, go for a connected lookup. |
|
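The rule above can be sketched in plain Python (illustrative only; these helper names are not Informatica APIs): an unconnected lookup behaves like a function call that returns a single value, while a connected lookup sits in the pipeline and can add several columns at once.

```python
# Hypothetical sketch of the two lookup styles (not Informatica code).
LOOKUP_TABLE = {
    101: {"name": "Alice", "dept": "Sales"},
    102: {"name": "Bob", "dept": "HR"},
}

def unconnected_lookup(key):
    """Called like an expression function; returns exactly ONE value."""
    row = LOOKUP_TABLE.get(key)
    return row["name"] if row else None

def connected_lookup(record):
    """Sits in the pipeline; can add ANY number of looked-up columns."""
    row = LOOKUP_TABLE.get(record["emp_id"], {})
    record.update(row)   # merges name AND dept into the flowing record
    return record

one_value = unconnected_lookup(101)                 # single return port
many_values = connected_lookup({"emp_id": 102})     # multiple ports
```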
| 3. |
How To Fine Tune The Mappings? |
|
Answer» 1. Use the filter condition in the Source Qualifier instead of using a Filter transformation. |
|
| 4. |
Give Some Etl Tool Functionalities? |
|
Answer» While the selection of a database and a hardware platform is a must, the selection of an ETL tool is highly recommended, but it's not a must. When you evaluate ETL tools, it pays to look for the following characteristics: Functional capability: This includes both the 'transformation' piece and the 'cleansing' piece. In general, the typical ETL tools are either geared towards having strong transformation capabilities or having strong cleansing capabilities, but they are seldom very strong in both. As a result, if you know your data is going to be dirty coming in, make sure your ETL tool has strong cleansing capabilities. If you know there are going to be a lot of different data transformations, it then makes sense to pick a tool that is strong in transformation. Ability to read directly from your data source: For each organization, there is a different set of data sources. Make sure the ETL tool you select can connect directly to your source data. Metadata support: The ETL tool plays a key role in your metadata because it maps the source data to the destination, which is an important piece of the metadata. In fact, some organizations have come to rely on the documentation of their ETL tool as their metadata source. As a result, it is very important to select an ETL tool that works with your overall metadata strategy. |
|
| 5. |
Give Some Popular Tools? |
|
Answer» Popular Tools: |
|
| 6. |
What Are Snapshots? What Are Materialized Views & Where Do We Use Them? What Does A Materialized View Do? |
|
Answer» A materialized view is a view in which the data is also stored in a temporary table. That is, with the ordinary view concept in a DB we only store the query, and when we call the view it extracts the data from the DB; but in a materialized view the data is stored in temporary tables. |
|
| 7. |
How To Determine What Records To Extract? |
|
Answer» When addressing a table, some dimension key must reflect the need for a record to get extracted. Mostly it will be from the time dimension (e.g. date >= 1st of current month) or a transaction flag (e.g. Order Invoiced Stat). Foolproof would be adding an archive flag to the record, which gets reset when the record changes. |
|
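As a small sketch of the time-dimension rule above (date >= 1st of the current month), assuming each record carries a plain `date` field:

```python
from datetime import date

def records_to_extract(records, today=None):
    """Keep only records dated on or after the 1st of the current month."""
    today = today or date.today()
    month_start = today.replace(day=1)
    return [r for r in records if r["date"] >= month_start]

rows = [
    {"id": 1, "date": date(2024, 5, 28)},   # previous month: skipped
    {"id": 2, "date": date(2024, 6, 3)},    # current month: extracted
]
current = records_to_extract(rows, today=date(2024, 6, 15))
```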
| 8. |
What Are The Various Transformation Available? |
Answer» |
|
| 9. |
What Are Parameter Files? Where Do We Use Them? |
|
Answer» A parameter file defines the values for the parameters and variables used in a workflow, worklet, or session. |
|
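As a sketch, a parameter file is a plain text file with section headers naming the folder, workflow, and session, followed by `$$name=value` lines; the folder, workflow, session, and parameter names below are made up for illustration:

```
[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$LastExtractionDate=2024-06-01
$$TargetSchema=DW
```

The file path is then supplied in the session or workflow properties, or at run time via the `-paramfile` option of `pmcmd`.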
| 10. |
What Are The Different Lookup Methods Used In Informatica? |
|
Answer» 1. Connected lookup: receives input from the pipeline and sends output to the pipeline, and can return any number of values; it does not contain a return port. 2. Unconnected lookup: can return only one column; it contains a return port. |
|
| 11. |
What Are Active Transformation / Passive Transformations? |
|
Answer» An active transformation can change the number of rows that pass through it (decrease or increase rows). |
|
| 12. |
What Is The Difference Between Etl Tool And Olap Tools? |
|
Answer» An ETL tool is meant for extracting data from legacy systems and loading it into a specified database, with some process of cleansing the data. |
|
| 13. |
What Are The Various Test Procedures Used To Check Whether The Data Is Loaded In The Backend, The Performance Of The Mapping, And The Quality Of The Data Loaded In Informatica? |
|
Answer» The best procedure is to take the help of the debugger, where we monitor each and every process of the mappings and how data is loading based on condition breaks. |
|
| 14. |
What Is The Difference Between Joiner And Lookup ? |
|
Answer» A Joiner is used to join two or more tables to retrieve data from the tables (just like joins in SQL). |
|
| 15. |
How Do We Extract Sap Data Using Informatica? What Is Abap? What Are Idocs? |
|
Answer» SAP data can be loaded into Informatica in the form of flat files. |
|
| 16. |
Let's Suppose We Have Some 10,000 Odd Records In The Source System And We Load Them Into The Target. How Do We Ensure That All 10,000 Records Loaded To The Target Don't Contain Any Garbage Values? |
|
Answer» We can do LTRIM and RTRIM in the Expression transformation, or check for NULL, and then insert the data. |
|
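The trim-and-null check described above can be sketched in Python (a stand-in for the Expression transformation logic, not Informatica syntax):

```python
def cleanse(value):
    """Mimic LTRIM/RTRIM in an Expression, plus a NULL check."""
    if value is None:
        return None              # NULL: caller decides to reject or default
    return value.strip()         # trims both leading and trailing spaces

raw = ["  Alice ", None, "Bob  "]
clean = [cleanse(v) for v in raw]
loadable = [v for v in clean if v is not None]   # rows safe to insert
```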
| 17. |
If A Flat File Contains 1000 Records How Can I Get First And Last Records Only? |
|
Answer» By using an Aggregator transformation with the FIRST and LAST functions we can get the first and last record. |
|
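Outside Informatica the same selection is trivial; a Python sketch with 1000 stand-in records:

```python
# Stand-in for 1000 flat-file records
records = [f"row_{i}" for i in range(1, 1001)]

# What the Aggregator's FIRST() and LAST() functions would hand back
first_record = records[0]
last_record = records[-1]
```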
| 18. |
What Are The Modules In Power Mart? |
|
Answer» 1. PowerMart Designer |
|
| 19. |
How Do You Calculate Fact Table Granularity? |
|
Answer» Granularity is the level of detail that the fact table describes. For example, if we are doing time analysis, the granularity may be day based, month based, or year based. |
|
| 20. |
When Do We Analyze The Tables? How Do We Do It? |
|
Answer» The ANALYZE statement allows you to validate and compute statistics for an index, table, or cluster. These statistics are used by the cost-based optimizer when it calculates the most efficient plan for retrieval. In addition to its role in statement optimization, ANALYZE also helps in validating object structures and in managing space in your system. You can choose the following operations: COMPUTE, ESTIMATE, and DELETE. Early versions of Oracle7 produced unpredictable results when the ESTIMATE operation was used. It is best to compute your statistics. |
|
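In Oracle the statement reads, for example, `ANALYZE TABLE emp COMPUTE STATISTICS`. As a small runnable illustration of the same idea, SQLite (bundled with Python) also has an `ANALYZE` command that stores optimizer statistics in the `sqlite_stat1` catalog table; the table and index names here are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO emp (name) VALUES (?)", [("a",), ("b",), ("c",)])
con.execute("CREATE INDEX emp_name_idx ON emp (name)")

con.execute("ANALYZE")   # gather statistics for the query optimizer

# ANALYZE materializes its results in the sqlite_stat1 catalog table
stat_table_count = con.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 'sqlite_stat1'"
).fetchone()[0]
```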
| 21. |
Compare Etl & Manual Development? |
|
Answer» ETL - the process of extracting data from multiple sources (e.g. flat files, XML, COBOL, SAP, etc.) is much simpler with the help of tools. These are some differences between manual and ETL development. |
|
| 22. |
What Are The Various Tools? |
|
Answer» - Cognos Decision Stream |
|
| 23. |
What Is Latest Version Of Power Center / Power Mart? |
|
Answer» The latest version is 7.2 |
|
| 24. |
What Is Ods (Operational Data Store)? |
|
Answer» ODS - Operational Data Store. |
|
| 25. |
What Are The Different Versions Of Informatica? |
|
Answer» Here are some popular versions of Informatica. |
|
| 26. |
What Are The Various Methods Of Getting Incremental Records Or Delta Records From The Source Systems? |
|
Answer» One foolproof method is to maintain a field called 'Last Extraction Date' and then impose a condition in the code saying 'current_extraction_date > last_extraction_date'. |
|
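A minimal sketch of that delta condition, assuming each source record carries a last-updated date:

```python
from datetime import date

def delta_records(records, last_extraction_date):
    """Return only rows changed since the previous extraction run."""
    return [r for r in records if r["last_updated"] > last_extraction_date]

src = [
    {"id": 1, "last_updated": date(2024, 5, 30)},   # already extracted
    {"id": 2, "last_updated": date(2024, 6, 10)},   # new since last run
]
new_rows = delta_records(src, last_extraction_date=date(2024, 6, 1))
```

After a successful run, the stored 'Last Extraction Date' would be advanced to the current extraction date so the next run picks up only newer changes.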
| 27. |
Techniques Of Error Handling - Ignore, Rejecting Bad Records To A Flat File, Loading The Records And Reviewing Them (Default Values) |
|
Answer» Records are rejected either at the database, due to constraint key violations, or by the Informatica server when writing data into the target table. These rejected records can be found in the bad files folder, where a reject file is created for the session; there we can check why a record has been rejected. The bad file contains a row indicator in its first column and a column indicator in its second column. The row indicators are of four types: |
|
| 28. |
What Is Informatica Metadata And Where Is It Stored? |
|
Answer» Informatica metadata is data about data, which is stored in Informatica repositories. |
|
| 29. |
Do We Need An Etl Tool? When Do We Go For The Tools In The Market? |
|
Answer» ETL Tool: |
|
| 30. |
Can We Lookup A Table From Source Qualifier Transformation, I.e. Unconnected Lookup? |
|
Answer» You cannot lookup from a Source Qualifier directly. However, you can override the SQL in the Source Qualifier to join with the lookup table to perform the lookup. |
|
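The override amounts to writing the lookup as a join inside the Source Qualifier's query. A runnable sketch using SQLite via Python (table and column names are made up; the SELECT is the kind of query you would paste into the SQL override):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (order_id INT, customer_id INT);
    CREATE TABLE customers (customer_id INT, customer_name TEXT);
    INSERT INTO orders VALUES (1, 101), (2, 102);
    INSERT INTO customers VALUES (101, 'Alice'), (102, 'Bob');
""")

# The overridden Source Qualifier query: the lookup is done as a LEFT JOIN
rows = con.execute("""
    SELECT o.order_id, c.customer_name
    FROM orders o
    LEFT JOIN customers c ON c.customer_id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
```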
| 31. |
What Is The Difference Between Power Center & Power Mart? |
|
Answer» PowerCenter - ability to organize repositories into a data mart domain and share metadata across repositories. |
|
| 32. |
How Do We Call Shell Scripts From Informatica? |
|
Answer» Specify the full path of the shell script in the 'Post-session' properties of the session/workflow. |
|
| 33. |
What Is A Staging Area? Do We Need It? What Is The Purpose Of A Staging Area? |
|
Answer» Data staging is actually a collection of processes used to prepare source system data for loading a data warehouse. Staging includes the following steps: |
|
| 34. |
Is There Any Way To Read MS Excel Data Directly Into Informatica? Is There Any Possibility To Take An Excel File As Target? |
|
Answer» We can’t directly import the XML file in Informatica. |
|
| 35. |
What Is Full Load & Incremental Or Refresh Load? |
|
Answer» Full Load: completely erasing the contents of one or more tables and reloading with fresh data. |
|
| 36. |
What Is The Etl Process? How Many Steps Does Etl Contain? Explain With An Example. |
|
Answer» ETL is the extraction, transformation, and loading process: you extract data from the source, apply the business rules on it, then load it into the target. |
|
| 37. |
Can Informatica Load Heterogeneous Targets From Heterogeneous Sources? |
|
Answer» No, in Informatica 5.2 and |
|
| 38. |
What Are Snapshots? What Are Materialized Views & Where Do We Use Them? What Is A Materialized View Log? |
|
Answer» Snapshots are read-only copies of a master table located on a remote node, which are periodically refreshed to reflect changes made to the master table. Snapshots are mirrors or replicas of tables. Views are built using the columns from one or more tables. A single-table view can be updated, but a view with multiple tables cannot be updated. A view can be updated/deleted/inserted into if it has only one base table; if the view is based on columns from more than one table, then insert, update, and delete are not possible. |
|
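The view versus materialized view distinction can be demonstrated with SQLite via Python: a plain view stores only the query, while a 'materialized' view (simulated here with CREATE TABLE AS, since SQLite has no native materialized views) stores the result set itself, which goes stale until refreshed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount INT);
    INSERT INTO sales VALUES ('east', 10), ('east', 5), ('west', 7);

    -- Ordinary view: only the query text is stored
    CREATE VIEW v_sales AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;

    -- 'Materialized' view simulated: the RESULT itself is stored
    CREATE TABLE mv_sales AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;

    INSERT INTO sales VALUES ('east', 100);   -- new data arrives later
""")

live = con.execute("SELECT total FROM v_sales WHERE region = 'east'").fetchone()[0]
stale = con.execute("SELECT total FROM mv_sales WHERE region = 'east'").fetchone()[0]
# live re-runs the query and sees the new row; stale keeps the stored copy
```

In Oracle the real thing would be created with `CREATE MATERIALIZED VIEW ... AS SELECT ...` and refreshed on demand or on commit.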
| 39. |
How Can We Use Mapping Variables In Informatica? Where Do We Use Them? |
|
Answer» Yes, we can use mapping variables in Informatica. The Informatica server saves the value of a mapping variable to the repository at the end of the session run and uses that value the next time we run the session. |
|
| 40. |
Can We Override A Native Sql Query Within Informatica? Where Do We Do It? How Do We Do It? |
|
Answer» Yes, we can override a native SQL query in the Source Qualifier and Lookup transformations. In the Lookup transformation we can find 'SQL override' in the lookup properties; by using this option we can do it. |
|
| 41. |
What Is The Metadata Extension? |
|
Answer» Informatica allows end users and partners to extend the metadata stored in the repository by associating information with individual objects in the repository. For example, when you create a mapping, you can store your contact information with the mapping. You associate information with repository metadata using metadata extensions. Informatica Client applications can contain the following types of metadata extensions: User-defined: You create user-defined metadata extensions using PowerCenter/PowerMart. You can create, edit, delete, and view user-defined metadata extensions. You can also change the values of user-defined extensions. |
|
| 42. |
What Is A Three Tier Data Warehouse? |
|
Answer» A data warehouse can be thought of as a three-tier system in which a middle system provides usable data in a secure way to end users. On either side of this middle system are the end users and the back-end data stores. |
|
| 43. |
What Is Etl? |
|
Answer» ETL stands for extraction, transformation, and loading. |
|