Explore topic-wise interview questions in Informatica and Data Warehousing.

This section includes curated interview questions with answers to sharpen your Informatica and data warehousing knowledge and support interview preparation. Choose a question below to get started.

1.

What Are The Techniques Of Error Handling - Ignore, Rejecting Bad Records To A Flat File, Loading The Records And Reviewing Them (Default Values)?

Answer»

Records can be rejected either at the database, due to a constraint or key violation, or by the Informatica server while writing data into the target table. These rejected records can be found in the bad files folder, where a reject file is created for each session, and there we can check why a record has been rejected. In this bad file, the first column is a row indicator and the second column is a column indicator.
The column indicators are of four types:
D - valid data,
O - overflowed data,
N - null data,
T - truncated data.
Depending on these indicators we can make changes to load the data successfully into the target.
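As a rough illustration (plain Python, not Informatica's actual reject-file parser; the simplified comma-separated layout and the sample line are assumptions), a reject-file row can be split into its row indicator and its per-column indicator/value pairs:

```python
# Hypothetical sketch of reading an Informatica-style reject (.bad) line.
# Assumed simplified layout: row_indicator, then alternating
# column_indicator, column_value pairs. Real .bad files are more involved.

COLUMN_INDICATORS = {"D": "valid", "O": "overflowed", "N": "null", "T": "truncated"}

def classify_columns(bad_line):
    """Return (row_indicator, [(value, meaning), ...]) for one reject-file line."""
    fields = bad_line.split(",")
    row_indicator = fields[0]
    cols = []
    # Remaining fields alternate: indicator, value, indicator, value, ...
    for i in range(1, len(fields) - 1, 2):
        indicator, value = fields[i], fields[i + 1]
        cols.append((value, COLUMN_INDICATORS.get(indicator, "unknown")))
    return row_indicator, cols

row, cols = classify_columns("3,D,1001,N,,T,Smi")
print(row, cols)
```

Reading the indicators this way shows at a glance which column caused the rejection (here a null and a truncated value).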

2.

What Is The Difference Between Joiner And Lookup?

Answer»

A Joiner is used to join two or more tables to retrieve data from those tables (just like joins in SQL).
A Lookup is used to check and compare a source table against a target table (just like a correlated sub-query in SQL).
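The distinction can be sketched in plain Python (an illustrative analogy, not Informatica code; the order/customer data is made up): a joiner combines rows from two sources into one row, while a lookup probes a keyed reference table for each incoming row.

```python
# Joiner vs. lookup, illustrated with two tiny in-memory "tables".
orders = [{"order_id": 1, "cust_id": 10}, {"order_id": 2, "cust_id": 11}]
customers = [{"cust_id": 10, "name": "Ana"}, {"cust_id": 11, "name": "Raj"}]

# Joiner: inner join on cust_id, producing combined rows (like a SQL JOIN).
joined = [
    {**o, "name": c["name"]}
    for o in orders
    for c in customers
    if o["cust_id"] == c["cust_id"]
]

# Lookup: probe a keyed reference table per row (like a correlated sub-query).
lookup = {c["cust_id"]: c["name"] for c in customers}
names = [lookup.get(o["cust_id"]) for o in orders]

print(joined)
print(names)
```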

3.

What Are The Various Test Procedures Used To Check Whether The Data Is Loaded In The Backend, Performance Of The Mapping, And Quality Of The Data Loaded In Informatica?

Answer»

The best procedure is to take the help of the Debugger, where we can monitor each and every process of the mappings and see how the data is loading, based on condition breakpoints.

4.

Where Do We Use Connected And Unconnected Lookups?

Answer»

If only one return port is needed, we can go for an Unconnected lookup; more than one return port is not possible with an Unconnected lookup. If more than one return port is needed, go for a Connected lookup.

5.

What Is The Difference Between Informatica 7.0 & 8.0?

Answer»

The only difference between Informatica 7 and 8 is that 8 is an SOA (Service Oriented Architecture) whereas 7 is not. SOA in Informatica is handled through different grids designed in the server.

6.

How Can I Edit The Xml Target - Is There Any Way Apart From Editing The Xsd File? Can I Edit The Xml Directly In Informatica Designer?

Answer»

No, you cannot edit it from Informatica Designer. However, you can still change the precision of the ports if the XML source is imported from a DTD file.

7.

What Is Conformed Fact? What Are Conformed Dimensions Used For?

Answer»

A conformed fact in a warehouse is allowed to have the same name in separate tables, so the facts can be compared and combined mathematically. Conformed dimensions can be used across multiple data marts; they have a static structure. Any dimension table that is used by multiple fact tables can be a conformed dimension.

8.

Define Non-additive Facts?

Answer»

Non-additive facts are facts that cannot be summed up across any of the dimensions present in the fact table. These columns cannot be added to produce any meaningful results.

9.

What Is Data Purging?

Answer»

Deleting data from the data warehouse is known as data purging. Usually junk data, such as rows with null values or spaces, is cleaned up.
Data purging is the process of cleaning up this kind of junk values.
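A minimal sketch of a purge pass (plain Python; the sample rows and the single `city` check are assumptions for illustration) drops rows whose value is null or only spaces:

```python
# Purge junk rows (null values or blank/space-only values) from an extract.
rows = [
    {"id": 1, "city": "Pune"},
    {"id": 2, "city": None},      # null value -> junk
    {"id": 3, "city": "   "},     # only spaces -> junk
]

def is_junk(row):
    """A row is junk when its city value is null or contains only spaces."""
    v = row["city"]
    return v is None or str(v).strip() == ""

purged = [r for r in rows if not is_junk(r)]
print(purged)  # only the row with id=1 survives
```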

10.

What Are The Different Problems That Data Mining Can Solve?

Answer»

Data mining can be used in a variety of fields/industries, such as marketing of products and services, AI, and government intelligence.
For example, the US FBI uses data mining for security and intelligence screening, to identify illegal and incriminating e-information distributed over the internet.

11.

What Are Different Stages Of Data Mining?

Answer»

A stage of data mining is a logical step in the process of searching a large amount of information for important data.
Stage 1: Exploration: One will want to explore and prepare the data. The goal of the exploration stage is to find the important variables and determine their nature.
Stage 2: Pattern identification: Searching for patterns and choosing the one which allows making the best prediction is the primary action in this stage.
Stage 3: Deployment: This stage cannot be reached until a consistent, highly predictive pattern is found in stage 2. The pattern found in stage 2 can then be applied to see whether the desired outcome is achieved or not.

12.

What Is Data Cleaning?

Answer»
  • Data cleaning is also known as data scrubbing.
  • Data cleaning is a process which ensures a set of data is correct and accurate. Data accuracy, consistency, and integration are checked during data cleaning. Data cleaning can be applied to a single set of records or to multiple sets of data which need to be merged.
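A minimal cleaning pass can be sketched in plain Python (the email values are made-up examples): trim whitespace, normalize case for consistency, and drop empties and duplicates.

```python
# Sketch of a simple data-cleaning pass over a list of string values.
records = [" alice@EXAMPLE.com ", "bob@example.com", "alice@example.com"]

def clean(values):
    """Normalize each value and keep the first occurrence of each one."""
    seen, out = set(), []
    for v in values:
        v = v.strip().lower()          # consistency: one canonical form
        if v and v not in seen:        # accuracy: drop empties and duplicates
            seen.add(v)
            out.append(v)
    return out

print(clean(records))  # ['alice@example.com', 'bob@example.com']
```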

13.

What Is Data Cube Technology Used For?

Answer»

Data cubes are commonly used for easy interpretation of data. A cube is used to represent data along dimensions as measures of business needs; each dimension of the cube represents some attribute of the database, e.g. profit per day, month, or year.
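A tiny in-memory "cube" can be sketched in plain Python (the month/region/profit sample data is made up): each cell holds a measure aggregated along a combination of dimensions, with an `ALL` member standing in for a rollup.

```python
# A measure (profit) aggregated along the (month, region) dimensions.
from collections import defaultdict

sales = [
    ("Jan", "East", 100), ("Jan", "West", 150),
    ("Feb", "East", 120), ("Jan", "East", 30),
]

cube = defaultdict(int)
for month, region, profit in sales:
    cube[(month, region)] += profit   # cell for one dimension combination
    cube[(month, "ALL")] += profit    # rollup across the region dimension

print(cube[("Jan", "East")])  # 130
print(cube[("Jan", "ALL")])   # 280
```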

14.

What Are Critical Success Factors?

Answer»

Critical success factors are the key areas of activity in which favorable results are necessary for a company to reach its goal.
There are four basic types of CSFs: industry CSFs, strategy CSFs, environmental CSFs, and temporal CSFs.

15.

What Is Data Modeling And Data Mining?

Answer»

Data Modeling is a technique used to define and analyze the data requirements that support an organization's business processes. In simple terms, it is used to analyze data objects in order to identify the relationships among these data objects in any business.
Data Mining is a technique used to analyze datasets to derive useful insights/information. It is mainly used in retail, consumer goods, telecommunication, and financial organizations that have a strong consumer orientation, in order to determine the impact on sales, customer satisfaction, and profitability.

16.

What Is Active Data Warehousing?

Answer»

An active data warehouse represents a single state of the business. It considers the analytic perspectives of customers and suppliers, and it helps to deliver updated data through reports.

17.

What Is Virtual Data Warehousing?

Answer»

A virtual data warehouse provides a collective view of the complete data. It can be considered as a logical data model containing metadata.

18.

What Is Data Warehousing?

Answer»
  • A data warehouse can be considered as a storage area where relevant data is stored irrespective of the source.
  • Data warehousing merges data from multiple sources into an easy and complete form.

19.

What Is Cube Grouping?

Answer»

A transformer-built set of similar cubes is known as cube grouping. Cube groups are generally used to create smaller cubes that are based on the data in a level of a dimension.

20.

Define Slowly Changing Dimensions (scd)?

Answer»

SCDs are dimensions whose data changes very slowly.
e.g.: a city or an employee.

  • This dimension will change very slowly.
  • The row of this data in the dimension can either be replaced completely without any track of the old record, or a new row can be inserted, or the change can be tracked.
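The two common ways of applying an SCD update can be sketched in plain Python (an illustrative model, not Informatica code; the employee/city rows are made up): overwrite in place with no history, or expire the old row and insert a new current one.

```python
# Sketch of SCD Type 1 (overwrite) vs. Type 2 (track history) updates.

dim = [{"emp_id": 7, "city": "Pune", "current": True}]

def scd_type1(dim, emp_id, new_city):
    """Type 1: overwrite the attribute in place - no history is kept."""
    for row in dim:
        if row["emp_id"] == emp_id:
            row["city"] = new_city

def scd_type2(dim, emp_id, new_city):
    """Type 2: expire the old row and insert a new current row - history kept."""
    for row in dim:
        if row["emp_id"] == emp_id and row["current"]:
            row["current"] = False
    dim.append({"emp_id": emp_id, "city": new_city, "current": True})

scd_type2(dim, 7, "Mumbai")
print(dim)  # old Pune row expired, new Mumbai row marked current
```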

21.

Explain The Use Of Lookup Tables And Aggregate Tables?

Answer»

Lookup tables hold reference data, such as codes and their descriptions, against which incoming rows are validated or enriched. Aggregate tables store pre-summarized fact data so that summary reports can be served without scanning the detail-level fact table.
22.

What Is Real Time Data Warehousing?

Answer»
  • In real-time data warehousing, the warehouse is updated every time the system performs a transaction.
  • It reflects the real-time business data.
  • This means that when a query is fired at the warehouse, the state of the business at that time will be returned.

23.

Can We Use Procedural Logic Inside Informatica? If Yes, How? If No, How Can We Use External Procedural Logic In Informatica?

Answer»

Yes, you can use an advanced external transformation. You can use the C++ language on a UNIX server, and C++, VB, or VC++ on a Windows server.

24.

What Are Parameter Files? Where Do We Use Them?

Answer»

A parameter file defines the values for the parameters and variables used in a workflow, worklet, or session.
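A parameter file is plain text. As a hedged illustration of the typical layout (the folder, workflow, session, parameter, and connection names below are all hypothetical, and the exact format should be checked against the PowerCenter documentation for your version), a section heading names the scope and each line assigns one value:

```ini
[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$LoadStartDate=2014-01-01
$DBConnection_Source=Src_Conn
```

Here the `$$`-prefixed name is a mapping parameter/variable and the single-`$` name is a session parameter; the bracketed heading limits the values to that session of that workflow.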

25.

What Is A Mapping, Session, Worklet, Workflow, Mapplet?

Answer»
  • A mapping represents the dataflow from sources to targets.
  • A mapplet creates or configures a set of transformations.
  • A workflow is a set of instructions that tell the Informatica server how to execute the tasks.
  • A worklet is an object that represents a set of tasks.
  • A session is a set of instructions that describe how and when to move data from sources to targets.

26.

What Are Non-additive Facts In Detail?

Answer»
  • A fact may be a measure, a metric, or a dollar value. Measures and metrics are non-additive facts.
  • A dollar value is an additive fact. If we want to find out the amount for a particular place for a particular period of time, we can add the dollar amounts and come up with the total amount.
  • For a non-additive fact, e.g. measuring the height of citizens by geographical location, when we roll up 'city' data to 'state' level we should not add the heights of the citizens; rather we may want to use it to derive a 'count'.

27.

Where Do We Use Semi And Non Additive Facts?

Answer»

Additive: a measure that can participate in arithmetic calculations using all or any of the dimensions.

Ex: sales profit

Semi-additive: a measure that can participate in arithmetic calculations using only some of the dimensions.

Ex: sales amount

Non-additive: a measure that cannot participate in arithmetic calculations using the dimensions.

Ex: temperature
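The difference shows up when rolling facts up across a dimension, as in this plain-Python sketch (the city/profit/temperature rows are made up): the additive fact is summed, while the non-additive one needs a different aggregate such as an average.

```python
# Why additivity matters when rolling up facts across the city dimension.
facts = [
    {"city": "Pune",   "profit": 100, "temp_c": 30},
    {"city": "Mumbai", "profit": 150, "temp_c": 34},
]

# Additive fact: profit can meaningfully be summed across cities.
total_profit = sum(f["profit"] for f in facts)

# Non-additive fact: temperatures must not be summed; use an average instead.
avg_temp = sum(f["temp_c"] for f in facts) / len(facts)

print(total_profit, avg_temp)  # 250 32.0
```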

28.

Where Do We Use Connected And Unconnected Lookups?

Answer»
  • If only one return port is needed, we can go for an Unconnected lookup; more than one return port is not possible with an Unconnected lookup. If more than one return port is needed, go for a Connected lookup.
  • If you require a dynamic cache, i.e. where your data will change dynamically, then you can go for a Connected lookup. If your data is static and won't change while the session loads, you can go for an Unconnected lookup.

29.

What Is Ods (Operational Data Store)?

Answer»
  • ODS - Operational Data Store.
  • The ODS comes between the staging area and the Data Warehouse. The data in the ODS will be at a low level of granularity.
  • Once data is populated in the ODS, aggregated data will be loaded into the EDW through the ODS.

30.

Can We Lookup A Table From A Source Qualifier Transformation, i.e. An Unconnected Lookup?

Answer»

You cannot lookup from a source qualifier directly. However, you can override the SQL in the source qualifier to join with the lookup table to perform the lookup.

31.

What Is The Difference Between Etl Tool And Olap Tools?

Answer»

An ETL tool is meant for extracting data from legacy systems and loading it into a specified database, with some process of cleansing the data.

ex: Informatica, DataStage, etc.

An OLAP tool is meant for reporting purposes; in OLAP, the data is available in a multidimensional model, so you can write simple queries to extract data from the database.

ex: Business Objects, Cognos, etc.
