
1.

How can we improve the performance of Informatica Aggregator Transformation?

Answer»

To improve the performance of the Informatica Aggregator transformation, consider the following factors:

  • Use sorted input: sorted input reduces the amount of data cached, thus improving session performance.
  • Filter early: reduce unnecessary aggregation by filtering out unneeded data before it reaches the Aggregator.
  • Limit connected ports: to reduce the size of the data cache, connect only the inputs/outputs needed by subsequent transformations.
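The "sorted input" tip can be illustrated outside Informatica with a minimal Python sketch (the data and function names here are hypothetical, for illustration only): with unsorted input an aggregator must cache every group until the last row arrives, whereas with sorted input each group can be emitted as soon as its key changes, so only the current group is held in memory.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical (dept, salary) rows, pre-sorted on the group key.
rows = [("HR", 100), ("HR", 200), ("IT", 300), ("IT", 100), ("IT", 50)]

def aggregate_cached(rows):
    """Unsorted input: every distinct group stays cached until the end."""
    cache = {}
    for key, value in rows:
        cache[key] = cache.get(key, 0) + value
    return cache

def aggregate_sorted(rows):
    """Sorted input: each group is summed and released as the key changes."""
    return {key: sum(v for _, v in grp)
            for key, grp in groupby(rows, key=itemgetter(0))}

print(aggregate_cached(rows))   # {'HR': 300, 'IT': 450}
print(aggregate_sorted(rows))   # {'HR': 300, 'IT': 450}
```

Both return the same totals; the difference is that the sorted version's working set is one group at a time, which is why sorted input shrinks the Aggregator cache.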
2.

Give a few mapping design tips for Informatica.

Answer»

Tips for mapping design:

  • Standards: Following good standards consistently will benefit a project in the long run. These standards include naming conventions, environmental settings, documentation, parameter files, etc.
  • Reusability: Reusable components enable you to react quickly to potential changes. Use Informatica components like mapplets, worklets, and reusable transformations.
  • Scalability: Consider scalability while designing; mappings should be developed with the correct production data volumes in mind.
  • Simplicity: Several simple mappings are better than one complex mapping. A simple, logical design is ultimately more maintainable than a complex one.
  • Modularity: Utilize modular techniques in designing.
3.

What do you mean by surrogate key?

Answer»

Surrogate keys, also referred to as artificial or identity keys, are system-generated identifiers used to uniquely identify each record in a dimension table. The natural primary key can change over time, which makes updates more difficult; the surrogate key replaces it and makes updating the table easier. It also serves as a method for preserving historical information in Slowly Changing Dimensions (SCDs).
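A minimal Python sketch of the idea (the table, column names, and helper are hypothetical): the dimension is keyed by a system-generated sequence, so when the natural key changes, only an attribute is updated and the surrogate key stays stable.

```python
import itertools

# Hypothetical customer dimension keyed by a surrogate key rather than
# the natural key (email), which may change over time.
surrogate_seq = itertools.count(1)   # system-generated sequence
dimension = {}                       # surrogate_key -> row

def insert_customer(natural_key, name):
    sk = next(surrogate_seq)
    dimension[sk] = {"customer_sk": sk,     # surrogate key (stable)
                     "email": natural_key,  # natural key (may change)
                     "name": name}
    return sk

sk_alice = insert_customer("a@example.com", "Alice")
sk_bob = insert_customer("b@example.com", "Bob")

# The natural key changes, but the surrogate key (and any fact-table
# rows referencing it) remains untouched; only the attribute is updated.
dimension[sk_alice]["email"] = "alice@new.com"
```

In an SCD Type 2 setting, the same mechanism lets several historical versions of one customer coexist, each with its own surrogate key.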

4.

What is the scenario in which the Informatica server rejects files?

Answer»

The server rejects files when it encounters rejections in the Update Strategy transformation. Data and information in the database can also get disrupted when this happens. This is a rare scenario.

5.

What is OLAP and write its type?

Answer»

The Online Analytical Processing (OLAP) method is used to perform multidimensional analyses on large volumes of data from multiple database systems simultaneously. Apart from managing large amounts of historical data, it provides aggregation and summarization capabilities (computing and presenting data in summarized form for statistical analysis) and stores information at different levels of granularity to assist in decision-making. Its types are DOLAP (Desktop OLAP), ROLAP (Relational OLAP), MOLAP (Multidimensional OLAP), and HOLAP (Hybrid OLAP).
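The "different levels of granularity" idea can be sketched with a small Python roll-up (the fact rows and function are hypothetical): the same facts are summarized at day, month, or year level by truncating the dimension key.

```python
from collections import defaultdict

# Hypothetical sales facts: (year, month, day, amount).
facts = [(2023, 1, 1, 100), (2023, 1, 2, 150), (2023, 2, 1, 200)]

def rollup(facts, level):
    """Summarize the facts at a chosen granularity."""
    keylen = {"year": 1, "month": 2, "day": 3}[level]
    totals = defaultdict(int)
    for *dims, amount in facts:
        totals[tuple(dims[:keylen])] += amount   # truncate key to the level
    return dict(totals)

print(rollup(facts, "month"))  # {(2023, 1): 250, (2023, 2): 200}
print(rollup(facts, "year"))   # {(2023,): 450}
```

An OLAP engine precomputes or caches such aggregates across many dimensions at once, which is what makes interactive drill-down and roll-up fast.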

6.

State the difference between mapping parameter and mapping variable?

Answer»

Mapping Parameter: Mapping parameters in Informatica are constant values that are set in a parameter file before a session runs and retain the same value until the session ends. To change a mapping parameter value, we must update the parameter file between session runs.

Mapping Variable: Mapping variables in Informatica are values that do not remain constant and can change throughout the session. At the end of the session run, the Integration Service saves the final mapping variable value to the repository and uses it as the starting value for the next session run. SetMaxVariable, SetMinVariable, SetVariable, and SetCountVariable are the variable functions used to change a variable's value.
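For illustration, a parameter file entry typically looks like the following sketch (the folder, workflow, session, and parameter names here are placeholders, not from the source):

```
[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$LoadDate=2023-01-01
$$SourceSystem=CRM
```

The section header scopes the values to one session; editing this file between runs is how a mapping parameter's value is changed.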

7.

What are different ways of parallel processing?

Answer»

As the name suggests, parallel processing involves processing data in parallel, which increases performance. In Informatica, parallel processing can be implemented using a number of methods; the method is selected according to the situation and the user's preference. The following types of partitioning algorithms can be used to implement parallel processing:

  • Database Partitioning: This technique queries the database for table partition information and reads the partitioned data from the corresponding nodes in the database.
  • Round-Robin Partitioning: The Integration Service distributes rows evenly across all partitions, which is useful when you want each partition to process roughly the same number of rows.
  • Hash Auto-Keys Partitioning: The PowerCenter server uses a hash function to group rows of data across partitions; the Integration Service uses the grouped or sorted ports as a compound partition key.
  • Hash User-Keys Partitioning: Rows of data are grouped according to a user-defined partition key; you can select the individual ports that define the key.
  • Key Range Partitioning: One or more ports form a compound partition key for a source, and the Integration Service passes each row to a partition based on the range of values specified for that partition.
  • Pass-Through Partitioning: All rows are passed from one partition point to the next without being redistributed by the Integration Service.
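Two of the schemes above can be sketched in plain Python (the data and function names are hypothetical, and this is only an analogy for what the Integration Service does): round-robin ignores row content and balances counts, while hash partitioning on a key guarantees that all rows sharing a key value land in the same partition.

```python
def round_robin_partition(rows, n):
    """Distribute rows evenly across n partitions, ignoring their content."""
    parts = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        parts[i % n].append(row)
    return parts

def hash_key_partition(rows, n, key):
    """Send all rows with the same key value to the same partition."""
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

rows = [{"dept": "HR", "amt": 1}, {"dept": "IT", "amt": 2},
        {"dept": "HR", "amt": 3}, {"dept": "IT", "amt": 4}]

rr = round_robin_partition(rows, 2)       # balanced: 2 rows per partition
hk = hash_key_partition(rows, 2, "dept")  # each dept confined to one partition
```

The trade-off mirrors the list above: round-robin gives the best load balance, while hash partitioning preserves key grouping, which an Aggregator or Sorter downstream may require.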