
1.

Explain About The View Selection Problem?

Answer»

Materializing every possible aggregation is usually infeasible, so only a subset of the candidate views can be precomputed. Deciding which aggregate views to materialize is known as the view selection problem. Developers solve it in order to reduce the calculation cost of complex queries.
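As a rough illustration of the trade-off, view selection can be framed as picking views under a storage budget. The sketch below uses made-up view names, row counts, and benefit scores, and a simple greedy heuristic (benefit per row stored); real view-selection algorithms are more sophisticated.

```python
# Illustrative sketch with hypothetical numbers: choose which aggregate views
# to materialize under a storage budget, greedily by benefit per unit of space.
candidates = [
    {"view": "by_product",        "rows": 1_000,  "benefit": 500.0},
    {"view": "by_region",         "rows": 100,    "benefit": 200.0},
    {"view": "by_product_region", "rows": 50_000, "benefit": 900.0},
]
budget_rows = 2_000  # total rows we are willing to store

chosen, used = [], 0
# Sort by benefit density, then take views while they fit in the budget.
for c in sorted(candidates, key=lambda c: c["benefit"] / c["rows"], reverse=True):
    if used + c["rows"] <= budget_rows:
        chosen.append(c["view"])
        used += c["rows"]
```

Here the huge `by_product_region` view is skipped despite its high raw benefit, because it blows the storage budget.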

2.

Explain About Aggregations?

Answer»

OLAP can process complex queries and return output in less than 0.1 seconds; to achieve such performance, OLAP uses aggregations. Aggregations are built by summarizing the data along the dimensions. The set of possible aggregations is determined by the combinations of the dimension granularities.
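The idea that the possible aggregations are the combinations of dimension granularities can be sketched in a few lines. The fact rows and dimension names below are hypothetical; each subset of dimensions yields one precomputable aggregation.

```python
from itertools import combinations

# Hypothetical fact rows: (product, region, amount)
facts = [
    ("widget", "east", 10.0),
    ("widget", "west", 5.0),
    ("gadget", "east", 7.0),
]
dims = ("product", "region")

def aggregate(rows, keep):
    """Sum the measure over every value combination of the kept dimensions."""
    out = {}
    for product, region, amount in rows:
        values = {"product": product, "region": region}
        key = tuple(values[d] for d in keep)
        out[key] = out.get(key, 0.0) + amount
    return out

# One aggregation per subset of dimensions: (), (product,), (region,), (product, region)
aggregations = {
    keep: aggregate(facts, keep)
    for n in range(len(dims) + 1)
    for keep in combinations(dims, n)
}
```

With two dimensions there are four aggregations; in general, n independent dimensions give 2^n granularity combinations, which is why not all of them are materialized in practice.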

3.

Explain About Rolap?

Answer»

ROLAP functions on top of relational databases: data and aggregations are stored as relational tables, and new tables are created to hold new information. ROLAP depends on a specialized schema design.

4.

Explain About Molap?

Answer»

MOLAP is the classic form of OLAP and is often referred to simply as OLAP. It uses simple database structures such as time period, product, and location. Each dimension or data structure is defined by one or more hierarchies.

5.

Explain About The Functionality Of Olap?

Answer»

A hypercube (multidimensional cube) forms the core of an OLAP system. It consists of measures arranged according to dimensions. The hypercube metadata is created from a star or snowflake schema of tables in an RDBMS: dimensions are extracted from the dimension tables and measures from the fact table.

6.

Explain About Olap ?

Answer»

OLAP (online analytical processing) provides answers to queries that are multidimensional in nature. It combines relational reporting and data mining to provide business intelligence solutions. The term OLAP was derived from the term OLTP.

7.

What Is Factless Facts Table?

Answer»

A fact table that does not contain any numeric fact columns is called a factless fact table.
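A minimal sketch of a factless fact table, using Python's built-in sqlite3 with a hypothetical attendance schema: the fact table holds only foreign keys, recording that an event occurred, and analysis works by counting rows rather than summing a measure.

```python
import sqlite3

# Hypothetical schema: a factless fact table recording student attendance.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_student (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE fact_attendance (      -- factless: foreign keys only, no measures
        student_id INTEGER REFERENCES dim_student(student_id),
        date_id    INTEGER REFERENCES dim_date(date_id)
    );
""")
con.execute("INSERT INTO dim_student VALUES (1, 'Ada')")
con.execute("INSERT INTO dim_date VALUES (1, '2024-01-01')")
con.execute("INSERT INTO fact_attendance VALUES (1, 1)")

# With no numeric fact column, queries count occurrences instead of summing.
count = con.execute("SELECT COUNT(*) FROM fact_attendance").fetchone()[0]
```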

8.

What Are Non-additive Facts?

Answer»

Non-additive facts are facts that cannot be summed up across any of the dimensions present in the fact table (ratios and percentages are typical examples). They are not useless, however: if the dimensions change, the same facts can become useful.
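A quick sketch with made-up numbers shows why a ratio is non-additive: summing per-row profit margins gives a meaningless figure, while summing the additive components (profit and revenue) and re-deriving the ratio gives the correct overall margin.

```python
# Hypothetical rows: profit margin (profit / revenue) is a non-additive fact.
rows = [
    {"region": "east", "revenue": 100.0, "profit": 20.0},  # margin 20%
    {"region": "west", "revenue": 300.0, "profit": 30.0},  # margin 10%
]

# Summing the ratio itself is meaningless: 0.20 + 0.10 = 0.30 is not a margin.
wrong = sum(r["profit"] / r["revenue"] for r in rows)

# Correct approach: sum the additive components, then recompute the ratio.
right = sum(r["profit"] for r in rows) / sum(r["revenue"] for r in rows)  # 0.125
```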

9.

What Is A Level Of Granularity Of A Fact Table?

Answer»

The level of granularity is the level of detail you put into the fact table in a data warehouse: how much detail you are willing to record for each transactional fact.

10.

How Do You Load The Time Dimension?

Answer»

Time dimensions are usually loaded by a program that loops through all possible dates that may appear in the data. A hundred years may be represented in a time dimension, with one row per day.
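Such a loading loop is easy to sketch with Python's standard datetime module. The column names (date_key, weekday, etc.) are illustrative; real time dimensions usually carry many more attributes (quarter, fiscal period, holiday flags).

```python
from datetime import date, timedelta

def build_time_dimension(start, end):
    """Loop through every date in [start, end] and emit one dimension row per day."""
    rows = []
    d = start
    while d <= end:
        rows.append({
            "date_key": int(d.strftime("%Y%m%d")),  # e.g. 20240101
            "year": d.year,
            "month": d.month,
            "day": d.day,
            "weekday": d.strftime("%A"),
        })
        d += timedelta(days=1)
    return rows

# One full (leap) year: 366 rows, one per day.
rows = build_time_dimension(date(2024, 1, 1), date(2024, 12, 31))
```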

11.

What Is Real Time Data-warehousing?

Answer»

Data warehousing captures business activity data. Real-time data warehousing captures that data as it occurs: as soon as a business activity completes and data about it exists, the completed activity data flows into the data warehouse and becomes available instantly.

12.

What Are Aggregate Tables?

Answer»

An aggregate table contains a summary of existing warehouse data, grouped to certain levels of dimensions. It is always faster to retrieve data from an aggregate table than to visit the original table with millions of records. Aggregate tables reduce the load on the database server, improve query performance, and return results quickly.
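A minimal sketch of building an aggregate table, again with sqlite3 and a hypothetical sales schema: detail rows are summarized once into a monthly table, so later queries read a few summary rows instead of scanning the detail.

```python
import sqlite3

# Hypothetical detail table of daily sales.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (sale_day TEXT, product TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("2024-01-05", "widget", 10.0),
    ("2024-01-20", "widget", 5.0),
    ("2024-02-01", "widget", 7.0),
])

# Build the aggregate table once, grouped to the month level of the time dimension.
con.execute("""
    CREATE TABLE agg_sales_monthly AS
    SELECT substr(sale_day, 1, 7) AS sale_month, product,
           SUM(amount) AS total_amount
    FROM sales
    GROUP BY sale_month, product
""")

# Queries now hit the small summary table instead of the detail table.
jan = con.execute(
    "SELECT total_amount FROM agg_sales_monthly WHERE sale_month = '2024-01'"
).fetchone()[0]
```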

13.

What Are Lookup Tables?

Answer»

A lookup table is matched against the target table on the target's primary key; it updates the target by allowing through only modified (new or updated) records, based on the lookup condition.

14.

If De-normalized Is Improves Data Warehouse Processes, Why Fact Table Is In Normal Form?

Answer»

The foreign keys of a fact table are the primary keys of the dimension tables. Because the fact table consists of columns that are primary keys in other tables, it is itself in normal form.

15.

Is Oltp Database Is Design Optimal For Data Warehouse?

Answer»

No. OLTP database tables are normalized, which adds join time before queries return results. An OLTP database is also smaller and does not contain the long-period (many years) data that needs to be analyzed. An OLTP system follows an ER model, not a dimensional model. If a complex query is executed on an OLTP system, it may cause heavy overhead on the OLTP server and affect normal business processes.

16.

What Is Ods?

Answer»

ODS is an abbreviation of Operational Data Store: a database structure that is a repository for near real-time operational data rather than long-term trend data. The ODS may further become the enterprise's shared operational database, allowing operational systems that are being re-engineered to use the ODS as their operational database.

17.

What Is Data Validation Strategies For Data Mart Validation After Loading Process

Answer»

Data validation is generally done manually in a DWH. If the source and target are relational, create SQL scripts to validate the source and target data. If the source is a flat file or a non-relational database, you can use Excel when the data volume is small, or create dummy tables to validate your ETL code.

18.

What Is The Difference Between A Data Warehouse And A Data Mart?

Answer»

A data mart is a subject-oriented database that supports the business needs of individual departments within the enterprise. It is a subset of the enterprise data warehouse and is also known as a high-performance query structure.

19.

What Are Data Marts

Answer»

A data mart is a segment of a data warehouse that can provide data for reporting and analysis on a section, unit, department, or operation in the company, e.g. sales, payroll, or production. Data marts are sometimes complete individual data warehouses, usually smaller than the corporate data warehouse.

20.

What Is The Definitions For Datawarehose And Datamart?

Answer»

A data mart is a subset of a data warehouse: a collection of an individual department's information. A data warehouse, in turn, is a collection of data marts. Put another way, a data mart covers a single subject, while a data warehouse is an integration of multiple subjects.

21.

What Is Data Mart?

Answer»

A data mart is used at the business division/department level and contains only the subject-specific data required for local analysis. It is a database, or collection of databases, designed to help managers make strategic decisions about their business. Data marts are usually smaller and focus on a particular subject or department. Some data marts, called dependent data marts, are subsets of larger data warehouses. In short, a data mart is a simpler form of a data warehouse focused on a single subject (or functional area) such as sales, finance, marketing, or HR, and it represents data from a single business process.

22.

What Is Galaxy Schema?

Answer»

A galaxy schema is also known as a fact constellation schema. It requires a number of fact tables to share dimension tables. In data warehousing it is mainly used together with concept hierarchies.

23.

Suppose You Are Filtering The Rows Using A Filter Transformation Only The Rows Meet The Condition Pass To The Target. Tell Me Where The Rows Will Go That Does Not Meet The Condition.

Answer»

Rows that do not meet the filter condition are simply discarded; they are not passed on to the target. The Informatica filter transformation's default value is 1, i.e. TRUE. If you place a breakpoint on the filter transformation and run the mapping in debugger mode, you will see a value of 1 or 0 for each row passing through the filter; if you change a 0 to 1, that particular row is passed to the next stage.

24.

After We Create A Scd Table, Can We Use That Particular Dimension As A Dimension Table For Star Schema?

Answer»

Yes.

25.

What Is Core Dimension?

Answer»

A core dimension is a dimension table dedicated to a single fact table or data mart. A conformed dimension is a dimension table used across fact tables or data marts.

26.

How Much Data Hold In One Universe.

Answer»

A universe does not hold any data. In practice, however, a universe is known to have issues when the number of objects crosses 6000.

27.

Can Any One Explain About Core Dimension, Balanced Dimension, And Dirty Dimension?

Answer»

A dirty dimension is nothing but a junk dimension. Core dimensions are dedicated to a single fact table or data mart. Conformed dimensions are used across fact tables or data marts.

28.

Can Any One Explain The Hierarchies Level Data Warehousing.

Answer»

In data warehousing, levels are columns available in a dimension table, and levels have attributes. Hierarchies are used for navigational purposes; there are two types, and you can define hierarchies top-down or bottom-up.

1. Natural hierarchy: the best example is the time dimension - Year, Month, Day, etc. In a natural hierarchy a definite relationship exists between each pair of adjacent levels.
2. Navigational hierarchy: the levels can be, for example, production cost of a product and sales cost of a product, or lead time defined to procure and actual procurement time. Here the two levels need not have a relationship; the hierarchy is created purely for navigational purposes.
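Rolling a measure up a natural Year > Month > Day hierarchy can be sketched in a few lines; the fact rows and amounts below are hypothetical.

```python
# Hypothetical daily fact rows with the levels of a natural time hierarchy.
facts = [
    {"year": 2024, "month": 1, "day": 5,  "amount": 10.0},
    {"year": 2024, "month": 1, "day": 20, "amount": 5.0},
    {"year": 2024, "month": 2, "day": 1,  "amount": 7.0},
]

def roll_up(rows, levels):
    """Aggregate the measure at the given hierarchy levels, e.g. ("year", "month")."""
    out = {}
    for r in rows:
        key = tuple(r[level] for level in levels)
        out[key] = out.get(key, 0.0) + r["amount"]
    return out

by_month = roll_up(facts, ("year", "month"))  # day level rolled up to month
by_year = roll_up(facts, ("year",))           # month level rolled up to year
```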

29.

What Is Data Cleaning? How Can We Do That?

Answer»

Data cleaning is a self-explanatory term. Most of the data warehouses in the world source data from multiple systems - systems that were created long before data warehousing was well understood, and hence without the vision to consolidate the data in a single repository of information. In such a scenario, the following are all possible:

► Missing information for a column from one of the data sources;
► Inconsistent information among different data sources;
► Orphan records;
► Outlier data points;
► Different data types for the same information among various data sources, leading to improper conversion;
► Data breaching business rules.

To ensure that the data warehouse is not infected by any of these discrepancies, it is important to cleanse the data using a set of business rules before it makes its way into the data warehouse.
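A toy sketch of rule-based cleansing: the records, field names, and the two business rules below are hypothetical, but the shape - apply each rule, route failures to a reject pile for inspection - is the common pattern.

```python
# Hypothetical source records; two of them violate cleansing rules.
records = [
    {"customer_id": 1, "country": "US", "age": 34},
    {"customer_id": 2, "country": None, "age": 40},   # missing column value
    {"customer_id": 3, "country": "US", "age": 212},  # outlier breaching a rule
]

def cleanse(rows):
    """Split rows into clean records and rejects, using simple business rules."""
    clean, rejects = [], []
    for r in rows:
        if r["country"] is None:            # rule: country must be present
            rejects.append(r)
        elif not 0 <= r["age"] <= 120:      # rule: age must be plausible
            rejects.append(r)
        else:
            clean.append(r)
    return clean, rejects

clean, rejects = cleanse(records)
```

Only the clean rows would flow on into the warehouse; the rejects are held back for correction at the source.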

30.

What Is Dimension Modeling?

Answer»

Dimension modeling is a logical design technique that seeks to present the data in a standard, intuitive framework that allows high-performance access. There are different data modeling concepts, such as ER modeling (entity-relationship modeling), DM (dimensional modeling), hierarchical modeling, and network modeling; however, only ER and DM are popular.

31.

Where The Cache Files Stored?

Answer»

Caches are stored in the repository.

32.

How Can You Import Tables From A Database?

Answer»

In Business Objects Universe Designer you can open the Table Browser, select the tables needed, and insert them into the designer.

33.

What Is Drilling Across?

Answer»

Drilling across corresponds to switching from one classification in one dimension to a different classification in a different dimension.

34.

How Many Different Schemas Or Dw Models Can Be Used In Siebel Analytics. I Know Only Star And Snow Flake And Any Other Model That Can Be Used?

Answer»

An integrated schema design can also be used. To define an integrated schema design we have to define the following concepts:

► Fact constellation
► Factless fact table
► Conformed dimension

A: A fact constellation is the result of joining two or more fact tables.
B: A fact table without any facts is known as a factless fact table.
C: A dimension that is reusable and fixed is known as a conformed dimension; that is, a dimension shared by multiple fact tables.

35.

What Is An Error Log Table In Informatica Occurs And How To Maintain It In Mapping?

Answer»

The error log in Informatica is one of the output files created by the Informatica Server while running a session; it collects error messages. It is created in the Informatica home directory.

36.

What Is Loop In Data Warehousing?

Answer»

In a DWH, loops may exist between tables. If loops exist, query generation takes more time because more than one path is available, and it also creates ambiguity. Loops can be avoided by creating aliases of a table or by using contexts.

Example: four tables - Customer, Product, Time, and Cost - form a closed loop. Create an alias for Cost to break the loop.

37.

How Many Clustered Indexes Can U Create For A Table In Dwh? In Case Of Truncate And Delete Command What Happens To Table, Which Has Unique Id.

Answer»

You can have only one clustered index per table. If you use the DELETE command you can roll back, but it fills your redo log files. If you do not need the records, use the TRUNCATE command instead: it is faster and does not fill your redo log file.

38.

What Is Hybrid Slowly Changing Dimension?

Answer»

Hybrid SCDs are a combination of SCD Type 1 and SCD Type 2. In a table, some columns may be important enough that we need to track changes to them, i.e. capture their historical data, whereas for other columns we do not care even if the data changes. For such tables we implement hybrid SCDs, where some columns are Type 1 and some are Type 2. Note that the key used is not an intelligent key; it is similar to a sequence number, typically tied to a timestamp.
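A minimal sketch of the hybrid behavior, with hypothetical column roles: phone is treated as Type 1 (overwrite in place), address as Type 2 (expire the current row and insert a new versioned row with a fresh surrogate key).

```python
from datetime import date

# Hypothetical column roles for the hybrid SCD.
TYPE1_COLS = {"phone"}    # overwrite, no history kept
TYPE2_COLS = {"address"}  # versioned, history kept

dim = [  # current dimension rows; surrogate_key is just a sequence number
    {"surrogate_key": 1, "customer_id": 100, "phone": "555-0100",
     "address": "1 Old St", "valid_from": date(2020, 1, 1), "valid_to": None},
]

def apply_change(dim, customer_id, changes, today):
    """Apply Type 1 changes in place; version the row for Type 2 changes."""
    current = next(r for r in dim
                   if r["customer_id"] == customer_id and r["valid_to"] is None)
    type1 = {k: v for k, v in changes.items() if k in TYPE1_COLS}
    type2 = {k: v for k, v in changes.items() if k in TYPE2_COLS}
    current.update(type1)          # Type 1: overwrite, losing the old value
    if type2:                      # Type 2: expire the old row, add a new version
        next_key = max(r["surrogate_key"] for r in dim) + 1
        current["valid_to"] = today
        dim.append(dict(current, **type2, surrogate_key=next_key,
                        valid_from=today, valid_to=None))
    return dim

apply_change(dim, 100, {"phone": "555-0199", "address": "2 New Ave"},
             date(2024, 6, 1))
```

After the change there are two rows: the expired one keeps the old address (but the overwritten phone), and the new current row carries the new address.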

39.

Can A Dimension Table Contain Numeric Values?

Answer»

Yes, although the data type will usually be char (the values themselves can be numeric or char). Dimensions can contain numeric values because they are descriptive elements of the business.

40.

What Is The Difference Between Star And Snowflake Schemas?

Answer»

Star schema: a single fact table with N dimensions.
Snowflake schema: any dimension with extended (further normalized) dimension tables is known as a snowflake schema.

41.

What Is The Difference Between Snowflake And Star Schema? What Are Situations Where Snowflake Schema Is Better Than Star Schema When The Opposite Is True?

Answer»

A star schema contains the dimension tables mapped around one or more fact tables. It is a denormalized model, so there is no need for complicated joins and queries return results quickly. A snowflake schema is the normalized form of a star schema: it contains deeper joins because the tables are split into many pieces. Modifications can easily be made directly in the tables, but because there are more tables, complicated joins are required and there is some delay in processing queries.
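The join difference can be made concrete with sqlite3 and a hypothetical product dimension: the star version keeps the category denormalized inside the dimension (one join from the fact), while the snowflake version normalizes the category into its own table (an extra join).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Star: one denormalized dimension table holding the category directly.
    CREATE TABLE dim_product_star (
        product_id INTEGER PRIMARY KEY, product_name TEXT, category_name TEXT
    );
    -- Snowflake: the category is normalized out into its own table.
    CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, category_name TEXT);
    CREATE TABLE dim_product_snow (
        product_id INTEGER PRIMARY KEY, product_name TEXT,
        category_id INTEGER REFERENCES dim_category(category_id)
    );
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
""")
con.execute("INSERT INTO dim_product_star VALUES (1, 'widget', 'tools')")
con.execute("INSERT INTO dim_category VALUES (10, 'tools')")
con.execute("INSERT INTO dim_product_snow VALUES (1, 'widget', 10)")
con.execute("INSERT INTO fact_sales VALUES (1, 12.5)")

# Star: a single join from the fact table reaches the category.
star = con.execute("""
    SELECT d.category_name, SUM(f.amount) FROM fact_sales f
    JOIN dim_product_star d ON d.product_id = f.product_id
    GROUP BY d.category_name
""").fetchall()

# Snowflake: an extra join is needed to reach the same category.
snow = con.execute("""
    SELECT c.category_name, SUM(f.amount) FROM fact_sales f
    JOIN dim_product_snow d ON d.product_id = f.product_id
    JOIN dim_category c ON c.category_id = d.category_id
    GROUP BY c.category_name
""").fetchall()
```

Both queries return the same answer; the snowflake form simply pays for it with one more join per normalized level.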

42.

What Is Er Diagram?

Answer»

The entity-relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views. Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the entity-relationship diagram, which is used to visually represent data objects. Since Chen wrote his paper the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is that it maps well to the relational model: the constructs used in the ER model can easily be transformed into relational tables. It is simple and easy to understand with a minimum of training, so the database designer can use the model to communicate the design to the end user. In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.

43.

What Is Degenerate Dimension Table?

Answer»

Degenerate dimensions: values in a table that are neither dimensions nor measures are called degenerate dimensions - for example, an invoice id or an employee number. A degenerate dimension is data that is dimensional in nature but stored in the fact table.

44.

What Is Vldb?

Answer»

The perception of what constitutes a VLDB (very large database) continues to grow; a one-terabyte database would normally be considered a VLDB. (Degenerate dimension: it has no link to the dimension tables and has no attributes.)

45.

What Is Dimensional Modeling?

Answer»

Dimensional modeling is a design concept used by many data warehouse designers to build their data warehouses. In this design model all the data is stored in two types of tables - fact tables and dimension tables. The fact table contains the facts/measurements of the business, and the dimension tables contain the context of those measurements, i.e. the dimensions over which the facts are calculated. Dimensional modeling is a method for designing a data warehouse; three levels of modeling are involved:

1. Conceptual modeling
2. Logical modeling
3. Physical modeling.

46.

What Are The Various Etl Tools In The Market?

Answer»

Various ETL tools used in the market are Informatica, DataStage, Oracle Warehouse Builder, Ab Initio, and Data Junction.

47.

What Are The Possible Data Marts In Retail Sales?

Answer»

Product information and sales information.

48.

What Is Meant By Metadata In Context Of A Data Warehouse And How It Is Important?

Answer»

Metadata is data about data. A business analyst or data modeler usually captures information about data in a data dictionary: the source (where and how the data originated), the nature of the data (char, varchar, nullable, existence, valid values, etc.), and the behavior of the data (how it is modified or derived, and its life cycle).

Metadata is also present at the data mart level: subsets, facts and dimensions, ODS, etc. For a DW user, metadata provides vital information for analysis / DSS.

49.

What Is A Linked Cube?

Answer»

A linked cube is one in which a subset of the data can be analyzed in detail. The linking ensures that the data in the cubes remains consistent.

50.

What Is Surrogate Key? Where We Use It? Explain With Examples.

Answer»

A surrogate key is a substitution for the natural primary key: just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table.

Data warehouses typically use a surrogate key (also known as an artificial or identity key) for the dimension tables' primary keys. An Informatica sequence generator, an Oracle sequence, or SQL Server identity values can supply the surrogate key.

It is useful because the natural primary key (e.g. Customer Number in the Customer table) can change, and that makes updates more difficult.

Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the primary keys (according to the business users), but not only can these change, indexing on a numeric value is usually better. You could therefore create a surrogate key called, say, AIRPORT_ID. This would be internal to the system, and as far as the client is concerned you may display only the AIRPORT_NAME.
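The AIRPORT_ID idea can be sketched with sqlite3 (the table and names are hypothetical): an auto-assigned integer serves as the surrogate primary key, so when the natural key (the airport's name) changes, fact rows that reference the surrogate key are unaffected.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dim_airport (
        airport_id INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        airport_name TEXT NOT NULL                     -- natural key, may change
    )
""")
con.execute("INSERT INTO dim_airport (airport_name) VALUES ('Idlewild')")

# The natural key changes (Idlewild was renamed JFK); the surrogate key is
# stable, so any fact rows referencing airport_id = 1 remain valid.
con.execute("UPDATE dim_airport SET airport_name = 'JFK' WHERE airport_id = 1")
row = con.execute("SELECT airport_id, airport_name FROM dim_airport").fetchone()
```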