Explore topic-wise interview questions in Ab Initio and Data Warehousing.

This section includes 45 interview questions with answers to sharpen your Ab Initio and data warehousing knowledge and support interview preparation. Choose a topic below to get started.

1.

How To Run The Graph Without Gde?

Answer»

In the GDE, choose Run > Deploy > As Script. This creates a deployment script (a .bat file on a Windows host) in your host directory; running that script from the command prompt executes the graph without the GDE.

2.

What Is Local And Formal Parameter?

Answer»

Both are graph-level parameters, but a local parameter must be initialized with a value at the time of declaration, whereas a formal parameter need not be initialized then; the graph prompts for its value at run time.

3.

What Is Brodcasting And Replicate?

Answer»

Broadcast - Takes data from multiple inputs, combines it, and sends it to all the output ports.

Eg - You have 2 incoming flows (this can be data parallelism or component parallelism) on a Broadcast component, one with 10 records and the other with 20 records. Then all the outgoing flows (there can be any number of flows) will have 10 + 20 = 30 records.

Replicate - Replicates the data for a particular partition and sends it out to multiple output ports of the component, but maintains the partition integrity.

Eg - Your incoming flow to Replicate has a data parallelism level of 2, with one partition having 10 records and the other having 20 records. Now suppose you have 3 output flows from Replicate. Then each flow will have 2 data partitions with 10 and 20 records respectively.
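The record-count arithmetic above can be sketched in plain Python (a hypothetical model, with lists standing in for Ab Initio flows and partitions):

```python
def broadcast(partitions, n_out):
    """Combine all incoming partitions and send the full set to every output flow."""
    combined = [rec for part in partitions for rec in part]
    return [combined for _ in range(n_out)]

def replicate(partitions, n_out):
    """Copy the partitioned data to every output flow, preserving partition boundaries."""
    return [[list(part) for part in partitions] for _ in range(n_out)]

incoming = [["r"] * 10, ["r"] * 20]   # two flows: 10 records and 20 records

out_b = broadcast(incoming, n_out=3)
print([len(flow) for flow in out_b])               # each output flow carries 10 + 20 = 30 records

out_r = replicate(incoming, n_out=3)
print([[len(p) for p in flow] for flow in out_r])  # each flow keeps partitions of 10 and 20
```

The sketch only models record counts, not the actual parallel layout, but it makes the difference concrete: Broadcast merges partitions before fanning out, while Replicate fans out without merging.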

4.

What Is The Importance Of Eme In Abinitio?

Answer»

EME (Enterprise Meta Environment) is the repository in Ab Initio. It is used for check-in and check-out of graphs and also maintains graph versions.

5.

What Is M_dump?

Answer»

The m_dump command prints data in a formatted, human-readable way, given the record format (DML) and the data file.

6.

What Is The Difference Between A Scan Component And A Rollup Component?

Answer»

Rollup is for group-by aggregation and Scan is for successive (running) totals. Basically, when we need to produce a running summary record by record we use Scan; Rollup is used to aggregate data down to one output record per group.
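The distinction can be illustrated with a small Python sketch (hypothetical sales records, already sorted by key as these components require; Rollup emits one record per group, Scan emits a running total for every input record):

```python
from itertools import accumulate, groupby

# Hypothetical sales records: (customer, amount), sorted by the grouping key.
records = [("A", 10), ("A", 20), ("B", 5), ("B", 15)]

# Rollup-style: one aggregated output record per group.
rollup = {key: sum(amt for _, amt in grp)
          for key, grp in groupby(records, key=lambda r: r[0])}
print(rollup)  # {'A': 30, 'B': 20}

# Scan-style: one output record per input record, carrying the running total per group.
scan = []
for key, grp in groupby(records, key=lambda r: r[0]):
    for total in accumulate(amt for _, amt in grp):
        scan.append((key, total))
print(scan)    # [('A', 10), ('A', 30), ('B', 5), ('B', 20)]
```

Note how Scan produces four output records (one per input) while Rollup produces only two (one per group).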

7.

What Is Skew And Skew Measurement?

Answer»

Skew is the measure of data flow to each partition.
Suppose the input comes from 4 files and the total size is 1 GB:
1 GB = 100 MB + 200 MB + 300 MB + 500 MB
Average per partition: 1000 MB / 4 = 250 MB
Skew of the first partition: (100 - 250) / 500 = -150 / 500 = -0.3, a negative value.
Calculate likewise for the 200 MB, 300 MB and 500 MB partitions.
A skew value close to zero is desirable; skew is an indirect measure of graph performance.

8.

How To Get Dml Using Utilities In Unix?

Answer»

If your source is a COBOL copybook, there is a Unix command that generates the required DML for Ab Initio:
cobol-to-dml.

9.

What Is The Datatype Of The Surrogate Key?

Answer»

Normally surrogate keys are sequencers that keep on increasing as new records are injected into the table. The standard datatype is integer.
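A minimal sketch of such an increasing integer sequencer in Python (hypothetical field names; real warehouses would use a database sequence or identity column):

```python
from itertools import count

surrogate_key = count(start=1)   # monotonically increasing integer sequence

# Assign a surrogate key to each new record as it is injected into the table.
incoming = [{"name": "alice"}, {"name": "bob"}]
table = [{"sk": next(surrogate_key), **rec} for rec in incoming]
print(table)  # [{'sk': 1, 'name': 'alice'}, {'sk': 2, 'name': 'bob'}]
```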

10.

Give Examples Of Degenerated Dimensions

Answer»

A degenerated dimension is a dimension key without a corresponding dimension table. Example:

In the PointOfSale Transaction fact table, we have:

Date Key (FK), Product Key (FK), Store Key (FK), Promotion Key (FK), and POS Transaction Number. The Date dimension corresponds to the Date Key, and the Product dimension corresponds to the Product Key. In a traditional parent-child database, the POS Transaction Number would be the key to the transaction header record that contains all the info valid for the transaction as a whole, such as the transaction date and store identifier. But in this dimensional model, we have already extracted this info into other dimensions. Therefore, the POS Transaction Number looks like a dimension key in the fact table but does not have a corresponding dimension table.

Therefore, the POS Transaction Number is a degenerated dimension.

11.

Which Automation Tool Is Used In Data Warehouse Testing?

Answer»

No tool-based testing is done in DWH; only manual testing is done.

12.

What Are The Advantages Data Mining Over Traditional Approaches?

Answer»

Data Mining is used for estimation of the future. For example, if we take a company or business organization, by using the concept of Data Mining we can predict the future of the business in terms of revenue, employees, customers, orders, etc.

Traditional approaches use simple algorithms for estimating the future, but they do not give results as accurate as Data Mining.

13.

How Are The Dimension Tables Designed?

Answer»

Find where data for this dimension are located.
Figure out how to extract this data.
Determine how to maintain changes to this dimension.
Change the fact table and DW population routines.

14.

What Is The Difference Between Oltp And Olap?

Answer»

OLTP is nothing but OnLine Transaction Processing; an OLTP system contains normalized tables and online data with frequent inserts, updates and deletes. OLAP (OnLine Analytical Processing), by contrast, works on denormalized, historical data optimized for analytical queries rather than frequent updates.

15.

What Is Snow Flake Schema?

Answer»

Snowflake schemas normalize dimensions to eliminate redundancy. That is, the dimension data has been grouped into multiple tables instead of one large table. For example, a product dimension table in a star schema might be normalized into a products table, a product_category table, and a product_manufacturer table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance.

16.

Need For Surrogate Key Not Primary Key

Answer»

If a column is made a primary key and later there needs to be a change in the data type or length of that column, then all the foreign keys that depend on that primary key must be changed as well, making the database unstable. Surrogate keys make the database more stable because they insulate the primary and foreign key relationships from changes in data types and lengths.

17.

What Is The Need Of Surrogate Key;why Primary Key Not Used As Surrogate Key?

Answer»

A surrogate key is an artificial identifier for an entity. Surrogate key values are generated by the system sequentially (like the Identity property in SQL Server and a Sequence in Oracle); they do not describe anything. A primary key is a natural identifier for an entity; in primary keys, the values are entered by the user and uniquely identify each row, with no repetition of data.

18.

What Is A General Purpose Scheduling Tool?

Answer»

The basic purpose of a scheduling tool in a DW application is to streamline the flow of data from source to target at a specific time or based on some condition.

19.

What Is The Main Differnce Between Schema In Rdbms And Schemas In Datawarehouse?

Answer»

RDBMS Schema
* Used for OLTP systems
* Traditional and old schema
* Normalized
* Difficult to understand and navigate
* Cannot solve extract and complex problems
* Poorly modelled
DWH Schema
* Used for OLAP systems
* New generation schema
* Denormalized
* Easy to understand and navigate
* Extract and complex problems can be easily solved
* Very good model.

20.

What Are Modeling Tools Available In The Market

Answer»

These tools are used for data/dimension modeling:
Oracle Designer
ERwin (Entity Relationship for Windows)
Informatica (Cubes/Dimensions)
Embarcadero
Sybase PowerDesigner.

21.

What Is Meant By Metadata In Context Of A Datawarehouse And How It Is Important?

Answer»

Metadata is data about data. Examples of metadata include data element descriptions, data type descriptions, attribute/property descriptions, range/domain descriptions, and process/method descriptions. The repository environment encompasses all corporate metadata resources: database catalogs, data dictionaries, and navigation services. Metadata includes things like the name, length, valid values, and description of a data element. Metadata is stored in a data dictionary and repository. It insulates the data warehouse from changes in the schema of operational systems.

Metadata synchronization is the process of consolidating, relating and synchronizing data elements with the same or similar meaning from different systems. It joins these differing elements together in the data warehouse to allow for easier access.

22.

What Are The Data Types Present In Bo?n What Happens If We Implement View In The Designer N Report

Answer»

To my knowledge, these are called object types in Business Objects. An alias is different from a view in the universe: a view exists at the database level, while an alias is a different name given to the same table to resolve loops in the universe.

23.

What Are The Various Reporting Tools In The Market?

Answer»

1. MS-Excel
2. Business Objects (Crystal Reports)
3. Cognos (Impromptu, Power Play)
4. Microstrategy
5. MS reporting services
6. Informatica Power Analyzer
7. Actuate
8. Hyperion (BRIO)
9. Oracle Express OLAP
10. Proclarity.

24.

What Are Slowly Changing Dimensions?

Answer»

Dimensions that change over time are called Slowly Changing Dimensions. For instance, a product price changes over time; people change their names for some reason; country and state names may change over time. These are a few examples of Slowly Changing Dimensions, since changes happen to them over a period of time.

If the data in a dimension table happens to change very rarely, it is called a slowly changing dimension.

Ex: changing the name and address of a person, which happens rarely.
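A common way to handle such a change is the Type 2 approach, where the current dimension row is closed out and a new versioned row is inserted. A minimal Python sketch (hypothetical column names, not any specific tool's API):

```python
from datetime import date

# Hypothetical customer dimension: the current row has end=None.
dim = [{"sk": 1, "customer": "C1", "address": "Old St",
        "start": date(2020, 1, 1), "end": None}]

def scd2_update(dim, customer, new_address, today):
    """Type 2 change: expire the current row, insert a new row with a fresh surrogate key."""
    current = next(r for r in dim if r["customer"] == customer and r["end"] is None)
    current["end"] = today                      # close out the old version
    dim.append({"sk": max(r["sk"] for r in dim) + 1, "customer": customer,
                "address": new_address, "start": today, "end": None})

scd2_update(dim, "C1", "New Ave", date(2024, 6, 1))
print(len(dim))  # 2 rows: the full history of the address change is preserved
```

Because the old row is kept with its validity dates, facts recorded before the change still join to the address that was correct at the time.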

25.

What Are The Different Methods Of Loading Dimension Tables?

Answer»

Conventional load:
Before loading the data, all the table constraints are checked against the data.
Direct load (faster loading):
All the constraints are disabled and the data is loaded directly. Later the data is checked against the table constraints, and the bad data is not indexed.

26.

Explain Degenerated Dimension.

Answer»

A degenerate dimension is a dimension which has only a single attribute.

This dimension is typically represented as a single field in a fact table.

The data items that are not facts, and that do not fit into the existing dimensions, are termed degenerate dimensions.
Degenerate dimensions are the fastest way to group similar transactions.
Degenerate dimensions are used when fact tables represent transactional data.
They can be used as a primary key for the fact table, but they cannot act as foreign keys.

27.

Is It Correct/feasible Develop A Data Mart Using An Ods?

Answer»

Yes, it is correct to develop a Data Mart using an ODS, because an ODS stores transaction data for only a few days (less historical data), which is exactly what a data mart requires. So it is correct to develop a data mart using an ODS.

28.

What Are Semi-additive And Factless Facts And In Which Scenario Will You Use Such Kinds Of Fact Tables?

Answer»

Semi-additive: Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not the others. For example:

Current_Balance and Profit_Margin are the facts. Current_Balance is a semi-additive fact, as it makes sense to add balances up for all accounts (what's the total current balance for all accounts in the bank?), but it does not make sense to add them up through time (adding up all current balances for a given account for each day of the month does not give us any useful information).

A factless fact table captures the many-to-many relationships between dimensions, but contains no numeric or textual facts. They are often used to record events or coverage information.

Common examples of factless fact tables include:

- Identifying product promotion events (to determine promoted products that didn't sell)
- Tracking student attendance or registration events
- Tracking insurance-related accident events
- Identifying building, facility, and equipment schedules for a hospital or university.
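The semi-additive behaviour of Current_Balance can be made concrete in Python (hypothetical daily balance snapshots keyed by account and day):

```python
# Hypothetical daily balance snapshots for two accounts over three days.
snapshots = {
    ("acct1", 1): 100, ("acct2", 1): 50,
    ("acct1", 2): 110, ("acct2", 2): 60,
    ("acct1", 3): 120, ("acct2", 3): 70,
}

# Additive across the account dimension: total bank balance on day 3 is meaningful.
day3_total = sum(bal for (acct, day), bal in snapshots.items() if day == 3)
print(day3_total)       # 190

# NOT additive across the time dimension: summing one account's daily balances
# is meaningless; an average or the latest value would be used instead.
acct1_over_time = sum(bal for (acct, day), bal in snapshots.items() if acct == "acct1")
print(acct1_over_time)  # 330 -- not a useful business number
```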

29.

What Type Of Indexing Mechanism Do We Need To Use For A Typical Datawarehouse?

Answer»

On the fact table it is best to use bitmap indexes. Dimension tables can use bitmap and/or the other types of clustered/non-clustered, unique/non-unique indexes.

To my knowledge, SQLServer does not support bitmap indexes. Only Oracle supports bitmaps.

30.

Why Is Data Modeling Important?

Answer»

Data modeling is probably the most labor-intensive and time-consuming part of the development process. Why bother, especially if you are pressed for time?

31.

Steps In Building The Data Model

Answer»

While the ER model lists and defines the constructs required to build a data model, there is no standard process for doing so. Some methodologies, such as IDEF1X, specify a bottom-up approach.

32.

What Is A Data Warehouse?

Answer»

Data Warehouse is a repository of integrated information, available for queries and analysis. Data and information are extracted from heterogeneous sources as they are generated….This makes it much easier and more efficient to run queries over data that originally came from different sources. Typical relational databases are designed for on-line transactional processing (OLTP) and do not meet the requirements for effective on-line analytical processing (OLAP). As a result, data warehouses are designed differently than traditional relational databases.

33.

What Is Fact Table?

Answer»

A fact table contains the measurements, or metrics, or facts of a business process. If your business process is "Sales", then a measurement of this business process such as "monthly sales number" is captured in the fact table. The fact table also contains the foreign keys for the dimension tables.

34.

Differences Between Star And Snowflake Schemas ?

Answer»

The star schema is created when all the dimension tables link directly to the fact table. Since the graphical representation resembles a star, it is called a star schema. Note that the foreign keys in the fact table link to the primary keys of the dimension tables. This sample provides the star schema for a sales_fact for the year 1998. The dimensions created are Store, Customer, Product_class and Time_by_day. The Product table links to the product_class table through its primary key, and indirectly to the fact table. The fact table contains foreign keys that link to the dimension tables.
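The star-schema join pattern (fact foreign keys resolving to dimension primary keys) can be sketched in Python with hypothetical table contents:

```python
# Hypothetical dimension tables, keyed by their primary keys.
store_dim   = {1: {"store_name": "Downtown"}}
product_dim = {7: {"product_name": "Widget"}}

# Fact rows hold foreign keys into the dimensions plus the measures.
sales_fact = [{"store_key": 1, "product_key": 7, "units_sold": 3}]

# Resolving a fact row through its foreign keys, as a star-schema query would.
row = sales_fact[0]
report = {
    "store":   store_dim[row["store_key"]]["store_name"],
    "product": product_dim[row["product_key"]]["product_name"],
    "units":   row["units_sold"],
}
print(report)  # {'store': 'Downtown', 'product': 'Widget', 'units': 3}
```

Each lookup here corresponds to one foreign-key join in the star query; in a snowflake schema the product lookup would itself chain through further normalized tables.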

35.

What Does Level Of Granularity Of A Fact Table Signify?

Answer»

In simple terms, level of granularity defines the extent of detail. As an example, let us look at geographical level of granularity. We may analyze data at the levels of COUNTRY, REGION, TERRITORY, CITY and STREET. In this case, we say the highest level of granularity is STREET.

36.

Compare Data Warehouse Database And Oltp Database.

Answer»

A data warehouse is used for business measures, cannot be used to cater to the real-time business needs of the organization, and is optimized for large volumes of data and unpredictable queries. On the other hand, an OLTP database is for real-time business operations that use a common set of transactions. A data warehouse does not require validation of data; an OLTP database requires validation of data.

37.

How To Enable Security In Cognos Connection In Cognos Report Net

Answer»

You can implement security via your Windows NT system accounts or LDAP accounts for Cognos Connection. To do this, configure the desired Security section in Cognos Configuration.

38.

What Is The Difference Between Etl Tool And Olap Tool? What Are Various Etl In The Market? What Are Various Olap Tools? What Is The Future For Both For Next Five Years?

Answer»

ETL is an extraction, transformation and loading tool, i.e. you can extract data, transform it using the different transformations available in the tool, and aggregate the data. The output of the ETL tool is used as input to the OLAP tool.

OLAP is online analytical processing, where you can get online reports after doing some joins and creating some cubes.

ETL tools in the market:
1 INFORMATICA -- universal tool, good market
2 ABINITIO -- fastest loading tool, very good market
3 DATASTAGE -- difficult work, no good market
4 BODI -- good market
5 ORACLE WAREHOUSE BUILDER -- good market.

39.

Compare Data Warehouse Database And Oltp Database

Answer»

The data warehouse and the OLTP database are both relational databases. However, the objectives of these two databases are different.

The OLTP database records transactions in real time and aims to automate the clerical data entry processes of a business entity. Addition, modification and deletion of data in the OLTP database are essential, and the semantics of the application used in the front end impact the organization of the data in the database.

The data warehouse, on the other hand, does not cater to the real-time operational requirements of the enterprise. It is more a storehouse of current and historical data, and may also contain data extracted from external data sources.

40.

What Are The Different Industries Which Use This Marketing Tool?

Answer»

Many different companies can use this tool for developing their business strategy, but three major industries use it most: consumer goods, retail, and financial services. These industries have huge amounts of data at their disposal, which leads them to use these tools to determine their exact customers.

41.

Explain About The Database Marketing Application Of Olap?

Answer»

A database marketing tool or application helps a user or marketing professional determine the right tool or strategy for a valuable ad campaign. This tool collects data from all sources and gives the specialist information relevant to the ad campaign. It gives a complete picture to the developer.

42.

Explain About Multidimensional Features Present In Olap?

Answer»

Multidimensional support is essential if we are to include multiple hierarchies in our data analysis. Multidimensional features allow a user to analyze a business across several dimensions at once, and OLAP handles this support efficiently.
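The idea of analyzing the same facts along different dimensions can be sketched in a few lines of Python. The fact records, dimension names, and the `rollup` helper below are all hypothetical, used only to illustrate rolling up and drilling down across hierarchies:

```python
from collections import defaultdict

# Hypothetical fact records: (year, region, product, sales) -- toy data for illustration.
facts = [
    (2023, "East", "Widget", 100),
    (2023, "West", "Widget", 150),
    (2024, "East", "Gadget", 200),
    (2024, "West", "Widget", 250),
]

def rollup(facts, key_fn):
    # Aggregate the sales measure along whichever dimension(s) key_fn selects.
    totals = defaultdict(int)
    for year, region, product, sales in facts:
        totals[key_fn(year, region, product)] += sales
    return dict(totals)

by_year = rollup(facts, lambda y, r, p: y)              # roll up to the year level
by_year_region = rollup(facts, lambda y, r, p: (y, r))  # drill down to year x region
```

The same base facts answer both the coarse (per-year) and the fine (per-year, per-region) questions; an OLAP engine does this at scale with precomputed aggregates.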

43.

Explain About Analysis?

Answer»

Analysis refers to the logical and statistical processing required for useful output. It involves writing code and performing calculations, but for the most part these languages do not require knowledge of a complex programming language. Many specific features are included, such as time-series analysis and currency translation.

44.

Explain About Shared Features Of Olap?

Answer»

Shared refers to how OLAP implements most of its security features. If multiple-user access is required, the administrator can make the necessary changes. The default security level for all OLAP products is read-only; for multiple concurrent updates, it is essential to make the appropriate security changes.

45.

Explain About Api`s Of Olap?

Answer»

In late 1997 Microsoft introduced a standard API known as OLE DB for OLAP. XML for Analysis was later used as a specification and was largely adopted by many vendors throughout the world as a standard. MDX is the standard query language specification for OLAP.

46.

Explain About Hybrid Olap?

Answer»

When a database developer uses hybrid OLAP, the data is divided between relational and specialized storage. In some configurations a HOLAP database may store huge amounts of data in its relational tables, while the specialized storage holds data that is less detailed and more aggregated.

47.

Explain About Candidate Check?

Answer»

The process of checking results against the base data is known as a candidate check. When a candidate check is performed, performance can vary toward either the positive or the negative side; it depends on the user query and on how much of the base data must be examined.

48.

Explain About Binning?

Answer»

The binning process is very useful for saving space. Performance may vary depending on the query generated: sometimes the answer to a query can come within a few seconds, and sometimes it may take much longer. Binning holds multiple distinct values in the same bin.
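A minimal Python sketch of the idea, with hypothetical values and a bin width chosen purely for illustration: many distinct values land in one bin (saving space), and a range query scans only the relevant bins before re-checking exact values against the base data — which is also where the candidate check from the previous question comes in:

```python
from collections import defaultdict

def bin_index(value, width):
    # Many distinct values share one bin, which is how binning saves space.
    return int(value // width)

# Hypothetical measurements; the bin width of 10 is chosen only for illustration.
values = [3, 7, 12, 18, 25, 27, 91]
bins = defaultdict(list)
for v in values:
    bins[bin_index(v, 10)].append(v)

# A range query for values between 10 and 20 scans only bins 1 and 2, then
# re-checks the base data for exact matches (the candidate check).
candidates = bins[1] + bins[2]
matches = [v for v in candidates if 10 <= v <= 20]
```

How well this performs depends on how the query range lines up with the bin boundaries, which is why response times vary from query to query.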

49.

Explain About Encoding Technique Used In Bitmaps Indexes?

Answer»

Bitmap indexes commonly use one bitmap for every single distinct value. The number of bitmaps can be reduced by opting for a different type of encoding; this optimizes space, but several bitmaps then have to be accessed when a query is evaluated.
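The trade-off can be illustrated with a small Python sketch (the column data and helper names are hypothetical): equality encoding builds one bitmap per distinct value, while a bit-sliced binary encoding needs only about log2(n) bitmaps for n distinct values, at the cost of touching several of them to answer a single equality query:

```python
def equality_bitmaps(column):
    # Equality encoding: one bitmap per distinct value; bit i set when row i == value.
    bitmaps = {}
    for i, v in enumerate(column):
        bitmaps[v] = bitmaps.get(v, 0) | (1 << i)
    return bitmaps

def binary_bitmaps(column, values):
    # Bit-sliced (binary) encoding: ceil(log2(n)) bitmaps for n distinct values.
    code = {v: k for k, v in enumerate(values)}
    width = max(1, (len(values) - 1).bit_length())
    slices = [0] * width
    for i, v in enumerate(column):
        for b in range(width):
            if (code[v] >> b) & 1:
                slices[b] |= 1 << i
    return slices

col = ["A", "B", "C", "A", "D", "B"]
eq = equality_bitmaps(col)                      # 4 bitmaps, one per distinct value
bs = binary_bitmaps(col, ["A", "B", "C", "D"])  # only 2 bitmaps

# Answering "rows equal to B" (code 1) now needs both slices: bit 0 set, bit 1 clear.
mask = (1 << len(col)) - 1
rows_b = bs[0] & ~bs[1] & mask
```

Fewer bitmaps means less storage, but every equality query must combine all the slices instead of reading a single bitmap.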

50.

Explain About The Role Of Bitmap Indexes To Solve Aggregation Problems?

Answer»

Bitmaps are very useful in a star schema for joining large fact tables to small dimension tables. Queries are answered by performing logical operations on bit arrays. Bitmap indexes are very efficient at handling low-cardinality attributes such as gender, and repetitive tasks are performed with much greater efficiency.
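A minimal Python illustration of the idea, with hypothetical column data: filtering on low-cardinality attributes such as gender reduces to a bitwise AND over bit arrays rather than a row-by-row scan, which is what makes bitmap indexes effective for aggregation queries:

```python
# Hypothetical dimension columns of a small fact table, one bit per row.
gender = ["F", "M", "F", "F", "M"]
region = ["East", "East", "West", "East", "West"]

def bitmap(column, value):
    # Bitmap index entry: bit i is set when row i holds the given value.
    bits = 0
    for i, v in enumerate(column):
        if v == value:
            bits |= 1 << i
    return bits

# "Female customers in the East" becomes a single bitwise AND over bit arrays.
result = bitmap(gender, "F") & bitmap(region, "East")
rows = [i for i in range(len(gender)) if (result >> i) & 1]
```

Counting the set bits of `result` answers the aggregate (how many such customers) without ever touching the fact rows themselves.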