
November 2, 2009

Data Virtualization, or near real time data for reporting with a Data Warehouse

By: Milind Zodge

Business requirement
Report a near real time data segment along with consolidated data warehouse data.

Details
Data virtualization is getting a lot of attention nowadays, and the business need for data is changing. Previously, a data warehouse supported DSS applications and reporting tools like dashboards and scorecards, which primarily need a summarized snapshot of the data.

However, the trend is now moving toward a mix of consolidated data and near real time data. There are a few EII techniques and tools available for this; however, if you have to deliver it without spending a fortune, you can leverage your database layer.

You can build the data warehouse either top down or bottom up, and add an ODS schema (or set of tables) that holds the operational data without transformation. Various change data capture techniques can be used to keep the ODS data in sync with the source; in Oracle, for example, Change Data Capture or Streams can be used.

Since we are not transforming the ODS data on load, it needs to be transformed virtually. We can create a view layer that combines these two layers to deliver data for operational reporting and near real time needs. The key is to transform the ODS data in the view so that it fits together with the data mart or data warehouse data.
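As a minimal sketch of such a view layer, assume a warehouse fact table sales_fact (already transformed and loaded on a schedule) and an untransformed ODS table ods_sales (all names hypothetical). The view applies the transformation to the ODS rows on the fly and unions them with the warehouse data:

e.g.
CREATE OR REPLACE VIEW v_sales_current AS
SELECT sale_id,
       sale_date,
       sale_amount
FROM   sales_fact                                  -- consolidated, historical data
UNION ALL
SELECT s.sale_id,
       TRUNC(s.sale_ts)           AS sale_date,    -- transformation applied virtually
       s.amount_local * s.fx_rate AS sale_amount
FROM   ods_sales s
WHERE  s.sale_ts >= TRUNC(SYSDATE);                -- only today's near real time rows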


Posted by Milind Zodge at 10:30 PM | Comments (1)

October 2, 2009

Old blog revised: How to achieve change data capture for an Oracle 9i database without adding triggers on the source table

By: Milind Zodge

In a data warehousing project you need to pull data from different environments. The sources can be different databases, or even different kinds of data sources, such as a combination of a database and flat files. If the source is purely a database, chances are that the source and target have different database versions, or even different database products such as SQL Server and Oracle. In this article I am focusing on getting data from an Oracle 9i database.

This article gives you another way of pulling changed data without modifying the source table structure or adding triggers on the source table. It is meant for any database developer, data warehouse developer, data warehouse architect, data analyst, manager, ETL architect, or ETL designer who wants to pull changed data for a project.

This article does not cover the details of how to create a materialized view log and a materialized view, nor the fundamentals of how they work; it only explains these objects briefly and shows how they are used in this solution. You can get more information on materialized views and materialized view logs from Oracle's web site.

Overview
Consider a case with Oracle 9i as the source database and Oracle 10g as the target database, where we want to pull only changed records from a source table. There are three common ways to do this. First, add created and modified date columns to the source table and use them in the ETL script to fetch and process the data incrementally. Second, add DML triggers on the source table to insert a record into a stage table. Third, use Oracle CDC to fetch the incremental data. In the first two cases you must modify the source table. If you want to pull data from several systems, that can turn into a time-consuming effort: modifying a table structure or adding triggers usually sets off a series of meetings, because different departments have their own schedules for developing applications and releasing new features. Since the change modifies the object layout, it needs to be prioritized and taken through the standard project lifecycle, including impact analysis. All of these required activities take time, which can affect your project. If you are in a fix and want to capture changed data without modifying the existing table structure or adding any triggers to it, you will find this article helpful.

We needed to pull data from several databases into the data warehouse. These databases were on different versions, so using the Asynchronous CDC feature of 10g was not an option. Adding triggers was a huge effort, as it would affect the online transaction processing system. So the challenge was to find a way to build an incremental load process for the data warehouse that would save tremendous processing time.
To overcome this problem we had two candidate solutions. One was to store the data in a stage1 table, read a snapshot of the data from the source system, compare it with stage1, and load the changed or new records into stage2, then use stage2 to transform and load the data into the data warehouse. This was again a costly effort and not a scalable solution; the processing time would grow as more data was loaded into the system.

The other solution was to use a materialized view log. This log is populated from the transaction activity on the table and can be used by materialized views. It is a three step process: the first step is performed in the source database and the other two on the target database.

Step 1: Creating a Materialized View Log in the source database
Create a materialized view log on the desired table. The log must reside in the source database, in the same schema as the table, and a table can have only one materialized view log defined on it.
The log can be defined on either the ROWID or the primary key. Its name will be MLOG$_<table_name>, which is an underlying table. The log can hold primary keys, row IDs, or object IDs, and it can also include other columns to support the fast refresh option of the materialized view that will be created from it.
Whenever data changes are made to the master table, Oracle records those changes in the materialized view log as defined. The function of this log is to record the DML activity performed on the table.
E.g. (using a hypothetical table named orders)
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;
-- WITH ROWID or WITH OBJECT ID can be used instead, depending on the option you need

Step 2: Creating a Materialized View in Target Database using this log
Create a materialized view based on the materialized view log created above. The materialized view is a replica of the desired table; it behaves like a table and needs to be refreshed periodically. You can define the refresh frequency needed to fast refresh this view in the target database, based on the materialized view log.
Whenever a DML operation is performed on the source table, that activity is recorded in the log, which lives in the Oracle 9i source database. The materialized view defined on this log in our Oracle 10g target system pulls in only the changes recorded there and applies them to its rows, at whatever refresh frequency you define. This process does not create any physical trigger on the source table; the only overhead is that the database has to store a row in the log table whenever a change is committed.
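As a minimal sketch (continuing the hypothetical orders table from Step 1 and assuming a database link named src9i pointing at the source database):

e.g.
CREATE MATERIALIZED VIEW orders_mv
BUILD IMMEDIATE
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE + 5/1440   -- fast refresh from the log every 5 minutes
AS
SELECT * FROM orders@src9i;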

Step 3: Writing triggers on Materialized View
As we know, a materialized view is like a table, hence we can write triggers on it. In the prior two steps we saw how the changed data is pulled from the source system and loaded into the materialized view defined in the target system. Now the question is how to use this view to determine the changes. For this purpose we write database triggers on the materialized view: after insert, after update, and after delete.
These triggers capture which operation was performed on each row. We also define a new table with the same structure as the staging/target table plus a few additional columns: first, an indicator of which operation occurred (insert, update, or delete), and then a sequence number. The sequence number is important because a row may be new and also get modified within the same time window; the sequence number tells you the order of the activity.

Now, whenever a DML operation is performed on the source table, the log captures the new information, and the materialized view is then refreshed from the log at the defined frequency. The appropriate trigger fires based on the operation performed on each data row and creates a new record in the staging table with the operation mode (I for Insert, U for Update, D for Delete) and the activity sequence number.
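As a minimal sketch of one such trigger (continuing the hypothetical orders example: orders_mv has columns order_id and amount, the staging table orders_stg adds op_code and act_seq columns, and orders_stg_seq is a sequence):

e.g.
CREATE OR REPLACE TRIGGER trg_orders_mv_cdc
AFTER INSERT OR UPDATE OR DELETE ON orders_mv
FOR EACH ROW
DECLARE
  l_op CHAR(1);
BEGIN
  IF INSERTING THEN
    l_op := 'I';
  ELSIF UPDATING THEN
    l_op := 'U';
  ELSE
    l_op := 'D';
  END IF;
  -- for deletes only :OLD values are available, so fall back to them
  INSERT INTO orders_stg (order_id, amount, op_code, act_seq)
  VALUES (NVL(:NEW.order_id, :OLD.order_id),
          NVL(:NEW.amount,   :OLD.amount),
          l_op,
          orders_stg_seq.NEXTVAL);
END;
/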

How this works
Whenever data is changed in or added to the source table, the materialized view log captures that information. Based on the refresh frequency, the materialized view is refreshed from the log; during the refresh it inserts new records into the view and updates existing ones. These DML operations fire the triggers, which insert rows into the stage table, and that stage table can then be used to move the data into the data warehouse or data mart.

Conclusion
No matter what you do, there will be some overhead on the database. The solution discussed here has some overhead too; however, it is a handy alternative way to pull changed data.


Posted by Milind Zodge at 10:30 PM | Comments (9)

July 9, 2007

What are metrics and what are the different types of metrics

By: Milind Zodge

Overview
Any BI application's main role is to show information based on measurements. These measurements are metrics; for example, if you measure how much you have sold, total sales revenue is your metric. In this article I focus on the types of metrics and on when and how to use the proper one in an application.

Details
There are three main types of metrics you can use in your application:

1. Leading Indicators: If you want to measure activities, such as how many touches are required to convert a prospect into a customer, leading indicator metrics are used; they measure activities. Generally these indicators show how many calls or activities you need to perform to achieve your goal.

2. Lagging Indicators: If you want to measure business financial amounts, such as sales revenue, lagging indicators are used; they measure the outcome of activities. Generally these indicators show where you stand currently.

3. Key Performance Indicators (KPIs): If you want to see how you are performing and where you stand, whether good or bad, key performance indicators are used; they measure performance, e.g. how sales revenue compares with the sales quota (see the sketch below).
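As a small sketch of how such a KPI might be computed, assuming a hypothetical sales_summary table with total_revenue and sales_quota columns (NULLIF guards against a zero quota):

e.g.
SELECT rep_id,
       total_revenue / NULLIF(sales_quota, 0) AS quota_attainment
FROM   sales_summary;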

Conclusion
The proper metric to use depends on which application you are designing. If it is BAM (Business Activity Monitoring), then lagging indicators or KPIs will convey the information.


Posted by Milind Zodge at 9:00 PM | Comments (2)

June 20, 2007

Design technique for Date type columns in fact table for maximum performance

By: Milind Zodge

Overview
In a data warehousing project you have dimension and fact tables, and when you design a data warehouse or data mart you come across many Date data type attributes. In this article I point out a design technique for Date columns that gives the best performance.

Design
Consider a data mart with a fact table "Order" that has many columns, such as Order Number, Order Date, Shipped Date, and Amount, and a "Time" dimension that has an entry for each day. You use "time_id" for "Order Date"; however, most of the time "Shipped Date" is kept as a plain Date column.

Now consider that in your reporting system you want to design a report showing the number of orders shipped in a particular year. You will have to format the Shipped Date column so that you can compare its year portion to get the result. On a massive fact table this query will take longer because it will not use any index; you could create an index (a function-based one) to solve this particular problem.
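For example, with Shipped Date stored as a plain Date column, the year report ends up applying a function to the column (hypothetical table and column names), which prevents an ordinary index on shipped_date from being used:

e.g.
SELECT COUNT(*) AS orders_shipped
FROM   order_fact
WHERE  TO_CHAR(shipped_date, 'YYYY') = '2007';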

Now consider that you also have reports showing the number of orders shipped in a particular month, day, quarter, and so on. To speed up these operations you would have to create indexes, probably more than one. However, if we use an id column, with an index on that column, we can avoid the problem above.

Add a shipped_date_id column alongside the Shipped Date column in the fact table and derive its value from the Time dimension. Then, whenever you query, you always use the index.
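As a minimal sketch, assuming a hypothetical fact table order_fact and a time dimension time_dim keyed by time_id with a calendar_year column: the year filter is applied to the dimension and the fact is joined on the indexed shipped_date_id column.

e.g.
SELECT COUNT(*) AS orders_shipped
FROM   order_fact f
JOIN   time_dim   t ON t.time_id = f.shipped_date_id
WHERE  t.calendar_year = 2007;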

Conclusion
This way you can achieve maximum performance without adding more indexes. You simply go to your Time dimension, get the required ids, and join them to your fact table, which will use the index defined on the "shipped_date_id" column.


Posted by Milind Zodge at 7:30 PM | Comments (1)

May 5, 2007

CDC technique for a dimension table that is based on a multi-table query

By: Milind Zodge

In a data warehousing project you have dimension and fact tables. Usually, if the data is coming from a single table, we can use the approach I presented in the earlier article, "Change data capture for Oracle 9i database without adding triggers on the source table".

There are also plenty of other options available, like CDC, using timestamps, and so on. However, the problem comes when you have a dimension table that is constructed from a multi-table query. In that case none of the above approaches works directly.

Overview
Consider the case of a Sales Representative dimension. This dimension is based on several attributes, such as area and login, that come from different tables. Now we will see what we can use to update this table incrementally.

The examples shown in this article are for an Oracle database; however, the same concept can be used with other database engines.

Step 1: Creating a Function which will return hash value
We will use a hash value technique to compare the rows. We really have one more option: compare each field and see if any one of them has changed, and in that way determine which rows have changed.

However, the hash value method is faster than that approach, and the code also becomes more manageable, with fewer conditional statements. Both methods give the same result, though.

Create a function that reads a value as a text parameter and returns a hash value for it.

e.g.
FUNCTION salesrep_hashvalue (p_input_str VARCHAR2)
  RETURN VARCHAR2
IS
  l_str VARCHAR2(20);
BEGIN
  -- MD5 returns a 16-byte checksum; the named parameter selects the VARCHAR2 overload
  l_str := dbms_obfuscation_toolkit.md5(input_string => p_input_str);
  RETURN l_str;
END salesrep_hashvalue;

Step 2: Add a new column in the dimension table to hold a hashvalue
Create a new column, "hashvalue", in the dimension table, and populate it by applying the function created above to the required columns.

Make sure you use the same set of columns, in the same order, in the ETL logic when you create the hash value for a new row.
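As a minimal sketch, assuming a hypothetical salesrep_dim dimension table, that area and login are the tracked columns, and that salesrep_hashvalue is available as a standalone function (a separator is concatenated between the columns so adjacent values cannot run together):

e.g.
ALTER TABLE salesrep_dim ADD (hashvalue VARCHAR2(20));

UPDATE salesrep_dim
SET    hashvalue = salesrep_hashvalue(area || '|' || login);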

Step 3: Write ETL code
In the ETL code, read the records returned by the multi-table SQL in a cursor loop. For each record, compute the hash value. Then get the old hash value by selecting the record from the dimension table using its key. If no record exists, insert the record. If a record exists, compare the two hash values: if they differ, update the record; otherwise skip it.
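A minimal sketch of that loop in PL/SQL, continuing the hypothetical salesrep_dim example (keyed by salesrep_id and sourced here from two made-up tables, sales_rep and rep_area):

e.g.
DECLARE
  l_new_hash VARCHAR2(20);
  l_old_hash VARCHAR2(20);
BEGIN
  FOR src IN (SELECT r.salesrep_id, r.login, a.area
              FROM   sales_rep r
              JOIN   rep_area  a ON a.rep_id = r.salesrep_id) LOOP
    l_new_hash := salesrep_hashvalue(src.area || '|' || src.login);
    BEGIN
      SELECT hashvalue INTO l_old_hash
      FROM   salesrep_dim
      WHERE  salesrep_id = src.salesrep_id;

      IF l_old_hash <> l_new_hash THEN              -- changed row: update it
        UPDATE salesrep_dim
        SET    area = src.area, login = src.login, hashvalue = l_new_hash
        WHERE  salesrep_id = src.salesrep_id;
      END IF;                                       -- equal hashes: unchanged, skip
    EXCEPTION
      WHEN NO_DATA_FOUND THEN                       -- new row: insert it
        INSERT INTO salesrep_dim (salesrep_id, area, login, hashvalue)
        VALUES (src.salesrep_id, src.area, src.login, l_new_hash);
    END;
  END LOOP;
  COMMIT;
END;
/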

Conclusion
This way you can achieve change data capture for a dimension table built from a multi-table select statement.


Posted by Milind Zodge at 10:30 PM | Comments (4)