
October 2, 2009

Old blog revisited: How to achieve change data capture in an Oracle 9i database without adding triggers on the source table

By: Milind Zodge

In a data warehousing project you need to pull data from different environments. The sources can be different databases, or even a mix of sources such as a database combined with flat files. If the sources are purely databases, chances are that the source and target run different database versions, or even different database products such as SQL Server and Oracle. In this article I am focusing on getting data from an Oracle 9i database.

This article shows another way of pulling changed data without modifying the source table structure or adding triggers on the source table. It is meant for any database developer, data warehouse developer, data warehouse architect, data analyst, manager, ETL architect, or ETL designer who wants to pull changed data for their project.

This article does not cover the details of how to create a materialized view log or a materialized view, nor the fundamentals of how they work; it only explains these objects briefly and shows how they are used in this solution. You can find more information on materialized views and materialized view logs on Oracle's web site.

Consider a case with Oracle 9i as the source database and Oracle 10g as the target database, where we want to pull only changed records from a source table. There are three common ways to do this: first, add modified and inserted date columns to the source table and use them in the ETL script to fetch and process the data incrementally; second, add DML triggers on the source table that insert a record into a stage table; third, use Oracle CDC to fetch the incremental data. In the first two cases you need to modify the table object. If you want to pull data from different systems, this can turn into a time-consuming effort: modifying a table structure or adding triggers may set off a series of meetings, since different departments in a company usually have their own schedules for developing applications and releasing new features. Because the change modifies the object layout, it needs to be prioritized and must go through the standard project lifecycle, including impact analysis. All these activities take time, which can affect your project. If you are in a fix and want to get changed data without modifying the existing table structure or adding any triggers on the existing table, you will find this article helpful.

We needed to pull data from several databases into the data warehouse. These databases ran different versions, so using the asynchronous CDC feature of 10g was not an option. Adding triggers was a huge effort, as it would affect the online transaction processing system. The challenge was to find a way to build an incremental load process for the data warehouse load that would save tremendous processing time.
To overcome this problem we considered two solutions. The first was to store the data in a stage1 table: read a snapshot of the data from the source system, compare it with stage1, and load the changed or new records into stage2, then use stage2 to transform and load the data into the data warehouse. This was a costly effort and not a scalable solution; processing time grows as more data is loaded into the system.

The other solution was to use a materialized view log. The log is populated from the transaction activity on the table and can be used by materialized views. It is a three-step process: the first step is performed in the source database and the other two on the target database.

Step 1: Creating a Materialized View Log in the Source Database
Create a materialized view log on the desired table. A materialized view log must reside in the source database, in the same schema as the table, and a table can have only one materialized view log defined on it.
There are two ways to define the log: on rowid or on primary key. The log's underlying table is named MLOG$_table_name. It can hold primary keys, rowids, or object ids, and can also include other columns to support the fast refresh option of a materialized view created against it.
When changes are made to the master table's data, Oracle records those changes in the materialized view log as defined. The function of the log is to record the DML activity performed on the table.
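As an illustrative sketch, assuming a source table named ORDERS with a primary key (the table name is an assumption, not from the article), the log for Step 1 might be created like this:

```sql
-- Run on the SOURCE (Oracle 9i) database, in the table owner's schema.
-- Oracle creates an underlying MLOG$_ORDERS table to record DML on ORDERS.
CREATE MATERIALIZED VIEW LOG ON orders
  WITH PRIMARY KEY;
```

Choosing WITH PRIMARY KEY (rather than WITH ROWID) keeps the log usable even if rows are moved or the table is reorganized, as long as the primary key is stable.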

Step 2: Creating a Materialized View in the Target Database Using the Log
Create a materialized view based on the materialized view log created above. The materialized view is a replica of the desired table; it behaves like a table and needs to be refreshed periodically. You can define the refresh frequency you need to fast refresh this view, based on the materialized view log, in the target database.
Whenever a DML operation is performed on the table, that activity is recorded in the log in the Oracle 9i source database. We then have a materialized view defined against this log in our Oracle 10g target system. The view pulls in only the changes recorded in the log and applies them to its rows, at whatever refresh frequency you define. This process does not create any physical trigger on the source table; there is a little overhead, however, as the database has to store a row in the log table whenever a change is committed.
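A minimal sketch of Step 2, assuming a database link named SRC9I pointing from the 10g target to the 9i source and the illustrative ORDERS table (both names are assumptions):

```sql
-- Run on the TARGET (Oracle 10g) database.
-- REFRESH FAST applies only the deltas recorded in MLOG$_ORDERS;
-- the 5-minute interval is illustrative, not a recommendation.
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST
  START WITH SYSDATE
  NEXT SYSDATE + 5/1440
AS
  SELECT * FROM orders@src9i;
```

Note that the view is defined against the source table itself, not against the MLOG$ table; Oracle consults the log internally to perform the fast refresh.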

Step 3: Writing Triggers on the Materialized View
Since materialized views behave like tables, we can write triggers on them. In the first two steps we saw how the changed data is pulled from the source system and loaded into the materialized view in the target system. The question now is how to use this view to determine the changes. For this purpose we write database triggers on the materialized view: after insert, after update, and after delete.
These triggers capture which operation was performed on each row. We then define a new table with the same structure as the staging/target table plus a few additional columns: first, an indicator of which operation occurred, whether insert, update, or delete; second, a sequence number. The sequence number is important because a row may be new and also get modified within the same time window, and the sequence number tells you the order of the activity.

Now, whenever a DML operation is performed on the source table, the log captures the new information; the materialized view is then refreshed, at the defined frequency, from the information in the materialized view log. The appropriate trigger fires based on the operation performed on each row and creates a new record in the staging table with the operation mode, I for insert, U for update, or D for delete, along with the activity sequence number.
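Step 3 can be sketched as follows, assuming a materialized view named orders_mv and illustrative staging columns (all names here are assumptions for the example, not from the article):

```sql
-- Run on the TARGET database.
-- Staging table: mirrors the source columns (only two shown)
-- plus the operation flag and the activity sequence number.
CREATE SEQUENCE orders_stg_seq;

CREATE TABLE orders_stg (
  order_id     NUMBER,
  order_total  NUMBER,
  op_mode      CHAR(1),   -- I = insert, U = update, D = delete
  activity_seq NUMBER
);

-- One row-level trigger covering all three DML operations on the view.
CREATE OR REPLACE TRIGGER orders_mv_aiud
AFTER INSERT OR UPDATE OR DELETE ON orders_mv
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO orders_stg
      VALUES (:NEW.order_id, :NEW.order_total, 'I', orders_stg_seq.NEXTVAL);
  ELSIF UPDATING THEN
    INSERT INTO orders_stg
      VALUES (:NEW.order_id, :NEW.order_total, 'U', orders_stg_seq.NEXTVAL);
  ELSE
    INSERT INTO orders_stg
      VALUES (:OLD.order_id, :OLD.order_total, 'D', orders_stg_seq.NEXTVAL);
  END IF;
END;
/
```

The sequence gives a global ordering of activity, so a row that is inserted and then updated in the same refresh window can be replayed in the correct order by the downstream ETL.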

How this works
Whenever data is changed or added in the source table, the materialized view log captures that information. Based on the refresh frequency, the materialized view is refreshed from the log: new records are inserted into the view and existing records are updated. During this DML, the triggers fire and insert rows into the stage table, which can then be used to transfer the data into the data warehouse or data mart.

No matter what you do, there will be some overhead on the database. The solution discussed here has some overhead too; however, it is a handy alternative way to pull changed data.

Posted by Milind Zodge at October 2, 2009 10:30 PM


This is really a fantastic idea and approach. Reading it, I can see parallels and options for other RDBMSs as well. Thanks for sharing.


Posted by: Imal Lofton at January 7, 2007 6:30 PM

A good alternative for loading data into a data warehouse. Thanks for sharing this information.

Posted by: Sheel Shah at January 9, 2007 6:32 PM

I am trying to follow step 2 of creating a materialized view on a materialized view log, e.g. mlog$_author, using the following statement.

create materialized view author_log_mv1 as
select * from mlog$_author;

It succeeds. I do inserts on master table author and data change can be found in mlog$_author. But, when I try to refresh author_log_mv1 using the following statement, the error occurs like this:
ORA-23413: table schema."MLOG$_AUTHOR" does not have a materialized view log

What have I done wrong?

Posted by: Eric Lau at February 2, 2007 10:20 AM

I just forgot to include the refresh statement.

execute dbms_mview.REFRESH('author_log_mv1', 'f');

Posted by: Eric Lau at February 2, 2007 10:22 AM

Hi Milind,
Good approach. I have already asked my team to read your article. Keep it up!!!

Posted by: Pulkit at February 7, 2007 11:57 PM

Eric, sorry for the late reply, I am at the TDWI World Conference.

Create your materialized view using the table itself, i.e. author. Do not use the log. Oracle will use the log internally to fast refresh your view.


Posted by: Milind at February 21, 2007 11:52 PM

It is a fantastic approach to get the delta for any reporting / DW applications.

Appreciate your contribution. Thanks for sharing

Posted by: Jayaprakash at April 19, 2007 8:58 AM

I am a little surprised at this approach. A materialized view log also creates internal triggers on the source table, so what is the benefit (performance-wise) compared to a normal DB trigger?

It would be better to use a solution supported in 9i that is based on redo/archive logs, e.g. Streams (9i CDC does not support redo-log-based extraction).

Posted by: Anonymous at October 8, 2007 7:55 AM

