
February 1, 2007

Ab Initio Demo

The folks from Ab Initio were here Tuesday to give us a demo of their product.

It was a great demo. I have worked with Ab Initio before, and all the great things I remember liking about it were still there, plus a few new features.

After working with both Informatica and Ab Initio, I have to say that I prefer Ab Initio for several reasons.
Ab Initio is easier to work with.
Informatica has the PowerCenter Designer, where you put together the mappings of the data from source to target and enter the business rules for transformation. But to define a source, there is another "tab" you have to switch to; the targets are defined on another tab; and the reusable pieces of maps (called mapplets) are on yet another tab. Then the connection between a logical source definition in the tool and the actual table or file on the machine lives in a completely separate tool, called the Workflow Manager. And when you run the thing, there is yet another application you use to monitor the execution.

If you want to see the data you are operating on, you have to reach for some other tool entirely (I use TOAD for our Oracle data, Teradata SQL Assistant for Teradata, and QMF for our DB2 source). Want to see what each individual component is up to? You are out of luck. *If* you can get the debugger to run, you might be able to track what the components are doing, but beware: if you have too many components on the map, the debugger won't even load. Finally, parallelism in Informatica is left to the physical hardware implementation at the network level; that is, to get parallelism, you need more than one server to run it on.

With Ab Initio, it is all in one place. You drag and drop the components in the window of the Graphical Development Environment (GDE); there are database table components, file components, and myriad transformation components. To define the input columns, you can import DDL or double-click the component and enter it in a text-edit mode, and the same goes for the output definitions. You can also put a URL or the database connection information into a component and actually browse the data you are defining the DDL for. The tool gives you a visual indication when the information in a component is not complete enough for the graph to execute. Then, when you are ready to run, you click a button and it starts executing; no window switching. You can also watch the record counts as each component processes, so you can see which components are working as expected, and where the bottlenecks in your process are. And that's not even mentioning the debugger, which is a quantum leap beyond the execution data.

Another huge advantage of Ab Initio is that parallelism is built in at the graph level. There are components to "partition" and "departition" a data flow, which let the programmer insert parallelism into the process right in the graph (a rough sketch of the pattern follows below). And between graphs, or between checkpoints within graphs, you can land your data flows to "multifile" data sets on disk.
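To make the partition/departition idea concrete, here is a minimal sketch in plain Python (not Ab Initio code) of the pattern those components express: fan one flow out round-robin, transform each partition in parallel, then gather the partitions back into a single flow. The function names and the round-robin scheme are my own illustrative choices, not anything taken from the GDE.

    # Minimal sketch of partition -> parallel transform -> departition.
    # Illustrative only; Ab Initio streams records between components
    # rather than materializing them as Python lists.
    from multiprocessing import Pool

    def partition_round_robin(records, n_ways):
        """Fan one flow out into n_ways parallel flows (a 'partition' step)."""
        flows = [[] for _ in range(n_ways)]
        for i, rec in enumerate(records):
            flows[i % n_ways].append(rec)
        return flows

    def transform(flow):
        """Stand-in for whatever per-partition transform the graph applies."""
        return [rec.upper() for rec in flow]

    if __name__ == "__main__":
        records = ["alpha", "bravo", "charlie", "delta", "echo"]
        flows = partition_round_robin(records, n_ways=2)
        with Pool(processes=2) as pool:
            # Each partition is transformed in its own process, in parallel.
            results = pool.map(transform, flows)
        # 'Departition': gather the parallel flows back into one flow.
        merged = [rec for flow in results for rec in flow]
        print(merged)

In the actual tool the departition step is a choice in itself: as I recall, you can gather (no order guarantee), merge on a sort key, or interleave, depending on whether downstream components care about record order.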

There is even more, especially when it comes to metadata and the tools related to it, but I'm afraid my posting would stray too far toward a rant at that point.

We will not be going whole hog to Ab Initio, because we have significant sunk cost in Informatica. But we will be using Ab Initio as much as we can (sharing an implementation at the Corporate office), especially for metadata.

Posted by RDM at February 1, 2007 9:45 AM
