
One Code or Database Change Has a Huge Impact

 

The Problem

Consider the ongoing difficulty IT workers face when determining the impact of changes made in databases or code.
The problem becomes much more complicated when there are more moving parts, such as:

1. Source code in multiple languages and technologies or repositories.
2. Stored Procedures.
3. Functions and Triggers.
4. Processes such as ETL.
5. Configurations.
6. Integrations.
7. Cross Server or Database dependencies.
8. Replication.
9. All Applications.
10. Reports.
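
To give a feel for the scale of the problem: even inside a single database, just finding out what depends on a table means going to the engine's own catalog. The sketch below is only an illustration. It assumes a SQL Server instance, the pyodbc driver, and made-up connection and table names, and it lists the objects in that one database that reference the table. Everything else on the list above (application code, ETL, reports, other servers) is invisible to it.

```python
# Rough sketch: list the objects inside ONE SQL Server database that reference
# a given table, using the engine's own dependency catalog. Connection string
# and table name are placeholders. Anything living outside this database
# (application code, ETL jobs, reports, other servers) will not show up here.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
TABLE_NAME = "dbo.Customer"  # hypothetical table we intend to change

# sys.sql_expression_dependencies is SQL Server's record of which objects
# (procedures, views, functions, triggers) reference which other objects.
QUERY = """
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
    OBJECT_NAME(d.referencing_id)        AS referencing_object,
    o.type_desc                          AS referencing_type
FROM sys.sql_expression_dependencies AS d
JOIN sys.objects AS o ON o.object_id = d.referencing_id
WHERE d.referenced_id = OBJECT_ID(?)
ORDER BY referencing_schema, referencing_object;
"""

with pyodbc.connect(CONN_STR) as conn:
    rows = conn.cursor().execute(QUERY, TABLE_NAME).fetchall()

for schema, name, type_desc in rows:
    print(f"{schema}.{name} ({type_desc})")
```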

Of course, most IT shops try to determine the impact of changes before making them.

This is not a trivial task in most cases. Determining the impact of changes is usually a time-consuming, manual process. It is followed by extensive testing and, through much iteration, we hopefully discover all the points of impact and can proceed to push the changes into production.

In extreme cases, I have seen companies crippled by the complexity of their databases and processes. In other cases, a lot of money is spent throwing data analysis resources at the problem in the hope of overcoming it.

In yet another common scenario, the problem is so bad that there is high turnover among skilled IT folks, who give up on a situation that seems impossible to manage.

At a minimum, the scenario above accounts for a major chunk of the time highly skilled IT personnel spend on research-oriented tasks. Nearly every technology-based organization deals with these problems in one form or another.

What is the Solution?

It has been widely accepted, and sought after, for years now that a metadata repository (a Data Dictionary on steroids) is needed to provide a centralized place of reference for metadata.

This repository can be searched, and impact analysis reports can be generated to assist teams with their changes or new requirements.
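
As a rough illustration of what I mean, here is a tiny, hypothetical version of such a repository: a couple of tables holding assets (tables, procedures, ETL jobs, reports) and the dependency edges between them, plus a query that answers "if I change this, what is downstream?" The schema and sample rows are invented; in real life the catalog would be populated by automated scanners, not by hand.

```python
# Minimal illustrative metadata repository: assets plus dependency edges,
# with a recursive query that reports everything impacted by a change.
# Schema and sample data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE asset (
    id   INTEGER PRIMARY KEY,
    kind TEXT NOT NULL,          -- 'table', 'procedure', 'etl', 'report', ...
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE dependency (        -- source_id depends on target_id
    source_id INTEGER NOT NULL REFERENCES asset(id),
    target_id INTEGER NOT NULL REFERENCES asset(id)
);
""")

assets = [
    (1, "table",     "dbo.Customer"),
    (2, "procedure", "dbo.usp_LoadCustomer"),
    (3, "etl",       "NightlyCustomerFeed"),
    (4, "report",    "CustomerChurnReport"),
]
deps = [(2, 1), (3, 2), (4, 1)]  # proc reads table, ETL calls proc, report reads table
db.executemany("INSERT INTO asset VALUES (?, ?, ?)", assets)
db.executemany("INSERT INTO dependency VALUES (?, ?)", deps)

def impact_report(asset_name):
    """Return every asset that directly or transitively depends on asset_name."""
    return db.execute("""
        WITH RECURSIVE impacted(id) AS (
            SELECT id FROM asset WHERE name = ?
            UNION
            SELECT d.source_id FROM dependency d JOIN impacted i ON d.target_id = i.id
        )
        SELECT a.kind, a.name FROM asset a JOIN impacted ON a.id = impacted.id
        WHERE a.name <> ?
    """, (asset_name, asset_name)).fetchall()

for kind, name in impact_report("dbo.Customer"):
    print(f"impacted: {kind} {name}")
```

Running the sketch lists the stored procedure, the ETL feed, and the report as impacted by a change to the one table; that is the impact analysis report, in miniature.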

Why doesn’t everyone have one?

Reason number one: The tools on the market are too expensive and difficult to justify.

Here are a couple of the reasons for this:
• Over and over again, I see huge requirements and scope. These efforts die on the vine more often than not.
• Because of the demand to bundle too much functionality, the price for these tools starts at six figures, mostly due to the consulting and customization needed.

Reason number two: People seek out the perfect solution and, in its absence, give up and do nothing.

Reason number three: Trying to sell metadata to the business does not usually go well.

I’m not saying that a well-planned, phased implementation of an enterprise-wide solution is the wrong way to think, or that it isn’t needed. But we really need to break the problem down into manageable chunks and phase in capabilities over time, which is best practice for any project.

The Fast Track Approach

The classic approaches to metadata management just do not address the key problem in a turnkey fashion; they have huge start-up requirements that are simply unnecessary.

Simply index and catalog as much of the metadata from databases, applications, and processes as you can, and provide search and impact analysis capabilities on top of it. That is the number one problem, and that is the solution to it.
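
To show how turnkey the starting point can be, here is one more hypothetical sketch: harvest table metadata from the standard INFORMATION_SCHEMA views and scan a source tree for references to those tables, producing a flat catalog you can already search. The connection string, paths, and file extensions are placeholders, and a real indexer would go much further (stored procedure text, ETL packages, report definitions), but the shape of the work is the same.

```python
# Fast-track sketch: harvest table names from a database via the standard
# INFORMATION_SCHEMA views, scan a source tree for references to each table,
# and keep the results as a flat, searchable catalog. Connection string,
# paths, and file extensions are placeholders.
import pathlib
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
SOURCE_ROOT = pathlib.Path("C:/repos")       # hypothetical code checkout(s)
EXTENSIONS = {".sql", ".cs", ".py", ".xml"}  # file types worth scanning

with pyodbc.connect(CONN_STR) as conn:
    tables = [row.TABLE_NAME for row in conn.cursor().execute(
        "SELECT DISTINCT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES")]

catalog = []  # (table, file) pairs: which files mention which tables
for path in SOURCE_ROOT.rglob("*"):
    if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
        continue
    text = path.read_text(errors="ignore")
    for table in tables:
        if table in text:
            catalog.append((table, str(path)))

# A crude impact search: which files reference the Customer table?
for table, path in catalog:
    if table == "Customer":
        print(path)
```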