
How not to do data migrations

As banks and financial institutions adopt artificial intelligence (AI) and data mining, and as customer expectations grow ever more demanding, outdated legacy systems that struggle to keep up must be replaced. That replacement involves the obvious work of building the new product, but also the less glamorous work of extracting, transforming and loading data from one system to another, a process known as data migration.
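The extract-transform-load step mentioned above can be sketched in miniature. Everything here is illustrative, not from the article: assume a hypothetical legacy accounts table exported as CSV with terse column names, day-first dates and single-letter status codes, and a target schema that wants clean identifiers, ISO dates and readable statuses.

```python
# Minimal ETL sketch: legacy CSV in, cleaned CSV out.
# Column names, date format and status codes are assumed for illustration.
import csv
from datetime import datetime

STATUS_MAP = {"A": "active", "C": "closed", "D": "dormant"}  # assumed legacy codes

def extract(path):
    """Yield raw rows from the legacy export."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    """Map one legacy row onto the target schema."""
    return {
        "account_id": row["ACC_NO"].strip(),
        "opened": datetime.strptime(row["OPEN_DT"], "%d/%m/%Y").date().isoformat(),
        "status": STATUS_MAP.get(row["STATUS"], "unknown"),  # flag unmapped codes
    }

def load(rows, out_path):
    """Write transformed rows to the target file; return how many were loaded."""
    rows = list(rows)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["account_id", "opened", "status"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Note that the transform deliberately maps unknown status codes to "unknown" rather than dropping the row, so bad data surfaces in the target where it can be counted and fixed.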

According to Gartner, 83 per cent of data migrations either fail outright or exceed their allotted budgets and implementation schedules. There are many reasons for this, and the report highlights one in particular: Ops teams lack the flexibility to respond to the unexpected, such as transformation flaws or missing data.

Data migration is more complex than it sounds. If an organisation focuses purely on the new system's capabilities, it may run into problems. Data from the old system is often a mixture: some of it unused, some captured only because the system required it, and some just plain wrong. The data structure typically reflects what suited the original system, and over time that structure has constrained the product. When the time comes to move to a new system, the hitherto blocked backlog of business process improvements can finally be unlocked.

Making structural data and business process changes is a core software engineering activity, not simply a data migration problem. Because of this, proper software engineering discipline is required: testing, regular deployments and clearly defined small batches of changes are all needed to do a 'data migration' well.
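The "small batches of changes" idea can be sketched as an ordered list of named migration steps, each applied exactly once and recorded, which is the pattern behind schema-migration tools such as Flyway and Alembic. The step names and the in-memory history set below are illustrative; in practice the history would live in a table in the target database.

```python
# Sketch of applying migration work as small, ordered, named steps,
# recording which have already run so the whole pipeline is safe to re-run.
# Step names and bodies are illustrative, not from the article.

def apply_step(name, fn, history):
    """Run one migration step unless it has already been applied."""
    if name not in history:
        fn()
        history.add(name)

def run_all(steps, history):
    """Apply every outstanding step, in order; return the applied set."""
    for name, fn in steps:
        apply_step(name, fn, history)
    return sorted(history)
```

Because each step is recorded, re-running the migration after adding one more small step only executes the new work, which is what makes daily (or more frequent) migration runs practical.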

Working with an Agile vendor can help ensure the project succeeds, but it may require cultural change and the adoption of Agile methodology.


Without a full picture

If an organisation attempts a data migration based on anonymised data or a limited subset, it will almost certainly fail. The complete, unredacted data being migrated must be available to the software engineering team from the very start. Migrations need to run at least daily, with tests covering all the edge cases and validating that the before and after states are as expected.
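The before-and-after validation described above can start very simply: compare record counts and key sets between source and target on every run. This is a minimal sketch assuming both sides are available as lists of dicts keyed by a hypothetical `account_id` field; real reconciliation would add per-field checksums and invariants.

```python
# Sketch of before/after reconciliation for one migration run.
# The key field name is an assumption for illustration.

def reconcile(source_rows, target_rows, key="account_id"):
    """Return a list of human-readable discrepancies; empty means clean."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    for missing in sorted(src_keys - tgt_keys):
        issues.append(f"missing in target: {missing}")
    for extra in sorted(tgt_keys - src_keys):
        issues.append(f"unexpected in target: {extra}")
    return issues
```

Run against the full, unredacted dataset every day, a check like this turns silent data loss into a visible, fixable failure.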


Without regular testing

Because of Waterfall's influence, many Ops teams see a new system as something to be developed, tested and deployed in one go, as a project with an end date. This is just plain wrong. In reality, the system must be improved continually; without continual improvement, the new system will die an ugly death, just like the old one.

By treating the migration as a continuous software engineering problem, we can embed testing into the migration process itself. With every iteration, we can improve the quality and prove the completeness of the new product build. In many respects, we move the team from concentrating on a 'project' to owning and managing a product. Automating everything allows us to get a first cut of the new product into the hands of private beta users, and ultimately end customers, more rapidly.
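Embedding testing into the migration process itself might look like the sketch below: every run of the pipeline re-extracts, re-transforms and then executes a suite of named checks, failing loudly rather than accumulating silent drift. The function names and the shape of the checks are assumptions for illustration.

```python
# Sketch of a repeatable migration pipeline with built-in checks.
# Each check is a (name, predicate) pair run over the transformed rows.

def pipeline(extract_fn, transform_fn, checks):
    """Run the migration end to end; report row count and failed checks."""
    rows = [transform_fn(r) for r in extract_fn()]
    failures = [name for name, check in checks if not check(rows)]
    return {"rows": len(rows), "failures": failures}
```

Because the whole run is a single function, it can be scheduled daily from CI, and a non-empty `failures` list becomes the signal that stops a bad build reaching beta users.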


Not taking risks

There are typically two types of Ops people: the maintainers and the modernisers. The first group cares mainly about keeping the current system running and will only introduce modifications to keep it alive. For this group, success is judged on whether any incidents have occurred over a given period.

In contrast, modernisers regularly make changes to the applications they look after. Making changes means they are frequently doing deployments and, therefore, running the risk that something might go wrong, such as incorrect data mapping or configuration errors. However, with the right expertise on board, these risks can be mitigated. Remember, evolution pays dividends: a regularly updated frontend is more likely to meet changing customer requirements and stay secure.

Don’t give yourself a headache with a half-cocked migration project. Instead, regularly build, test and deploy the new product, combined with a fully automated data migration pipeline, so that your trusted users can play with the new product as it’s being built.

Accelerate your data or systems migration by visiting the Catapult website and finding out more about its Agile product management or read this case study on how it helped with a server migration for a major insurer.