
July 25, 2021

Migration of an enterprise product to AWS Cloud – Part 3

This blog series comprises four parts:

  1. Part 1 – Choosing the cloud provider
  2. Part 2 – Challenges
  3. Part 3 – Roadmap to migration (this entry)
  4. Part 4 – Learnings

We hope you find this blog useful as you plan migrations from on-premise systems to the cloud and vice versa. Do reach out to us at [email protected] for help with your migration needs.


Part 3 – Roadmap to migration

In this third part of the series, we explore the steps in the decision-making process we followed to identify an appropriate solution and create the migration plan.

Technology snapshot of the existing application

Backend technology: Node.js, Express.js deployed on provisioned VMs
UI: Angular 7 deployed on a proprietary cloud app delivery solution
ETL jobs: Proprietary ETL solution
Data integration/client systems: Node-RED, Kafka-based service
APIs: API Gateway + developer portal

To plan the migration, we did the following:

  1. Study the current system and components in detail.
  2. Divide the product into modules that could be individually prioritized and completed.
  3. Identify the high-risk areas that need a proof of concept.
  4. Identify replacements for existing components and the resultant changes to the codebase.
  5. Create a roadmap with early integration points to verify successful completion of a module.

Proofs of concept

To gain confidence in the architecture, the following proofs of concept were done upfront (illustrative sketches for each follow the list):

  1. DB2 movement: Deploy DB2 Community Edition on an AWS VM and move data from the currently hosted DB2 to it. This was a complex task, since we had no access to backups of the database, nor was it simple to take them manually. To achieve this, we connected to the DB2 server as a remote catalog and took a backup of each table in IXF format. These backups were then restored on the AWS-hosted DB2 server.
  2. Rewriting ETL processes: We evaluated AWS Glue as a solution and then settled on building a custom solution using Node.js and a set of stored procedures that did the job elegantly.
  3. Data ingestion: For data ingestion into the system, we tested a solution based on SNS and SQS queues. This let us push data to one SNS topic and deliver it to the production and UAT environments simultaneously through multiple queue subscriptions.
  4. Developer portal: We implemented AWS API Gateway to enable third-party access to our APIs, and did a sample deployment of the AWS serverless developer portal, which creates a catalog of the APIs and provides access to consumers.
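
For the DB2 movement (item 1), the sketch below shows the per-table IXF export idea. It assumes the source database has already been catalogued locally as a remote node, that the DB2 client environment is initialised on the machine running it, and that a ./backup directory exists; the SRCDB alias and the table names are illustrative.

    const { execSync } = require('child_process');
    const fs = require('fs');

    // Illustrative table list; in practice it can be read from SYSCAT.TABLES.
    const tables = ['APP.CUSTOMERS', 'APP.ORDERS', 'APP.EVENTS'];

    // Build one DB2 CLP script so CONNECT and the EXPORTs share a session.
    const script = [
      `CONNECT TO SRCDB USER ${process.env.DB2_USER} USING ${process.env.DB2_PASS};`,
      ...tables.map((t) => `EXPORT TO backup/${t}.ixf OF IXF SELECT * FROM ${t};`),
      'CONNECT RESET;',
    ].join('\n');
    fs.writeFileSync('export_tables.clp', script);

    // -t: statements end with ';', -v: echo each command, -f: read from file
    execSync('db2 -tvf export_tables.clp', { stdio: 'inherit' });

On the AWS-hosted server, each IXF file can then be restored with a corresponding IMPORT FROM ... OF IXF statement.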
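
For the ETL rewrite (item 2), the pattern was Node.js as the orchestrator with the transforms living in stored procedures. A minimal sketch using the ibm_db driver follows; the connection details and the ETL.DAILY_LOAD procedure are hypothetical.

    const ibmdb = require('ibm_db');

    const connStr =
      'DATABASE=APPDB;HOSTNAME=db.internal;PORT=50000;PROTOCOL=TCPIP;' +
      `UID=${process.env.DB2_USER};PWD=${process.env.DB2_PASS};`;

    ibmdb.open(connStr, (err, conn) => {
      if (err) throw err;
      // The job is just a CALL; the transform logic stays in the database.
      conn.query('CALL ETL.DAILY_LOAD(?)', ['2021-07-25'], (err, result) => {
        if (err) console.error('ETL job failed:', err);
        else console.log('ETL job finished:', result);
        conn.close(() => {});
      });
    });

Keeping the heavy lifting in stored procedures meant the jobs ran next to the data, with Node.js handling only scheduling and error reporting.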
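
For data ingestion (item 3), the SNS-to-SQS fan-out is a few SDK calls. The sketch below uses the AWS SDK for JavaScript v3 with placeholder ARNs; the subscriptions are shown inline for clarity, though in practice they belong in infrastructure code, and each queue needs a policy that allows the topic to send to it.

    const { SNSClient, PublishCommand, SubscribeCommand } = require('@aws-sdk/client-sns');

    const sns = new SNSClient({ region: 'us-east-1' });
    const TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:ingest-topic'; // placeholder

    async function main() {
      // One-time wiring: both environment queues subscribe to the same topic.
      for (const queueArn of [
        'arn:aws:sqs:us-east-1:123456789012:ingest-prod', // placeholder
        'arn:aws:sqs:us-east-1:123456789012:ingest-uat',  // placeholder
      ]) {
        await sns.send(new SubscribeCommand({ TopicArn: TOPIC_ARN, Protocol: 'sqs', Endpoint: queueArn }));
      }

      // One publish now delivers a copy to every subscribed queue, so
      // production and UAT receive the same ingested record.
      await sns.send(new PublishCommand({
        TopicArn: TOPIC_ARN,
        Message: JSON.stringify({ deviceId: 'd-001', reading: 42 }),
      }));
    }

    main().catch(console.error);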
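
For the developer portal (item 4), the consumer experience is simple: an API published behind a usage plan is called with the key issued through the portal in the x-api-key header. The endpoint below is a placeholder, and the snippet relies on the fetch built into Node 18+.

    // Placeholder endpoint; the key is issued to the consumer via the portal.
    const ENDPOINT = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/v1/readings';

    async function fetchReadings() {
      const res = await fetch(ENDPOINT, {
        headers: { 'x-api-key': process.env.API_KEY },
      });
      if (!res.ok) throw new Error(`API returned ${res.status}`);
      return res.json();
    }

    fetchReadings().then(console.log).catch(console.error);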

Once these POCs were successful, we could plan to lift and shift the rest of the application into AWS using standard services: EC2 instances, Application Load Balancers, API Gateway, SNS topics, SQS queues, S3 buckets, and the CloudFront CDN.

Solution Diagram

Solutions

Component: Current → AWS
Database: DB2 (hosted) → Production – DB2 image from Midvision
ETL jobs: Proprietary → Node.js/database stored procedures
UI: Custom deployment framework → S3 bucket + CloudFront CDN
Third-party integration: Kafka-based service for ingesting data → SNS + SQS combination
Monitoring: Custom framework/self-hosted → Datadog/CloudWatch


Key benefits

  1. A more stable application with visibility into the state of the cloud servers and platforms through a single pane of glass.
  2. Greater than 50% reduction in overall operational cost, with around a 25% improvement in application performance and turnaround times.
  3. Deployment through Terraform scripts, with the ability to scale resources up or down as needed.
  4. Control over database backups and recovery process.
  5. More control over ETL jobs.
  6. Cloud-agnostic architecture.

In the next and final installment of this blog series, we will discuss our learnings from this migration journey.