Migrating a Multi-Worker Solution to AWS

Omid Eidivandi (XaaXaaX)
Oct 3, 2021

Recently I was involved in a complex project consisting of multiple .NET REST APIs, about 10 .NET batch jobs, a central SQL database, and about 10 domain-specific SQL databases. For performance reasons there were also about 11 SQL Server instances consulted only by the front-end web applications, to spread the read load off the domain-specific databases; these share the same schema as the domain-specific databases and are synchronized to them by batch jobs.

At a glance it was a huge schema. After some analysis we decided to do the migration and remove the parts that were no longer needed, while preserving business continuity by running the on-premises platform and AWS side by side as a hybrid solution.

Site-to-Site VPN:

The first step was to connect the two networks; a Site-to-Site VPN achieved this goal.
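For illustration only, a minimal boto3 sketch of the pieces involved; the VPC id, public IP, and ASN below are placeholders, not values from the project:

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: the on-premises side of the tunnel
# (public IP and ASN are illustrative placeholders).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]

# Virtual private gateway: the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",
                       VpnGatewayId=vgw["VpnGatewayId"])

# The Site-to-Site VPN connection itself (static routing here).
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)
```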

Serverless:

Since the part of the business involved in this architecture was a great candidate for an event-driven approach, we designed a serverless, fan-out architecture using SNS and SQS.

Using this design we could keep the existing system as is while keeping the load on the on-premises APIs at a tolerable level.
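As a sketch of that fan-out wiring in boto3: one SNS topic with per-domain SQS subscriptions filtered on message attributes. The topic, queue, and attribute names here are assumptions for illustration:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# One topic fans out to several domain queues (names are illustrative).
topic_arn = sns.create_topic(Name="business-events")["TopicArn"]

queue_url = sqs.create_queue(QueueName="orders-events")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue; the filter policy delivers only matching events,
# so each domain queue sees just its own traffic.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={
        "FilterPolicy": '{"domain": ["orders"]}',
        "RawMessageDelivery": "true",
    },
)
# (The queue also needs an access policy allowing this topic to send to it.)
```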

Internal Entrypoint:

For internal use we put an Application Load Balancer in front of a Lambda function that expects an API key to identify the event producer, validates the event, and publishes it to SNS. This let us migrate the producers to the new entrypoint one by one.
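A minimal sketch of such an ALB-backed handler. The x-api-key header, key-to-producer map, topic ARN, and eventType/domain fields are all hypothetical names for illustration, not details from the project:

```python
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:business-events"  # illustrative
API_KEYS = {"key-producer-a": "producer-a"}  # illustrative key -> producer map

def handler(event, context):
    # ALB passes HTTP headers through; the API key identifies the producer.
    producer = API_KEYS.get(event["headers"].get("x-api-key", ""))
    if producer is None:
        return {"statusCode": 403, "statusDescription": "403 Forbidden",
                "headers": {"Content-Type": "text/plain"}, "body": "forbidden"}

    body = json.loads(event["body"])
    # Minimal validation; real checks would be schema-based.
    if "eventType" not in body:
        return {"statusCode": 400, "statusDescription": "400 Bad Request",
                "headers": {"Content-Type": "text/plain"}, "body": "invalid event"}

    # Message attributes drive the SNS -> SQS filter policies downstream.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps(body),
        MessageAttributes={
            "domain": {"DataType": "String",
                       "StringValue": body.get("domain", "unknown")},
            "producer": {"DataType": "String", "StringValue": producer},
        },
    )
    return {"statusCode": 202, "statusDescription": "202 Accepted",
            "headers": {"Content-Type": "text/plain"}, "body": "accepted"}
```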

External Entrypoint:

For external consumers such as Salesforce, an API Gateway was the best solution: it offers integrated WAF, authorization, TLS, and many other features, and it is a highly available, redundant service. The gateway was backed by the same Lambda function.
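One reason a single function can sit behind both entrypoints is that ALB events are easy to tell apart from API Gateway proxy events, since only ALB puts an "elb" key in the request context. A small check like this is enough:

```python
def is_alb_request(event):
    # ALB target-group events carry requestContext.elb;
    # API Gateway proxy events do not.
    return "elb" in event.get("requestContext", {})
```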

Broadcasting:

To broadcast the events for processing, they were published to SQS queues based on message attributes; each queue was backed by a Lambda function that verifies the events and forwards them to the on-premises REST API.
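A sketch of such a queue-backed consumer, assuming a hypothetical on-premises endpoint reachable over the VPN and the same illustrative eventType check as above:

```python
import json
import urllib.request

# Reachable over the Site-to-Site VPN; the URL is an illustrative placeholder.
ONPREM_API = "https://internal.example.com/events"

def handler(event, context):
    # Lambda receives SQS messages in batches of records.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # Minimal verification before forwarding (illustrative check).
        if "eventType" not in payload:
            continue
        req = urllib.request.Request(
            ONPREM_API,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # On failure this raises, the message returns to the queue,
        # and SQS retries delivery.
        urllib.request.urlopen(req, timeout=10)
```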

Up to this point the existing on-premises platform kept working as before, so all the batch jobs continued doing their work.

Database:

Removing all these databases, or migrating them to AWS, would have added a lot of challenges for many teams. We wanted to use DynamoDB, but our central database did not hold all the necessary data; a lot of it was fetched from the domain-specific databases at runtime.

DMS was really simple and performed well for migrating a SQL database to a NoSQL database, and using CDC (Change Data Capture) we got real-time replication.
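For illustration, creating such a full-load-plus-CDC replication task with boto3 might look like this; all ARNs, identifiers, and the schema name are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Full load of the existing rows, then ongoing CDC replication
# (ARNs are illustrative; the endpoints point at SQL Server and DynamoDB).
dms.create_replication_task(
    ReplicationTaskIdentifier="sql-to-dynamodb",
    SourceEndpointArn="arn:aws:dms:...:endpoint:sqlserver-source",
    TargetEndpointArn="arn:aws:dms:...:endpoint:dynamodb-target",
    ReplicationInstanceArn="arn:aws:dms:...:rep:instance",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-domain-tables",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```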

For preparing the data we had two options: an indexed view, or pre-computing the data. The domain-specific databases held a lot of data, including many enumerator-backed integer fields for which we needed the string values, and we wanted to avoid adding many fields that would be useless 99% of the time. So we decided to use a DynamoDB MAP attribute to hold all this information.

We added the necessary SQL Server computed columns to translate all of this data, and a stored procedure to prepare the Map attribute.

Finally we added a trigger fired on new insertions to compute the data; for the existing rows, we first ran the stored procedure nightly.
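The resulting DynamoDB items would then carry the rarely-read, pre-computed values in a single MAP attribute. A sketch with illustrative table, key, and field names (none of them come from the project):

```python
import boto3

table = boto3.resource("dynamodb").Table("central-data")  # illustrative name

# The rarely-used, pre-computed values (enum labels included) live in one
# MAP attribute instead of dozens of top-level fields.
table.put_item(Item={
    "pk": "ORDER#12345",
    "status": "SHIPPED",              # enum label, not the integer code
    "details": {                      # stored as a DynamoDB MAP
        "paymentMethod": "CREDIT_CARD",
        "carrier": "DHL",
        "originalStatusCode": 4,
    },
})
```

The boto3 resource layer serializes nested dicts to the MAP type automatically, so the "details" attribute stays a single map that most reads can simply ignore.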

Migrating the On-Premises API:

The principal REST API was consulted by the clients' back office, and we wanted to expose it so clients could access their statistics and data through a RESTful endpoint. We could now migrate it to AWS and use DynamoDB.

An edge-optimized API Gateway achieved this goal, with a Lambda-based serverless design to interact with DynamoDB.
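A minimal sketch of such a handler behind an API Gateway proxy integration, assuming an illustrative table name, path parameter, and key schema:

```python
import json
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("central-data")  # illustrative name

def handler(event, context):
    # With proxy integration, the client id comes in as a path parameter.
    client_id = event["pathParameters"]["clientId"]
    result = table.query(
        KeyConditionExpression=Key("pk").eq(f"CLIENT#{client_id}")
    )
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }
```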


Omid Eidivandi (XaaXaaX)

I'm a technical lead and solution/software architect with more than 20 years of experience in the IT industry. I'm a fan of cloud and serverless in practice.