Three-Tier Applications in the Cloud

Omid Eidivandi (XaaXaaX)
6 min read · Oct 3, 2021

In today's world we mostly talk about cloud and cloud-first: we are migrating our on-premises workloads to the cloud, be it Azure, AWS, or GCP, and in practice roughly half of those workloads are web applications exposed over the internet to be consumed publicly by users.

Web Architecture:

Focusing on traditional web architecture, most applications use a three-tier design consisting of a web tier, an application tier, and a database tier.


  • Web tier: the public tier, exposed over the internet and reachable by anyone who knows the URL. It commonly faces XSS, LFI, MITM, and path-traversal attacks, as well as DoS/DDoS attacks: volumetric attacks (UDP/ICMP floods), protocol attacks (SYN floods), and application-layer attacks (GET/POST floods, low-and-slow attacks). There are mitigation techniques, as well as challenges in applying them, typically a Web Application Firewall or some form of IDS/IPS. Azure performs automatic DDoS mitigation and IDS for Microsoft IP address ranges, and AWS offers a basic level of this protection with its Shield service.
  • Application tier: the layer of application logic, where the processing happens — domain validation, database access, and so on. It is more sensitive than the web tier, and it is reachable only from the web tier, usually over statically assigned flows.
  • Database tier: the persistence part of the architecture, where data is stored and statistics are computed from it. It has the highest sensitivity of the three tiers and must be the most strongly protected.
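To make the separation concrete, here is a minimal sketch of how a request might pass through the three tiers, with each tier talking only to the one below it. All names are illustrative, not from any real framework:

```python
# Minimal three-tier sketch: only the web tier is reachable from outside,
# and only the application tier may touch the database tier.

# --- Database tier: pure persistence, no business rules ---
class UserStore:
    def __init__(self):
        self._rows = {}                  # in-memory stand-in for a real database

    def save(self, user_id, record):
        self._rows[user_id] = record

    def load(self, user_id):
        return self._rows.get(user_id)

# --- Application tier: domain validation and business logic ---
class UserService:
    def __init__(self, store):
        self._store = store              # sole gateway to the database tier

    def register(self, user_id, email):
        if "@" not in email:             # domain validation lives here, not in the web tier
            raise ValueError("invalid email")
        self._store.save(user_id, {"email": email})
        return {"id": user_id, "email": email}

# --- Web tier: public entry point, input sanitation, no direct DB access ---
def handle_register(service, raw_params):
    user_id = str(raw_params.get("id", "")).strip()
    email = str(raw_params.get("email", "")).strip()
    if not user_id or not email:
        return {"status": 400, "body": "missing parameters"}
    try:
        return {"status": 201, "body": service.register(user_id, email)}
    except ValueError as exc:
        return {"status": 422, "body": str(exc)}

service = UserService(UserStore())
print(handle_register(service, {"id": "42", "email": "jane@example.com"})["status"])  # 201
```

The point of the layering is visible in the code: the web tier never sees the store, so hardening the public surface does not expose the data behind it.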

Well-Architected Design:

In real-life scenarios there are considerations and recommendations for a three-tier design, organized around the five pillars of a well-architected framework, which both Azure and AWS define along the same lines: Cost Optimization, Operational Excellence, Reliability, Performance Efficiency, and Security.

Cost: the cloud providers have developed many cost-optimization features, but this pillar is best served by a Build-Measure-Learn practice: build, monitor, and optimize your solution iteratively until you reach the best fit. Best practices are worth following, but some solutions are exceptions of their kind and need this principle for optimization. AWS popularized pay-as-you-go pricing, and today nearly all cloud providers offer it, helping you move to OpEx instead of retaining a CapEx strategy; with on-demand pricing you also benefit from competitive rates. Azure and AWS both offer reserved and spot VMs/instances, which can reduce your costs by roughly 35–40% for reserved capacity and 70–90% for spot. I won't dive deeply into this pillar, as it is not the focus of this article.
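The discount ranges above translate directly into a monthly OpEx comparison. A small sketch, using a made-up on-demand hourly price purely for illustration:

```python
# Rough monthly cost comparison using the discount ranges quoted above
# (reserved ~35-40% off, spot ~70-90% off). The $0.10/hour price is invented.
HOURS_PER_MONTH = 730

def monthly_cost(on_demand_hourly, instances, discount=0.0):
    """Monthly compute cost after an optional fractional discount."""
    return on_demand_hourly * (1 - discount) * instances * HOURS_PER_MONTH

on_demand = monthly_cost(0.10, instances=4)                 # baseline pay-as-you-go
reserved  = monthly_cost(0.10, instances=4, discount=0.35)  # low end of reserved savings
spot      = monthly_cost(0.10, instances=4, discount=0.80)  # mid-range spot savings

print(f"on-demand: ${on_demand:.2f}/month")
print(f"reserved : ${reserved:.2f}/month ({100 * (1 - reserved / on_demand):.0f}% saved)")
print(f"spot     : ${spot:.2f}/month ({100 * (1 - spot / on_demand):.0f}% saved)")
```

Running the same comparison against your own measured baseline is exactly the "Measure" step of Build-Measure-Learn.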

Ops (Operations): operations should be automated and monitored, any kind of anomaly should raise an alert, and the whole system must be observable. This pillar is about business continuity and time to market. Azure and AWS both offer solutions here: on Azure you have Application Insights for application and solution monitoring, and with Azure DevOps (formerly VSTS) and ARM templates you can automate deployments; AWS has CloudFormation, CodePipeline, and CloudWatch. Both Azure and AWS also have a concept called auto-scaling, where an auto-scaler makes your application elastic.
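The scaling decision both providers' auto-scalers make can be sketched with the usual target-tracking arithmetic: scale capacity in proportion to how far a metric is from its target. A simplified, illustrative version:

```python
import math

def desired_capacity(current_instances, metric_value, target_value,
                     min_size=1, max_size=10):
    """Target-tracking style scaling decision:
    desired = ceil(current * metric / target), clamped to [min_size, max_size].
    This is a simplified sketch of the proportional rule, not a provider API."""
    desired = math.ceil(current_instances * metric_value / target_value)
    return max(min_size, min(max_size, desired))

# CPU at 80% against a 50% target with 4 instances -> scale out.
print(desired_capacity(4, metric_value=80, target_value=50))  # 7
# CPU at 20% against a 50% target -> scale in, freeing unused capacity.
print(desired_capacity(4, metric_value=20, target_value=50))  # 2
```

The clamp matters operationally: the minimum keeps the service available during quiet periods, and the maximum caps cost during unexpected spikes.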

Resiliency: your solution needs to be resilient, meaning it can recover from any failure, ideally through automated actions. It is closely tied to availability, since resiliency and high availability are both characteristics of a reliable solution. Cloud architectures are distributed, so failures are a given: a part of the system can become unavailable, any failure can cascade through the system, any shared hardware can fail, and so on. In Azure, resiliency can be achieved with Azure SQL Database, Cosmos DB, and Azure Storage through their replication strategies; SQL Database also provides primary/secondary replicas, where all requests are redirected automatically to a secondary, which acts as the new primary while you resolve the disaster on the old one. Azure also introduces the concept of fault domains, where your data is placed on different hardware within the same data center, and availability zones, which make your solution more resilient and reduce the impact of hardware failure by placing your VMs in different data centers. Azure Managed Disks are another Azure-specific resiliency feature: your data is replicated across multiple pieces of hardware to mitigate data loss in a disaster or failure situation. AWS uses availability zones to spread your instances across multiple geographically separated data centers; S3, DynamoDB, and RDS gain resiliency from their replicated nature and can go further with multi-region replication. For data resiliency, EBS snapshots and volume backups help you achieve lower RPO and RTO in case of disaster. An Azure or AWS load balancer can be used to distribute load to the available servers in one or more availability zones to achieve high availability.
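Resiliency is also an application-code concern: transient failures between tiers should be absorbed rather than cascaded. A common pattern is retrying with exponential backoff and jitter; here is a minimal sketch with a stand-in flaky dependency:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.05, max_delay=1.0):
    """Retry a flaky call with exponential backoff and full jitter,
    a standard pattern for absorbing transient failures in distributed systems."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                                # out of retries: surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))     # full jitter avoids thundering herds

# Demo: a stand-in dependency that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # ok
```

The jittered delay is what stops a fleet of retrying clients from hammering a recovering service in lockstep, which would otherwise turn one transient failure into a cascading one.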

We will discuss disaster recovery, RTO/RPO, SLAs, and availability and durability calculations in another article.

Performance (Scalability): better described as elasticity — the property of a design whereby load can spike instantly and drop just as quickly, while the solution handles the peak without any interruption. AWS and Azure share the same mechanism for absorbing peaks, called auto-scaling: this managed service adds new VMs/instances within roughly 1 to 5 minutes based on a desired state configured against CPU or other resource usage, and if your load decreases it removes the extra VMs/instances and unused capacity to reduce your costs. Using CDN services like Azure CDN and Amazon CloudFront, you can also optimize performance by caching static or dynamic content at edge locations, so your users experience consistent, better performance without requests having to reach your regional services. Data partitioning can help you achieve better performance as well. That covers HPC (high-performance computing) style workloads; in highly transactional systems, processing your long-running tasks asynchronously with background jobs — Azure event-driven or scheduled triggers, or AWS Batch, SNS, SQS, and Lambda — gives you performance from a responsiveness perspective.
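The asynchronous pattern above can be sketched without any cloud service at all: the request path only enqueues work and returns, while a background worker drains the queue (the managed equivalents being SQS + Lambda on AWS or queue-triggered Functions on Azure). A self-contained illustration:

```python
import queue
import threading

# Offload long-running work to a background worker so the request path
# stays responsive. The queue stands in for SQS / an Azure Storage queue.
jobs = queue.Queue()
results = []

def worker():
    while True:
        task = jobs.get()
        if task is None:              # sentinel: shut the worker down
            break
        results.append(task * 2)      # stand-in for a slow computation
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# The "web request" just enqueues and returns immediately.
for payload in [1, 2, 3]:
    jobs.put(payload)

jobs.join()                           # we wait only for this demo, not in a real handler
jobs.put(None)
t.join()
print(results)  # [2, 4, 6]
```

The responsiveness win is that the caller's latency is the cost of `jobs.put`, not of the computation itself.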

Security: the security of your solution is your responsibility, while the cloud provider is responsible for its part, such as data-center physical and hardware security — this is called the shared responsibility model. The most important part is your side: apply security best practices to close the holes in your design and secure your application, data, infrastructure, and identities, because any vulnerability or attack in that scope is your responsibility. In Azure, Azure AD lets you manage your identities as users and groups; through the trust relationship between an Azure subscription and Azure AD, you can use RBAC (role-based access control) to grant permissions to users or groups, and in this way secure your infrastructure and resource usage. Use Monitoring and Activity Logs to audit every operation on resources. To secure your application, use Azure Key Vault for secret storage and avoid sharing secrets in source control. Use SSL/TLS to secure application traffic; defending against XSS, CSRF, and SQL injection is part of your application security, so handle them, and set up a throttling process with the Azure API Management service. Keep sensitive data encrypted at rest and in transit, keep the database layer in a private subnet that allows access only over the ports and protocols needed for the application or administration (using security groups), and consider using the Azure WAF (Web Application Firewall).
On AWS, Amazon Directory Service is built on top of Microsoft AD. Consider RBAC via IAM roles and policies, and use IAM to manage your users, groups, and roles centrally. Use security groups at the instance level and NACLs at the subnet level to protect your sensitive data flows. Use API Gateway, a managed service, as a proxy in front of your environment to protect your APIs; it integrates with WAF and offers many options such as throttling. Keep your application secure with Secrets Manager for secrets and KMS to manage the encryption keys that encrypt your data at rest, alongside SSL/TLS, and use AWS Shield, VPC Flow Logs, and CloudTrail to monitor and audit your environment.
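The throttling that API Management and API Gateway offer is conceptually a token bucket: requests spend tokens, tokens refill at a steady rate, and a burst allowance absorbs short spikes. A minimal sketch of that idea (not any provider's actual implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle, the idea behind rate + burst limits:
    tokens refill at `rate` per second up to a `burst` ceiling."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller should answer 429 Too Many Requests

# 1 request/second sustained, bursts of up to 5: a rapid burst of 7 requests
# gets 5 accepted and 2 throttled.
bucket = TokenBucket(rate=1, burst=5)
decisions = [bucket.allow() for _ in range(7)]
print(decisions.count(True))  # 5
```

Throttling at the edge like this protects every tier behind it: an application-layer flood is shed before it ever reaches the application or database tiers.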


Omid Eidivandi (XaaXaaX)

I'm a technical lead and solution/software architect with more than 20 years of experience in the IT industry, and a fan of cloud and serverless in practice.