5 Techniques to Get More from Your AWS Deployments
Amazon EC2 is a popular platform for deploying and running a variety of workloads. Startups love it for the power, the flexibility, and the advantage of turning CAPEX into OPEX. Enterprises have started adopting AWS, with migrations ranging from dev/test environments to complex SAP deployments. In my experience engaging with a diverse set of customers, I have noticed that most deployments start as a pilot but quickly become production deployments. During the transition from a simple proof-of-concept or pilot deployment to production, customers miss certain crucial elements that later prove to be expensive.
Whether you are a hobbyist experimenting with the cloud, a startup deploying an MVP, or an enterprise IT team evaluating AWS, here are five techniques that will help you make your Amazon EC2 deployments secure and manageable from day one.
1. Launch Amazon EC2 Instances within VPC
Amazon Virtual Private Cloud (VPC) offers better control, power, and security for running cloud applications. It lets you provision a logically isolated section of AWS in which to launch resources in a virtual network that you define. AWS did little to emphasize the importance of VPC to its customers, and there is a misconception in the community that VPC is meant only for advanced scenarios, typically involving VPN and enterprise IT integration. In reality, VPC offers many benefits even when you are running just one Amazon EC2 instance. With VPC, you can define private and public subnets that offer very granular security control. For example, it is easy to block public access to a database server that never needs public visibility. You also get a custom DHCP option set and a DNS naming scheme that won't change with stop/start activity. Elastic Network Interfaces (ENIs) and Network ACLs offer ultimate flexibility and security for your cloud servers. Enterprise customers starting with VPC will find it easy to configure a hybrid scenario that seamlessly connects the datacenter and AWS through a VPN. On the other hand, it is incredibly hard to migrate plain vanilla EC2 instances to VPC after you go live. Fortunately, Amazon has integrated VPC with EC2 and made it the default environment for new accounts. If you signed up with AWS recently, you may not even have the option to launch classic EC2 instances. For existing customers, there is EC2-VPC, a default VPC configured for your account. Remember, VPC itself costs you nothing, so there is no reason not to use it!
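For illustration, here is a minimal sketch using boto3 (the AWS SDK for Python) that creates a VPC with a public and a private subnet; the region, CIDR ranges, and names are assumptions, not prescriptions.

```python
# A minimal sketch with boto3; region, CIDR ranges, and names are
# illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with a public subnet (web tier) and a private
# subnet (database tier).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# Attach an Internet gateway and route the public subnet through it.
# The private subnet gets no Internet route, so a database server
# placed there is never publicly reachable.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)
```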
Key takeaway – Stop launching Amazon EC2 instances outside a VPC.
2. Launch Amazon EC2 Instances Within an Auto Scaling Group
One of the biggest benefits of moving to the cloud is elasticity. The ability to grow and shrink automatically removes the guesswork involved in capacity planning. One of the most powerful features of Amazon EC2 is Auto Scaling: you can configure your deployments to grow and shrink dynamically, or periodically based on a pre-defined schedule. Auto Scaling works in tandem with Elastic Load Balancing, Simple Notification Service, and Amazon CloudWatch. In most deployments, auto scaling is an afterthought. You may think you will never need it because your workload is not elastic and not designed for the scale-out pattern. But one of the hidden gems of Auto Scaling is auto healing of instances. Since Auto Scaling is a service that constantly monitors the health of the servers running within an Auto Scaling group, it can detect unhealthy instances and bring up replacement servers automatically. This avoids manual intervention and makes your servers self-healing. To take advantage of this, create an Auto Scaling group with both min-size and max-size set to one. Later, when your application is ready for a scale-out configuration, use as-set-desired-capacity to change the capacity as required. Thankfully, AWS has integrated Auto Scaling with the AWS Management Console, so you no longer have an excuse not to use it.
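As a sketch of this self-healing setup, the following boto3 calls create a one-instance group and later raise its capacity (the programmatic equivalent of the as-set-desired-capacity CLI command). The group name, launch configuration, and subnet ID are illustrative assumptions.

```python
# A minimal sketch with boto3; the group name, launch configuration,
# and subnet ID are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A one-instance group: Auto Scaling monitors health and replaces a
# failed server automatically, giving self-healing without scale-out.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",  # assumed to exist
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0123456789abcdef0",
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300)

# Later, when the application is ready for scale-out, raise the
# ceiling and set the desired capacity.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg", MaxSize=3)
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg", DesiredCapacity=3)
```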
Key takeaway – Always launch production Amazon EC2 instances as part of an Auto Scaling Group.
3. Use a Bastion Host for Administering Cloud Deployments
One of the most common mistakes among AWS customers is opening sensitive ports to the public, typically while testing a deployment. For example, opening port 22 (SSH) to the world with a CIDR of 0.0.0.0/0 invites unwanted trouble; some customers even leave all TCP/IP ports open to the world. While this may be convenient initially, it will be disastrous once you move to production. The best way to provide administrative access to your cloud deployment is to set up a bastion host (aka jump box): a designated server through which you manage the rest of the servers. It acts as the single entry point, discreetly enabling access to the other servers. This technique has many advantages. Access to the bastion host is tightly controlled by sharing its credentials only with authorized administrators. The security groups of the other servers are configured to allow SSH/RDP access only from the bastion host, while the security group associated with the bastion host allows access only from a specific set of IP addresses. If you are concerned that running an additional server incurs extra cost, remember that the bastion host can be shut down when not in use, preventing any possible misuse. Follow this article for an excellent walkthrough of setting up a bastion host.
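The security group arrangement might look like the following boto3 sketch, where the VPC ID and the office CIDR range are placeholder assumptions.

```python
# A minimal sketch with boto3; the VPC ID and office CIDR are
# illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The bastion's security group admits SSH only from a known office
# IP range, never from 0.0.0.0/0.
bastion_sg = ec2.create_security_group(
    GroupName="bastion-sg", Description="Bastion host access",
    VpcId="vpc-0123456789abcdef0")["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=bastion_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # example office CIDR
    }])

# Application servers accept SSH only from the bastion's security
# group, so they are never directly reachable from the Internet.
app_sg = ec2.create_security_group(
    GroupName="app-sg", Description="Application servers",
    VpcId="vpc-0123456789abcdef0")["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": bastion_sg}],
    }])
```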
Key takeaway – Set up and configure a bastion host to manage your AWS resources.
4. Control Access through Identity and Access Management (IAM)
Identity and Access Management (IAM) enables secure, controlled access to AWS resources. With IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources. The account used to sign up with AWS should be treated on par with other critical IT assets of the organization. Since multiple users and departments may use the corporate AWS account, there is a need to create accounts with restricted access. IAM helps by providing user accounts that can be configured separately with independent controls. Even in a startup where only a few individuals deal with the AWS deployment, it is a good idea to use IAM to delegate access; it makes granting and revoking permissions for individual users easy. For enterprises, IAM allows integration with corporate directory services like Active Directory. And if you are outsourcing your AWS operations, it is important to use IAM to delegate access to the third-party operations team.
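Here is a minimal boto3 sketch of delegating access; the user name, group name, and managed policy choice are illustrative assumptions.

```python
# A minimal sketch with boto3; the user, group, and policy choices
# are illustrative assumptions.
import boto3

iam = boto3.client("iam")

# Create a group with read-only EC2 access and add a user to it,
# instead of handing out the account's root credentials.
iam.create_group(GroupName="ops-readonly")
iam.attach_group_policy(
    GroupName="ops-readonly",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess")
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="ops-readonly", UserName="alice")

# Issue programmatic credentials for the user; revoking access later
# is a single delete_access_key call.
key = iam.create_access_key(UserName="alice")["AccessKey"]
print(key["AccessKeyId"])
```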
Key takeaway – Never share your AWS account credentials; instead, delegate access through IAM.
5. Tap into the Power of AWS Tags
One of the most under-utilised features of AWS is tagging! Many of us give Amazon EC2 instances a name to easily differentiate them, but we can go beyond that and tag most of the common AWS resources with useful information. With a maximum of 10 tags per resource, we can use them creatively to manage deployments. For example, you can tag your development, test, and staging environments to easily distinguish them from each other. You can also tag RDS instances, EBS volumes, and snapshots with additional information. But the best thing about tagging is the ability to track cost allocation. Enterprises can use it to track the AWS cost incurred by each department and do a charge-back. To take advantage of this, tag each resource by department, sign up for Programmatic Billing Access, and download the CSV file that contains the tag-wise breakup of the cost. Many third-party tools, the CLI, and the AWS API support querying resources by tag. Start using this feature to add more power and productivity to your cloud operations.
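A short boto3 sketch of tagging and querying by tag; the resource IDs and tag values are illustrative assumptions.

```python
# A minimal sketch with boto3; resource IDs and tag values are
# illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag an instance and a volume by environment and department so the
# billing report can break down cost per tag.
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "staging"},
        {"Key": "Department", "Value": "marketing"},
    ])

# Query resources by tag, e.g. list all staging instances.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Environment", "Values": ["staging"]}])
for r in reservations["Reservations"]:
    for i in r["Instances"]:
        print(i["InstanceId"])
```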
Key takeaway – Create and use tags to consistently track your AWS resources.
These are some of the techniques that give you more bang for the cloud buck. Don’t forget to share your tips and tricks on effectively managing AWS deployments.