Cloud computing technology has matured significantly over the years, and now offers a compelling list of advantages over on-site deployments, especially for small businesses and start-ups that may not have the capital to purchase servers and other hardware appliances.
Cost aside, the public cloud delivers well-established capabilities such as scalability and elasticity. Even more importantly, businesses can leverage the public cloud to quickly set up infrastructure in a timeframe that is measurable in minutes as opposed to the weeks or months required to set up physical infrastructure.
We take a closer look at Amazon Web Services (AWS), one of the most popular cloud services today, in order to examine how it can be leveraged to benefit small businesses.
The AWS ecosystem and basic prerequisites
It’s worth noting that while setting up a cloud deployment isn’t rocket science – at the basic level, anyway – it does require an IT background or a certain level of technical competency to configure everything correctly. For establishing a CMS or PHP-based website, for example, someone who’s already familiar with setting up the LAMP (Linux, Apache, MySQL, PHP) stack would probably be a good candidate.
For all its widespread appeal, AWS is a proprietary cloud implementation, when all is said and done. So while some concepts carry over across different cloud offerings, specific expertise working on AWS is typically not transferable to Microsoft’s Azure or Google’s Compute Engine, and is unlikely to be easily replicated in an on-premises deployment.
AWS does offer a number of ways to help businesses quickly get up and running with common deployment scenarios; indeed, there’s often more than one way to implement a particular solution. In order to help you make sense of the AWS cloud, though, it makes sense to start from a short list of the most heavily used components.
Some key components for Web hosting
EC2 (Elastic Compute Cloud) virtual servers form the backbone of a cloud deployment on AWS. These virtual servers are available in a variety of configurations, each with differing amounts of CPU, memory, storage and network performance, and are billed at an hourly rate. Note that some older instance types may be retired over time.
S3 (Simple Storage Service) is an object storage system that can store up to 5TB in a single object; objects are accessible from anywhere on the Web through command-line tools, API calls or desktop apps designed to work with the service.
EBS (Elastic Block Store) offers traditional block storage capabilities at a higher cost than S3. Attached to a server, an EBS volume functions like a disk drive and, like a physical drive, persists even after the compute instance has been shut down. (Note that EC2 instances can be configured to delete their EBS volumes on termination.)
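Whether a volume survives termination is controlled per volume through the block device mapping supplied when the instance is launched. A sketch of what such a mapping might look like (the device name and volume size here are illustrative only):

```json
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "VolumeSize": 20,
      "DeleteOnTermination": false
    }
  }
]
```

Setting DeleteOnTermination to false keeps the volume (and its data) around after the instance is terminated, at the cost of continued storage charges.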
RDS (Relational Database Service) is a Web service that makes it easy to set up, operate and scale a relational database management system (RDBMS). An appropriate database engine can be selected from options including MySQL, Oracle, SQL Server, PostgreSQL and Amazon Aurora.
Route 53 provides DNS and domain name registration, at competitive rates that can be cheaper than the prices offered by some domain name registrars. On the other hand, Web hosting firms are also known to offer packaged deals that include the domain name for free, or at a cheaper rate for the first few years.
Finally, if you are looking to dabble with the AWS cloud, you will be glad to know that AWS offers a free tier consisting of up to a year’s worth of compute instance time, as well as various freebies across the products and services listed above.
Starting from a machine image and choosing a region
AWS makes it easy to launch your first compute instance by offering a wide range of prebuilt and optimized Amazon Machine Images (AMIs) that you can load onto a newly created instance. A vibrant AWS Marketplace also exists for third-party created images, and the AWS community also creates and uses shared public AMIs.
Before setting up your cloud infrastructure, you will need to choose a location or “region” from which to base your virtual infrastructure. The idea here is to go with a location that is either closest to the bulk of your users, or nearest to where your developers are physically located. For the latter, this could result in slightly better load times and speeds when uploading data and developing your website. Developers and database administrators will want to know that synchronous database replication is not supported across different regions, though asynchronous replication is. Finally, some services, especially if they’re new or beta offerings, may only be available in certain regions.
Of course, depending on how your website is architected, the use of a good Content Delivery Network (CDN) service could in most cases render your deployment region moot. You can use Amazon’s CloudFront CDN, though other options exist. In addition, AWS offers tools to migrate between regions with relative ease.
Finally, it’s worth noting that while the cost of most AWS services is usually the same across the different regions, this is not always the case. See “Monitoring your cost” below for further explanation of AWS cost structures.
Architecting for uptime
If you’re assuming that cloud computing means your deployment will never fail, think again. While many services inside AWS are highly reliable, and AWS does offer certain capabilities that make it easier to recover from an outage, you must plan and engineer for reliability as part of your deployment.
For example, bugs were recently discovered in the underlying Xen hypervisor used by AWS, and some AWS machines had to be rebooted as part of the patching process. Also, the physical servers that do the work on the backend can, and do, fail. Without automatic safeguards built in, websites built on AWS can behave unexpectedly or even become unavailable when servers crash or reboot on the backend.
In general, you should ensure important sites can run from more than one availability zone (AZ) within a region. Typically, this entails having the database backend set up for multi-AZ deployment from the get-go. Similar to how having more than one database server in an on-premises deployment is more expensive, expect to pay more when you choose a multi-AZ database option.
The most typical setup entails setting up an Elastic Load Balancer (ELB) to distribute incoming application traffic across multiple compute instances. Traffic can be automatically diverted from unhealthy instances to healthy ones, which could span across multiple AZs in the event of a catastrophic failure of a particular AZ.
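The routing behavior described above can be illustrated with a small sketch. This is not AWS code, just a hypothetical simulation of how a load balancer distributes requests round-robin across a fleet spanning two availability zones, skipping any instance that has failed its health check (the instance IDs and zones are made up):

```python
from itertools import cycle

# Hypothetical fleet spread across two availability zones;
# one instance has failed its health check.
instances = [
    {"id": "i-0aaa", "az": "us-east-1a", "healthy": True},
    {"id": "i-0bbb", "az": "us-east-1a", "healthy": False},
    {"id": "i-0ccc", "az": "us-east-1b", "healthy": True},
]

def route_requests(instances, n_requests):
    """Distribute n_requests round-robin over healthy instances only."""
    healthy = [i for i in instances if i["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy instances available")
    pool = cycle(healthy)
    return [next(pool)["id"] for _ in range(n_requests)]

# The unhealthy instance receives no traffic at all.
print(route_requests(instances, 4))
```

Because the healthy pool here spans both zones, traffic keeps flowing even if every instance in one zone goes down, which is the property a multi-AZ ELB setup is designed to give you.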
Don’t forget security
AWS takes security seriously, which is no surprise considering you can set up literally hundreds of production servers – or tear them down – with the click of a mouse. For example, at least one promising start-up was wiped away after a hacker broke into its Amazon EC2 control panel and basically erased the entire infrastructure.
To better manage security, AWS recommends setting up users with limited permissions to manage the resources under their charge, as opposed to a “root” user with unlimited access. Just like in a typical Linux system, users can be allocated to groups, while additional roles can be created and assigned to users or groups.
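IAM permissions are expressed as JSON policy documents attached to users, groups or roles. As an illustrative sketch (the bucket name is a placeholder), a policy granting a user read and write access to a single S3 bucket, and nothing else, might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

A user holding only this policy cannot launch or terminate EC2 instances, which limits the damage a compromised account can do.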
In addition, AWS offers multifactor authentication (MFA), which is available in both hardware and virtual forms. For hardware MFA, AWS supports security fobs manufactured by Gemalto, a third-party provider. Alternatively, a virtual MFA app can be used, with Google Authenticator supported on Android, iPhone and BlackBerry, and an AWS Virtual MFA app available on Android.
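Virtual MFA apps such as Google Authenticator generate time-based one-time passwords (TOTP, RFC 6238): the shared secret is combined with the current 30-second time window via HMAC-SHA1 to produce a short numeric code. A minimal sketch using only the Python standard library (the secret below is the RFC’s published test value, not a real credential):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code (RFC 6238) as used by virtual MFA apps."""
    counter = unix_time // step                       # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret, T=59s, 8 digits -> 94287082
print(totp(b"12345678901234567890", 59, digits=8))
```

Because both sides derive the code from the same secret and clock, the server can verify a login without the code ever being transmitted in advance.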
Monitoring your cost
Finally, the aspect of cloud computing that you probably hear about the most is its capability to reduce infrastructure cost. As businesses are slowly finding out, however, the corresponding increase in operational costs can in certain circumstances exceed the cost of an on-premises deployment in relatively short order.
To help users gain greater insight into the cost of their cloud deployments, AWS provides a monthly cost calculator where users can estimate the cost of a deployment based on the services they use and their estimated disk and network usage levels. This can help businesses decide whether they can do without certain levels of reliability or service.
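At its core, such a calculator multiplies per-unit rates by estimated usage. A rough sketch of the arithmetic, with placeholder rates that are illustrative only (real AWS prices vary by region and instance type):

```python
def monthly_cost(instance_hours, hourly_rate, storage_gb, gb_month_rate,
                 egress_gb, egress_rate):
    """Rough monthly estimate: compute + storage + outbound data transfer.
    All rates here are illustrative placeholders, not real AWS prices."""
    return (instance_hours * hourly_rate
            + storage_gb * gb_month_rate
            + egress_gb * egress_rate)

# e.g. one instance running 24x7 (~730 h/month), 100 GB of EBS, 50 GB egress
estimate = monthly_cost(730, 0.05, 100, 0.10, 50, 0.09)
print(f"${estimate:.2f}/month")
```

Running the same numbers with a larger instance or heavier egress quickly shows which line item dominates the bill, which is exactly the trade-off the calculator is meant to expose.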
Businesses looking to optimize costs on an existing deployment, and that are certain of their usage levels, may decide to purchase either “spot” or “reserved” compute instances. In a nutshell, the former allows businesses to leverage unused compute capacity at a lower price, while the latter lets businesses pre-book and/or pre-pay for capacity in advance. For obvious reasons, spot instances may not always be available in a particular region.
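Whether a reservation pays off depends on how many hours the instance actually runs. A sketch of the break-even calculation, again with purely hypothetical rates:

```python
def break_even_hours(on_demand_rate, reserved_upfront, reserved_hourly):
    """Hours of usage after which a reserved instance becomes cheaper than
    paying on-demand rates. All rates here are hypothetical examples."""
    saving_per_hour = on_demand_rate - reserved_hourly
    if saving_per_hour <= 0:
        return float("inf")  # the reservation never pays for itself
    return reserved_upfront / saving_per_hour

# Hypothetical: $0.10/h on demand vs $300 upfront + $0.04/h reserved
hours = break_even_hours(0.10, 300.0, 0.04)
print(f"break-even after {hours:.0f} hours (~{hours / 730:.1f} months)")
```

A workload that runs around the clock crosses the break-even point in well under a year in this example, while a server that only runs during business hours may never justify the upfront payment.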
AWS itself offers a Trusted Advisor service to help tweak various aspects of an AWS deployment, including security and cost optimization. However, a paid support plan is required to unlock all of its recommendations.
We’ve only covered the tip of the iceberg in terms of the possibilities available on AWS, but this should point you in the right direction in deciding if you’d like to go with AWS, and to start asking the right questions on how to proceed next.