There’s no mistaking that Amazon Web Services is incredibly dominant in the cloud. The company has released so many products and created so many amazing tools that the marketplace has responded by rewarding AWS with an insanely high percentage of the world’s business.
But that doesn’t mean that AWS is the only game in town. Google is running hard, building cloud products out of all the expertise it used to create the dominant search engine. Google Cloud Platform is very competitive, and in some respects arguably better.
Better? Well, it’s hard to say that some cloud products are head-and-shoulders above others because the products themselves are often commodities. A machine running the current version of Ubuntu or a cloud storage bucket that stores a few gigabytes are about as interchangeable as sunny days in the desert.
Still, the cloud companies are finding ways to differentiate themselves with extra features and, as is more often the case, slightly different approaches. Google’s cloud products are starting to develop a style all of their own, a style that echoes the powerful simplicity of many of Google’s consumer-facing products.
Some of this style is apparent as soon as you log in because many of the tools aren’t too different from the widely used G Suite. The user interface has the same primary colors and clean design as the major customer-facing apps like Gmail. Finding your way through the maze of configuration screens is much like finding your way through Google’s office applications or search screens. Less is more.
There are differences underneath too. The company’s internal tech culture has always been defined by a great devotion to open source. As this culture evolved, it created a certain Googly flavor that is more and more apparent when you go to the cloud. There are solid open source options like Kubernetes and Ubuntu everywhere. And when Google built one of the first serverless tools, the App Engine, it started with a toy scripting language, Python. Now the Python language and the serverless approach are everywhere. This tradition of clear and open tools is found around every corner.
But it’s worth adding an important caveat. Amazon is dominant for a reason and it’s almost impossible to find an area where AWS can be definitively trounced. In all of the cases here, it’s usually possible to do something very similar with AWS.
Still, here are 11 ways Google Cloud shines just a bit brighter than AWS.
Firebase
Google Cloud Platform offers a number of different ways to store information, but one of the options, Firebase, is a bit different from a regular database. Firebase doesn’t just store information. It also replicates the data to other copies of the database, which can include clients, especially mobile clients. In other words, Firebase handles all of the pushing (and pulling) of data between the clients and the servers. You can write your client code and just assume that the data it needs will magically appear when it’s available.
Pushing new versions of the data to everyone who needs a copy is one of the biggest headaches for mobile developers and really anyone building distributed and interconnected apps. You’ve got to keep all of these connections up just to push a bit of new information every so often.
Firebase may look and sound like a database, but really it’s a mobile development platform. It has much of the structure you might need to build distributed web or mobile apps. Or mobile web apps for that matter.
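The push model described above can be sketched in a few lines. This is an illustration of the pattern only, not the Firebase API; the names here (SyncedStore, on_change) are hypothetical.

```python
# Toy sketch of the data-push model Firebase provides: clients register
# listeners and the store notifies every one of them whenever a value
# changes. A real client would hold an open connection to the service;
# here a plain callback stands in for that connection.

class SyncedStore:
    def __init__(self):
        self._data = {}
        self._listeners = []

    def on_change(self, callback):
        # Remember the client so we can push updates to it later.
        self._listeners.append(callback)

    def set(self, key, value):
        self._data[key] = value
        # Push the new value to every registered client immediately, so
        # client code can simply assume fresh data appears on its own.
        for notify in self._listeners:
            notify(key, value)
```

Two registered clients both see a `set("score", 42)` the moment it happens, without polling — which is the bookkeeping Firebase spares you from writing yourself.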
BigQuery ML
Yes, BigQuery is a database, but it’s also a machine learning powerhouse. You can start off storing your data in its tables and then, if your boss wants some analysis, kick off the machine learning routines against the same tables. You won’t need to move the data or repackage it for a separate machine learning toolkit. It all stays in one place, a feature that will save you from writing plenty of glue code. And as an added bonus for the SQL jockeys who drive databases, the machine learning is invoked with a few extra keywords in the SQL dialect. You can do the work of an AI scientist with the language of a DBA.
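To make the “extra keywords” concrete, here is a sketch of the two-step workflow. The CREATE MODEL and ML.PREDICT statements follow BigQuery ML’s documented syntax, but the dataset, table, model, and column names (mydata.sales, mydata.churn_model, and so on) are hypothetical placeholders.

```python
# Training a model and running predictions in BigQuery ML is just more
# SQL against the tables you already have. Both statements would be
# submitted through the normal BigQuery client (for example,
# google.cloud.bigquery's Client.query()); no data export or separate
# ML toolkit is involved.

train_sql = """
CREATE OR REPLACE MODEL `mydata.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT tenure, monthly_spend, churned AS label
FROM `mydata.sales`
"""

predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `mydata.churn_model`,
                (SELECT tenure, monthly_spend FROM `mydata.sales`))
"""
```

The point is the shape of the statements: a DBA who knows SELECT already knows most of what is needed.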
Persistent disks that work with multiple instances
Do you have data to share between your different machines? Google makes it easy by allowing you to mount the persistent disk with multiple instances. The data appears as part of the file system, making life a bit easier for the coders as well. The only limit is that you’ve got to use read-only mode because Google isn’t ready to deal with race conditions and other crazy bugs you might create with multiple writes. If you want to do writes, you’ll have to get a database like a regular programmer.
AWS encourages you to share data by using different products like S3, which stores data as objects in buckets, each indexed by a key. The volumes from Amazon’s Elastic Block Store, the standard storage mechanism for EC2 instances, can’t be mounted by multiple machines. That said, Amazon does offer the Elastic File System, which can be mounted via NFSv4. In other words, AWS can get pretty close.
G Suite integration
It’s no surprise that Google offers some integration of its cloud platform products with its basic office products. After all, the Googlers use the G Suite throughout the company and they need to get at the data too. BigQuery, for instance, offers several ways to access and analyze your data by turning it into a Sheets document in Google Drive. Or you can take your Sheets data and move it quickly into a BigQuery database. If your organization is already using the different G Suite apps, there’s a good chance it will be a bit simpler to store your data and code in the Google Cloud. There are dozens of connections and pathways that make the integration a bit simpler.
More virtual CPUs
In July 2018, Google Compute Engine boosted the maximum power of its instances, putting up to 160 vCPUs and 3,844 GB of RAM at your fingertips. The last time we looked at the documentation, AWS EC2 instances maxed out at 96 vCPUs. Of course, the CPUs are not exactly the same in power, and they almost certainly run some benchmarks at different speeds. Plus, adding more virtual CPUs doesn’t always make your software go faster. The only true measure is the throughput on your problem. But if you want to brag about booting up a machine with 160 CPUs, now is your chance!
Custom cloud machines
Google lets you choose how many virtual CPUs and how much RAM your instance will get. AWS has many options and one of them is bound to be pretty close to what you need, but that’s not the same as being truly custom, is it? Google offers sliders that let you be a bit more particular and choose, say, 12 vCPUs and 74 GB of RAM. Or maybe you want 14 vCPUs. If so, Google will accommodate you.
There are limits to this flexibility, though. Don’t dream about infinite precision because the slider tends to stick on even numbers. You can’t choose 13 vCPUs, for instance, which would probably be unlucky anyway. But this is still a great increase in flexibility if you need a strange configuration or you want to provision exactly the minimum amount of RAM to get the job done.
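The rules behind that slider can be written down. This sketch encodes the constraints Google documented for (n1) custom machine types at the time of writing — the vCPU count must be 1 or an even number, and total memory must work out to between 0.9 GB and 6.5 GB per vCPU; verify the limits against the current docs before relying on them.

```python
# Validate a custom machine shape against the documented constraints
# for n1 custom machine types (limits as documented at the time;
# extended-memory options relax the upper bound for an extra fee).

def valid_custom_machine(vcpus: int, memory_gb: float) -> bool:
    if vcpus < 1:
        return False
    if vcpus != 1 and vcpus % 2 != 0:
        return False            # odd counts like 13 are rejected
    per_vcpu = memory_gb / vcpus
    return 0.9 <= per_vcpu <= 6.5

# The 12 vCPU / 74 GB example from the text is a legal configuration
# (74 / 12 is about 6.2 GB per vCPU), while 13 vCPUs never is.
assert valid_custom_machine(12, 74)
assert not valid_custom_machine(13, 32)
```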
A premium network
Google and Amazon have huge networks that link their data centers but only Google has a separate “premium” network. It’s like a special fast lane for premium customers that comes with some reliability and performance guarantees like N+2 redundancy and at least three paths between data centers. If you want to rely upon the Google CDN and load balancing across different data centers, then opting for the premium network will make life a bit smoother for your data flows.
What about the non-premium users? Do their packets travel by burro and carrier pigeon? No, but accepting fewer guarantees comes with a lower price tag. By opening up this option, Google lets us decide whether we want to pay more for faster global data movement or save money by getting by with perhaps an occasional hiccup. It’s not another option to muddle our brains. It’s an opportunity.
Cloud services that are “always free”
Cloud providers usually offer various free samples of their products for a limited time and Google is no different. Right now Google will give you $300 of free services for the first 12 months. The difference comes afterwards because some of Google Cloud’s lowest tiers are said to be “always free.” The word “always” is marketing speak because the smaller print underneath says that it is “subject to change,” but let’s not get caught up in parsing English when computer languages are hard enough.
Until that fateful day of change comes you’re free to use a wide range of the Google Cloud’s lowest powered options like the f1-micro instance that comes with 30 GB of hard disk space. If you can handle the uncertainty about the meaning of “always” and your needs are pretty light, you can do quite a bit of experimenting and not face a clearly defined end to the fun.
Serverless containers
The buzzword “serverless” is now so common that its meaning is getting, well, cloudy. Google is testing a nice product that lets you take any container and hook it up to an endpoint so it will respond to requests. You’re not limited by serverless function definitions or language choices or any of the other details. If you can get your container running, it can be invoked on demand and billed on demand. This is a great service that could save you the hassle of spinning up and shutting down instances.
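The contract for such a workload is minimal: an HTTP server listening on whatever port the platform injects. The sketch below assumes the PORT environment-variable convention (which is what Cloud Run, Google’s serverless-container offering, settled on, with 8080 as the default — check the current docs for your platform).

```python
# Minimal container workload for a serverless-container platform:
# answer HTTP requests on the port the platform hands us via the
# (assumed) PORT environment variable.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve():
    # The platform tells the container which port to bind.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Deploying is then a matter of packaging this in a container image whose entrypoint calls `serve()`; the platform handles scaling to zero and back.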
Pre-emptible instances
New visits to websites need an immediate response, but a great deal of the background processing and housekeeping around most websites doesn’t need to get done right away. It doesn’t even need to get done in the next few hours. It just needs to be done eventually.
Google offers something called pre-emptible instances that come at a nice discount. The catch is that Google reserves the right to shut down your instance and put it aside if someone or something more important comes along and wants the resources. They’ll start it up again later when the demand for computation drops.
AWS offers a different way to save on compute costs, a marketplace where you can bid for resources and use them only if you submit a winning high bid. This is great if you’ve got the time to watch the auctions and adjust your bids if the work isn’t getting done, but it’s not great if you’ve got more important things to do. Google has come up with a mechanism that rewards you for volunteering to be bumped and matches it with a price that’s locked in. No confusion.
Sustained use discounts
Another nice feature of the Google Cloud Platform is the way that discounts show up automatically the more you use your machines. Your compute instances start the month at full price and then the rate starts dropping. You don’t need to keep the machines running continuously because the price is set by your total usage for the month. If your instance is up for about half of the month, the discount is about 10 percent. If you leave your instance up from the first day to the last, you’ll end up saving 30 percent off the full price.
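Those two numbers fall out of a simple weighted average. Each successive quarter of the month is billed at a steeper discount — 100%, 80%, 60%, then 40% of the base rate, the tier multipliers Google documented at the time of writing (verify against current pricing).

```python
# Worked version of the sustained-use math: bill each quarter of the
# month at its tier multiplier, then compare to full price.

TIERS = [1.0, 0.8, 0.6, 0.4]  # price multiplier per quarter of the month

def effective_discount(usage_fraction: float) -> float:
    """Fraction saved vs. full price for an instance up this share
    of the month (usage_fraction must be > 0)."""
    billed = 0.0
    for i, mult in enumerate(TIERS):
        # How much of this usage lands in tier i (each tier is 25% wide).
        in_tier = min(max(usage_fraction - 0.25 * i, 0.0), 0.25)
        billed += in_tier * mult
    return 1.0 - billed / usage_fraction

# Up half the month: roughly 10% off. Up the whole month: 30% off,
# matching the figures quoted above.
```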
By the way, Google approaches this pricing like a computer scientist. It tracks how many vCPUs and how much memory you’ve used among the various machine types and then combines the different machines, where possible, to give you an even bigger discount. Google calls this “inferred instances.” The model is quite useful if you’re constantly starting and stopping multiple machines.
Again, AWS also offers discounts, but the price breaks go to those who purchase Reserved Instances or bid on the spot market. In other words, you need to do something and make a commitment up front. Google just rewards you.