
Today we are in the middle of one of the largest “mass migrations” in computing: enterprise and personal applications are moving out of dedicated data centers and heading to the cloud. If you’re reading this, you’ve probably made the move already or you are contemplating it. Sitting on the sidelines is not an option.
As you embrace the power of the “cloud paradigm,” there are three overarching features that you need to evaluate:
- Speed (time-to-result)
- Savings (in both capital and operational expenses)
- Simplicity (of operations)
In this piece we will discuss the first one: speed. After all, we all want our results and our responses immediately!
Why the “Need for Speed”?
As far as clouds are concerned, there are several major areas that contribute to the speed of your application. Let’s look at three different dimensions:
- (Cloud) Hardware Infrastructure Speed — The cloud is only as good as the foundation it’s built on. Public clouds are built to handle outsourced spin-up/spin-down workloads; the emphasis is on cutting costs, NOT on performance. This is why you can’t ask for arbitrary amounts of server resources like CPU, RAM or disk. In this model, you must conform to the fixed instance shapes on offer, as cataloged at www.ec2instances.info. Compared with current state-of-the-art hardware vendors, public cloud customization runs three to four years behind.
- (Cloud) Networking and Data Access — Most public cloud vendors operate very large data centers where servers are jammed together like a concentrated animal feeding operation (CAFO). Your apps can be placed in any crowded corner! As a result, the various pieces of your app can end up with a football field’s worth of network between them. Look for state-of-the-art networking speed in your cloud, such as 40 Gbit/s and 100 Gbit/s connectivity, not technology that was introduced 16 years ago (https://en.wikipedia.org/wiki/10_Gigabit_Ethernet)!
- Multi-site Cloud and Data Replication — Public clouds can carry a ton of hidden charges when your apps span multiple data centers. The toll extends to storage, data transfer and app-to-app traffic. Look for user-friendly technologies like a backbone data CDN and replication services for objects, files and SQL at higher access speeds (e.g., 100 Gbit/s). For a large share of multi-site cloud users, these transit charges end up being higher than their compute charges; a rough back-of-the-envelope comparison follows this list.
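To make that last point concrete, here is a minimal back-of-the-envelope sketch comparing monthly compute charges against cross-site transfer charges. All of the rates and volumes are illustrative assumptions, not any particular vendor’s pricing; swap in your own rate card and traffic numbers.

```python
# Back-of-the-envelope comparison of compute vs. inter-site transfer charges.
# All prices and volumes are illustrative assumptions -- substitute your own
# provider's rates and your actual traffic.

COMPUTE_RATE_PER_HOUR = 0.20   # assumed $/hour for one mid-size instance
TRANSFER_RATE_PER_GB = 0.09    # assumed $/GB for cross-region egress

instances = 20                 # assumed cluster size
hours_per_month = 730
replicated_tb_per_month = 50   # assumed cross-site replication volume

compute_cost = instances * hours_per_month * COMPUTE_RATE_PER_HOUR
transfer_cost = replicated_tb_per_month * 1000 * TRANSFER_RATE_PER_GB

print(f"Compute:  ${compute_cost:,.0f}/month")   # ~$2,920 with these numbers
print(f"Transfer: ${transfer_cost:,.0f}/month")  # ~$4,500 with these numbers
# Under these assumptions the replication toll exceeds the compute bill.
```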
Internal Cloud Congestion
Analytic and parallel workloads have a specific nuance: their internal components in the cloud need to talk to each other constantly. Therefore, the need for speed also applies to the cloud’s internal infrastructure.
For example, let’s compare the internal components of your cloud to what happens at a stadium football game. Imagine that you need to “register” with an usher who has to personally escort you to your seat, rather than looking up the seat number and going there directly yourself. What happens when ten thousand fans must be escorted by only fifty ushers? It would take a VERY long time to get everyone seated.
Today, almost all clouds suffer from this phenomenon. The compute pieces have to go through networking and storage gateways, and then wait their turn to reach other parts of the cloud internally. This makes for slow workloads and sparse interconnections, and cluster workloads compound the slowdown dramatically! So a cloud that’s good for web servers may be completely inappropriate for big data, machine learning and analytics. For high-volume cluster processing, it’s vital to look for a cloud architecture that does not suffer from this kind of sports-stadium congestion.
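To put a rough number on the stadium analogy, here is a toy sketch of the usher bottleneck versus direct seating. The fan counts and timings are assumptions chosen purely to illustrate the contention, not measurements of any real cloud.

```python
# Toy model of the usher bottleneck: compare "escorted" seating (every fan
# funnels through a shared pool of ushers) with "direct" seating (every fan
# walks to a known seat on their own). All figures are illustrative assumptions.

fans = 10_000
ushers = 50
escort_time_s = 60    # assumed round trip for one usher to seat one fan
walk_time_s = 120     # assumed time for a fan to find their own seat

# Escorted: ushers work in parallel, but each handles fans one at a time.
escorted_total_s = (fans / ushers) * escort_time_s

# Direct: everyone walks at once, so the last fan is seated after one walk.
direct_total_s = walk_time_s

print(f"Escorted through ushers: {escorted_total_s / 60:.0f} minutes")  # ~200
print(f"Direct to the seat:      {direct_total_s / 60:.0f} minutes")    # ~2
# With these assumptions, the shared gateway -- not the walk itself --
# dominates the time it takes to get everyone "seated."
```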
Do you have these problems with speed?
What are your thoughts? We’d like to hear your feedback. Comment below or send us an email at info@kodiakdata.com.