Big Data depends on cluster processing, and cluster processing reshapes the workload itself. Cloud 1.0 was primarily M2H (Machine-to-Human), and its economics were driven by slowing the machine down to match the pace of human interaction (e.g., Web, e-commerce, social networking). Hence the rise of consolidation, spin-down infrastructure, and “per-drink” pricing in the public cloud.
The big data cloud has to handle a very different workload. Big data processing is:
- Clustered
- Multi-stack
- Stateful (in-stack data/states)
- Heavy on network bandwidth
- Built on share-nothing infrastructure
- Increasingly M2M (Machine-to-Machine)
- Long-tailed
The big data cloud does not spin down; its processing loads are incessant.
We spent a year with a crack team of infrastructure developers building an infrastructure suited to Big Data.
Our solution:
- Requires no SW changes
- Is up to 10X lower in data center footprint and up to 10X faster than standard cloud infrastructure
- Offers excellent cloud economics for big data clustered loads – a fraction of the cost of webby cloud
- Is suitable for flat-rate billing – no more nickel-and-diming every transaction like today’s cloud (see the illustrative sketch after this list)
And one of the most important features of our technology is that it allows deployment as both a private and an edge cloud. So you can now put a big data-crunching, memory-speed cloud on-premises. In your building. Under your desk!
Let the Cloud come to you!
Sound interesting? Contact us for our hosted and private MemCloud offerings, as well as our cloud-building block appliances.