October 21, 2011 - by alexsalkever
We are in the early stages of the Internet of Things, the much-anticipated era when all manner of devices can talk to each other and to intermediary services. But for this era to achieve its full potential, operators must fundamentally change the way they build and run clouds. Why? Machine-to-machine (M2M) interactions are far less failure-tolerant than machine-to-human interactions. Yes, it sucks when your Netflix subscription goes dark in a big cloud outage, and it's bad when your cloud provider loses user data. But it's far worse when a cloud outage means a fleet of trucks can no longer report their whereabouts to the central control system that regulates how long drivers can stay on the road without resting, or when all the lights in your building go out and the HVAC system dies on a hot day.
In the very near future, everything from banks of elevators to cell phones to city buses will either be subject to IP-connected control systems or use IP networks to report back critical information. IP addressability will become nearly ubiquitous. The sheer volume of data flowing through IP networks will mushroom. In a dedicated or co-located hardware world, that increase would result in prohibitively expensive hardware requirements. Thus, the cloud becomes the only viable option to affordably connect, track and manage the new Internet of Things.
In this new role, the cloud will have to step up its game to accommodate more exacting demands. The storage infrastructure and file systems that form the backbone of today's cloud are archaic, dating back 20 years. These systems may be familiar and comfortable for infrastructure providers. But over time, block-storage architectures that cannot provide instant, copy-on-write snapshots of machine images will remain prone to all sorts of failures. Those failures will grow more pronounced in the M2M world, where a five-second outage could mean the loss of many millions of dollars' worth of time-specific information.
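To make the copy-on-write point concrete, here is a minimal sketch of why such snapshots are "instant": a snapshot copies only the map from logical blocks to physical blocks, never the data itself, and a physical block is duplicated only when a later write would modify data a snapshot still references. All names here are illustrative; this is not any vendor's storage API.

```python
# Toy copy-on-write block store (illustrative sketch, not a real storage API).
# A snapshot duplicates only the block map; data is copied lazily on write.

class CowBlockStore:
    def __init__(self, num_blocks):
        # Physical block pool, keyed by physical block id.
        self.blocks = {i: b"\x00" for i in range(num_blocks)}
        self.next_id = num_blocks
        # The live volume's map: logical block -> physical block.
        self.volume = {i: i for i in range(num_blocks)}

    def snapshot(self):
        # "Instant": copies the map only, no data movement at all.
        return dict(self.volume)

    def write(self, logical, data, snapshots):
        phys = self.volume[logical]
        # Is this physical block still referenced by any snapshot?
        shared = any(s.get(logical) == phys for s in snapshots)
        if shared:
            # Copy-on-write: give the live volume a fresh physical block,
            # leaving the snapshot's view of the old data untouched.
            new_phys = self.next_id
            self.next_id += 1
            self.blocks[new_phys] = data
            self.volume[logical] = new_phys
        else:
            # No snapshot depends on this block; overwrite in place.
            self.blocks[phys] = data

    def read(self, mapping, logical):
        return self.blocks[mapping[logical]]


store = CowBlockStore(4)
store.write(0, b"v1", [])          # initial write, no snapshots yet
snap = store.snapshot()            # instant: copies the 4-entry map only
store.write(0, b"v2", [snap])      # triggers copy-on-write for block 0
print(store.read(snap, 0))         # snapshot still sees the old data: b'v1'
print(store.read(store.volume, 0)) # live volume sees the new data: b'v2'
```

The design choice this illustrates is the one the article is driving at: because a snapshot costs only a map copy, a machine image can be captured consistently in milliseconds rather than by streaming every block, which is what makes point-in-time recovery practical at M2M data volumes.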