Managing the real-time data explosion? Look out for these 5 gotchas
Real-time connectivity will stop being an optional way to incrementally improve efficiency and instead become a baseline requirement without which things simply won’t work.
Over the next decade the ratio of humans to computers on the internet will change radically. We’re moving from an era where interactions were mostly between individual humans and servers to one where human-generated traffic will be under 1% of all activity. It’s easy enough to predict the massive increase in volume this implies, but what will the management challenges be?
5 Things you’ll regret not anticipating
- It’s not just the volumes, it’s the variety.
In any large city, you can expect to see over 5,000 different models of Android phone in use. Looking forward, any business plan that assumes a single, well-behaved type of device will be in use is wildly optimistic. Realistic architecture diagrams should include a translation layer that maps device messaging to an internal ‘truth’ while adding value by providing context such as location or customer demographics. This step will probably become a business activity in itself.
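The translation layer described above can be sketched as a registry of per-vendor normalizers that map each device’s native payload to one canonical internal record, enriching it with context on the way. This is a minimal illustration only; all vendor names, payload keys, and the `Reading` schema are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Canonical internal representation (the 'truth') for one measurement.
# The field names here are illustrative assumptions, not a real schema.
@dataclass
class Reading:
    device_id: str
    kind: str       # e.g. "temperature"
    value: float    # normalized to a single unit (degrees Celsius)
    location: str   # context added during translation

# One translator per device family, keyed by the payload's "model" field.
TRANSLATORS: dict[str, Callable[[dict], Reading]] = {}

def translator(model: str):
    def register(fn):
        TRANSLATORS[model] = fn
        return fn
    return register

@translator("vendor_a_v2")
def from_vendor_a(raw: dict) -> Reading:
    # Hypothetical vendor A reports tenths of a degree Celsius.
    return Reading(raw["id"], "temperature", raw["t"] / 10.0,
                   raw.get("site", "unknown"))

@translator("vendor_b")
def from_vendor_b(raw: dict) -> Reading:
    # Hypothetical vendor B reports Fahrenheit with different key names.
    return Reading(raw["serial"], "temperature", (raw["temp_f"] - 32) * 5 / 9,
                   raw.get("loc", "unknown"))

def translate(raw: dict) -> Reading:
    # Unknown models fail fast rather than polluting the internal truth.
    return TRANSLATORS[raw["model"]](raw)

print(translate({"model": "vendor_a_v2", "id": "a1", "t": 215, "site": "london"}))
```

The registry pattern keeps vendor quirks at the edge: adding a 5,001st device model means registering one more translator, not touching the core.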
- Failure is not an option…
It’s one thing if your power meter is no longer telling you to switch the lights off. It’s another thing entirely if you can no longer switch your own lights on. Or off. Consumer tolerance for failures and outages will shift accordingly.
- Herding cats will look easy compared to managing devices…
In 1982, the Viking 1 Mars lander stopped working after a software patch overwrote the data it used to point its antenna at Earth. Imagine the same thing happening to several hundred thousand IoT devices performing a valuable service. On the one hand, we clearly have an imperative to maintain total control over device software and configuration; on the other, we probably won’t make the devices ourselves, and there will be hundreds of different variants, all of which will be ‘complying’ with the standards in ‘their own special way’. And we won’t be able to see them all at the same time, thanks to network outages and firewalls. Never mind ‘Herding Cats’ – this will be like herding Schrödinger’s cat – except it’s not your cat and no, you can’t look inside the box to check if everything is OK.
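One common mitigation for this Viking-style failure mode is a staged (canary) rollout: patch the fleet in expanding waves and halt automatically when a wave’s failure rate crosses a threshold, so a bad patch bricks tens of devices rather than hundreds of thousands. This is a sketch of the general pattern, not any particular product’s update mechanism; the function names and the stage fractions are assumptions.

```python
def staged_rollout(device_ids, apply_patch,
                   stages=(0.01, 0.1, 1.0), max_failure_rate=0.02):
    """Patch the fleet in expanding waves (1%, 10%, then everyone).

    apply_patch(device_id) is a hypothetical callable returning True if
    the device survives the update. Halts the rollout as soon as one
    wave's failure rate exceeds max_failure_rate.
    """
    done, total_failures = 0, 0
    for frac in stages:
        target = int(len(device_ids) * frac)
        wave = device_ids[done:target]
        if not wave:
            continue
        failures = sum(1 for d in wave if not apply_patch(d))
        total_failures += failures
        done = target
        if failures / len(wave) > max_failure_rate:
            # Stop before the bad patch reaches the rest of the fleet.
            return "halted", done, total_failures
    return "complete", done, total_failures
```

With a fleet of 1,000 devices and a patch that always fails, the rollout halts after the first 1% wave: 10 devices lost instead of 1,000.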
- Latency will become mission critical
Any use case that attempts to influence events in real time will have some form of time constraint, and many of these will be measured in milliseconds. Existing open source development stacks and commercial products will struggle to consistently meet tight latency SLAs, which will lead to products failing in the marketplace. “Adding speed” is not something you can do by writing more code afterwards.
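Millisecond SLAs are violated in the tail, not the average, so latency has to be tracked as percentiles. The sketch below (a simple nearest-rank percentile over simulated latencies; the workload numbers are invented for illustration) shows a system whose mean looks healthy while its p99 would blow a 10 ms SLA.

```python
import random
import statistics

def percentile(samples, p):
    # Nearest-rank percentile: small, dependency-free, adequate for monitoring.
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Simulated request latencies in milliseconds: 98% fast, 2% stalled
# (e.g. by GC pauses or lock contention).
random.seed(42)
latencies = ([random.gauss(4, 1) for _ in range(9800)] +
             [random.uniform(50, 200) for _ in range(200)])

print(f"mean: {statistics.mean(latencies):6.2f} ms")   # looks healthy
print(f"p99 : {percentile(latencies, 99):6.2f} ms")    # reveals the stalls
```

If the SLA only looked at the mean, this system would pass; against a 10 ms p99 target it fails badly, which is why the tail has to be designed for up front rather than patched in later.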
- Did we mention this must be done on a budget?
It’s not just CPU and network costs. Cloud operating costs add up, and then there are the costs of finding and keeping people who know how everything works.
These things are easy to overlook when the technology only needs to run on PowerPoint. But the real challenges will only be experienced after launch, in an environment where hiding failures from your customers will be at best ‘difficult’ and in some cases ‘downright dangerous’. None of these five points represents an insoluble challenge on its own, but collectively they represent a ‘Perfect Storm’ of challenges.
by David Rolfe
David Rolfe is VoltDB’s senior technologist in EMEA. His 30-year career has been spent working with data, especially in the telecoms industry.