The Internet of Things has a problem. Unless we start looking at a new infrastructure, it may peter out after the first fifty billion devices. Everyone seems so excited about predicting whether it will be 20 billion, 50 billion or 1.5 trillion that they’ve forgotten to ask how the connectivity and business models will scale.
There’s a general consensus that we’ll get to between 25 and 50 billion connected devices by 2020. The first 25 billion of these is foreseeable. Roughly half will come from personal devices – mobile phones, tablets, laptops, gaming devices, set-top boxes and even cars, using cellular or broadband connections. They need moderately expensive broadband contracts, but we’re happy to pay because we can stream lots of data. The same again will come from machine-to-machine (M2M) connections, where broadband or cellular connectivity is embedded in commercial products to monitor their performance. That covers telematics, connected medical devices, asset tracking and smart buildings, as well as everything from vending machines to credit card readers. Here the service contracts are justified by improved business efficiency.
The second 25 billion is likely to come from locally connected devices – generally personal products which connect to smartphones. Eighteen months ago I wrote a report on these appcessories, predicting that they could grow to an installed base of around 20 billion in 2020, getting us close to the total of 50 billion. These will piggy-back on existing broadband contracts, so most won’t have a service model. At best, there may be an opportunity for selling apps or subscription services.
However, at that point, future growth may start to slow. Although these products all get referred to as the Internet of Things, they’re only that in the loosest sense, as they rely either on personal user setup or on professional installation. Both are time consuming and a barrier to ubiquitous deployment. To achieve the real Internet of Things we need products which can be taken out of their box and which connect and work autonomously. Without that, we’ll never get past the tens of billions. Despite all of the IoT hype, no-one is really addressing the hole that needs to be filled. We need an Infrastructure of Things – a new Low Power Wide Area Network (LPWAN) offering end-to-end wireless connectivity, along with a new approach to data provisioning for life. This article explains why, and what the options may be.
The “why?” is pretty obvious, but generally overlooked, because the limited number of Internet of Things products on the market today are mostly bought, installed and maintained by geeks. Most wireless or connected products do not work out of the box. You need to read the instruction manual, type in passwords and go through a moderately non-intuitive set-up process. That’s true even for the best designed products. If you then change your router or smartphone, the Internet link will stop working, requiring the user to repeat the process. It’s a workable model for a limited number of high-end, high-value, desirable consumer products. But it doesn’t scale.
Many of the Internet of Things products which are envisioned are low cost sensors, where the cost of deployment will need to be minimal. Essentially, those installing them will need to attach them to the appropriate surface and turn them on, at which point they’ll start working with no or minimal further user intervention. They should work anywhere – outside, or deep within buildings, with no need for site surveys or optimal siting. In other words, they need to work in the way that today’s products don’t. It should be equally possible to build the technology into commercial things like environmental sensors, bridge monitors and smart meters, and into domestic ones like household appliances, dog collars and smart clothing. To achieve that we need a technology and network which meets the following criteria:
- Low Cost. Once it is shipping in the billions, it should be possible to implement it within products for around $1.
- Low Data Rates. This is not a high or medium speed requirement – it is for devices to report changes or updates. Most will send less than 1 kB of data each day. It should be possible to increase that, but with the proviso that this is never going to be for audio or video – it’s about data events.
- Low Power. This follows from the low data rates. Simple sensors should be able to run off small batteries for 5–10 years. It should be possible to use energy harvesting for the lowest powered devices.
- Secure. With device security managed by the network, so it can be updated.
- Long Range, so that base stations can support thousands or tens of thousands of devices. That’s necessary to keep the infrastructure costs low, so that data contracts can be as low as a few dollars for a lifetime connection. It also requires a much better link budget than any current cellular solution provides to allow the low power consumption in terminal devices.
- Simple, low cost provisioning. It should support a base level of data provisioning for life, possibly in the chip license, so that devices ship with an ability to connect throughout their working life.
- Flexible Service Provision. A standard way for the operator to allow devices or customers to negotiate an enhanced level of provision after deployment.
- Quasi-real time. Many applications don’t need real-time access, particularly if they’re only doing daily reporting. But some may, particularly if control signals are being sent to devices.
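A quick back-of-envelope calculation shows why the low power and low data rate criteria are linked. Every figure below is an illustrative assumption (sleep current, transmit current, daily airtime, battery size), not a measurement of any particular radio:

```python
# Illustrative battery-life estimate for a sensor meeting the criteria
# above. All figures are assumptions chosen for illustration only.

SLEEP_CURRENT_UA = 1.5     # deep-sleep current, in microamps
TX_CURRENT_MA = 25.0       # radio current while transmitting, in mA
TX_SECONDS_PER_DAY = 5.0   # daily airtime to send ~1 kB at a low rate
BATTERY_MAH = 225          # a single CR2032-class coin cell

def years_of_life(battery_mah=BATTERY_MAH):
    # Energy drain splits into a sleep floor and a transmit burst.
    sleep_mah_per_day = SLEEP_CURRENT_UA / 1000 * 24
    tx_mah_per_day = TX_CURRENT_MA * TX_SECONDS_PER_DAY / 3600
    return battery_mah / (sleep_mah_per_day + tx_mah_per_day) / 365
```

With those assumptions a coin cell lands in the 5–10 year range, which is the point: every extra second of protocol overhead on the air comes straight off the battery life.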
If you listen to cellular operators, they’ll imply that not only can they do this already, but also that cellular is the only route to enabling the Internet of Things. That’s largely disingenuous. Cellular is too expensive, whether you’re considering power consumption, cost of hardware or service contract. Operators are busy dismantling their 2G networks, which is the only infrastructure they have that is vaguely suitable for any IoT application. The proposal that a future low cost variant of LTE is the answer has about as much understanding of the requirements listed above as Marie Antoinette’s pronouncement that the starving populace should eat cake.
What’s surprising is how few alternatives are being developed. Perhaps the best known is SigFox. This is a narrow band, one-way network which is being rolled out in various parts of Europe in the 868MHz ISM band. It provides a low rate uplink using a proprietary protocol.
Another contender was Weightless, which is a standard developed by a consortium of Accenture, ARM, Cable & Wireless, CSR and Neul, designed to operate in the TV White Space bands. On paper it probably comes closest to the principles outlined above, but requires major changes in spectrum regulation. That has proven to be an impossible task in standard technology development timescales. Although there is no reason why Weightless could not be used in licensed portions of the spectrum it has lost momentum and may well fall by the wayside.
Telensa is another proprietary standard from Plextek in the UK, which has recently been spun off as a company in its own right. Its primary market at the moment is for street lighting. The radio is moderately complex and unlikely to meet the price points required.
Matrix has been developed by TTP and is deployed in a number of proprietary solutions for their clients. They have recently made some public statements about the technology and it could serve as a possible option or starting point if taken up by a standards organisation.
One of the few US offerings is On-Ramp, an 802.15.4 based point-to-point solution operating at 2.4GHz. The company claims it can cover 400 square miles with 16,000 end points from each access point, with a range of 10km, as long as the access point is raised on a tower, building or hilltop.
Semtech’s LORA is another technology that could be part of the solution, which ticks a fair number of the boxes. It’s a sub-GHz solution that’s promoted as ultra-long range, with a claimed link budget of 160dB. Semtech own a fair amount of IP in this solution, but have a business model of embedding it into their chips, which may be limiting.
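Headline link-budget figures like that are worth sanity-checking. Inverting the standard free-space path-loss formula (a textbook relationship, nothing specific to any vendor) shows what 160dB buys in pure line-of-sight terms:

```python
import math

def free_space_range_km(link_budget_db, freq_mhz):
    """Invert the free-space path-loss formula
    FSPL(dB) = 32.45 + 20*log10(f_MHz) + 20*log10(d_km)
    to get the theoretical line-of-sight range in km."""
    spreading_loss_db = link_budget_db - 32.45 - 20 * math.log10(freq_mhz)
    return 10 ** (spreading_loss_db / 20)
```

In free space, 160dB at 868MHz corresponds to a range of well over a thousand kilometres. Real deployments lose tens of dB to buildings, foliage and fading, which is precisely why such a large link budget is needed to reach devices deep indoors at kilometre-scale ranges.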
There are a number of other solutions being offered, mostly proprietary ones that have been developed for street lighting, parking sensors or smart grid applications. None really give the impression that they’ve had the depth of thought or commitment to standardisation to evolve into a universal IoT LPWAN.
It’s worth mentioning Thread, the recently announced wireless standard from Google and Nest. It’s not a wide area IoT standard – it’s a local mesh. However, from the details which are emerging, its development appears to have considered many of the issues above, so it could be a useful contributor to the debate.
Then we have the various LTE-M and other LTE-Lite variants. The telecoms industry likes these as it perceives them as an evolution of what it is already doing. But they suffer from their heritage. However much they’re tweaked, they’re never going to meet the power or cost levels required – there’s just too much overhead in LTE. It’s like trying to take an internal combustion engine and claiming it can be slimmed down to become a running shoe. What the industry needs is a clean sheet of paper approach, although the final solution will almost certainly need to coexist with LTE and be deployable by existing network operators.
Provisioning, Provisioning, Provisioning
With my clean sheet of paper I’d start from the opposite end to every LPWAN proposal I’ve seen, which means provisioning. As we move from 50 billion devices to hundreds of billions, we need to find a way for them to work as soon as they’re turned on. At these numbers we’re talking about tens or hundreds of devices for every human being, which is why it is inconceivable that anybody is going to be involved in configuring individual devices. They have to turn on and connect – no SIM card, no setup codes, not even pushing a button to connect. They just work. Moreover, they must continue to connect for as long as they work. That is why they need to connect directly to a base station without any hub, router or smartphone in the way, as any intermediate device will cause complications if it’s changed.
The corollary is that every device must be pre-provisioned, probably at the point the chip is manufactured or programmed. That pre-provisioning should allow a device to connect to the network at power up and register itself. Depending on what contract the product manufacturer has set up with a network provider it will then negotiate a data service.
I’d suggest that every device should be provisioned with a lifetime minimum data service, probably of around 100 bytes sent once per day for as long as it works. Once registered, the product manufacturer can access that daily data. Depending on what contract they’ve set up, the network provider could then instruct the device to transmit more data or transmit more frequently. That could be for life, or on demand, such as when a fault is detected, or if a software update needs to be sent to the device. This probably means that there needs to be some form of central licensing authority.
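To make the idea concrete, here is a hypothetical sketch of that registration flow. The names (Device, Network, register, upgrade) and the plan structure are all invented for illustration – no such standard or API exists today:

```python
# Hypothetical sketch of the pre-provisioning model described above.
# All names and structures here are invented for illustration.

from dataclasses import dataclass

# Lifetime minimum service: ~100 bytes, once per day, for as long
# as the device works. Baked in when the chip is manufactured.
BASELINE = {"bytes_per_message": 100, "messages_per_day": 1}

@dataclass
class Device:
    chip_id: str        # identity burnt in at chip manufacture
    plan: dict = None   # filled in on first registration

class Network:
    def __init__(self):
        self.registry = {}

    def register(self, device):
        # On first power-up every chip gets the baseline lifetime
        # service automatically: no SIM, no codes, no user action.
        self.registry[device.chip_id] = dict(BASELINE)
        device.plan = self.registry[device.chip_id]

    def upgrade(self, chip_id, messages_per_day):
        # Negotiated later under the product manufacturer's contract,
        # e.g. for fault reporting or pushing a software update.
        self.registry[chip_id]["messages_per_day"] = messages_per_day
```

The design choice worth noting is that the baseline service is granted by the network, not configured on the device, which is what makes zero-touch deployment possible and is why some central licensing authority would be needed.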
Secondly, spectrum. My view is that the IoT LPWAN should be in licensed spectrum, not unlicensed. Whilst that goes against the grain of many current proposals, this is a network that will need to work for at least twenty years. That’s about as long as the oldest 2G data networks (longer than any GPRS network) and almost twice as long as we’ve had Bluetooth or Wi-Fi. The ISM band is already congested and will get more so as operators get hold of the 2.3–2.4 GHz spectrum. (Incidentally, running LTE next to the 2.4 GHz ISM band will probably scupper their plans to offload significant quantities of data to Wi-Fi networks because of interference, but that’s another story.) These IoT devices need dedicated, managed spectrum if the service is going to scale to hundreds of billions and make money for the network operators – and unless it makes money for the operators, it will not happen. The IoT is not a free ride, as some seem to think; it needs valid business models to scale. Refarming an existing LTE channel or a guard band in the 900MHz band would make a lot of sense here. Most of these devices will probably be static and few will cross national boundaries, so the allocation does not need to be global. However, economies of scale, along with a desire to minimise national variants, suggest that a limited number of global frequencies be used.
Thirdly, silicon. To get things going so that they’re cost effective, the terminal solution should try to use existing chips. The past twelve months has seen a step-change in low cost, ultra low power chips which can probably be utilised, rather than waiting for new silicon to be spun. In the course of time that will happen, but the faster we can get the first generation out, the better.
Fourthly, price. I believe the target for silicon should be $1, with no more than a further $1 for other components and no more than $2 for basic lifetime data provisioning. At that level every industry can contemplate building this into almost any device they make. Unless we aim for that, the Internet of Things is just a pipe-dream, limited to vertical M2M applications and pricey toys for geeks. There are massive benefits in gathering data. Most home appliance vendors have no idea how their products are used, how often, or what goes wrong. Having that data allows them to design more effective products and develop much more interesting service models. As a simple example, it allows an all-in leasing model for home appliances which can include power and water costs. Those low dollar prices may not seem much, but multiply them across a trillion devices and the revenue rivals that of any other industry sector.
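The arithmetic behind that claim is simple, but worth spelling out:

```python
# Illustrative arithmetic for the price targets above.
silicon, other_parts, lifetime_data = 1, 1, 2     # dollars per device
devices = 1_000_000_000_000                       # one trillion devices

bom_revenue = (silicon + other_parts) * devices   # hardware revenue
data_revenue = lifetime_data * devices            # connectivity revenue
```

Even at $4 per device all-in, a trillion devices represents $2 trillion of hardware and another $2 trillion of lifetime connectivity revenue – comparable with the largest existing industry sectors.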
All of this needs to be secure; security must be the foundation for all of these points. That argues very strongly for a standards body to take control, as multiple inputs stand a better chance of ensuring a decent security model which covers every aspect of the solution. It also needs a certification scheme to ensure that everything that comes to market conforms, encompassing the whole product chain from sensor device to data access on the network servers. That is one of the biggest challenges, as most standards bodies limit themselves to one part of the puzzle, generally opening up security holes at the interface to the next standard. Weightless tried, but didn’t have the scale. The obvious choice is ETSI, but that means changing many participants’ vision of the future. It would also be beneficial if whoever develops it takes the Bluetooth SIG’s RAND-Z approach to IP, as opposed to the dreaded GSM and 3G patent pools.
One of the reasons we don’t have a suitable network like this is that it’s one of those chicken-and-egg situations where there is little push to design and deploy anything until there is a demonstrable demand for it. Today, cellular connections and personal area networks like Bluetooth and Wi-Fi give the impression that the Internet of Things is progressing quite happily. No-one’s too bothered about the bottleneck that will come in 8–10 years’ time, once the high-value connections have been cherry-picked, because the immediate opportunity of tens of billions is so big. But ignoring it leaves no economic way to connect the trillions of other devices. The standards and infrastructure we need will take a decade to develop. Unless we start addressing them now, things may grind to a halt at the end of this decade.
Because it needs to be nationwide, and preferably continent-wide if not global, it also needs governments and regulators to understand the need and help make it happen. And it needs the industry to stop developing piecemeal solutions and realise that there’s a bigger opportunity to be had. Otherwise the dream of a trillion connected devices will remain a dream. It’s not that there’s a lack of options, just a lack of vision and funding to develop the infrastructure to enable it.