M2M’s impending Hole in the Air

All of a sudden, there’s a lot of activity in the Long Range wireless network community.  In France, Orange has just announced that they’re going to follow in Bouygues’ footsteps in deploying a LoRa network for M2M which will cover the whole of metropolitan France.  That in turn follows on from a similar announcement from KPN that they are planning to do the same thing in Holland, while Proximus are going to cover Belgium and Luxembourg.  It’s a bit like a rerun of the SigFox PR offensive, after they managed to sign up operators in France, Holland, Portugal, Belgium, Luxembourg, Denmark, Spain, the U.K. and San Francisco.  Nor is it just a European phenomenon.  In the U.S., Ingenu, the company which was formerly known as On Ramp, has raised $100 million to roll out its own similar, proprietary network. It seems that there’s a new announcement almost every day.  So it’s interesting to look at why mobile operators are desperately announcing new network technologies to support M2M and IoT applications, when just a few months ago they gave the impression that they would rule the IoT with their 4G networks.

When you stop and look behind all of this activity, you see something that should be worrying the M2M and IoT industry (which is not the same as the cellular industry).  For the last fifteen years they’ve not had to worry much about how they make their data connections – they just embedded a GPRS module and bought a data contract.  But look forward a few years and there’s a worrying hole in the air as operators start to switch off their GPRS networks.  That’s just beginning to dawn on network operators, who see an unexpectedly unpleasant vision of the future, in which their anticipated IoT revenues could disappear into thin air.

I’ve previously written about the specific infrastructure needs of the IoT and the devices which will form it.  Most of the tens of billions of new connected products which the industry is predicting only require low data rates.  That’s similar to the vast majority of current M2M applications, which only send a few tens of bits of data every few minutes.  Almost all of the wireless connected M2M products out there today rely on GPRS (also known as 2.5G) – a cellular data standard that goes back to the mid-1990s.  Plenty of different companies make low cost GPRS modules, which are widely available for less than $10.  To support them, operators have developed data plans that can be bought for under $1 per month.  So if you’re incorporating GPRS connectivity into a product, your overall, incremental monthly connectivity cost is only around $1.50 a month.  That makes it possible to develop a wide range of affordable, cost effective services for those connected devices.

If your application needs more data, the next step is to move to 3G.  That doubles the cost of the hardware to around $20, and the service plans tend to be a bit more expensive.  But if your application needs the higher throughput it may make sense.  Typically applications needing more data incorporate video (security cameras) or displays (Kindles), so the additional cost of hardware is not prohibitive.  Incidentally, the Kindle is a nice example of innovative pricing plans, where the cellular data costs are hidden in the cost of each eBook download.

Moving forward, we now have 4G / LTE, which is a data-only network.  Squeezing more data into the same spectrum is complicated, which increases the cost of a cellular modem to around $40.  Contracts are generally more expensive, although that’s down to individual operator tariffing, starting at around $2.50 per month.  So the amortised cost of adding connectivity jumps to over $6 a month – four times that of GPRS.  That’s sustainable for expensive products like cars, applications which use video, like security, or critical medical devices.  But these more expensive products will deploy in their millions, not billions.  The Internet of Expensive Things is a very minor subset of the IoT.  It’s in a different league to the predicted billions of low cost IoT applications, where even a $1.50 per month fee is pushing the boundary.
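The arithmetic behind these amortised figures can be sketched as follows.  The article doesn’t state an amortisation period, so the periods below are my back-calculated assumptions, chosen to roughly reproduce the $1.50 and $6 figures; real deployments will vary.

```python
def amortised_monthly_cost(module_cost, plan_per_month, amortisation_months):
    """Spread the one-off module cost over the product's connected lifetime
    and add the recurring data plan fee, giving a $/month figure."""
    return module_cost / amortisation_months + plan_per_month

# Module and plan prices are taken from the article; the amortisation
# periods (20 and 11 months) are illustrative assumptions only.
gprs = amortised_monthly_cost(10, 1.00, 20)   # roughly $1.50/month
lte = amortised_monthly_cost(40, 2.50, 11)    # just over $6/month
print(f"GPRS: ${gprs:.2f}/month, LTE: ${lte:.2f}/month")
```

The point the sketch makes is that the one-off hardware cost dominates for LTE: even a generous amortisation period leaves the monthly figure several times that of GPRS.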

That vision of tens of billions of sensors sending data back to the cloud is based on the belief that connectivity will be cheap, often assuming that GPRS will be around for ever and that hardware costs will continue to fall, as they have over the last two decades.  However, operators are starting to turn off their ageing GPRS and 3G networks, as they want to use the spectrum for more 4G.  There is a very logical rationale for that.  The same amount of spectrum used for 4G supports around five times the number of users as 3G and twenty times the number of 2G data users.  More users means more money for the cellular operators.  But to be fair, the network operators also need to do this to increase the network’s capacity to support the growing number of increasingly heavy data users.

So GPRS is disappearing.  Most network operators have plans to turn it off in the next five to ten years.  Some have already done so.  Others are already refusing to certify new GPRS hardware or M2M products for use on their networks.  That’s going to generate a “hole in the air” for M2M and the vast majority of Internet of Things applications, as they cannot afford the higher cost or power consumption of LTE.

The impending demise of GPRS is the reason for the sudden level of interest in long range, low power alternatives, as the networks begin to realise the hole they’ve dug for themselves.  In the rush to support more smartphone data, the 3GPP standards groups have dropped the ball for low power IoT applications.  In the long term there may be other solutions, such as the lower power M0 version of LTE.  The GERAN group in 3GPP has also started some useful work looking for a solution which could be utilised in an existing cellular band – something which has become abbreviated to CIoT, for Cellular IoT.  (Note that this is fundamentally different from the 4G cellular IoT which makes up a large part of most network operators’ future revenue plans.)  Unfortunately the CIoT discussions appear to have degenerated into a commercial battle between different, incompatible solutions from Huawei/Vodafone, Ericsson and Nokia/Intel.  That has predictably led to stalemate, with the infrastructure Neros frantically fiddling whilst their IoT golden goose burns.  Whatever emerges from the traditional specification route, it will be ten years or more before it becomes commonly available, as that’s the timescale over which this industry works in developing and deploying new standards.

So what should an IoT or M2M vendor do?  I’ve already looked at the options in a previous article, and I can also recommend two good, recent reviews from Bryon Moyer at EE Journal and Richard Quinnell at EDN.  Instead I’d like to look at another very pertinent aspect of M2M and IoT product design which is often overlooked – the design time cycle.  It’s important to understand this, as when a company designs a connected product they need to consider not just what networks will be available when the product ships, but whether they’ll continue to be available throughout its working life.

The first part of this is the time to market.  I used to be the CTO of a wireless module company called Ezurio (now part of Laird Technologies), and we routinely tracked the delay from getting a design win to seeing that product come to market.  There is a common perception that it only takes 6 – 12 months.  That’s largely fiction.  Over the course of many years we found the average time is closer to 24 – 30 months.  A fair number of projects take up to three years, and in one case – a medical device – it took almost six.  I’ve shared these numbers with other wireless module companies and they all acknowledge that they’re typical.



The reason it takes so long is that many aspects of these projects are difficult.  Wireless connectivity is difficult, but it’s just one part of the solution. The original product needs to be redesigned; typically new sensors are integrated and designers need to think about what data to send and when.  The comms contracts need to be set up and tested; the data flow to the server developed and the end-to-end performance tested.  Once that’s working it needs to be integrated with the server applications and the security of the whole system tested.  Then the product needs to be tested thoroughly, certified and taken through regulatory testing.

That’s only the first part of the process.  It’s just the time between a manufacturer choosing a wireless module and starting the first production run.  Before the wireless module supplier is chosen, there’s generally at least six months of product specification.  Once it’s in production, there may be no customer – many products have to be brought to market before customers will try them.  That means that it may be another 6 – 12 months before a lead customer places any volume orders and a further two years before they start large scale deployments.



If we put these together, it’s not unusual to see an overall timescale of four years between the start of an M2M project and the first products being deployed, with another two years before they reach critical mass.  After that, further deployments may well continue for another five years or more and it’s not unusual to see the M2M devices operating for ten years.
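The stages described above can be summed to show how the four-year figure emerges.  The month ranges come from the article; the grouping into three named phases is my own framing.

```python
# Stage estimates in months, (low, high), taken from the article's figures.
stages = {
    "product specification": (6, 6),
    "design win to first production run": (24, 30),
    "production to lead-customer volume orders": (6, 12),
}

low = sum(lo for lo, hi in stages.values())
high = sum(hi for lo, hi in stages.values())
print(f"Project start to first volume orders: {low}-{high} months")
```

That gives 36 – 48 months before volume orders even begin, after which the article’s further two years to large scale deployment pushes critical mass out to around six years.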

If you started an M2M project twenty years ago you didn’t have to worry about the wide area wireless standard – it would be SMS or GPRS – both supported by 2.5G modems.  If you started ten years ago, it would still be a safe choice.  But if you start today, whatever you choose may not be there in ten years’ time.  In fact, there’s a pretty high chance it may not be there when you first ship in four to five years.

It’s very difficult to see where to place your bets.  If you were making that decision at the start of 2013, one of the most high profile options was Weightless and whitespace.  At the start of 2014 the flavour of the day was SigFox.  Right now, that crown has passed to LoRa, but by the beginning of next year, the smart money may have moved to Ingenu or Accellus.

Orange says its LoRa network is “especially useful for connecting sensors in Smart Cities”.  In a similarly rose-tinted release, KPN says it stays “committed to cellular for self-driving cars and ATMs, but LoRa is great for lampposts and rubbish bins”.  All of the operators jumping on the LPWAN bandwagon like to claim they’ve tested these new LPWANs “at scale”, which normally equates to tests in one university town with 25 – 40 partners.  They then claim that this will get them a large percentage of the 25 – 30 billion new IoT connections they predict by 2020 – just five years away.

It’s very difficult to know when any one of these contenders will have enough infrastructure in place to make them a safe bet, rather than a local solution.  We also don’t know how these incompatible technologies will scale in the real world, what level of interference they will see (as they are mostly being deployed in unlicensed bands), or at what point they will move from PR to patent injunctions to try and gain supremacy over their rivals.  None of this is the certainty that has supported the M2M industry for the last twenty years.

At some point there is likely to be widespread availability of LTE-M0, but it’s still not clear whether it will hit the price or power consumption that is needed for many IoT applications.  Nor is it likely to happen much before 2023.  In the meantime networks are rushing into alliances with various LPWAN providers, but no-one can predict how long they will last, or how serious the deployments will be.  The current flurry of announcements seems to be French companies enjoying beating up other French companies, with Bouygues promoting LoRa over SigFox and the two camps dismissing each other as the failing standard.  They’ve taken over from the UK battle about who has the best Weightless solution, as Neul has disappeared, to be replaced by nWave.  Neither debate gives confidence in choosing a global infrastructure standard.

The rest of the world, and the US in particular, remains remarkably quiet, with the exception of OnRamp’s recent announcement of funding and the change of its name to Ingenu.  I wonder whether they based that new name on the meaning of “ingénue” – a young chorus-girl looking for a sugar daddy to marry her – as that would seem particularly apposite for a VC funded startup looking for an exit.  That’s just idle speculation, but it does highlight the fact that most of these new LPWAN options come from VC funded startups whose eye is more on an exit than on creating an industry standard which will support the first few decades of the IoT.  VC funded, proprietary wireless standards do not have a good track record for longevity.

I wish there was an easy answer, but I don’t know of one.  Instead I see the industry sleepwalking toward the edge of a cliff.  Because we have cellular data, everyone thinks that an IoT data network exists.  It does today, but in a few years its most important component will be turned off.  LPWAN deployment may be the answer, but it needs to move rapidly if it is to do more than cover a few cities, and we need to decide which one it will be.  That will be much more challenging than the LPWAN proponents admit.  However, without an infrastructure, we can forget the extra tens of billions of connected devices – the IoT will just be the Internet of Expensive Things.

If you’re a maker and interested in local connectivity, the Things Network is trying to set up a global LoRa network in cities.  They successfully crowdsourced a complete city-wide Internet of Things data network with the people of Amsterdam in six weeks.  Now they’re launching a global campaign to repeat this in every city in the world.  I’m supporting their Kickstarter campaign, but the maker IoT is never going to be the same as the real IoT.  For that, it looks as if we may need to wait a lot longer than most analysts and network operators suggest.