Creative Connectivity - the site for eHealth, telematics and wireless

What’s next for AirPods?

February 16th, 2018 |  Published in Usability & Design

There’s growing speculation that Apple will be launching their next generation of AirPods sometime this year, so I thought it would be interesting to try to predict what might be in their next generation of earbuds.  The hearables market is moving very quickly and there’s no shortage of technology for Apple to choose from.  But the AirPods are a little different to anything else that Apple has ever brought to market.

The biggest difference is the way it has changed their development model.  Historically, Apple is a follower.  They don’t invent product categories – they wait for other major companies to create the market, then come in with a slicker product which delights customers.  They concentrate on everything which is needed for people to feel that Apple invented the experience.  After that, they create clear water between themselves and their competitors by constantly increasing the level of delight.  The AirPod is arguably the first product where Apple have made the market themselves.  There was a smattering of crowdfunded earbuds before the AirPods were announced, but they were only shipping in tens of thousands.  In contrast, AirPods are shipping in the millions.  For once, Apple wasn’t competing with established industry giants, but with small, often poorly funded startups.  That’s what makes the question of what might be in an AirPod 2 or AirPod 3 so interesting.

At this point I should state that I have nothing to do with Apple, nor any inside information on their plans, but I have been closely involved with the development of hearables since I came up with the name back in 2014.  It’s an area which has seen massive levels of innovation, not just in the products themselves, but also in the components which go into them.  Small, wireless earbuds are still not easy to make, as many of the start-ups have discovered.  Doppler Labs, one of the pioneers, has dropped out as it ran out of cash, citing the unexpected difficulty of developing such compact devices.  Condensing all of this functionality into such a small space, with a limited battery capacity, is challenging on almost every front.  It needs some very specialised skills which have traditionally been the preserve of the hearing aid industry – one reason for the high price tag of hearing aids.  That manufacturing complexity plays to some of Apple’s strengths – their ability to innovate in packaging density and their development of novel form factors and manufacturing techniques.  They are very good at making things small.

What is interesting about the first generation of AirPods is how Apple backed off from incorporating too much technology.  Whereas companies like Bragi and Doppler were scrambling to push every possible sensor into their earbuds, Apple just did audio.  The AirPods offer some nice touches – the elegant pairing, a cleverly designed battery case and the optical sensors, but at the end of the day they offer little more functionality than existing wired earbuds, other than the fact that they’re wireless.  I applaud that approach, as audio is the driving reason why people will purchase and use earbuds.  Even then, with all of their resources, Apple struggled, with the initial product deliveries delayed by three months as they sorted out manufacturing.

Since then, sales have taken off.  KGI Securities has estimated that around 13 million pairs of AirPods were sold in the first year; a number which will double to around 27 million in 2018.  That’s more than the number of Apple Watches which were sold in their first year.  I suspect that it’s far more than Apple had anticipated.  Given their ownership of Beats, some analysts were surprised that Apple launched their own brand of wireless earbud.  I think it’s fairly clear why they did.  Ever since the iPod, with its inspired advertising imagery, Apple has owned the iconography of mobile music.

The decision to remove the 3.5mm audio jack from the iPhone 7 had the potential to kill that iconic image, which I suspect was a step too far for Apple’s marketing.  My guess is that the AirPod development was largely a sop to marketing to soften that blow, and I wonder how long a life most gave the new product.  In the event, it generated a new zeitgeist, which means that it’s now on course for a life of regular new releases.  That means that Apple needs to start looking at a roadmap for what was initially a statement product.

There’s certainly no lack of options to feed into that roadmap.  Earbuds have come of age as a result of recent developments in four main technology areas:

Silicon

Running a tiny device like an earbud or hearing aid from a battery has always been challenging.   Hearing aids have generally gone for primary zinc-air batteries, as they give one of the highest energy densities of any battery technology.  However, that means they need to be regularly replaced.  That doesn’t fit with the consumer requirement for rechargeability.  The problem is that rechargeable batteries don’t last that long, particularly if you’re streaming music via Bluetooth.  They’re also larger, which is a problem for an ear-worn device.  Battery technology hasn’t progressed much in recent years, although specialist companies like ZPower are making some interesting advances in miniature rechargeable cells.  What has changed is the improved power consumption of the chips.

The most significant step has come in newer generations of Bluetooth chips.  Much of the current activity in the market was started by Qualcomm’s (née CSR’s) CSR8670 chipset, which set new standards for low power Bluetooth audio streaming.  It made it possible to design earbuds which could last several hours.  The other innovation has come from the Digital Signal Processing (DSP) chips which process the audio.  Hearing aid companies had a history of designing their own, or working with specialist chip companies to make very low power DSPs.  In the past 18 months, these have become more readily available as standard components, driven by the growing demand for audio processing in voice recognition devices, led by Amazon’s Echo.  We’re now seeing these coming together in specialised chips aimed at hearables, of which the latest and most interesting is Qualcomm’s QCC5100 series.  Containing a complete Bluetooth 5 radio and stack, dual processors and dual audio DSPs, it claims to reduce power consumption by up to 65%.
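To see why that kind of saving matters so much, a back-of-envelope calculation helps.  The figures below are illustrative assumptions only – a ~25 mAh cell and a ~5 mA average draw while streaming are in the right ballpark for a first-generation true-wireless earbud, not published specifications:

```python
def playback_hours(capacity_mah: float, draw_ma: float) -> float:
    """Rough battery life estimate: capacity divided by average current draw."""
    return capacity_mah / draw_ma

# Illustrative figures only, not any manufacturer's specification.
baseline = playback_hours(25, 5.0)          # roughly 5 hours of streaming
reduced = playback_hours(25, 5.0 * 0.35)    # a 65% lower draw: roughly 14 hours
```

In practice the saving only applies to the radio and DSP share of the power budget, so real-world gains are smaller, but the leverage is clear: cutting silicon power stretches playing time far more cheaply than squeezing extra capacity into the battery.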

Apple, of course, make their own wireless chips – the W1 in the AirPods and the more recent, higher efficiency W2 in the Watch Series 3.  Like other suppliers, they will be working on methods to lower power consumption in order to maximise battery life.

Audio Transducers

The second area of advance is in transducers – the miniature speakers and microphones which need to be packaged into earbuds to provide acceptable audio performance.  Here, the requirements of the mobile phone industry have driven innovation, particularly the use of Micro-Electro-Mechanical Systems (MEMS) transducers.  In the case of microphones, these use sub-miniature, etched features on silicon substrates, integrated with silicon circuitry to make highly advanced microphones with outstanding performance.  Companies like Knowles have integrated ultra-low power DSPs within these microphones, which can be combined to provide beam steering.  This is a technique which has been seized upon by voice activation products to detect a user’s voice.  By detecting the differences in timing and volume across an array of multiple microphones, it’s possible to configure them in real time to emulate a directional microphone which can track you as you walk around a room.  Within an earbud, multiple microphones and beam steering help to separate a user’s voice from other sounds around them, or in the case of a hearing aid, to preferentially capture the sounds directly in front of you.
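The principle behind beam steering can be sketched in a few lines.  This is a minimal delay-and-sum illustration, not any manufacturer’s implementation: each microphone’s signal is shifted by a steering delay so that sound from the chosen direction lines up and adds coherently, while off-axis sound stays misaligned and partially cancels:

```python
import math

def delay_and_sum(mic_signals, delays):
    """Shift each microphone signal by its steering delay and average.
    Sound from the steered direction adds coherently; off-axis sound does not."""
    n = len(mic_signals[0])
    out = [0.0] * n
    for sig, delay in zip(mic_signals, delays):
        for i in range(n):
            j = i - delay              # apply the steering delay
            if 0 <= j < n:
                out[i] += sig[j]
    return [v / len(mic_signals) for v in out]

# A 1 kHz tone sampled at 16 kHz, reaching the second mic 3 samples late.
fs, n, d = 16000, 256, 3
tone = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(n)]
mic1 = tone
mic2 = [0.0] * d + tone[:-d]

# Delaying the first mic by the same 3 samples re-aligns the two signals.
steered = delay_and_sum([mic1, mic2], [d, 0])
unsteered = delay_and_sum([mic1, mic2], [0, 0])
```

Measuring the output level shows the steered sum preserves the tone while the unsteered one partially cancels it; real products do this adaptively, across more microphones and all frequencies at once.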

There have been similar advances in the speakers themselves – the miniature driver elements generating the sound.  These have been driven by consumer demand for studio quality in high-end wired earbuds, along with the development of multiple directional speakers for in-home audio and voice products.

Audio Algorithms and Machine Learning

Until recently, clever audio processing was largely the domain of the hearing aid companies, or specialist noise cancelling headset manufacturers, led by Bose, who were largely responsible for turning this into consumer technology.  The advent of low power audio DSPs has seen a surge of interest, coupled with new audio detection algorithms.  The result is that far more processing options are now available to product developers, increasingly coming from microphone and chip suppliers.

The first stop is normally noise cancellation – reducing outside noise to allow a better rendition of music without the need to turn the volume up.  But companies like Doppler and Nuheara have been extending this to help “curate” the ambient sound around you, adapting the mix of local sound and streamed music to fit the user’s preference.  That gives a better user experience, but can lead to problems.  As you turn down the ambient volume to concentrate on music, you lose track of sounds around you which may be important, whether that’s friends and colleagues trying to talk to you, or the warning sounds of traffic if you’re listening to music as you’re walking, cycling or travelling.  The problem of pedestrians not hearing traffic is serious – it’s driven the town of Bodegraven in the Netherlands to install traffic lights in the pavement, so that pedestrians looking down at their phones don’t walk out into the road.

Step forward machine learning.  Audio Analytic – a company in Cambridge – is pioneering the concept of sound recognition.  That’s distinct from voice recognition.  Sound recognition allows the microphones in an earbud to pick up traffic noise, or the sound of someone talking to you.  Once you have that information, you can adjust the mix of ambient sound and streamed music according to context, helping to bring the listener back into contact with the real world around them, while maintaining the optimum listening quality.  A few years ago, this level of recognition needed the power of cloud processing.  Today the technology has developed to the point where it can be run in low power DSPs, even the tiny ones embedded in MEMS microphones.
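The mixing logic this enables is simple to sketch.  The class labels and gain values below are entirely hypothetical, purely to illustrate how a recognised sound could re-balance the blend of streamed music and ambient passthrough:

```python
# Hypothetical labels a sound-recognition model might emit for each audio frame.
PRIORITY_SOUNDS = {"car_horn", "siren", "nearby_speech"}

def mix_gains(detected, music_gain=1.0, ambient_gain=0.1):
    """Duck the music and open the ambient passthrough when an important
    sound is recognised; otherwise keep the user's preferred mix."""
    if detected & PRIORITY_SOUNDS:
        return 0.3, 1.0                  # duck music, let the world in
    return music_gain, ambient_gain

def mix_frame(music, ambient, detected):
    """Blend one frame of streamed music with the microphone feed."""
    mg, ag = mix_gains(detected)
    return [mg * m + ag * a for m, a in zip(music, ambient)]
```

A real system would ramp the gains smoothly to avoid audible jumps, and run the classifier continuously on the DSP, but the principle – context-dependent gains driven by recognised sounds – is the same.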

Biometric Sensors

The final part of the tech equation is biometric sensors.  The ear is a far better place than the wrist for a biometric sensor, and will probably be where most of them end up.  Unlike our wrists, which we wave about, our ears are designed to be stable – it’s the part of the body which is responsible for our balance.  The ear canal is protected, warm and moist, and is pretty much close to perfect for any contact sensor.  Manufacturers like Valencell are already supplying temperature, VO2 and heart rate sensors, with others working on pulse oximetry, blood pressure and EEG.  The ear is even being used for authentication, as an alternative to fingerprints.

With this wealth of technology, the question is which will Apple choose?  I’m not sure that looking at other manufacturers gives us much of a clue.  Google faced the same question when they removed the audio jack from the Pixel 2 and had to complement the phone launch with their Pixel Buds.  They made a very Google sort of choice and plumped for something they felt safe with – translation.  The problem there is that although it’s very clever, for most users it’s a five-minute wonder, not a compelling reason for purchase.  It fails to hit the buttons which the AirPods push: good audio quality, long battery life, iconic appeal and ease of set-up, although Android’s Fast Pair process may finally address that last issue.

Nor do the crowdfunded offerings provide much help.  The one common strand which has characterised most of them is an over-enthusiastic belief that more tech is better.  Rather than doing what Apple did, which is stick to the basics and get them right, they’ve loaded their offerings with embedded music players, biometric sensors and power-hungry audio processing, generally providing a far from compelling offering, with very limited battery life.

Which brings me to the predictions:

AirPod 2

Despite what other pundits expect, I don’t think Apple will up the technology stakes in their next release.  One of the reasons that the AirPods have done so well is that they work.  They work best on iOS, but they also work very well with any other phone.  That’s an important point.

If Apple want that experience to work across platforms, then most of the improvements will need to be embedded in the AirPods.  Bluetooth currently has some limitations in audio topology, largely because it treats music and voice differently, so doing clever things such as better integration of music streaming and concurrent voice commands is tricky.  That will get solved in the future with the next major Bluetooth release; to solve it now would need proprietary extensions which only work on iPhones.  My guess is Apple quite likes having an iconic product which works on every other brand as well.  That argues that for the next generation they’ll concentrate on improving the cross-platform experience, which leads us to the audio processing enhancements.  Noise cancellation is an obvious one – it’s a quick win with a noticeable user benefit – it will make the AirPod 2 sound better.  But by itself it’s not the “clear water” differentiator which Apple is so good at.  To obtain that, I’d expect to see them add sound recognition, so that they can let users mix in outside sound.  Sound recognition is a new technology, which most users will be unaware of.  It’s a feature which will delight, and make the AirPod even more attractive.

These features will need more power, which puts a strain on battery life.  The more recent W2 chip will go some way to compensate, but I’d expect to see enhancements in power management, probably low power DSPs embedded in the microphones and some tweaking of the battery.  But it should be possible.  If anyone can persuade component manufacturers to go that extra mile in tech development, it’s Apple.  As for translation, I suspect they’ll recognise it as an unnecessary novelty and leave it for apps on the phone.

AirPod 3

Jump on to 2020 and it’s time for the AirPod 3.  By then we should have a lower power Bluetooth audio spec, which allows seamless combinations of voice and music, along with new topologies such as sharing music.  So those will be in by default.  But where the AirPod 3 steps up a gear will be the introduction of a range of biometric sensors.  As with the Apple Watch, AirPod 3 is Health by Stealth.  While you’re wearing them, you’ll be generating a set of health data of unprecedented variety and accuracy.  The evolution of the Apple Watch in the intervening years will have resulted in a cloud platform that will have the capability of aggregating the data and possibly starting to do something useful with it.  That’s a long-haul job for any company – there’s a significant time lag between collecting data and turning it into meaningful or actionable insight, but it’s already beginning to happen.

The real innovation in the AirPod 3 will not be the addition of sensors, but a step up in the processing ability of the earbuds.  AirPods 1 and 2 are essentially embedded devices which just do one job – they play music.  They don’t generate data, nor do they allow user interaction beyond basic configuration.  By the time we get to the AirPod 3 there will be a new generation of even lower power processor chips which will allow the introduction of earOS – the operating system for smart earbuds.  The AirPod will no longer be a dumb peripheral, but will run apps, letting developers access the biometric data and voice, and use these to develop and deploy new applications running in your ear.  That could be selecting music based on your physical or mental state, or providing a more nuanced approach to voice recognition, bringing the power of voice assistants to a personal level.  In the years to come, earOS will eclipse watchOS.  As it matures, it may well become the Trojan Horse that brings about the demise of the smartphone, as we discover that it’s easier to talk to the internet than having to touch it.  But that’s another article.

As I said before, I don’t know what Apple’s plans are.  All of this is conjecture.  But whatever happens, it is going to be a very interesting journey.

 

Read more of my articles on hearables at http://www.nickhunn.com/?s=hearables


About Creative Connectivity

Creative Connectivity is Nick Hunn's blog on aspects and applications of wireless connectivity. Having worked with wireless for over twenty years I've seen the best and worst of it and despair at how little of its potential is exploited.

I hope that's about to change, as the demands of healthcare, energy and transport apply pressure to use wireless more intelligently for consumer health devices, smart metering and telematics. These are my views on the subject - please let me know yours.
