AirPods – a Speculative Teardown

On 7th September, Apple announced the demise of the 3.5mm audio jack.  Alongside that, they introduced their AirPods, helping to stoke the momentum for a new world of hearable devices.  The loss of the jack was a move which generated howls of anguish from the wireophile community, along with a flurry of speculation about how AirPods work, as well as what Apple’s new W1 wireless chip is doing.

Having worked with wireless standards and hearables for several years, I found much of that speculation ill-informed.  Once AirPods come to market in October, companies like iFixit and Chipworks will take them to pieces and we’ll have a better idea of exactly what Apple have done.  But those first teardowns are still a few months away.  So I thought it would be interesting to attempt a speculative teardown, based on how I might have designed them, and on the limited information which is in the public domain.  I also think I know what Apple’s second wireless chip will be, and it’s not the W2.

Before I start, I should state that I have no involvement with Apple or any of the chip companies supplying them; I don’t own an iPhone 7, nor have I been one of the lucky people who have had a chance to see or play with the early review versions of Apple’s AirPods.  Everything you read from this point is best described as reverse imagineering.  It’s all about having the courage to guess.

The first thing you notice about the AirPods is that they look totally different from almost every other wireless earbud.  Others – Bragi, Nuheara, Doppler, Onkyo, et al – have all gone for a design which fits completely in your ear.  Apple have basically taken the design of their existing EarPods and cut off the cables.  It’s led to a host of parodies, of which I’d recommend the video by Conan O’Brien, but it’s a brilliant move.  As I’ll explain below, it makes it much easier to solve most of the issues which have plagued other earbud manufacturers.  Apple is probably the only company that could have got away with this form factor, particularly by supplying them in white.  Anyone else would probably have attempted to make them more discreet, offering them in black, silver or flesh tones.  Apple have eschewed that to make a design statement that couldn’t be much more “on your face”.  Time will tell whether that aspect of the product has worked.

On to the teardown.  For starters, it’s clear that the audio is using classic Bluetooth A2DP.  That’s pretty obvious, as Apple and reviewers have pointed out that AirPods can be paired with, and work with, other phones.  If they were using some special Apple protocol or other wireless system, that wouldn’t happen.  We’ve also heard that AirPods can be shared between two people, each hearing the music without needing to stay close to the other.  That suggests that the same audio data is being picked up by each AirPod, rather than being relayed between them.

This is where it gets interesting.  Standard Bluetooth A2DP (Advanced Audio Distribution Profile) is designed to send a single stream of stereo audio data to one Bluetooth receiver, which then separates it into left and right audio.  A2DP can stream to multiple devices, but that’s more complex.  There’s a white paper the Bluetooth SIG published ten years ago explaining how to do it.  Each device needs to negotiate parameters such as codec configuration.  It’s important that they choose the same ones, as they receive the audio as separate transmissions, which means the packets arrive at slightly different times and, if the parameters differ, may take different amounts of time to decode and render.  If the stream is going to two different people that’s not a problem, but if it’s sent to left and right earbuds, the audio needs to be rendered within about 20 microseconds of each other.  Any more than that and the sound image will appear to be off to one side; if the synchronisation drifts, the sound source will appear to move around, potentially inducing a feeling of nausea.
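
To put that 20 microsecond figure in context, here’s a back-of-envelope calculation using the textbook interaural time difference model.  The ear spacing and the simple sine relationship are my assumptions for illustration, not anything Apple has published:

```python
import math

C = 343.0   # speed of sound in air, m/s
D = 0.18    # assumed ear-to-ear spacing, m

def apparent_shift_deg(offset_s: float) -> float:
    """Angle the stereo image shifts for a given left/right timing error,
    using the simple interaural time difference model dt = D * sin(theta) / C."""
    return math.degrees(math.asin(min(1.0, offset_s * C / D)))

for us in (20, 100, 500):
    print(f"{us:>3} us error -> image shifts ~{apparent_shift_deg(us * 1e-6):.1f} degrees")
```

A 20 microsecond error nudges the image by only a couple of degrees, but a few hundred microseconds of drift swings it well off to one side – which is exactly the wandering-image effect described above.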

Companies have developed a number of proprietary ways to keep the two earbuds synchronised.  The first was from Cambridge Silicon Radio (now part of Qualcomm), who developed a method where one chip receives the stereo signal, separates out the left and right channels, and resends one of them to a Bluetooth chip in the other earbud.  They included synchronisation signals, so that the audio output of both chips could be coordinated in time.  You also need to do this if one earbud is itself a music player, streaming data to the other ear, which is a particularly difficult use case.  Others have streamed over Bluetooth A2DP to both earbuds and tried to use Bluetooth Low Energy to send synchronisation signals between the ears.  Others have done the same thing using Near Field Magnetic Induction (NFMI).  Some have even attempted to stream audio between earbuds using NFMI.
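
None of these vendors publish their synchronisation protocols, but most of them reduce to some variant of a two-way timestamp exchange – the same maths NTP uses.  A minimal sketch, with all the numbers invented:

```python
def estimate_offset(t1: float, t2: float, t3: float, t4: float):
    """Two-way time transfer between earbuds.
    t1: sync request sent (primary bud's clock)
    t2: request received (secondary bud's clock)
    t3: reply sent (secondary bud's clock)
    t4: reply received (primary bud's clock)
    Returns (clock offset of secondary vs primary, round-trip link delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Toy numbers: secondary clock 150 us ahead, 40 us of link delay each way.
offset, delay = estimate_offset(t1=0.0, t2=190e-6, t3=210e-6, t4=100e-6)
print(f"offset ~{offset * 1e6:.0f} us, round trip ~{delay * 1e6:.0f} us")
```

Once each bud knows the offset, both can schedule their audio rendering against a common clock.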

The problem with all ear-to-ear communication is that the head is remarkably effective at blocking 2.4GHz radio propagation.  Designs which fit snugly within the ear have real problems getting a signal across.  The bigger the earbud, the easier this is, as you can include a bigger antenna, but the best solution is to put the antenna outside the ear.  Apple’s design does exactly that.  By locating the antenna in the long white battery and microphone boom, Apple moves it closer to the jaw, where there’s a lot less attenuation from one side of the head to the other.  So I surmise they’re using A2DP to stream audio from the phone to each AirPod and probably using BLE control signals between them to synchronise the audio rendering.  So there’s no need for NFMI.

I spent a few minutes wondering whether Apple might have extended their MFi implementation, which apparently streams audio separately to left and right hearing aids using Bluetooth Low Energy.  The Bluetooth SIG is currently developing a set of next generation audio standards which will bring audio to BLE, but that’s for the future.  In any case, the use of BLE for audio would result in the battery lasting days, not hours, so BLE audio is clearly not in play here, although I would expect that Apple have designed the BLE part of the W1 chip to support it in the future.  We’ll come back to what’s in the W1 later.

Although the battery life is not days, one of the marketing claims for AirPods is their five-hour battery life and better audio quality.  Some of the initial reviewers have noticed the improvement, particularly the range and the fact that the audio rarely breaks up.  We also know that the W1 chip is being used in Beats’ Solo 3, claiming 40 hours, which is double that of earlier models.  Apple states that the improvement is “driven by the efficiency of the Apple W1 chip”.  So how have they done that, and how much is just marketing?

Let’s start with audio quality.  I see no reason why Apple would move away from the AAC codec they’ve used since the iPod, so there will be no change there.  It’s possible they may have lowered the bitrate, as they’re now sending two streams, and coexistence would improve with less Bluetooth air-time.  However, the main issue with wireless audio quality isn’t the codec or bitrate.  It’s those annoying clicks and pops which upset people, and they are generally caused by fading in the radio signal.  Interference can be a problem, but Bluetooth’s adaptive frequency hopping generally takes care of that.  I’ve seen a number of articles suggesting that Apple is discarding some Bluetooth features or using proprietary protocols, but that’s unlikely, otherwise the performance improvement would only be seen with their own phone.  They could have done that, but if they had, it would not have made sense to use the W1 chip in the Beats headset, as it would largely confine their market to Apple users.  The far better, and universal, approach is to improve the quality of the radio signal to increase the chance of packets being received.
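
To get a rough sense of what lowering the bitrate buys, assume AAC at around 256 kbps and roughly 1.4 Mbps of usable payload throughput on a 2 Mbps Bluetooth EDR link – both figures are my assumptions, not published numbers:

```python
# Assumed effective payload throughput of a 2 Mbps Bluetooth EDR link, in bps.
EFFECTIVE_THROUGHPUT_BPS = 1.4e6

def airtime_fraction(bitrate_bps: float, streams: int) -> float:
    """Rough share of the 2.4 GHz airtime the A2DP traffic occupies."""
    return streams * bitrate_bps / EFFECTIVE_THROUGHPUT_BPS

for kbps in (256, 192, 128):
    print(f"2 x {kbps} kbps -> ~{airtime_fraction(kbps * 1e3, 2):.0%} of airtime")
```

Dropping from 256 to 192 kbps frees around ten percentage points of airtime for retransmissions and Wi-Fi coexistence, at an audio cost few people would notice.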

My guess is that’s what they’ve done.  The problem is analogous to getting heard in a noisy room – speak louder.  For a Bluetooth radio, that means increasing the output power, but as you push it up, the power consumption increases substantially, driving down the battery life.  It’s likely the Bluetooth radio in the AirPod is running at around 10 dBm, which is a bit of a sweet spot.  As well as shouting a bit, you can also improve the link budget by listening better, which means improving the radio’s receive sensitivity, typically by adding a low noise amplifier.  So I’d guess there’s one of those in the W1.
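
Both knobs feed directly into the link budget.  A quick free-space calculation shows why a few dB matter; the transmit power, sensitivity and the 20 dB set aside for body and antenna losses are all my assumptions:

```python
import math

def fspl_db(distance_m: float, freq_hz: float = 2.44e9) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Assumed figures, none of them published: 10 dBm transmit power,
# -93 dBm receive sensitivity, 20 dB allowance for body and antenna losses.
TX_DBM, SENSITIVITY_DBM, BODY_LOSS_DB = 10, -93, 20

for d in (1, 10, 30):
    margin = TX_DBM - fspl_db(d) - BODY_LOSS_DB - SENSITIVITY_DBM
    print(f"{d:>2} m: path loss {fspl_db(d):.0f} dB, link margin {margin:.0f} dB")
```

Every dB of extra output power or receive sensitivity goes straight into that margin, which is why an integrated LNA is such a cheap win against fading.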

Getting the five hour battery life may just be natural evolution.  Bragi’s Dash manages around 3 hours, but their design has a lot of other sensors and is also based on a chip which was introduced in 2011.  In general, each spin of a wireless radio chip to a smaller geometry reduces the power consumption by about 20%.  A chip released today would be about two generations further on, so that simple fact, combined with the size of the battery, is probably all Apple needs, and would explain how Beats get to their 40 hours.
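
The compounding is easy to check.  Taking the 20% figure at face value (an industry rule of thumb, not a measured number):

```python
# Rule of thumb from the text: each process shrink cuts radio power by ~20%.
SHRINK_FACTOR = 0.8   # power draw multiplier per process generation

def battery_life_hours(base_hours: float, generations: int) -> float:
    """Battery life scales inversely with power draw, all else being equal."""
    return base_hours / (SHRINK_FACTOR ** generations)

# A 2011-era design giving ~3 hours, moved on two process generations:
print(f"~{battery_life_hours(3.0, 2):.1f} hours")   # ~4.7 hours, before any battery change
```

That gets you most of the way from 3 hours to 5 before you account for a bigger cell.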

That brings us to the AirPod’s battery.  Another advantage of the long tube is that you can fit a decent sized battery in it (certainly compared with other earbuds or hearing aids), which allows the transmit power to be increased.  However, Apple has done something else which is very clever.  They’ve designed a charging unit which is small and which encourages people to put their AirPods back in the charger as soon as they take them out of their ears.  That means they’re not sitting on a desk or in a pocket looking for an ear and using power while they do so.  It’s an inspired design detail.  If you do leave them out, they have two optical sensors to turn them off.  (Two makes it easier to detect that they’re somewhere other than your ear.)  The sensors will only need to run at a duty cycle of 0.1% or less, so will have very little effect on power consumption.
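
Duty cycling is what makes that claim plausible.  With an illustrative active current (my figure, not a datasheet value), the average drain is tiny:

```python
# Illustrative numbers: a typical optical proximity sensor draws a few
# milliamps while sampling; the 0.1% duty cycle is the estimate above.
ACTIVE_CURRENT_MA = 3.0
DUTY_CYCLE = 0.001

average_ua = ACTIVE_CURRENT_MA * DUTY_CYCLE * 1000
print(f"average sensor drain: ~{average_ua:.0f} uA")   # negligible next to the radio
```

A few microamps on average is lost in the noise next to a radio drawing tens of milliamps while streaming.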

A key feature of the AirPods is support for Siri, which is being promoted as the way of controlling what you play.  That means you need some decent microphone technology to make sure you capture the user’s voice, rather than ambient sound.  Putting a microphone at each end of the battery tube is an obvious way to do this, allowing a degree of beam forming to take place.  Once again, the physical design gives them a clear advantage over in-ear designs.  Having two microphones almost certainly means there’s a DSP core in the W1 to process the audio.  I’d also expect Apple to take advantage of the accelerometers to detect bone vibration, allowing a further level of processing to extract your voice from background noise.  Unless they’re doing that, a single accelerometer would be enough to detect Siri taps.
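
For anyone unfamiliar with two-microphone beam forming, the classic delay-and-sum approach gives the flavour of what that DSP would be doing.  The mic spacing and sample rate here are illustrative, and Apple’s actual processing is certainly more sophisticated:

```python
import numpy as np

C = 343.0            # speed of sound, m/s
MIC_SPACING = 0.03   # assumed spacing between the two boom mics, m
FS = 16000           # sample rate, Hz

def delay_and_sum(front: np.ndarray, rear: np.ndarray, steer_deg: float) -> np.ndarray:
    """Two-mic delay-and-sum beamformer: shift one channel so that sound
    arriving from steer_deg adds coherently, then average the channels."""
    delay_s = MIC_SPACING * np.cos(np.radians(steer_deg)) / C
    shift = int(round(delay_s * FS))
    return 0.5 * (front + np.roll(rear, -shift))

# Toy usage: a 300 Hz tone arriving along the boom axis (0 degrees),
# reaching the rear mic about one sample after the front mic.
t = np.arange(FS) / FS
front = np.sin(2 * np.pi * 300 * t)
rear = np.roll(front, int(round(MIC_SPACING / C * FS)))
voice = delay_and_sum(front, rear, steer_deg=0.0)   # on-axis sound reinforced
```

Sound arriving from the steered direction adds coherently while off-axis sound partially cancels, which is how the boom favours your mouth over the room.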

CNET’s reviewer noticed that the optical sensors determine the primary earbud for phone calls – selecting the first AirPod to be inserted as the dominant one.  As this appears to be notified to the phone regardless of whether you’re streaming music, it reinforces the view that they’re using Bluetooth Low Energy for control, rather than AVRCP, although it looks as if this can be selected by the user as an option for both iPhones and other phones.

The “magical” pairing also tells us that BLE is in use.  The Bluetooth features needed for proximity-based pairing have been in the spec since 2010, but Apple is the first to put them together so intelligently.  In the various videos, we see the connection screen pop up on the iPhone within two seconds of the charger lid being opened, with both AirPods paired to the phone and ready to use within a further five seconds.  There could be NFC involved, but I doubt it, mainly because the iPhone app shows the battery life of the charger as well as of each AirPod.  That suggests the charger has a W1 chip as well.  If that chip starts advertising (think iBeacon) when the lid is opened, it would bring up the pairing app, and pairing would start once the user taps “Connect”.  If the charger is already paired with its two AirPods, it can easily share the credentials between them and the phone in the few seconds remaining.  An alternative approach would be for the charger to talk to the two AirPods via the split-ring charging contacts at the bottom of the stem, but that feels like extra, unnecessary complexity which might go wrong.  And the iPhone 7 shows that Apple doesn’t like mechanical connections, whether that’s the home button or the 3.5mm jack.  So I’m going with the third W1 chip.
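
To make the sequence concrete, here’s a toy model of the flow I’m describing.  Every step and message name is invented for illustration; Apple’s actual protocol is not public:

```python
# A toy model of the speculative pairing sequence described above.
def pairing_flow(lid_open: bool, user_taps_connect: bool) -> list:
    steps = []
    if lid_open:
        steps.append("charger's W1 starts BLE advertising (think iBeacon)")
        steps.append("nearby iPhone recognises the advert and shows the pairing sheet")
    if user_taps_connect:
        steps.append("phone and charger pair over BLE with 4.2 security")
        steps.append("charger shares the credentials with both AirPods")
        steps.append("phone opens an A2DP stream to each AirPod")
    return steps

for step in pairing_flow(lid_open=True, user_taps_connect=True):
    print("-", step)
```

The key point is that the charger, not the earbuds, does the advertising, which is what makes the seven-second end-to-end experience achievable.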

What is clever, and will be Apple proprietary, is the way in which they transfer credentials and security keys between all of your Apple products in the background.  Since iOS 10 was announced, it’s been interesting to speculate why the iPad mini and iPhone 4s were not supported.  My guess is that these incorporated the first spin of Broadcom’s Bluetooth dual mode chip, which lacked some of the features needed to support the enhanced security of Bluetooth 4.2.  Without that, you really don’t want to be sharing security keys over the air.  By the time the mini 2 and the iPhone 5 appeared, a newer generation of chip would be in their Wi-Fi / Bluetooth modules, allowing Apple to develop this new pairing process with Bluetooth 4.2.  It could, of course, be more prosaic, with Apple just not wanting to support such old devices.

What else?  The accelerometers are also there for control, detecting when you tap the AirPods and using that to invoke Siri, or to send AVRCP commands to another brand of phone.  Reviewers have expressed annoyance at the fact you have to pause music whilst Siri listens to your commands.  The problem here is that A2DP is one way – it doesn’t support a return audio stream.  Talking over music would also require a more complex mixing and noise cancellation capability to separate your voice from the incoming music track.  Those are harder problems to solve, and my guess is that Apple is leaving them until it sees the user acceptance of the AirPods and how people use Siri.  There’s enough innovation in the AirPods for a first release without trying to be too clever.  Bragi showed us what happens when you take the opposite approach.

That’s the speculative teardown, but what does that tell us about the W1 chip?  From the above analysis I’d expect to see:

  • Bluetooth 4.2 dual mode with support for A2DP and the features needed for the next few Bluetooth releases, as well as Apple’s MFi audio standard.
  • The ability to relay Bluetooth to a second W1 chip.
  • Output power of 10 dBm.
  • An integrated LNA, giving receive sensitivity better than -93 dBm.
  • Stereo audio outputs (to support Beats’ wired headsets), but probably not an audio amplifier, as earbuds only need one.  I’d expect that to be an external chip – probably the same Cirrus Logic part that’s in the iPhone 7.
  • A low power DSP for audio beam forming, with the capacity for echo cancellation, noise reduction and noise cancellation for future releases.
  • A sensor fusion hub to support more complex accelerometer applications in the future.
  • A competent low power microprocessor to run it all, along with enough memory to support future applications and OTA updates.
  • No NFMI.

So why would Apple make such a chip?  Developing a wireless chip with this spec isn’t cheap – it will probably cost at least $10m with a similar amount going on the stack.  It is just a peripheral chip, which will never go into a phone, so the volumes are not that high.  But it is interesting that Apple emphasised that this is their first wireless chip, in a tone suggesting it won’t be the only one.  My guess is that Apple wants to incorporate Bluetooth and Wi-Fi into their next generation of processors, as that’s what their competitors are doing.  Wireless can be difficult, so it’s a big step to do that in one go.  Far better to design the wireless chip and evaluate it in an entirely new product category, as well as forcing one of your subsidiaries to use it.  That way you get to test the chip as well as getting useful feedback for your next generation of earbuds.

When Chipworks take an AirPod or Solo 3 to pieces and unpot the W1, I expect we will see that it’s a multichip module.  The main chip will be a Bluetooth dual mode one designed to be Bluetooth 5 compatible, but the DSP and microprocessor are likely to be separate dies.  If Apple is using this as a test vehicle for incorporation into future processors, then we may find 802.11 there as well.  If that’s the case, we can be pretty sure what the wireless roadmap is.  Not a W2, but the A11.

As I said at the start, this is all speculation.  Come the end of October, when people start sniffing the airwaves and iFixit and Chipworks start taking their AirPods apart, we’ll see how accurate it is.  At which point I’ll either be feeling smug or possibly deleting this blog.