
Apple prepares HomeKit architecture rollout redo in iOS 16.3 beta


After halting its rollout of HomeKit's new architecture in iOS 16.2, Apple has resumed testing of the platform, with it resurfacing in the iOS 16.3 beta.

In December, Apple withdrew the option to upgrade HomeKit to the new architecture, following reports that the update wasn't working properly for users. It now seems that Apple is preparing to try again in the next set of operating system updates.

Screenshots from the iOS 16.3 beta show there is a message in the Home app confirming there is a "Home Upgrade Available," with a "new underlying architecture that will improve the performance of your home." This is the same update message that appeared in iOS 16.2 before being pulled.

Screenshots sent in by Anthony Powell

The inclusion of the notification in the beta is a strong indication that Apple believes the update's issues have been resolved, and that it will try to release it to the public once again.

For the previous attempt, users reported seeing devices stuck in an "updating" state after the upgrade completed, with some finding devices unresponsive or failing to update fully. At the time, it was unclear what had caused the problems, as there were no identifiable commonalities between accounts of the issue.

With the appearance in the iOS 16.3 beta, it seems Apple is confident it's worked out the problems and is willing to give it a second try.



10 Comments

ihatescreennames 19 Years · 1977 comments

I’ll be the first person to let a whole bunch of other people try this for several days before I feel confident in trying myself.

darkvader 15 Years · 1146 comments

I’ll be the first person to let a whole bunch of other people try this for several days before I feel confident in trying myself.

I’ll be the first person to let a whole bunch of other people try this for several years before I feel confident in trying myself.

It would break HomeKit on the old iPhones I've got scattered around the house to use as lighting controllers.

ihatescreennames 19 Years · 1977 comments

darkvader said:
I’ll be the first person to let a whole bunch of other people try this for several days before I feel confident in trying myself.

I’ll be the first person to let a whole bunch of other people try this for several years before I feel confident in trying myself.

It would break homekit on the old iPhones I've got scattered around the house to use as lighting controllers.

That’s true for me, as well, but in my case I’ll be fine not using the older devices for HomeKit control. As it is now the devices that won’t work after the upgrade are barely used for HomeKit already, so an acceptable loss. 

elijahg 18 Years · 2842 comments

I'm stuck in limbo where I can't invite anyone who has ever opened the Home app to my home, because no one can join the upgraded architecture without upgrading their own home first, for some ridiculous reason. And since the rollout was cancelled, people can't upgrade their homes, so it's impossible for them to join.

I can't downgrade my home even if I reset everything, because despite the upgrade being cancelled, new homes still use the new architecture. And besides that, resetting doesn't work properly anymore either, even with the special Home reset profile. It's a mess. 

Apple's software QA is abysmal these days, it used to be top notch. It's extremely disappointing for a "premium" brand. Some things are nearly as bad as OS 9 - though the kernel seems to be rock solid at least. 

That said, the upgrade seemed to improve the responsiveness and reliability of HomeKit. I'm sure they could have used the HomeKit hubs as a bridge between old and new versions, though that would mean less incentive to upgrade, of course. 

dewme 10 Years · 5775 comments

elijahg said:
I'm stuck in limbo where I can't invite anyone who has ever opened the Home app to my home, because no one can join the upgraded architecture without upgrading their own home first for some ridiculous reason. Therefore since the rollout was cancelled people can't upgrade their home, and so it's impossible for them to join.

I can't downgrade my home even if I reset everything, because despite the upgrade being cancelled new homes still use the new architecture. And besides that resetting doesn't work properly anymore either, even with the special home it reset profile. It's a mess. 
Apple's software QA is abysmal these days, it used to be top notch. It's extremely disappointing for a "premium" brand. Some things are nearly as bad as OS 9 - though the kernel seems to be rock solid at least. 

That said, the upgrade seemed to improve the responsiveness and reliability of homekit. I'm sure they could have used the homekit hubs as a bridge between old and new versions - though less incentive to upgrade of course. 

Apple's quality challenges are actually very typical of most large-scale software projects of the past couple of decades. The whole notion of software "QA" as it once was, something performed by a dedicated team that descended upon a product development process, usually late in the cycle, with a great deal of zeal to make sure that nothing bad leaked out the door and that everything promised was actually delivered to an acceptable level of quality (i.e., verification), and that it actually solved the intended end-user problem (i.e., validation), no longer exists.

Don't get me wrong: software is still tested, and heavily tested at that. It's tested at many levels, from the nuts and bolts deep in the code as unit testing as part of test-driven development, to integration testing, to system testing, and of late security testing, e.g., penetration testing, interface fuzzing, etc. The lower levels of testing are run repeatedly, very often automated, typically on every code commit and build. Collections of tests that comprise a cross-section of wider functionality are often declared "regression tests," which are essentially a high-level smoke test providing a level of confidence that the most recent changes to the code base didn't break what was already working.

So why do software products and systems that are supposedly so extensively tested still seem to break so often? Imho, and as someone who's worked at pretty much every level of product and system development engineering, it's a combination of continuously increasing complexity, never-ending releases (the software is never really done, so we can fix the broken things in the next release, which may be tomorrow or next month), the monotonic accumulation of technical debt (anomalies that don't get addressed), and an insufficient number of team members who fully understand the problem domain and the customer challenges the product is intended to solve, which ultimately results in validation failures.

Of course there are several other factors, like schedule pressure, cost pressure, poorly defined specifications (nowadays, maybe no specifications at all), late-breaking changes, bad planning, over-promising, actual bad code, naive software development processes, and all manner of management problems. But from an engineering standpoint, what's missing is a clear understanding of the problem domain, the customer's needs/concerns/pain points/cost concerns (acquisition and lifecycle, TCO), the larger system the software has to live within, and all of the things that define whether the software meets the validation bar. There's testing aplenty, but veritably "good code" that does the wrong thing, or that passes low-level testing while sacrificing quality attributes like security, privacy, and transactional integrity, still dooms the software product.

What's missing today are the system engineers, product owners, and problem-domain-aware architects and system testers who in the past would collaborate upfront to make sure the product to be built was the "right product" and would be subjected to the appropriate validation standards for the problem being solved. Most of these roles have been eliminated or scaled back for the sake of time-to-market, velocity, cost, resource availability (outsourcing, contracting, anyone-can-code farms, etc.), and "agility." Building the wrong thing quickly and iterating over it sixteen times is okay, because, you know, the software is never really done.

Learning how to "build software right" is a much easier challenge than learning how to "build the right software." The latter requires decades of investment.

Has Apple fallen into these traps? Probably. I suppose a "glass half empty" interpretation of their greatly expanded beta testing programs is a soft admission that they simply don't have the full scope of internal expertise in the primary problem domains they are targeting with their software. This kind of soft admission is much better than them foisting well-intentioned but poorly executed software on their customer base, or trying to hide their recognized shortcomings. I think it's an admirable approach. The constantly increasing complexity of software in general may make Apple's approach inevitable for many more software products. Nobody likes scary surprises. Adding actual customers to the feedback loop is a good thing. Hopefully it frees up their internal teams to give more attention to the "build software right" aspects of the task at hand.