
Big Safari & Kernel issues fixed in iOS 16.3.1, macOS 13.2.1 updates

Monday's software updates fix an array of security issues in macOS, iOS, and iPadOS, including one affecting Safari's WebKit that was being actively exploited.

Apple introduced small incremental updates across its software ecosystem on Monday, with iOS 16.3.1, iPadOS 16.3.1, and macOS 13.2.1 now available for the public to download.

Following the release, Apple published details about the security content of each update, with considerable overlap between the three operating systems.

The first, a Kernel issue, impacts all three updates, and is described as one where "an app may be able to execute arbitrary code with kernel privileges." The fix addressed a "use after free issue" by adding "improved memory management."

Identified as CVE-2023-23514, the issue was credited to Xinru Chi of Pangu Lab and Ned Williamson of Google Project Zero.
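For readers unfamiliar with the bug class, "use after free" means code keeps dereferencing memory after it has been released, and an attacker who can reallocate that freed memory then controls what the stale pointer sees. The C sketch below is a generic illustration with invented names (session, close_session), not Apple's actual kernel code:

#include <stdlib.h>

/* Generic use-after-free sketch -- hypothetical names, not Apple's code. */
struct session {
    void (*on_close)(void);   /* a function pointer an attacker would love to hijack */
    char name[32];
};

static struct session *current;

void close_session(void) {
    free(current);            /* memory released... */
    /* BUG: current is not cleared, so the pointer now dangles */
}

void finish_session(void) {
    if (current) {
        current->on_close();  /* use after free: may run attacker-controlled code */
    }
}

/* The kind of fix "improved memory management" hints at: never leave a
 * reachable pointer to freed memory. */
void close_session_fixed(void) {
    free(current);
    current = NULL;           /* later callers now fail the NULL check instead */
}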

The second, a WebKit problem, is listed as impacting all of the operating systems, as well as Safari itself. Under the issue, "processing maliciously crafted web content may lead to arbitrary code execution."

Apple adds that it is "aware of a report that this issue may have been actively exploited." It has since been fixed with "improved checks."

It is identified as CVE-2023-23529, and was found by "an anonymous researcher."
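Apple does not say more than "improved checks," but in WebKit advisories that phrase usually points at validating values supplied by the web content before acting on them. A hedged C sketch of that general bug shape, with made-up names (parse_chunk is not a real WebKit function):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical parser: the web content claims a size, and the code trusts it. */
int parse_chunk(const uint8_t *data, size_t data_len) {
    uint8_t buf[64];
    if (data_len < 1) return -1;
    size_t claimed = data[0];        /* size claimed by untrusted content (up to 255) */
    /* BUG: claimed is never compared against sizeof(buf) or data_len */
    memcpy(buf, data + 1, claimed);  /* possible out-of-bounds write on the stack */
    return 0;
}

/* The "improved checks" style of fix: validate untrusted sizes before use. */
int parse_chunk_fixed(const uint8_t *data, size_t data_len) {
    uint8_t buf[64];
    if (data_len < 1) return -1;
    size_t claimed = data[0];
    if (claimed > sizeof(buf) || claimed > data_len - 1) return -1;
    memcpy(buf, data + 1, claimed);
    return 0;
}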

The last issue is for Shortcuts, and specifically affects macOS Ventura. Under the issue, an app "may be able to observe unprotected user data," which was fixed with "improved handling of temporary files."

CVE-2023-23522 was found by Wenchao Li and Xiaolong Bai of Alibaba Group.
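Apple again gives no detail beyond "improved handling of temporary files," but the bug class is familiar: writing user data to a predictable, loosely protected temp file lets another local process observe it. A minimal C illustration of that pattern and a safer alternative, with invented function names (export_unsafe, export_safer):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Bug-class sketch: fixed path, default permissions, easy for others to find. */
void export_unsafe(const char *secret) {
    FILE *f = fopen("/tmp/shortcut_export.txt", "w");
    if (!f) return;
    fputs(secret, f);    /* other local processes may read this file */
    fclose(f);
}

/* Safer pattern in the spirit of the fix: unique name, owner-only access. */
void export_safer(const char *secret) {
    char path[] = "/tmp/shortcut_export.XXXXXX";
    int fd = mkstemp(path);          /* unique name, created readable only by the owner */
    if (fd < 0) return;
    dprintf(fd, "%s", secret);
    close(fd);
    unlink(path);                    /* remove the file once finished with it */
}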



5 Comments

lkrupp 19 Years · 10521 comments

Next comes the “I hope it fixes... the issue I’ve had for years but Apple won’t acknowledge” comments.

DoctorQ 7 Years · 55 comments

lkrupp said:
Next comes the “I hope it fixes... the issue I’ve had for years but Apple won’t acknowledge” comments.

It feels “snappier”, is that better? 😎

CheeseFreeze 7 Years · 1339 comments

I wish Apple would consider starting over like Google Fuchsia - a modern take on an operating system, with new security considerations architected from the ground up.

dewme 10 Years · 5775 comments

All of these issues are garden variety programming errors that have been plaguing software quality for decades, like accessing memory that’s already been freed and failing to sufficiently verify input parameters. In my opinion, one of the root causes of the perpetuation of these fundamental programming errors is the difficulty human brains have with dealing with the complexity introduced as a result of multiprocessing and concurrency in applications and operating systems.

In other words, it’s difficult enough for a programmer to mentally keep track of all of the “accounting” related to the memory management, state management, persistence, volatility, privacy, security, exception handling, etc., for a single threaded user mode application. You know, making sure all the ‘i’s are dotted and all of the ‘t’s are crossed and how it relates to the underlying operating system. But in most cases eventually getting everything dealt with in these relatively limited cases is resolvable, often by brute force. When you add in multiprocessing, threading, concurrency,  loss of atomicity with context switching, etc., the ability of most human brains is quickly overwhelmed. Practicing good accounting and hygiene from a single threaded perspective is no longer sufficient when you step into a multiprocessing environment. Brute force testing no longer works, largely because the humans driving the testing don’t fully understand every angle from which to apply the force, whether surgically directed or brutish. They don’t know what they don’t know.
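A minimal C sketch of that failure mode, with made-up names: a pointer check that is perfectly adequate on a single thread becomes a use-after-free the moment another thread can free the object between the check and the use.

#include <pthread.h>
#include <stdlib.h>

struct job { int done; };
static struct job *shared_job;
static pthread_mutex_t job_lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    if (shared_job && !shared_job->done) {   /* check... */
        /* a context switch here lets reaper() free shared_job */
        shared_job->done = 1;                /* ...then use: a use-after-free under concurrency */
    }
    return NULL;
}

void *reaper(void *arg) {
    (void)arg;
    free(shared_job);    /* frees the object with no coordination with worker() */
    shared_job = NULL;
    return NULL;
}

/* The single-threaded habit ("check the pointer before using it") is no longer
 * enough; the check and the use have to be made atomic, e.g. by taking the same
 * lock here and in reaper(). */
void *worker_locked(void *arg) {
    (void)arg;
    pthread_mutex_lock(&job_lock);
    if (shared_job && !shared_job->done)
        shared_job->done = 1;
    pthread_mutex_unlock(&job_lock);
    return NULL;
}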

In much the same manner, programming successfully in an environment where privacy and security were not within the sphere of concern pretty much guarantees that the artifacts of those efforts are going to fail miserably where security and privacy are a real concern. Nowhere is this more apparent than in legacy code. A lot of the underpinnings in the most popular current computing environments are legacy code that’s considered to be, at some level, “not broken” and thus not to be messed with. Try asking an engineering director for time, money, and resources to dig around in existing “working” code to look for problems rather than applying all of those expensive resources to building new product features. They may say “no problem” - about 5% of the time. Truth be told, they’ve probably had some bad outcomes in those 5% of cases. So the legacy code lives on. I’d imagine that some of the legacy code in macOS, iOS, etc., that traces its roots back to OS X, NeXTSTEP, and Unix is older than the engineers who are currently maintaining that legacy code.

The current complexity of not only multiprocessing but also highly distributed computing means that humans are effectively unable to fully understand whether their applications are correct, secure, resilient, crash resistant, etc. Too many moving parts, any of which can break or be broken by anyone who starts poking at them long enough to discover their weaknesses. This is clearly an area that requires additional help.

My hope is that forward thinking companies will look at AI, ML, and similar technologies and techniques to improve software quality by overcoming some of the limitations of human cognition in these narrow domains. Sure, it’s not sexy or exciting like human-like chat bots or other such AI applications, and it will never be perfect, but until we get to the point where we can actually trust the software we depend on, we’ll always be waiting around for the next mole that needs to be whacked. The wait won’t be long, it never is, especially for the folks who actively search for moles.

avon b7 20 Years · 8046 comments

dewme said:
All of these issues are garden variety programming errors that have been plaguing software quality for decades, like accessing memory that’s already been freed and failing to sufficiently verify input parameters. In my opinion, one of the root causes of the perpetuation of these fundamental programming errors is the difficulty human brains have with dealing with the complexity introduced as a result of multiprocessing and concurrency in applications and operating systems. […]

From what I've followed, AI or ML (or whatever you prefer to call it) is definitely where part of the solution lies.

Abstraction is obviously another angle, but if you are abstracting to compilers (AI-enhanced or otherwise) I suppose it could make programmers 'lazier' (like when we write text and expect the corrector to pick up not only spelling but also grammatical errors).

New languages or supersets of existing languages seem to be popping up too, supposedly making programming itself easier, although programmers of a certain generation tell me it makes programmers 'dumber', ha!

Luckily, I'm not a programmer. All I can do is listen to people who do it for a living and observe that there seems to be no easy way to bring quality software to market without a disproportionate amount of holes in it.

Apple is trapped on the wheel, and no matter what, new major versions of macOS and iOS have to roll out every year. With that in mind, slowing release cycles down would help to a degree.