Pardon my French, but this is the dumbest thing I have read all week. You simply cannot work on defensive techniques without understanding offensive techniques - plainly put, good luck developing exploit mitigations without having ever written or understood an exploit yourself. That’s how you get a slew of mitigations and security strategies that have questionable, if not negative, value.
Agreed, my eyebrows were raised at this point in the article. If you want to build a good lock, you definitely want to consult the lock picking lawyer. And it's not just a poor choice of title either:
> teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole
Ah yes, I too remember when buffer overflows, XSS and SQL injections became stale when the world learned about them and they were removed from all code bases, never to be seen again.
> Remote computing freed criminals from the historic requirement of proximity to their crimes. Anonymity and freedom from personal victim confrontation increased the emotional ease of crime […] hacking is a social problem. It's not a technology problem, at all. "Timid people could become criminals."
Like any white-collar crime, then? Anyway, there’s some truth in this, but the analysis is completely off. Remote hacking has lower risk, is easier to conceal, and you can mount many automated attacks in a short period of time. Also, feelings of guilt are often tempered by the victim being an (often rich) organization. Nobody would glorify, justify or brag about deploying ransomware on some grandma. Those crimes happen, but you won’t find them on tech blogs.
That. Also, not educating users is a bad idea, but it becomes quite clear that the article was written in 2005, when the IT/security landscape was a very different one.
It’s so much better to prevent them from doing unsafe things in the first place. Education is a long and hard undertaking, and I see little practical evidence that it works on the majority of people.
>But, but, but I really really need to do $unsafething
No, in almost all cases you don’t - the problem here is people taking shortcuts and cutting corners.
The attacks with the biggest impact are usually social engineering attacks, though. They can be as simple as shoulder surfing or tailgating, or as advanced as an AI voice scam. These have actually been widely popularized since the early '90s by people like Kevin Mitnick.
You do not have to be able to build an actual SQL injection yourself in order to have properly secured queries. Same with XSS. Having a rough idea of the attacks is probably necessary, but beyond that you primarily need the discipline and the right frameworks that won't let you shoot yourself in the foot.
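For anyone wondering what that discipline looks like in code, here's a rough sketch (Swift against the SQLite C API; the table and column names are made up). The SQL text is a fixed template and the untrusted value travels separately as a bound parameter, so there is nothing to "inject" into:

    import Foundation
    import SQLite3

    // Standard Swift idiom for SQLite's SQLITE_TRANSIENT destructor constant
    // ("copy the string before returning").
    let SQLITE_TRANSIENT = unsafeBitCast(-1, to: sqlite3_destructor_type.self)

    /// Looks up a user by name in an already-open SQLite handle.
    /// A value like "x' OR '1'='1" is treated as data, not as SQL.
    func userExists(db: OpaquePointer?, name: String) -> Bool {
        var stmt: OpaquePointer?
        // Injectable anti-pattern (don't do this):
        //   "SELECT id FROM users WHERE name = '\(name)'"
        guard sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;", -1, &stmt, nil) == SQLITE_OK else {
            return false
        }
        defer { sqlite3_finalize(stmt) }
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT)
        return sqlite3_step(stmt) == SQLITE_ROW
    }

Any decent framework or query builder does essentially this under the hood, which is why "use the framework's query API instead of string concatenation" gets you most of the way there.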
I don't think the argument is that dumb. For a start, there's a difference between white hat hackers and black hat hackers. And here he's talking specifically about people whose pentesting consists of trying known exploits against broken systems.
Think about it this way: do you think Theo de Raadt (of OpenBSD and OpenSSH fame) spends his time trying to see if Acme corp is vulnerable to OpenSSH exploit x.y.z, which was patched 3 months ago?
I don't care about attacking systems: it is of very little interest to me. I've done it in the past: it's all too easy because we live in a mediocre world full of insecure crap. However, I love spending some time making life harder for black hat hackers.
We know what creates exploits and yet people everywhere are going to repeat the same mistakes over and over again.
My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure". That is the mindset we need. But it didn't stop people from using Unicode in places where we should never have used it, like in domain names for example. Then when you test a homoglyph attack on IDNs, it's not "cool". It's lame. It's pathetic. Of course you can do homoglyph attacks and trick people: an actual security expert (not a pentester testing known exploits on broken configs) warned about that 30 years ago.
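If you want to see how little there is to it, the whole "attack" fits in a few lines (purely illustrative Swift):

    // Two hostnames that render identically in many fonts: the first character
    // of `spoofed` is CYRILLIC SMALL LETTER A (U+0430), not LATIN SMALL LETTER A.
    let genuine = "apple.com"
    let spoofed = "\u{0430}pple.com"

    print(spoofed == genuine)                                          // false
    print(genuine.unicodeScalars.map { String($0.value, radix: 16) })  // ["61", "70", "70", "6c", "65", ...]
    print(spoofed.unicodeScalars.map { String($0.value, radix: 16) })  // ["430", "70", "70", "6c", "65", ...]
    // On the wire the spoofed label becomes the punycode "xn--pple-43d.com";
    // the trick only works when a client renders the Unicode form instead.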
There's nothing to "understand" by abusing such an exploit yourself besides "people who don't understand security have made stupid decisions".
OpenBSD and OpenSSH are among the most secure pieces of software ever written (even if OpenSSH has had a few issues lately). I don't think Theo de Raadt spends his time pentesting so that he can then write secure software.
What strikes me the most is the mediocrity of most exploits. Exploits that, had the software been written with the mindset of the person who wrote TFA, would for the most part not have been possible.
He is spot on when he says that default permit and enumerate badness are dumb ideas. I think it's worth trying to understand what he means when he says "hacking is not cool".
I don’t mean to sound argumentative, but that’s a knee-jerk response.
It’s possible to reverse engineer the OS code to verify that the biometrics do indeed end up in the SEP’s encrypted storage, and several people have done this in the past.
Here’s an excellent presentation on the SEP, found by just a simple Google search. [0]
An Apple nerd will come along with a real answer, but I believe the answer is no: even if they patched the software, the chip involved is not going to (or physically can't) cooperate.
> I believe the answer is no: even if they patched the software, the chip involved is not going to (or physically can't) cooperate
Indeed.
The whole point of the Secure Enclave is that it is the hardware root of trust. See the Apple Platform Security Guide[1].
The Secure Enclave also contains things like a UID (unique root cryptographic key) and GID (Device Group ID), both of which are fused at time of manufacturing and are not externally readable, not even through debugging interfaces such as JTAG.
As hardware root of trust, the Secure Enclave is fundamental to all parts of device security, including secure boot and verifying that the system software (sepOS) is signed by Apple.
Apple put a lot of effort into the Secure Enclave, and hardware revisions have brought improvements as you might expect, so always be wary if you come across old presentations!
Even if the chip didn't cooperate, Apple has the key derivation function and presumably everything used to generate your key. While we're on the topic of unlikely first-party attacks, it would be interesting to hear (or see) how Apple limits their ability to create duplicate keys.
> Apple has the key derivation function and presumably everything used to generate your key.
Nope.
The Secure Enclave still contains things like UID and GID which are fused into hardware at manufacturing and are not externally accessible, not even through debugging interfaces such as JTAG.
So Apple will never have all the input parameters for the key derivation functions.
And please, let's not go into tin-foil hat territory where you somehow think Apple logs all keys ever fused during manufacturing and then somehow ties these to you personally.
Unlikely. Having the key generation function is worthless, as you would also need the truly random nonce and salt used by any modern cryptographic function. There are plenty of ways to produce truly unknowable keys even when you know exactly how the function that generates them works. That's the whole point of trustless security.
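To make that concrete, a toy Swift/CryptoKit sketch (not Apple's actual scheme; every name below is a placeholder). Knowing that the derivation is, say, HKDF-SHA256 buys you nothing without the secret input key material, which here stands in for the fused UID:

    import CryptoKit
    import Foundation

    let fusedUID = SymmetricKey(size: .bits256)           // burned into the chip, never exported
    let passcodeEntropy = Data("user passcode material".utf8)

    let derivedKey = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: fusedUID,
        salt: passcodeEntropy,
        info: Data("file-encryption-class-key".utf8),     // placeholder context label
        outputByteCount: 32)

    // Anyone who knows the function but not `fusedUID` cannot reproduce `derivedKey`.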
"The sensor captures the biometric image and securely transmits it to the Secure Enclave"[1]
IIRC the implementation detail is AES-GCM-256 with ECDH P-256, i.e. the biometric sensor and the secure enclave derive a unique session key via ECDH each and every time.
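If it helps, here's a rough CryptoKit sketch of that general shape only (not Apple's actual sensor protocol, key schedule or framing): a fresh P-256 agreement per session, the shared secret run through HKDF, and the result used as an AES-GCM key.

    import CryptoKit
    import Foundation

    let sensorKey  = P256.KeyAgreement.PrivateKey()      // sensor-side ephemeral key
    let enclaveKey = P256.KeyAgreement.PrivateKey()      // enclave-side ephemeral key

    do {
        // Each side combines its own private key with the peer's public key
        // and derives the same symmetric session key.
        let shared = try sensorKey.sharedSecretFromKeyAgreement(with: enclaveKey.publicKey)
        let sessionKey = shared.hkdfDerivedSymmetricKey(
            using: SHA256.self,
            salt: Data(),
            sharedInfo: Data("biometric-session".utf8),  // placeholder context label
            outputByteCount: 32)                         // 256-bit AES-GCM key

        // The captured frame travels encrypted and authenticated.
        let frame = Data("raw fingerprint frame".utf8)
        let sealed = try AES.GCM.seal(frame, using: sessionKey)
        let opened = try AES.GCM.open(sealed, using: sessionKey)
        assert(opened == frame)
    } catch {
        print("key agreement or encryption failed: \(error)")
    }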
Clearly some software layer is required to interface with the Secure Enclave, but it's not the app.
The app opens an authentication context through the API and asks the API to perform the authentication. It is the API (through a standardised GUI), not the app, that collects the biometrics. The API then returns yes/no to the app.
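For concreteness, this is roughly what that flow looks like from the app side with the LocalAuthentication API (minimal sketch; the reason string is just an example). The app supplies a reason and gets back a boolean; the capture and matching happen entirely in the system layer and Secure Enclave.

    import Foundation
    import LocalAuthentication

    let context = LAContext()
    var availabilityError: NSError?

    if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &availabilityError) {
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock the example vault") { success, error in
            // `success` is the only biometric "result" the app ever sees.
            print(success ? "authenticated" : "failed: \(String(describing: error))")
        }
    } else {
        print("Biometrics unavailable: \(String(describing: availabilityError))")
    }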
There is, further, a strict separation of duties between the biometric sensor and the Secure Enclave.
Apple puts a significant amount of effort into making that software layer secure, and as this document[1] shows, the amount of security has only increased over time with the various chipset revisions.
The thing I say to all the Apple bashers is this: sure, you might not trust Apple (or Google), but even if you go buy the latest $Cool_Sounding_Open_Phone, you still need to trust someone and trust the supply chain.
Sure, $Cool_Sounding_Open_Phone might have open-source firmware, but have you actually read every single line of code AND do you have the knowledge to do a security review of the code? Not many people do. And if you are truly security conscious, you cannot possibly trust "the community" to review it for you.
Unless you're going to start from scratch: build your own PCB, your own firmware, etc. But even then, you still need to trust the chip manufacturers, unless you open up your own foundry. So let's put our tin foil hats to one side, shall we?
Yes. And if your threat model really includes distrusting the manufacturer of your phone and its software after a specific point in time at which you have reversed all of its internals, you should have disabled software auto updates.
Just allowing kexts to be loaded should not increase the attack surface or expose the author to any currently known exploits. The reason people avoid doing it anyway is that third-party kexts have a history of obvious vulnerabilities and don't receive the same number of eyeballs that first-party kernel extensions do.
As you put it, it really is a desire for maximum security at play here.
>Just allowing kexts to be loaded should not increase the attack surface or expose the author to any currently known exploits
"Just allowing kexts to be loaded" sure, it wont. But "just allowing kexts to be loaded" makes no sense as an action, unless you also actually intend to and do load at least one kext.
In which case, it absolutely increases the attack surface.
__padding replied to you, but unfortunately their comment is dead because their account is new, so I’ve reposted it, as I cannot vouch yet.
> __padding 45 minutes ago [dead]
> Typically with devices like network cards (that also operate over PCI-E), you send the device a circular list of descriptors (pointers) to a region of main memory.
> In order to send data to the device, you write your network packet to the memory region associated with the pointer at the current ‘head’ of the descriptor list.
> So far, you have a ring of pointers, and one of those pointers points to a location in RAM you just wrote to.
> You then tell the device that the head of the list has changed (as you just wrote some data to the region that the head of the list is pointing to - so it can consume that pointer), and the device then goes ahead and copies the data from RAM into an internal buffer on the card. Once the data is consumed, the tail pointer of the ring buffer is updated to indicate that the card is finished with that memory region.
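To make that a bit more concrete, here's a toy model of that TX ring in Swift. All names are invented and it only simulates the bookkeeping; a real NIC's descriptors have a device-specific layout and live in DMA-able memory that the card reads directly.

    // Toy model of a transmit descriptor ring.
    struct TxDescriptor {
        var bufferAddress: UInt64   // physical address of the packet buffer in RAM
        var length: UInt16          // how many bytes the device should read from it
        var ownedByDevice: Bool     // which side currently owns this slot
    }

    struct TxRing {
        var slots: [TxDescriptor]
        var head = 0                // next slot the driver fills
        var tail = 0                // next slot the device hands back

        // Driver side: the packet has already been written into the buffer that
        // slots[head] points at; mark the slot device-owned, advance head, and
        // (on real hardware) poke a doorbell register so the card notices.
        mutating func submit(packetLength: UInt16) {
            slots[head].length = packetLength
            slots[head].ownedByDevice = true
            head = (head + 1) % slots.count
            // writeDoorbell(head)   // hypothetical MMIO write
        }

        // Device side (simulated): copy the bytes out of RAM into the card's
        // internal buffer, then release the slot by advancing tail.
        mutating func deviceConsumeOne() {
            slots[tail].ownedByDevice = false
            tail = (tail + 1) % slots.count
        }
    }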