
There’s plenty of uninspiring (not sure about high-skilled) white-collar work... I would not be surprised if a lot of what early-career lawyers, investment bankers, accountants, and consultants do could be automated. I’m also bullish on radiology being the first area of medical practice to be largely automated by machines.

In the future (and even now), careers are going to be defined by how well you can form relationships (and therefore sell), i.e. the things that will be hardest for machines to do.



I’m pretty bullish on radiology going first too. I keep telling my radiologist friend, but she refuses to acknowledge that the computer is already better and faster than she is.

But she also makes a good point — even if the computer is better, in today’s lawsuit happy America, it will be a long time before anyone will accept a result that wasn’t at least reviewed by a human.


Absolutely agreed on radiology. Earlier this year, 3 different radiologists read my c-spine MRI and said I have slight bulging at C5-6 and nothing more. 5 different neurosurgeons* said sure, the C5-6 is bad, but the real issue is the C6-7 herniated disc impinging on my nerves. I actually asked 2 of those 3 radiologists to re-read the MRI and look for the C6-7 herniation, and they couldn't find anything. All of the surgeons picked it up immediately.

I was told by a surgeon that this happens because radiologists are generalists (looking for strong evidence of many different types of issues all over the body), while surgeons are trained to know the specific issues that occur in a few parts, even when they don't show up clearly in MRIs/x-rays.

AI should be able to take data from all the specialists to make a better generalist than human-trained radiologists. An integrated AI system could immediately read an MRI/x-ray/ultrasound and spit out possible issues. I can imagine an x-ray or ultrasound video feed hooked to the cloud that shows possible diagnoses in real time and highlights the areas of concern. Ultrasounds are safe, so this could even be a consumer device. Just as 3D ultrasounds and 23andMe are for 'entertainment' and not medical solutions, ultrasound-with-AI could be a good tool for at-home what-ifs. It could be a great prenatal monitoring device.
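To make the idea concrete, here is a minimal sketch of what such a real-time pipeline might look like. Everything here is invented for illustration: `detect_regions` stands in for a trained detection model, and the 3x3 grids stand in for ultrasound frames; a real system would run a neural network per frame and draw the boxes on screen.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    label: str          # e.g. "possible herniation"
    confidence: float   # model score in [0, 1]
    box: tuple          # (x, y, width, height) in frame pixels

def detect_regions(frame) -> List[Finding]:
    """Placeholder for per-frame model inference.
    Toy heuristic: flag the brightest cell of a 2-D intensity grid."""
    value, (x, y) = max((v, (x, y))
                        for y, row in enumerate(frame)
                        for x, v in enumerate(row))
    if value > 0.8:  # arbitrary threshold, for the sketch only
        return [Finding("bright region", value, (x, y, 1, 1))]
    return []

def stream_overlay(frames):
    """Yield (frame, findings) pairs, as a display loop would consume them."""
    for frame in frames:
        yield frame, detect_regions(frame)

# Usage with synthetic 3x3 "ultrasound" frames:
frames = [
    [[0.1, 0.2, 0.1], [0.2, 0.9, 0.2], [0.1, 0.2, 0.1]],  # one hot spot
    [[0.1, 0.1, 0.1], [0.1, 0.2, 0.1], [0.1, 0.1, 0.1]],  # nothing
]
for frame, findings in stream_overlay(frames):
    print([f.label for f in findings])  # → ['bright region'], then []
```

The key design point is the streaming shape: the model is a pure per-frame function, so the same code serves a hospital scanner or a consumer probe, with only the model weights and the display layer changing.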

* I know a lot of surgeons personally. Didn't cause a trillion dollar insurance claim.


As a doctor (not a radiologist), I believe your example shows the opposite: it shows how hard it will be to automate radiology.

Radiology requires a "theory of the body", so to speak. You can't just look at the image in isolation. You often need detailed knowledge of the patient's clinical situation, and some actual reasoning. My guess is that that's why the surgeons got it right in this case (they are more familiar with the complaints of the patient and with the "live" anatomy of that region).

This doesn't mean that radiology can't be automated. It just means that to be a good radiologist, you might need to be a general artificial intelligence, capable of graduating from medical school.

This is different from something like classifying moles into benign, malignant, and high-risk. That's something that can be determined from the pixels of a picture (even by human dermatologists, through experience or by following certain simple algorithms), and it has no relationship to the rest of the patient. This means that automating mole classification is kinda like automating chess. Automating radiology looks more like automating the command chain for WW2.

On the other hand, pathology (looking at tissue samples through the microscope) seems much easier to automate. It relies heavily on pattern recognition, and IMO (I'm not a pathologist either, although I've spent time in a pathology lab) it's less dependent on the clinical data of the patient. It's almost as if the doctor were looking at the image and nothing else, and the kinds of patterns doctors look for are something that might be automated. This is of course a simplification, and sometimes clinical judgement is important even in pathology.

None of this means that medicine can't be automated. I'm just trying to convey some of the difficulties you might have in automating radiology, as opposed to other areas of medicine.

And in any case, my criterion for difficulty of automating is "does it seem to require a general artificial intelligence or not?". If you have a general artificial intelligence completely indistinguishable from a human, then all bets are off.


> * I know a lot of surgeons personally. Didn't cause a trillion dollar insurance claim.

We live in a sad state of affairs in which this disclaimer is necessary. :(


I wonder what the record is, and whether one has gone past a billion.


> the real issue is the C6-7 herniated disc impinging on my nerves

Just curious, is there a way you can get this fixed? Or do you have to live with the pain?


I couldn't live with the pain so I got surgery right away. I'm pretty much healed now 4 months later and can lift 30-50lbs without any issue.


That's great to hear! Did you have the disc replaced? I have similar pain at L5-S1 due to a disc protrusion affecting the nerves in that region (either via impingement or, more likely, inflammation). Unfortunately, the surgeon I consulted said that surgery is rarely performed that far down the spine unless there are really serious symptoms.

In the meantime, I keep monitoring these studies on mesenchymal stem cells for disc regeneration, hoping one of them makes it to clinical trials :(


I had disc replacement as well as fusion, since C5-6 will likely need it in the next few years. I got lucky, because the c-spine is much easier to operate on than the lumbar spine. They go in from the front (through the neck) and do not need to touch the spinal cord. For lumbar surgery, they have to.

One cool thing I learned about is the existence of IONM: https://en.wikipedia.org/wiki/Intraoperative_neurophysiologi...

I felt pretty comfortable knowing that things would flash red and beep if anyone got close to my spine during the surgery.

A close friend of mine with multiple lumbar herniations swears by a stretching regimen of frequent yoga. Maybe that can help in the meantime. Good luck!


Thanks for the info!


People need to be able to believe they can still make the leap of awareness in their career. Even if the machine produces a better diagnosis, that is, in one sense, no different from referencing a book. People need to be able to learn with the machine in order to build skill that exceeds what AI can presently do. Unfortunately, as I'm finding, a lot of that tests my own knowledge. My IDE gives me a complexity score on some of the functions I write. It's easy to focus on lowering that number; it's hard to actually know, 'is this metric actually helping me write better code for my specific environment?'

A metric like that is a piece of data, and so is anything a machine is going to produce; it's just going to come in a different form. Oracle machines in some ways seem to be an actual thing these days, in that I have to ask myself why: why do all the words I search line up in this specific way? Why does that produce the thoughts I have? How do I know what I know? How can I test that?

I think humans can adapt to anything, and that we retain that flexibility as long as we are up for the challenge. That may seem like common sense or folk wisdom, but there are probably good reasons stuff like that sticks around.

Telling your radiologist friend that the computer is better and faster than she is flat out puts the entire security of her future - everything she has built - on a coin toss. Of course she's going to react defensively. If you punch your hand into someone's chest and hold their heart out in front of them, yes, they will likely have difficulty thinking objectively.

People can adapt. It is often very challenging. But it's often also worth changing, if the question of progress versus stagnation is the thing at stake.

People who go into medicine want to save lives, so work with that foundation. Rather than asking who is going to get sued, direct the conversation towards how many more lives can be saved. It's the same argument as with self-driving cars. The problem is that as we age, we think we have control over permanent stuff.

If we crash the car, that's our fault. If the car crashes the car, that's something we have no control over. But sometimes we might have a random seizure. We don't think about those probabilities when it comes to us driving the car versus the car driving the car, because we become accustomed to a context. But that's just an illusion until shit changes. I'd rather be aware of the easy and obvious changes in a conservative fashion than totally ignorant of the hard ones until a catastrophe I didn't see coming happens.


The interesting challenges will come when trying to explain to a jury just why any sufficiently esoteric algorithm (AI, ML, DL) chose the action it took (surgery vs. “watchful waiting”, braking vs. “lane following”, buy vs. sell, etc.).
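For contrast, here is a sketch of one of the few model families where "why" does have a crisp, jury-friendly answer: a linear score, where each feature's contribution is simply weight × value. All the feature names, weights, and the threshold below are made up for illustration; a deep network offers no such per-feature decomposition, which is exactly the problem.

```python
# Hypothetical inputs and weights; a real model would learn these from data.
FEATURES = {"disc_bulge_mm": 3.2, "nerve_impingement": 1.0, "patient_age": 54}
WEIGHTS  = {"disc_bulge_mm": 0.4, "nerve_impingement": 2.0, "patient_age": 0.01}
THRESHOLD = 2.5  # decision boundary: recommend surgery above this score

def explain(features, weights, threshold):
    """Return the decision plus each feature's contribution, largest first."""
    contributions = {k: weights[k] * v for k, v in features.items()}
    score = sum(contributions.values())
    decision = "surgery" if score > threshold else "watchful waiting"
    # Lead the explanation with the most influential factor.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, ranked = explain(FEATURES, WEIGHTS, THRESHOLD)
print(decision)      # → surgery (score 3.82 > 2.5)
print(ranked[0][0])  # → nerve_impingement, the single biggest reason
```

A court can audit every line of that accounting. The esoteric algorithms in question give you only the final score, which is why "explain the action taken" is the hard part.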


All of this stuff is going to question ethics. People who do mental health care on truly sick people (people that want to hurt other people) understand very clearly how many factors need to align to produce such a person. That changes our definitions of autonomy, that changes belief.

These things are core to people who don't understand computation, and they are core to ego: what makes the lives we live better than the lives we compare ourselves to? That's the lion inside of us, the one that doesn't give a shit who gets ripped to shreds (or simply can't afford to think about it). I know I am a good person because I have hurt less than all the others. But that's not true. I tell myself this, but is it a thing I can prove?

There are profound arguments to be made about why a machine can do a better calculation than a human does. It has access to more information. If people can't believe that, that's their own ego.

Create a job called computer science lawyer, make sure the judge understands computer science, explain the computation to a jury in a way that explains how the algorithm was designed, align that with present understanding of psychology. Checks and balances.


That's why I don't see it coming to law anytime soon. It's the same reason Google pulled AI out of parts of search: anywhere you need to explain why an answer was chosen, AI is a poor solution, because it is so hard to debug.


LexisNexis wiped out an entire level of the law-firm hierarchy. QuickBooks and Lacerte significantly reduced the demand for accountants. There's a ton of white-collar, high-skilled work that is really just grunt work requiring knowledge, and it will be automated in the future.


> QuickBooks and Lacerte significantly reduced the demand for accountants.

My understanding is that spreadsheets didn't so much reduce accountant employment as change the job from determining the facts to predicting the future; more 'what-ifs.'


Do we end up with less skilled late-stage doctors, investment bankers, radiologists, etc. if we automate away the grind?


If those years of grind are replaced with a higher-level, more intentional early-stage practice of monitoring and managing the automation, then I think we could get better-skilled people.

EDIT: Although I suppose that these professionals would need at least some practice with manual work too, to be able to monitor the machines' output.


I think it will mean that school will just take longer.

Even though we have calculators we still make kids learn basic math in school because it is necessary for doing the higher level work.

I suspect it would be the same here. Even more schooling to learn what the computer already knows, so that you can do novel work.


Does it matter, if the skill level (in diagnostics) of the average doc + software becomes much better, while the special doctors, who work on optimizing the software, become much more skilled?

And on the other hand, the average doctor's interpersonal skills will probably improve?


Aren't we already seeing something like that with commercial pilots, due to flight automation? (I could be wrong.)



