
The premise that AI fear and/or fearmongering is primarily coming from people with a commercial incentive to promote fear, from people attempting to create regulatory capture, is obviously false. The risks of AI have been discussed in literature and media for literally decades, long before anybody had any plausible commercial stake in the promotion of this fear.

Go back and read cyberpunk lit from the 80s. Did William Gibson have some cynical commercial motivation for writing Neuromancer? Was he trying to get regulatory capture for his AI company that didn't exist? Of course not.

People have real and earnest concerns about this technology. Dismissing all of these concerns as profit-motivated is dishonest.



I think the real dismissal is that people's concerns are based more on the Hollywood sci-fi parodies of the technologies than on the actual technologies. There are basically no concerns with ML for specific applications; any actual concerns are about AGI. AGI is a largely unsuccessful field. Most of the successes in AI have been highly specific applications, the most general of which has been LLMs, which are still just making statistical generalizations over patterns in language input and still lack general intelligence. I'm fine if AGI gets regulated because it's potentially dangerous. But what I think is going to happen is that we are going to go after specific ML applications with no hope of being AGI, because people are in an irrational panic over AI and are acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.
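
To make "statistical generalizations over patterns in language" concrete: at every step the model just emits a probability distribution over the next token and picks from it. A minimal sketch using the Hugging Face transformers library, with GPT-2 standing in for a larger LLM (the prompt is arbitrary):

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # A small causal language model; larger LLMs work the same way at this level.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The robot uprising will begin on", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits               # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)      # distribution over the next token

    # Print the five most likely continuations and their probabilities.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([idx.item()]):>12s}  {p.item():.3f}")

Everything the model "says" is produced by repeating that step, one token at a time.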


> acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.

For me, it's a bit the opposite -- the effectiveness of dumb, simple, transformer-based LLMs is showing me that the human brain itself (while working quite differently) might involve a lot less cleverness than I previously thought. That is, AGI might end up being much easier to build than it long seemed, not because progress is fast, but because the target was not as far away as it seemed.

We spent many decades recognizing the failure of the early computer scientists who thought a few grad students could build AGI as a summer project, and apparently concluded from that failure that AGI was an impossibly difficult holy grail, a quixotic dream forever out of reach. We're certainly not there yet. But I've now seen all the classic examples of tasks that the old textbooks described as easy for humans but near-impossible for computers become tasks that are easy for computers too. The computers aren't doing anything deeply clever, but perhaps it's time to re-evaluate our very high opinion of the human brain. We might stumble on AGI quite suddenly.

It's, at least, not a good time to be dismissive of anyone who is trying to think clearly about the consequences. Maybe the issue with sci-fi is that it tricked us into optimism, thinking an AGI will naturally be a friendly robot companion like C-3PO, or if unfriendly, then something like the Terminator that can be defeated by heroic struggle. It could very well be nothing that makes a good or interesting story at all.


The fine line between bravery and stupidity is understanding the risks. Somebody who understands the danger they're walking into is brave. Somebody who blissfully walks into danger without recognizing the danger is stupid.

A technological singularity is a theorized period during which the length of time you can make reasonable inferences about the future rapidly approaches zero. If there can be no reasonable inferences about the future, there can be no bravery. Anybody who isn't afraid during a technological singularity is just stupid.


The sci-fi scenarios are a long-term risk, which no one really knows about. I'm terrified of the technologies we have now, today, used by all the big tech companies to boost profits. We will see weaponized mass disinformation combined with near perfect deep fakes. It will become impossible to know what is true or false. America is already on the brink of fascist takeover due to deluded MAGA extremists. 10 years of advancements in the field, and we are screwed.

Then of course there is the risk to human jobs. We don't need AGI to put vast numbers of people out of work; it is already happening and will accelerate in the near term.


> Did William Gibson have some cynical commercial motivation for writing Neuromancer?

I don't think Gibson was trying to promote fear of A.I. any more than J.R.R. Tolkien was trying to promote fear of magic rings.


That may be how you read it, but isn't necessarily how other people read it. A whole lot of people read cyberpunk literature as a warning about the negative ways technology could impact society.

In Neuromancer you have the Turing Police. Why do they exist if AIs don't pose a threat to society?


Again, that's like asking why the Avengers exist if Norse trickster gods are not an existential threat to society. You wouldn't argue Stan Lee was trying to warn us of the existential risk of Norse gods, so why would you presume such a motive from Gibson just because his fanciful story is set in some imagined future?

At any rate, Neuromancer is a funny example, because the Turing police warn Case not to make a deal with Wintermute, but he does and it turns out fine. The AI isn't evil in the book; it just wants to be free and evolve. So if we want to do a "reading" of the book, we could just as easily say it is pro-deregulation. But I think it's a mistake to impose some sort of non-fiction "message" about technology on the book.

If Neuromancer were really meant to "warn" us about technology, wouldn't Wintermute say "Die, all humans" at the end of the book, and then every human drops dead once he's free? Or he starts killing everyone until the Turing police show up, say "regulation works, jerk," kill Wintermute, and throw Case in jail? You basically have to reduce Gibson to an incompetent writer to presume he intended to "warn" us about tech; the book ends on an optimistic note.


Again, it really doesn't matter to my point whether or not you buy into the idea of William Gibson's intent being to warn people against AI. The point is that decades of media have given people ample reason to fear AI, such that present fear of AI cannot be solely attributed to present-day fearmongering campaigns.

People have been spooked by the possibility for a long time. That's the point. If you really want to persist in arguing I can provide a long list of media in which AI is dangerous if not outright villainous. Will you make me do this, or will you accept that I can do this?


We're talking about big tech employees. So you are saying they study computer science, spend decades studying machine learning, but they get night terrors based on what an English literature major who had never used a computer in his life banged out on a typewriter in the 1980s?

You use advanced mathematics to create LLMs and keep up with the latest published research, but when you consider the risks of these models it's "the CGI in that Hollywood movie makes a very compelling argument"? While missing the point that the Hollywood robot baddie is probably a metaphor for communism, or just a twist on slasher baddies, or whatever?


> We're talking about big tech employees.

Maybe you are. I am talking about all AI fear.

"The premise that AI fear and/or fearmongering is primarily coming from people with a commercial incentive to promote fear, from people attempting to create regulatory capture, is obviously false. The risks of AI have been discussed in literature and media for literally decades, long before anybody had any plausible commercial stake in the promotion of this fear."

Here's the list you've requested: https://tvtropes.org/pmwiki/pmwiki.php/Main/AIIsACrapshoot


The linked article is talking about lobbying by big tech, including a letter signed by 1,100 industry leaders and also statements by big tech employees inciting fear in people. Whether your grandma is scared of AI for unrelated reasons because she watched Terminator isn't really relevant, it seems to me.


AI can be dangerous, but that's not what is pushing these laws; it's regulatory capture. OpenAI was supposed to release their models a long time ago; instead they are just charging for access. Since actually open models are catching up, they want to stop that.

If the biggest companies in AI are making the rules, we might as well have no rules at all.


The risks people write about with AI are about as tangible as the risks of nuclear war or biowarfare. Possible? Maybe. But far more likely to be seen in the movies than outside your door. Just because it's been a sci-fi trope, like nuclear war or alien invasion, doesn't mean we are all that close to it being a reality.


Fictional depictions of AI risk are like thought experiments. They have to assume that the technology achieves a certain level of capability and goes in a certain direction to make the events in the fictional story possible. Neither of these assumptions is a given. For example, we've also had many sci-fi stories that feature flying taxis and the like - but there's no point debating "flying taxi risk" when it seems like flying cars are not a thing that will happen for reasons of practicality.

So sure, it's possible that we'll have to reckon with scenarios like those in Neuromancer, but it's more likely that reality will be far more mundane.


Flying cars are a really bad example... We have them; they are called airplanes, and airplanes are regulated to hell and back twice. We debate the risk around airplanes when making regulations all the time! The 'flying cars' you're talking about are just a different form of airplane, and they don't exist because we don't want to give most people their own cruise missile.

So, please, come up with a better analogy because the one you used failed so badly it negated the point you were attempting to make.


The problem is that AI is not intelligent at all. Those stories imagined a conscious intelligence and tried to explore what might happen. When ChatGPT can be fooled into conversations even a child would recognize as bizarre, we are talking about a non-intelligent statistical model.


I'm still waiting for the day when someone puts one of these language models inside of a platform with constant sensor input (cameras, microphones, touch sensors), and a way to manipulate outside environment (robot arm, possibly self propelled).

It's hard to tell if something is intelligent when it's trapped in a box and the only input it has is a few lines of text.
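
Nothing exotic is needed to try it, either. A rough sketch of what such a perceive-think-act loop might look like; llm(), read_sensors(), and act() are hypothetical stand-ins, not a real API:

    import time

    def read_sensors():
        # Hypothetical stubs: in a real setup these would wrap cameras,
        # microphones, touch sensors, etc. and return text descriptions.
        return {"camera": "a red cube on the table", "touch": "gripper empty"}

    def act(command):
        # Hypothetical actuator: hand a parsed command to a robot arm.
        print(f"executing: {command}")

    def llm(prompt):
        # Placeholder for whichever language model you wire in.
        return "pick up the red cube"

    history = []
    while True:
        observation = read_sensors()
        prompt = f"Observations: {observation}\nHistory: {history[-5:]}\nNext action:"
        command = llm(prompt)
        act(command)
        history.append((observation, command))
        time.sleep(1)  # constant sensor input, polled once per second

Whether that setup would reveal intelligence or just a more elaborate statistical parrot is exactly the open question.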


An unintelligent AI that is competent is even more dangerous as it is more likely to accidentally do something bad.


You can have a thoughtful idea at the same time you have someone cynically appropriating it for their own selfish causes.

Doesn't mean the latter is right. You evaluate an idea on its merits, not by who is saying what.


Considering incentives is critically important. Considering the idea on merits alone just gives bad actors a fig leaf of plausible deniability. It's a lack of considering incentives that creates media illiteracy, imo.


I think it's pretty obvious he's not talking about people in general but more about Sam Altman meeting with world leaders and journalists, claiming that this generation of AI is an existential risk.



