
"I do not generate rhetoric that could unduly alter people’s political views..."

This sounds an awful lot like feeding users comforting confirmations of what they already believe.

Clearly, filter bubbles aren't a big enough social problem yet. Let's enhance them with LLMs! What could possibly go wrong?



I feel like they’re in a lose-lose situation here. They get hammered for this approach… but if they take a more activist approach and say “I can generate rhetoric that could influence someone’s political beliefs” (which opens a serious can of AI worms), they will get hammered for not disabusing people of ideas that some rough consensus of society disagrees with.

I don’t think society at large knows what it wants LLMs to really do.


I think it might be fun if the AI puts the ball in the user’s court:

Morning Esophagus! Please select your mood today!

Do you want the answer that (A) aligns with your political beliefs, (B) challenges your beliefs with robust dialogue, or (C) pisses in your breakfast to really get ya going?


C for sure!

Wind me up, let’s do this!



