This resonates with me. I’m willing to pay for content for humans by humans.
I am reducing my engagement with the web and technology in general due to the lack of quality. It's due to AI content and AI hype seeping through everything non-stop. Throw in ads literally everywhere, hyper-partisan politics, phony influencers, and social media algorithms that live off of FOMO.
It’s all gross and has been sapping the joy from people for too long.
The lack of quality is what gets to me. I've used AI tools in many aspects of my life to great benefit. Yet nowadays, scrolling through Reddit, X, or even video-based platforms means wading through a deluge of drivel. It was bad enough that I was spending too much time on my phone instead of interacting with other people, but now even the content I'm interacting with isn't human!
More and more I find myself opening YouTube or Reddit and just closing them again because the quality of the information seems so low. Either it's just me, or some sort of mass debasement is happening. Actually, good.
Are you saying that now, because of all the AI generated content online, you're suddenly willing to pay for sites like Wired, NatGeo, Popular Mechanics, MIT Tech Review or The Economist? These have been around for ages. Why now, and not before?
I'm curious because GenAI might actually help traditional media orgs that still hire humans to write. They just need to move away from hard or metered paywalls and move toward a token model (something less common but growing). Let people buy credits to unlock individual articles instead of forcing a full subscription. Some Substack newsletters are already trying pay-per-post.
(Note: I got downvoted for including a US newspaper as an example. I'm not from the US, it was just a random example. I've removed it to avoid unnecessary polarisation.)
Because they're in the business of producing original, quality content. That's why they charge a fee. Their whole model depends on real people writing articles. The moment they stop doing that is the moment they start losing subscribers.
If the content is AI generated, it explicitly does.
Think about it: would you rather listen to a Spotify AI-generated piano solo, or Donna Summer's 1979 album "On the Radio"?
AI content is slop, plain and simple and there's no way around it. I would expect a literal child to produce better content than even the most advanced AI models available.
That doesn't mean that AI is bad - it's very, very good at certain things. But media and art are uniquely human creations - if you remove the human part, what are you left with? Is it surprising that something Sora is producing isn't really comparable to The Devil Wears Prada?
Now, if you create content and then slightly edit it with an AI, that's fine. But if, say, the NYT shifted to all AI generated stuff, they would go out of business remarkably fast.
What you're saying doesn't really pan out. If the work is pleasing, there's no preference. And I've seen a couple of articles in the last several months where humans thought that AI-generated works of art were created by humans. The quality will only improve over time. Eventually, the only way to tell something was generated by AI will be a label saying so.
> And I've seen a couple of articles in the last several months where humans thought that AI-generated works of art were created by humans.
Yes, and there are also human-created works of art that are three blue stripes on a white canvas.
Look, if I poll 1,000 people, how many would rather listen to AI music than their favorite artist? One, if I'm lucky?
After a certain point we have to acknowledge what is actually going on, here in real life where real humans live, and put aside what we think might be going on. People, currently, do not like AI music or AI TV or whatever the fuck. They just don't.
I would definitely not. The pricing is outrageous when you consider that I'll read at most a few articles per year from an individual source. And at least the NYT is a borderline scam organization with how much more difficult it is to unsubscribe than it is to subscribe.
Try Ground News? It aggregates everything about a particular story and shows you all sides. It even has a low tier at $2.99 a month. I knew of another one that lets you subscribe to a bunch of papers for a lower price point, but I forgot the name of it.
I think “balance” is a flawed goal, when what people actually need to interpret the news is context. Since their methodology is almost certainly based on heuristics rather than an editor with any underlying philosophy or education, these sites that try to show things from all sides can end up being just another level of obfuscation rather than contributing toward understanding.
It’s sad that a proper payment system was never developed for this kind of content. I’d be perfectly happy to pay $x/month for all the news I get, but I won’t sign up for an ongoing subscription to a site I might only look at once.
I agree. If you're a large media org, the best way to monetise articles is to let people browse for free and buy tokens, one token per article. The more they read, the more tokens they buy, and if they're spending $30 a month the site says, "Hey, why not subscribe for $29.99 a month and get unlimited access?"
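As a sketch, the token model described above amounts to a small bit of bookkeeping. The numbers here (a $1.50 token, a $30 monthly spend threshold, the $29.99 upsell) are hypothetical values for illustration, not anyone's real pricing:

```python
class TokenWall:
    """Minimal sketch of the pay-per-article token model.

    TOKEN_PRICE and SUB_THRESHOLD are illustrative assumptions."""

    TOKEN_PRICE = 1.50     # hypothetical dollars per article token
    SUB_THRESHOLD = 30.00  # monthly spend at which we pitch the subscription

    def __init__(self):
        self.balance = 0              # unused tokens
        self.spent_this_month = 0.0   # dollars spent on tokens this month

    def buy_tokens(self, n):
        self.balance += n
        self.spent_this_month += n * self.TOKEN_PRICE

    def read_article(self):
        # One token unlocks one article; no token, no article.
        if self.balance < 1:
            raise PermissionError("Buy a token to unlock this article.")
        self.balance -= 1

    def should_pitch_subscription(self):
        # "Hey, why not subscribe for $29.99 a month and get unlimited access?"
        return self.spent_this_month >= self.SUB_THRESHOLD
```

The whole point of the design is that the upsell triggers only for readers whose token spend already proves a subscription would save them money.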
Problem is, most people likely won't read enough to reach that threshold, and the business isn't sustainable on tokens alone. The model is inherently broken.
True. I know of a company that added a pay‑as‑you‑go option. In the first year, more people signed up, but over time subscriptions dropped, so they scrapped it.
That said, pay‑per‑post is becoming more popular, and platforms like Substack are already experimenting with it.
I think it's really easy for people to say that the dive in quality is due to AI. I actually think it's the other way around.
I'm in my late 40s and I've been watching quality decrease in our discourse and media for decades. And I think AI is just another opportunity for them to find a way to further reduce costs. But the incentive to reduce costs is there and it's a result of market demand for convenience and low cost above all else, and it's there regardless of whether or not AI is involved.
And so I think you're asking probably the most salient question: if you're looking for high quality content, where do you go? For me, personally, I've found that people generally are not producing high-quality content for commercial gain. So I've just gotten a lot more community-focused in recent years.
Yeah, I think I caught on to a little bit of that in your comment, which is why I started talking about the dynamic that just because you're paying a legacy publication doesn't mean you can avoid AI. It's sort of a case where there's an unlimited number of ways to produce garbage content and really only one way to produce excellent content.
I think there is a decent likelihood that the publications and publishers in general will use AI to decrease the reliance on artists and writers. The most obvious outcome here is that we're going to be consuming AI content no matter what. It's just a question of who's going to be getting paid for it.
This is not my idea. It's a concept that Rick Beato pointed out in his videos analyzing music production today and the direction it's going between the artists and the record labels. Everybody wants to be doing less work, and so the argument is really over who gets to control the technology and get paid.
If you want my honest opinion, GenAI is definitely going to change how we write and consume information:
- Original, quality content will still exist, and it'll still need to be paid for, either monthly or per article.
- Right now, articles are written by journalists. In the near future, a single article might be written by several people who aren't journalists at all, but still get paid. An AI will handle fact-checking and composition. The opinions, ideas, and knowledge will come from humans; AI will just verify and stitch it together.
Their frustration with plagiarism, inaccuracy, and the "tragedy of the commons" effect on web content is valid, but that is human behavior - they even cite an example.
But "wisdom", if we are going to aspire to that, would look for the ways these tools can be used to better our condition as creators and thinkers, rather than have our opinion be led by a reactive moral narrative not grounded in pragmatism or reality.
I will engage in good faith. I do not believe it's theft.
I think that 'stealing' is a loaded term that is often used by propagandists who want to drive outrage and anti-AI sentiment (and is readily consumed, as all other propaganda, by people looking to be 'outraged' at yet another example of the `other side` being bad.)
What are your thoughts on Aaron Swartz? Do you think he was justifiably prosecuted? We must do mental gymnastics to justify one and not the other. We either value the freedom of information to educate and inspire, or we make ourselves slaves to an ossified culture and technological progress that is rented to us by large companies.
I propose that we find ourselves conflicted with reconciling the complete and utter myth of Intellectual Property to an era where the system that constructed copyright is no longer relevant.
I do not relish that creatives have seen their work evaporate. I do not envy the many who lived their lives forgetting how to learn while relying on a skill that would give them "job security".
But getting upset by "big corpo stealing" is not genuinely exploring the fundamental properties of information, the nature of copyright law, and the second and third order impacts of successfully twisting the copyright system to limit AI training.
To be honest, is it really that bad that the web is dead? I understand the value of a forum, but as far as the content is concerned, if we were to go back to the days of physical magazines I wouldn't be upset.
It is not the medium that is dying, rather it is the content that is dying. People are saying the web is dying because there's no longer financial incentive for actual humans to create content (there may be other incentives, and those types of forms may flourish). That goes equally for other media as well: print, digital, it doesn't matter.
Humans still write for sites behind paywalls like Wired, WSJ, FT, NatGeo, NYT, and The Economist, but I don't hear anyone say "I'm finally going to pay for good content written by humans." Instead, all I hear is people complaining about the flood of AI-generated crap.
My advice: Subscribe to one or two sites or newsletters, and all your problems are solved.
I don't know; I personally miss subscriptions to physical magazines. There was the wait, and there was also the whole experience of reading in physical reality.
Also, whilst I am sure the quality of paid online magazines is better, once you start reading the cheap/free online crap it is impossible to switch to higher quality, and I always start my day with the cheap crap.
Generative AI was a huge mistake. We also see the flip side of this in "AI detectors" ensnaring people who are writing the way they always have. We see it in the idea that "if you use too many em dashes, then you must be AI".
I can just prompt a model to never use em dashes. Which is rather funny to me. I can make it respond like Kamina from Gurren Lagann to hype me up as well.
I feel like we're going through an evolution of the web, and no matter what we do, it's going to happen.
The web is going to change (for better or worse) or die, and there's nothing we can do about it.
The web killed printed media to a large degree, and AI will do the same. Resistance is futile!
You may be right that resistance is futile. But I don't think adaptation is. And there's no reason our adaptation can't be rooted in resistance.
For example, maybe smaller local forums make a comeback and their communities decide to hide all threads behind auth. (I don't necessarily see that happening, just an example.)
And honestly given the stronger feelings people are developing against what I'll just call "creative use of generative AI", I'm starting to think maybe resistance isn't futile... Poisoning original digital art so it's less useful for image generators...social shaming of AI generated music being laundered on platforms...those things do feel like meaningful steps towards resistance.
Printed media, radio, and television are still very much alive and well, just not as big as they once were. We're past peak Web, but it's not going anywhere anytime soon.
We do need new protocols though.
As a social network the web is collapsing but it is entirely possible that a new kind of internet could emerge in its place. After all, routing around damage is part of the essence of the Internet.
Without authors being paid to create new content, AI is nothing; it's irrelevant!
Either some startup needs to come up with compensation for authors, or the big players need to set up a system that still gets authors paid, because I'm guessing that in five to ten years we won't be visiting websites. Our soon-to-be AI friend (FaceTime the "friend," or just talk or text it), seen on our lock screens or in a hologram, will visit all the sites to create visuals of the info and display/discuss it with us immediately upon request.
Haven’t for a while. There’s a bunch of open source projects that provide a Captcha and Cloudflare bypass proxy where you just point your scraping through the proxy and it takes care of the challenges. It’s rather trivial to handle nowadays.
A bunch of the torrent trackers are now behind Cloudflare so the pirate community has been maintaining many of these projects in order to enable their autodownloaders like Sonarr, Radarr, Lidarr, etc.
Dumb dumbs, AI is just a tool. Do you still use an abacus, or a computer? Are you walking, or driving a car? Should we ponder the stars, or build rockets? Not using AI is insane. It's the most magical tool I've ever used.
> Both are well worth the effort of creating an account to read them.
Talking about how AI broke the web, while gating the content in a way that breaks the web, which they’ve been doing since before the AI/LLM threat came on the scene.
One of the reasons people prefer AI to visiting the source websites is because the source websites have so often made it such an unpleasant experience, making you jump through hoops and navigate a maze of dark patterns. Meanwhile, AI gives you what you want without all those roadblocks.
This is like Napster vs iTunes all over again. People started paying for media online as soon as it became convenient to do so. You make things inconvenient for people, you’ll lose out to whatever the more convenient option is.
ChatGPT alone has 800M WAU and there have been several articles recently describing how AI has a large chilling effect on search engine referrals, including the article we are currently discussing. This is very clearly a mainstream preference.
The free open web was economically dependent on reaching large audiences by search and social, and supported by advertising. Without that scale, you're looking at more paywalls and membership programs. The smaller the niche, the more expensive and tighter the program needs to be.
A lot of that gating went into effect after social media started suppressing links.
If you believe in making quality products available to mass audiences—information wants to be free and all that—this is a problem.
AI will be the final nail in the coffin of the internet. What was growth by enshittification has become growth by shit (AI) itself, and there is no going back. Welcome to the shitternet.
Set your site to 401 unauthorized with a basic challenge if an auth header isn’t sent, and set the auth description to “Enter anything to proceed. All human access is authorized. Unauthorized non-human access is prohibited.”. Crawlers can’t parse the instructions and will deadstop on them, while people will shrug and enter any password, which will work.
Anubis is also viable and popular, but it lacks the legal threat to AI of being able to file a federal hacking claim against a scraper’s unauthorized intrusion if they code their scraper to transmit an empty/invalid/valid authentication header.
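As a minimal sketch of the "any password works" gate described above, here is a tiny WSGI app. The realm text comes from the comment itself; everything else (the response bodies, serving it in-process rather than at the web server or CDN layer) is illustrative:

```python
def gate(environ, start_response):
    """Challenge every request lacking an Authorization header with a 401
    whose realm text addresses humans; accept ANY credentials after that."""
    if "HTTP_AUTHORIZATION" not in environ:
        start_response("401 Unauthorized", [
            ("Content-Type", "text/plain"),
            ("WWW-Authenticate",
             'Basic realm="Enter anything to proceed. All human access is '
             'authorized. Unauthorized non-human access is prohibited."'),
        ])
        return [b"Authorization required."]
    # Any Authorization header at all counts as "a human typed something".
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1>Welcome, human.</h1>"]
```

You could serve it locally with `wsgiref.simple_server.make_server('', 8000, gate).serve_forever()`; any username/password pair entered at the browser prompt gets through, while a crawler that never sends the header dead-stops at the 401.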
I think you might be overestimating how much I care whether humans interact with my content. If it's not worth five seconds of their time to type a word and click OK, I have better things to do than trying to coddle them into caring about me.
What decade do you think it is? :) Depending on who you ask, captcha bots have become better at solving them than humans...
There's almost nothing you can do that "AI" can't while keeping it easy enough for your average joe who wants to log in. Especially considering "the grandma test"...
It’s trivial for them to solve this automatically. It’s also illegal under the computer fraud and abuse act, because then they’re knowingly and intentionally bypassing for profit (fraud) a clearly defined authorization barrier that explicitly prohibits their (ab)use. No joke, that’s an actual U.S. federal crime, and it would hold up well enough to reach a courtroom to be judged. (Can’t say what would happen after that — the law is not code, etc.) That’s why it’s so effective: it doesn’t matter that they can bypass it, it matters that they could be jailed if they do so, and corporations only take risks whose profits can pay for the penalties incurred. The cost of federal prison is well in excess of what most corporate leaders will tolerate :)
ps. I learned assembly programming from my grandma. She would have loved to discuss this problem with me.
Just because it's called authentication doesn't mean that all possible applications of it qualify. Following directions provided by the owner isn't breaking into the system. Clicking "Yes I'm over 18" when you aren't isn't a violation of the CFAA any more than any other ToS violation is.
What matters isn't the box presenting the challenge but rather the nature of the challenge itself.
Perhaps! But for some weird reason, I bet you’ll find no one except e.g. Yandex is willing to code their AI scraper to attempt passwords at websites — because, as noted, only a courtroom can determine with certainty the exposure to legal risk here, and so long as usage isn’t widespread there’s no profit in accepting that risk.
Very feasible solution. I guess the only issue is that we now have to add friction for legitimate users, which will only accelerate their migration to the AI summaries at the top of the page.
AI is on course to destroy anonymous web browsing due to the costs it inflicts on server operators; user friction, whether via manual input or proof of work interstitial or otherwise, is the only remaining alternative to attested identity, pseudonymous or not. This is HN, so I’m suggesting a non-attestation solution, which is therefore user friction.
Oh I agree with you entirely, not in disagreement at all. Just highlighting the absurdity of the world that this is where we are.
I was discussing with my partner recently the fact that I believe this is all heading to a licensed, centralised internet. I can very easily foresee the path we are blindly wandering leading to an authoritarian space in which websites and hosting will only be available to a select few, for a hefty fee.
The justification will be 'think of the children' or 'we need to control what your AI agents can connect to, as there are bad actors who will convince your AI agents to give them your funds and personal, private data'.
Obviously just pure speculative hyperbole to be taken with a tbsp of salt, but, yeah, I can see the path quite clearly judging by how little friction governments get for their reckless and nefarious actions nowadays.
Ultimately, the web will revert to where it was before full text search harvested and profited from all the curated indexes of content — and, bluntly, we did fine back then! It was a lot less shitty of a web than we have now :) And it’ll be a lot easier to be open with others when someone can’t just full-text search their way to our bulletin board (forums used to be called that, since they derived their core structure from BBSes!) to hate on us. Will be harder to use the web for problem-solving in esoteric corner cases, because you’ll have to find the right enthusiast community and jump through some hoops to sign up and become a trusted member. Can’t wait, honestly.
> Crawlers can’t parse the instructions and will deadstop on them, while people will shrug and enter any password, which will work.
Well, that just seems like something where they will just fix the glitch. Do you really think the devs of these bots can't fix this, rather than that they simply haven't fixed it yet?
Corporations can't be jailed; they just pay fines. So I'm not really sure what the point is. You also have to realize that not all devs who create bots are under the jurisdiction of US law. So again I ask: what's your point?
> Nothing that I publish here has come from AI or answer engines. Every word that is written comes from this human.
I think we're rapidly approaching the point where no one will be able to make this claim anymore. AI summaries and answers are ubiquitous and our knowledge or beliefs are directly or indirectly informed by them. We can avoid 1st order AI use, but it is impossible to avoid 2nd order and further exposure.
The water supply has been poisoned and everyone needs to drink.