Hacker News | akutlay's comments

This was in fact flagged yesterday (though the title gives no indication of it), roughly two hours after it reached the second page.



It seems X's Grok became the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, because the barrier to entry is practically zero.


True, CSAM should be blocked by all means. That's clear as day.

However, I think for Europe the regular sexual-content moderation (even in text chat) is way over the top. I know the US is very prudish, but most people here aren't.

If you mention something erotic to a mainstream AI, it will immediately shut down, which is super annoying because it blocks using it for such discussion topics. It feels a bit like foreign morals are being forced upon us.

Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standard, similar to the way I can switch off SafeSearch in Google.

However CSAM generation should obviously be blocked and it's very illegal here too.


Funnily enough, Mistral is as censored as ChatGPT.

You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that doesn't guarantee a pleasant experience.


There's some misunderstanding here. This article makes absolutely no mention of CSAM. The objection is to "sexual content on X without people’s consent".


It's the nonconsensual generation of sexual content depicting real people that breaks the law, along with things like CSAM generation, which is obviously illegal.

> It feels a bit like foreign morals are being forced upon us.

Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.


Whether it was the "first" definitely depends on your standards and focus: https://cloudsecurityalliance.org/blog/2025/02/19/deepseek-r...


This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more so that people on Hacker News advocate for that.


Safety isn't implemented only via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.

If you think people here think that models should enable CSAM you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.


I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, when I was attempting to use ChatGPT to translate a journal that was handwritten in Russian that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something someone who had already committed a violent act had written.

> If you think people here think that models should enable CSAM you're out of your mind.

Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.

> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.


When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children; there is a level of generalization in these models. So it seems to me that arguing for restrictions that could only be effectively implemented via prompt validation is just an indirect argument against open-weight models.


> When these models are fine-tuned to allow any kind of nudity

If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.

The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."


Why does that seem absurd to you?


Don't feed the troll


Great list, thank you. The only thing to note is that whenever I've imported a large list like this in the past, I stopped checking my RSS reader after a while because the content wasn't interesting. I think finding feeds and adding them to a reader should happen organically over time.


This may be because most feed readers don't have a proper way to triage items. Adding a feed doesn't mean you want to read everything from it; usually only a subset of articles is interesting.

I built a feed reader with that concept in mind: it has a separate triage stage where you only decide whether an item is worth reading. This makes it easier to handle large feed lists and surface the best articles from them.

https://lighthouseapp.io/


I just build feed hydrators that fetch the feeds, filter them, and generate a new feed for FreshRSS to consume.

For example, my HN feed only surfaces articles with enough votes and comments, plus a few other variables.

All high-volume feeds also have a maximum number of items; anything over the cap is marked as read.
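The hydrator idea above can be sketched in a few lines. This is a hypothetical minimal version, not the commenter's actual code: the thresholds, field names, and item format are all assumptions. It filters upstream items by vote and comment counts, caps the output, and emits a fresh RSS 2.0 document for FreshRSS to poll.

```python
# Hypothetical feed-hydrator sketch: filter upstream items by
# points/comments, cap the output, and render a minimal RSS 2.0 feed.
from xml.etree import ElementTree as ET
from xml.sax.saxutils import escape

def hydrate(items, min_points=100, min_comments=20, max_items=30):
    """Keep only items above both thresholds, capped at max_items."""
    kept = [i for i in items
            if i["points"] >= min_points and i["comments"] >= min_comments]
    return kept[:max_items]

def to_rss(items, title="Filtered HN"):
    """Render the kept items as a minimal RSS 2.0 document."""
    entries = "".join(
        f"<item><title>{escape(i['title'])}</title>"
        f"<link>{escape(i['link'])}</link></item>"
        for i in items)
    return (f'<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{escape(title)}</title>{entries}</channel></rss>")

# Example items as they might look after parsing the upstream feed.
items = [
    {"title": "A", "link": "https://example.com/a", "points": 250, "comments": 90},
    {"title": "B", "link": "https://example.com/b", "points": 40,  "comments": 5},
]
feed = to_rss(hydrate(items))  # only "A" survives the filter
```

In a real setup a cron job would write `feed` to a file or endpoint that FreshRSS subscribes to; the "mark as read over the cap" behavior would live on the FreshRSS side.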


Yes, but the name "Air" claims lightweight, not thin.


Air, as a product line, quite famously started with Jobs emphasising the thinness of the MacBook Air by pulling it out of a manila envelope. Taking what are ultimately marketing terms as literal, face-value descriptors isn't particularly useful.


I would guess they do it because they want to minimize the chance that someone will install an unapproved app on someone's phone and cause harm. I know it's already pretty hard, but Apple seems to be very particular about this.


A popup on app open warning that the app is sideloaded?

There are simpler and more usable options that would be more defensible than what they do today.


That is not their job.


That's an opinion. Apple's take is that what they sell is the promise that "everything that runs on your phone has gone through our reviews, so you can trust it isn't malware."

That, in their opinion, makes it their job to prevent people from permanently installing software on other people’s phones. I’m sure they would remove the “permanently” if they could, but developers have to test builds so frequently that they can’t review them all.


It's not that they can't afford a $3 toothpaste; it's the environment they are in that makes it hard to prioritize things like this. It's the education and the overall quality of life (or the lack thereof) that cause this problem.


This seems to align with the article, which states a 3 to 5 percent decline after 2021. I see about the same in this graph.


All the countries ranked above the US have populations under 1M and are generally poor. Using this data to argue that obesity is unrelated to the underlying social problems in the US is ridiculous.


https://www.bloomberg.com/news/articles/2025-03-03/china-ind... | https://archive.today/UeI7X

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...

> Rates of overweight and obesity increased at the global and regional levels, and in all nations, between 1990 and 2021. In 2021, an estimated 1·00 billion (95% uncertainty interval [UI] 0·989–1·01) adult males and 1·11 billion (1·10–1·12) adult females had overweight and obesity. China had the largest population of adults with overweight and obesity (402 million [397–407] individuals), followed by India (180 million [167–194]) and the USA (172 million [169–174]). The highest age-standardised prevalence of overweight and obesity was observed in countries in Oceania and north Africa and the Middle East, with many of these countries reporting prevalence of more than 80% in adults. Compared with 1990, the global prevalence of obesity had increased by 155·1% (149·8–160·3) in males and 104·9% (95% UI 100·9–108·8) in females. The most rapid rise in obesity prevalence was observed in the north Africa and the Middle East super-region, where age-standardised prevalence rates in males more than tripled and in females more than doubled. Assuming the continuation of historical trends, by 2050, we forecast that the total number of adults living with overweight and obesity will reach 3·80 billion (95% UI 3·39–4·04), over half of the likely global adult population at that time. While China, India, and the USA will continue to constitute a large proportion of the global population with overweight and obesity, the number in the sub-Saharan Africa super-region is forecasted to increase by 254·8% (234·4–269·5). In Nigeria specifically, the number of adults with overweight and obesity is forecasted to rise to 141 million (121–162) by 2050, making it the country with the fourth-largest population with overweight and obesity.



Bloomberg may be the last company to be concerned about free information, considering the five-digit annual fee for their terminal.

