Life After Language (ribbonfarm.com)
147 points by jger15 on May 6, 2023 | 99 comments


> the longest regular conversations I’ve had in the last week have been with an AI-powered rubber duck

I think, and I'm basing this on nothing objective, that the vast majority of content today is written by people who spend, frankly, too much time online. The fact that I'm writing a comment about this says that I, too, spend too much time online.

The bubble that the tech world, and this website in particular, lives in differs so much from reality that I don't know what to make of the articles that hit the front page here.

All the Twitter doomsaying, "Trump will never be president", all entirely wrong. I don't know what to make of it - maybe I should log off for a while.


Just go out, do some sports and enjoy life :) I stopped spending too much time in front of the computer and started doing more outdoor activities. Best decision ever.


“Go touch grass” is used derisively, but it’s something I tell myself more and more. We overvalue the online world and all its drama. Go outside, meet people, make your own organic, locally grown drama.

These days I schedule my work around the weather. Few things bring me as much happiness as a day in the sun. I know it has been a good day when I have not touched my laptop once.


I recently made a small webapp to make me "touch grass". The idea behind it is that you enter some activities (or keep the random defaults), and when you are bored or doom scrolling, it can tell you what to do (a rough sketch of the idea is below).

It's a bit silly, and still very bare bones, but I just like the phrase "touch grass", and this is my effort to reclaim it from the depths of derisiveness.

https://makemetouchgrass.com
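
The core logic is nothing fancy; a minimal Python sketch of the idea (hypothetical, not the actual app code) is basically just a random pick from a list of activities:

    import random

    # Default activities; the app lets you replace these with your own.
    DEFAULT_ACTIVITIES = [
        "take a 20-minute walk",
        "water the plants",
        "stretch for five minutes",
        "literally go touch some grass",
    ]

    def make_me_touch_grass(activities=None):
        """Pick one activity to do right now instead of doom scrolling."""
        return random.choice(activities or DEFAULT_ACTIVITIES)

    if __name__ == "__main__":
        print(make_me_touch_grass())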


You're being ironic right?


"Go touch grass" barefeet, if you want even more body sensations ...

And actually you can combine both: sitting with your laptop on the grass (in the shade), to get the outside feel, but still get work done.


As with many things in life, "go touch grass" isn't actually about touching grass.


> As with many things in life, "go touch grass" isn't actually about touching grass.

As with many things in life, though "go touch grass" isn't actually about touching grass, touching grass really is a good thing. (Well, except that it makes me itch all over the area of contact. Still worth it.)


For me it is complementary. While I consciously touch grass, I ground what I am doing with reality. Is the problem I am stuck coding on really that important? Is there a simpler solution, or is something else more important right now?

At least, that's what works for me; sometimes metaphors are also to be taken literally.


I don't like sports, and I live in an endless suburban wasteland where there's nothing to do but go to bars, restaurants or the mall.

I can't afford to go to restaurants all the time, and I don't like bars or the mall.

I'm shy and I don't do well around strangers, and even when I do meet new people 99% of the time we don't have much in common, so it feels like a waste of time.

I'd much rather surf the web... at least there I'm learning stuff, and I can communicate with people who I actually have something in common with.


The curse of the high IQ is that statistically you won't find a lot in common with the average Joe. Too bad, just deal with it. Get married and raise children. Go to church. Spend time in your local library. Volunteer at a local CSA (community supported agriculture). Take long walks. Go hiking. Ride a mountain bike in the woods. Go to the gym and lift weights. But don't spend your life online and staring at computer screens.


Why not? This is a very dogmatic take on how to spend one's time. People enjoy different things; if you enjoy spending time behind a screen, go for it.


I don't enjoy my time online, I just dread change more than I dread living a dull life


I don't see a problem with spending time in front of a screen, and would much rather do that than do pretty much anything you mentioned.

Some things, like spending time in nature, are nice once in a while, but there's no nature near where I live, and even if there was I'm often not in the mood to go.


Personally I wouldn't survive in a suburban environment without nature being nearby.

Going to restaurants, bars or meeting strangers isn't what I meant. It's still the artificial, human-made world. Spend time outside the city, hiking, boarding, climbing, running or just enjoying nature... That's it. Finding like-minded people will come by itself.

Oh, and being active is just awesome. Pushing the body to certain limits is just so important for my mental health. I am a completely different person if I don't do sports for a while.


This reminds me of that reddit post from a couple of years ago, "Most of What You Read on the Internet is Written by Insane People"[1]. I mean, to utter these kinds of predictions out loud really reveals your power level. That said, I think the author is right in the premise, that human-to-human communication will be affected by whatever it is that is coming our way. The predictions they end up at, however, are regressions that go a bit too far.

[1] https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...


The world outside this tech bubble is so different.

I worked in conventional ELV systems engineering before getting into tech. I feel like people outside high-tech life are far behind in many aspects of life - the progress tech has made in the lives of high-tech people has yet to reach the ordinary person.

There are so many non-tech people out there we look down on, but in fact they are probably 98% of humanity, and their reality defines how we as humanity evolve.


On the other hand, I feel like non-tech people are underrated in their understanding of what tech is doing; they just have a way more relaxed attitude towards it.

Like finding out your 55-year-old aunty has secretly been using LLMs to automate part of her job is a real thing.

“High tech” people are like Trekkies; they’re just more into it.


I think it's a result of how the media relegates different sectors of people into different "propaganda" zones. A lot of people like to watch TV. It wouldn't be out of the ordinary for them to spend more time with the TV than with another human being. Others probably don't have the time to watch any TV. The people online on the internet clearly fall into a niche of "wanting", where they're asking for entertainment but no one can give them any, so they invent some for themselves. Now that there's a proper contender for TV in this space, this will become a regular occurrence.

Note: I'm not stating that people online don't watch TV at all -- just that they don't derive the same sort of enjoyment from it that normal people do.


The online world is often much more interesting to me than the offline world... at least around where I live.

It'd be great if I could afford to travel, but I can't... and, anyway, traveling itself has many downsides, and you can burn out on that after a while.


Ain't that the truth. 99% of what people do socially around my area is drink, talk about work, talk about the most recent sporting event, and talk about their kids if they have them.

After the first 5 times you have those conversations, you can predict the entire night's conversation before it begins. I'd love to have offline conversation that was almost anything else.


It’s that way most places I’ve lived. Sports, beer, motorcycles, tv, shopping…

The online world brings one useful thing to the meat world: organization of meetups for folks interested in a bit more… take Jazz, for example… thoroughly possible to organize a jazz jam in a suburban area thanks to the net.


"Trump will never be president" is an excellent support of TFA's thesis: it was said by people who were not so much terminally online as terminally reading, and who were therefore convinced that no one who is demonstrably illiterate (having a command of the oral register but not the written) in his mother tongue could become president*. In 2016, some large percentage of the US electorate was already post-language (or at least post-literacy), as they agreed the lack of literacy(/language) was no impediment.

* despite having been indoctrinated for years that the genius of the States is that "anyone can become president"?


The genius of the US is that, no matter who becomes president, the country still functions pretty well.


After watching the first head-to-head debate it was apparent that Trump could win - whereas Trump was clearly a deeply flawed human being, Clinton came across as a smiley robot.


Can you tell me what's the biggest flaw you see in Trump as a human?

Genuinely curious.


Find the biggest flaw in the infinite fractal of flaws — fun game.


Personally, I never liked his hotels much.


Take your pick between 1) He's nakedly selfish and narcissistic and 2) He attempted to subvert the peaceful transfer of power, one of the bedrocks of our democracy.


I would suggest the author actually talk to some real people.

Not on Twitter, or Mastodon, or anything - just real people. Hell, you can even get away with joining a small Discord server and hopping into a voice chat there. Anything but the large, hype- and drama-driven user generated content ad farms.


>I would suggest the author actually talk to some real people. Not on Twitter, or Mastodon, or anything - just real people.

Friendly fyi in case you got the impression he was a recluse isolated in a basement ...

The author "talks to real people" all the time. Examples: https://www.youtube.com/results?search_query=Venkatesh+Rao


I didn't make assumptions about the kind of lifestyle the author leads, at least not purposefully.

I just cannot imagine thinking like this if you're not completely detached from real, normal human interactions.


>I just cannot imagine thinking like this if you're not completely detached from real, normal human interactions.

Your statement makes it seems like you've misunderstood his thesis.

He's envisioning a future where humans-to-humans conversation is more intimate, more meaningful because it's not limited to textual language. Example excerpt from his article: >"We’ll all be like children inventing secret languages for talking to imaginary friends, except they will be real friends."

His attachment to real human interactions is letting him see a future where real interactions can be "post-language". The post-language "secret language" could be images, or dynamic translations customized for the particular person's brain.


> more meaningful

How could it possibly be more meaningful? Languages mature over time: bits of other languages are integrated, vocabularies expand, new forms of expression in the language appear, through serious practitioners of writing or speaking in the language.

> We’ll all be like children

Oh great. Goo goo ga ga level civilization then. Curious as to how "children" discover the mutual new language and start communicating in it.

> His attachment to real human interactions is letting him see a future where real interactions can be "post-language". The post-language "secret language" could be images, or dynamic translations customized for the particular person's brain.

Sorry, but this is actually a silly notion of the OP. It shows an utter lack of understanding of the role of language in human society. Language is a repository of cultural memory broken down to composable bits.


> Goo goo ga ga level civilization

You’re not being open minded here. Our traditional languages are so inefficient at actually conveying information. Whenever we try to explain very complicated concepts, we dumb them down to very basic illustrations, like dining philosophers or ice cream stands to explain the complex actor model. This is necessary for intuition because parsing “adult” language at this level is extremely hard.

I think what OP meant is that we will have more of this. LLMs have opened up more possibilities in how we convey information to one another. Why _should_ we stick to English? Why can’t we all converse in generated images, real-time generated videos and illustrations?

For example, instead of having PDFs, slides, Jupyter notebooks or whatever for class material, we will have a 1000-line, unreadable and undecipherable prompt to an LLM that will generate course content you can talk to and interact with.


Conjuring a group of philosophers together with the intermittent (fork - plate) cycle is a subtle structural setup. There is thought behind that gestalt of simple elements that started off in a thicket of complexity. That thought is also the product of collaboration over time by experts. Finally, at the end of that discourse, someone reduced it to its essential complexity and showed applicability to general patterns of interaction.

None of that is child talk.

OP started off on a promising note and one hoped that he would start playing the song of 'vector languages' for machine intelligence -- but alas, he then started singing out of tune.

> You’re not being open minded here.

Unfair. Rather, may I suggest you did not read carefully. I did say "civilization". In a civilization, you will need both teachers and students; thinkers and readers. What language will the thinkers use? "Explaining" is a concern regarding established results. What about deriving new results? (See above.)


> Language is a repository of cultural memory broken down to composable bits.

So we already have examples of discourse in which we alternate natural language with some other, more formal, language: english with formulae (math), english with tabs [less formal] or staff notation [more formal] (music criticism), english with a set of images with formal denotations (memes), etc. Even Montaigne's essays (which were attempts at deriving results, not just explaining established ones) somewhat follow the pattern in that at times they alternate french with classical quotations.

I hope —I think along with eternalban and Hermann Hesse— that we may see the introduction of new combinators, new forms of composition, that are general enough to serve both as explanations of established thought and tools of generating new thought. (cf Aristotle Organon)


(Glasperlenspiel? A favorite of mine.)

> generating new thought

I was beguiled yesterday (reflecting on our chattering mechanical friends) with the idea (new for me) that production of 'thought' doesn't necessarily require a 'thinker'. It's just that we thinkers -- that is, Homo sapiens -- have only known of production of thought as thinkers, and assumed there is 'thinking' involved. (This is what is causing the confusion in some who grant GPT the coveted 'thinker' and 'thinking' status.)

If that is the case, then we have misnamed 'thought': It was always nothing more than a proposition, but we assumed it was intimately connected with the experience of thinking.


> Glasperlenspiel?

Exactly. I especially appreciate how millefiori glass beads are a physical incarnation of the literary device of quoting other works...

> If that is the case, then we have misnamed 'thought': It was always nothing more than a proposition, but we assumed it was intimately connected with the experience of thinking.

That makes a great deal of sense to me; in that world 'thinking' is like imperative small-step operational semantics, while 'thought' would correspond with functional big-step denotational semantics. The former suffices to produce the latter, but is by no means necessary.


I dislike how LLMs are causing lay people to recapitulate the debates about private language that have been had in the philosophy of language community. At first I was so excited that people were talking about one of my favorite topics (finally! My niche interest! People care!) but it became clear that people were doing way more talking than reading and thinking, so the conversations are typically not that great.

If you thought that this blog post had provocative ideas, you might be interested in reading some of Ludwig Wittgenstein's philosophy of language or Roy Harris's work on integrationist linguistics. These ideas have been debated for a long time.


Re: Wittgenstein's philosophy of language, here is a Finnish comedian's contribution: https://www.youtube.com/watch?v=CGksgZKecKE


Comedians and (great) sci-fi authors: apex disruptors, intermittently disrupting the self-styled disruptor-disruptors (i.e., philosophers, or maybe more accurately, epistemologists)


>If you thought that this blog post had provocative ideas, you might be interested in reading some of Ludwig Wittgenstein's philosophy of language

Yes, I've mentioned Wittgenstein before: https://news.ycombinator.com/item?id=27650343

The older writings about language/linguistics are mostly about observing misunderstandings -- or -- complaining that progress isn't being made in philosophy because people (unknowingly) use different definitions of the same word. They weren't talking about any new yet-to-be-invented machines to bridge the gap of misunderstandings -- and understandably so because they lived in a time before computer algorithms like Large Language Models existed.

In contrast, because Venkatesh Rao is living right now in this LLM world (e.g. him playing with ChatGPT/GPT4/Berduck/etc), he's revising his prior opinion about the evolution of communication. This thread's blog post is talking about new possibilities of technology enabling a more direct understanding. Maybe it's a poorly written blog post and maybe his prediction won't happen. In any case, I'm just trying to explain what the article is about, since some readers seem to misunderstand it. (E.g. some readers may think he's a misanthrope detached from real human interaction, but his article is actually envisioning future scenarios of the opposite situation!)

EDIT reply to: >Like I said there is a whole debate about the concept of private language that you are missing.

And as I was trying to convey back to you, the previous "debates about private language" are not relevant to _this_ thread's article about future technology scenarios. His article is about predicting a technology trajectory, and not about philosophical debates.

To use this very thread we're in as an example of the author's point:

This thread's article was written by a high-IQ, intelligent author. And you, the reader, are presumed to also be highly intelligent. And yet -- you seemingly misunderstood the core idea of what the author was trying to communicate. (E.g. you feel his article is low quality because it re-hashes "private language debates" already covered by previous philosophy.) And to use a mirror assessment of the author's writing, he didn't string together the perfect sequence of chosen words to make his point easier to understand.

So... can new technology (e.g. future improved LLMs) ... bridge that gap of misunderstanding ... between what the author wrote ... and your misinterpretation of it?

Neither you nor the author are illiterate... and yet there's a gap between you and the author. Can a future technology help bridge that gap?


>The older writings about language/linguistics are mostly about observing misunderstandings -- or -- complaining that progress isn't being made in philosophy because people (unknowingly) use different definitions of the same word.

Uh... No. Like I said there is a whole debate about the concept of private language that you are missing.


> This thread's article was written by a high-IQ intelligent author.

Does he also have a Mensa membership?


Nice try, Mr. Rao. Btw, I'm a fan of your texts. The world is a bit better for the presence of someone willing to think deeply, and express themselves freely. The themes are always fascinating.


> Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments. Imagine a world a few centuries in the future, where humans look back on the era of reaction gifs as the beginning of the world after language.

> Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche.

I think the LLM revolution tells us something else - it is language that crystallises and captures almost all of human intelligence. By training a randomly initialised transformer on language you get ChatGPT. Doing the same to a baby leads to a modern human. It was not the brain or the transformer that was intelligent; it is the language running on them that is. Language is a self-replicator, but it needed humans; now it works on LLMs too.

In the long term I expect AI will discover things that are hard to express in our language and probably need to develop and teach new concepts to us. And of course AI can help create languages, it can do that quite well, if that's our thing. We can develop private (hyper-)languages and have it translate back and forth.


> it is language that crystallises and captures almost all the human intelligence.

This used to be a common position held by the likes of Wittgenstein, but the biological reality is probably different [1]. Language is probably like the shadows in the Allegory of the Cave, a representation of some thing not to be confused for the thing itself.

[1]: https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-thi...


Of course language relates to things, but I am thinking of language in the sense of a constantly growing corpus of text. It is a self replicator, but like a virus it uses the host organism, a human or a LLM to create more of itself. Ideas have their own life-cycle, they don't get born or die with us, they like to travel and evolve fast.

My point was that the capabilities we attribute to LLMs actually come from the training data, not the model. Any model would do, from 10B parameters to 500B, with cosine embeds, rotary or ALiBi; even RNNs can do it today. But the data - that is precious, the source of all these "emergent" abilities. We are wondering at the mysteries of the transformer, but the mystery was hiding in the training set.

Let's look at the problem from the point of view of language as an evolutionary system of ideas. Like scientific papers and citations, that kind of stuff, leading to AI and human progress. That's where our progress will come from - the evolution of ideas.


> Of course language relates to things, but I am thinking of language in the sense of a constantly growing corpus of text. It is a self replicator, but like a virus it uses the host organism, a human or a LLM to create more of itself. Ideas have their own life-cycle, they don't get born or die with us, they like to travel and evolve fast.

You're talking about "memes", in the original meaning: https://en.wikipedia.org/wiki/Meme



I think it is more that language is a convenient way to capture high quality human intellectual capabilities to train an AI easily. LLMs have highly curated training text to keep that quality high.

In reality we think not just in words but in pictures, sounds, abstract unsymbolized concepts. We also just do with our bodies and just watch with our hearts.

Image models capture some of that. We don’t have a way of capturing our unsymbolized thinking or doing yet. Expect lots of video data to be used to train AI in human body movements.

Unsymbolized, and worse, subconscious thinking will be harder. My assumption is we will have AIs which can learn directly from the world rather than via curated human culture by that point, so they won't need it.


Except highly curated. Not that highly.


My personal take is that "inside" much of language (especially written language) is a kernel of predicate logic. Most pieces of text end up taking a propositional form, eventually. So if you train the right kind of learning machine on an absolutely huge pile of text, it will eventually "learn" some amount of logical constructions, and start producing things that at least look like logical reasoning, in response to logical queries.

Same with training it on programming languages, which have a similar underlying form.

What I've found though is that this "reasoning" starts to get exposed as shallow fairly quickly. It can only "backtrack" through logical relationships a couple levels deep. E.g. I give GPT4 a Rust program with a borrow checker problem and ask it to "fix" it and watch it flail around in circles for 3 or 4 tries before just producing complete nonsense because it gets lost.

It looks like human intelligence like you say, but it just falls over after a few short steps.

Yes, most humans will do the same thing. Except unlike an LLM, humans are able to sense when they get lost and attempt to sometimes retrace or correct their steps. But I think there's something intrinsically missing in LLMs that prevents them from doing that -- they don't "know" why they arrived at some step, and don't seem to be able to retrace and fix. Ask them to fix something they get wrong, and they start to degrade in quality quite quickly, I find. Self-awareness/consciousness may be the kernel of that missing piece.

I think maybe these systems might be more useful once they're tied to an actual rigorous logical deduction system, and "taught" how to use it. Let the LLM "read" the prompt, recognize that it needs to do some logical constraint solving or deduction, create the right terms or Horn clauses or whatever in that subsystem, query it, and then let the LLM in turn produce the textual response.
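
Something like the following rough sketch of that pipeline, where llm_complete and solve_horn_clauses are hypothetical stand-ins rather than real library calls:

    # Rough sketch of an LLM + deduction-subsystem loop. `llm_complete` and
    # `solve_horn_clauses` are hypothetical stand-ins, not real APIs.

    def llm_complete(prompt: str) -> str:
        """Stand-in for a call to whatever LLM completion endpoint you use."""
        raise NotImplementedError

    def solve_horn_clauses(clauses: list[str]) -> str:
        """Stand-in for a rigorous engine (SAT/SMT/Prolog-style resolution)."""
        raise NotImplementedError

    def answer(question: str) -> str:
        # 1. The LLM translates the informal question into formal clauses.
        clauses = llm_complete(
            "Translate into Horn clauses, one per line:\n" + question
        ).splitlines()
        # 2. The deduction subsystem does the actual logical work.
        verdict = solve_horn_clauses(clauses)
        # 3. The LLM turns the solver's verdict back into prose.
        return llm_complete("Explain this result in plain English:\n" + verdict)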


> I think the LLM revolution tell us something else - it is language that crystallises and captures almost all the human intelligence. By training a random initialised transformer on language you get chatGPT. Doing the same to a baby leads to a modern human. It was not the brain or the transformer that are intelligent, it is language running on them that is. Language is a self replicator, but it needed humans, now it works on LLMs too.

This sounds like it's leaning or leading toward Julian Jaynes' bicameral mind book or ideas?


What did you have in mind for new features of these created languages? (cf. Delany, Babel-17)


>Ridiculous not in a political sense (I have no strong feelings one way or another about the economic fates of career writers) but in the sense of being incoherent.

It is not incoherent if you believe art to be an expression of the human condition. You still need humans in the writer's room to know what they want the audience to get out of a show. That is not something that ChatGPT can understand.

Also, maybe people should have some feelings "about the economic fates of career writers." Who are we even building this hyper-efficient economy for?


Yeah, this. As humans and as the inventors of these technologies, we get to decide what kind of values we uphold and how our politics and policies should reflect them.

These think pieces that subscribe to an unspoken underlying technological determinism are so disgusting to me; they have a really narrow and pathetic view of what it means to be human (it mostly boils down to "economic agent"). Half the time I wonder if the authors themselves even have the requisite humanity, and I also wonder if the obsession around digital technology is partially driven by our having made machines of ourselves in the first place (all we care about is economy, work, etc. We've lost the old notion of the "human spirit" at great cost).

More people in power and decision making positions need to start broadening their reading lists with Goethe, William Blake, Novalis, and other awakened poets and start letting some of these imaginative ideas about humanity's potential drive their lives more than purely economically motivated crapped out thinkpieces.


I wake up with songs in my head. And as a professional musician who plays in bands of different genres, and fills in for others, the "languages" are far removed from one another.

My banjo player recently told me our mandolin player describes things in flats because he's originally a trumpet player. My best friend is classically trained with a master's in vocals and couldn't jam with rock musicians because of the way they call out keys.

And Western music is an established, though often debated, language.

I just don't see the difference between any of this. It's still language. Whether, as in the article, it's the language a kid invents with their imaginary friend, or how I think about and remember the songs I wake up with... language conveys information.

And I don't understand what the author of this piece is trying to convey about language, except perhaps a connotation of the word language I don't understand.


> Berduck is already a more interesting companion than 90% of humans online

Sounds like the article is predicated on having completely missed the point of talking to another human being?

Perhaps part of the problem is that we are so bad at teaching communication skills that we find it difficult to connect to other people?


I had the same thoughts. When you think a chatbot is the best interaction you've had in a week, you might want to look into that.


Maybe my prompt-fu just sucks, but I have yet to get something interesting (ie both novel and good) from https://anonchatgpt.com . What is an example of a prompt that produces an interesting response?


(by ChatGPT)

Here are a few example prompts that could lead to interesting responses:

"Write a short story about a detective who solves crimes by talking to plants."

"Imagine a world where time flows backward. Describe a day in the life of someone living in that world."

"Compose a poem about the beauty of the night sky as seen from the perspective of an astronaut on the moon."

"Create a fictional conversation between a philosopher from ancient Greece and a modern-day scientist about the nature of reality."

"Design a magical creature that has never been seen before, and describe its unique abilities and habitat."

Remember that "interesting" is subjective, and what one person finds interesting, another may not.


Thanks. I guess I have no shortage of purely creative material it's possible to consume, but do have a distinct shortage of creative material that has enough structure corresponding to our world that it's possible to map onto other topics, or other readings into it.

From that point of view, the two examples that seemed the most promising were 4 (fictional conversation) and 2 (time flows backward), but the results I got were anything but novel: the fictional conversation hit the usual tropes, with nothing particularly related to ancient Greek philosophy or to modern science specifically, and for some reason the backwards-time world still had meals ordered in breakfast/lunch/dinner sequence. Nice for essays from an elementary schooler (or an AI?), but nothing one would wish to spend attention on reading.

On the other hand, at least the backwards time response reminded me of https://www.youtube.com/watch?v=i6rVHr6OwjI , which did meet my criteria for interesting when I first ran into it, so there's that...

(note that "Entropic Time" has multiple possible tangential successors, based on the various details in its interpretation. that's a sense in which it contrasts to the —to my perception— sterility of the AI responses)

Edit: I guess what I'm saying is that I found "Entropic Time" to be a strong move in a Hessian Glass Bead Game, but I guess at this point instead of quibbling about the weakness of the AI moves, I should be impressed that it plays at all.

cf https://i.pinimg.com/originals/67/bc/35/67bc35746db0758b5ab4...


If you and I don’t need to share a language to discuss Shakespeare (remember, we already don’t read Shakespeare’s plays in the original Elizabethan), do we need to share a language at all?

What does he mean by this? There's no real difference between the language in the First Folio and what's published today. A few spelling tweaks here and there, pretty much.


I was also confused by this parenthesis. My best guess is that he refers to the pronunciation (which was markedly different before the great vowel shift), but I don’t think that’s a valid or particularly interesting point to make.


The only reading I can come up with, and it's really stretching generosity here, is that the works were written smack in the middle of the Great Vowel Shift. Some of the rhymes are lost/opaque for modern speakers. But that doesn't really change the core inanity of the parenthetical.


I suspect they are referring to modern English translations of Shakespeare's plays like the ones on this website:

https://www.litcharts.com/shakescleare/shakespeare-translati...

Or perhaps exactly those. Maybe the article's author means that's how Shakespeare is read by everyone today?


I think the author is not taking language far enough. The animals which cannot speak do have a language; their language is just body language — one of actions. This is why, when learning how to train dogs or cats, a human must learn how to control themselves and their actions.

Yes, there is no reason why AI must “speak” in English or any other human language, but any language it speaks will be lossy in the same way. Fundamentally, all communication is lossy, including our computer programs. This is why things like undefined behavior exist, and also why people misunderstand each other.


I can't tell whether this is an exercise in sci-fi-ish speculation, à la Jules Verne, in which case it's kinda interesting, or an attempt at seriously exploring a possible near future, in which case I hate it.

There's a weak and strong version of what I feel is the core idea, "what if human communication becomes largely intermediated by machines".

The weak version is already here, as the article points out. We (at least we, chronically online people) already talk in memes and gifs quite a lot. And how AI will shape it is worth thinking about. I saw that happening in real time on TikTok, with people using alternatives for words they feel get their content deprioritized, like suicide, murder, sex, etc. A literal AI model changing language.

The strong version, though, is purely star-trek/Asimov level of speculation. Let alone the same neighborhood, current technology is not even in the same country as that.

The article kinda mixes both things back and forth for a bit, and then fully dives into the sci-fi version, which makes me wonder if the author genuinely believes they will see anything like this. If they do, boy will they be disappointed.

Now, assuming it's all fun sci-fi speculation of a future we'll never see, but hey, might happen, I have some problems with the ideas, but I'll highlight two.

> Plenty of other species manage fine with simpler languages or no language at all.

No they don't. The only species doing well right now are the ones we breed, or our parasites. We can potentially erase almost all non-microscopic life forms by accident, let alone if we really put our minds to it. Humans have effects on a geological scale; that's a very select group of species.

> And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

Kinda see the above comment? I don't think we even have a concept of intelligence that isn't somehow based on language, even if that language is not necessarily some form of spoken language. We certainly can't evaluate intelligence without it. And as far as I know, humans don't develop language without socialization.

Sure, you could envision a future where the machines do the socialization, but I think that would be a civilization too fragile to exist for very long.


Interesting take. I think it also matters how broadly or narrowly you define language or communication. Personally, and in a philosophical way, spoken and written language is the main differentiation between us and animals. We can quickly and specifically mediate intent and meaning between us. I also think we are hilariously bad at it in a modern context; the complexity of current society makes communication (and by extension community!) unnecessarily (?) noisy.

Another interesting point is the idea of Logos, Word, Speech as the divine attribute par excellence in some religious and mythological contexts.

That said, I am not sure what the author wants to optimize. I mean, if AI can mediate meaning and intent, that sounds interesting. If it is for AIs to have their own chats, I doubt that is useful from a human perspective.


I think we're only bad at language in the sense that we created harder problems than the language we have can solve, so we feel its limitations. But the kind of stuff our language can do is pretty amazing, on a larger scale.


I agree re: sci-fi or serious. I could see "intentionally intermediate most communication through AI" being a niche subculture some years from now, but I wouldn't actually expect it to be a panacea to communication.


I was sincerely admiring vgr until he went off the deep end into crypto and now AI.

The basic meta mistake in many similar arguments that I since heard from others in the space is exemplified in this:

> That’s… not how technology works.

It's... not about technology; tech is not the end goal, the human is. Tech serves humans, and if it hurts humans, guess who should take precedence.


If it hurts some humans while helping others immensely, the question becomes murkier.

Some questions are easy to dispense with, though: should we hold back the progress of ML because of the job-security concerns of some Hollywood writers? Answer: "No."


It helps some humans (those using AI and selling AI services) and hurts some (everyone else). For added irony, the second group is mostly the one that produced the content on which AI is trained.


It helps some humans (those using AI and selling AI services) and hurts some (everyone else).

Gonna throw up a nice big [Citation Needed] flag on that one.

A rising tide lifts all boats in the end. All of them. No exceptions.


Citation needed on whether it is that kind of rising tide. I think ML use should be regulated until the economy has adapted, with UBI or something. Eventual benefit does not justify unnecessary suffering now. Maybe in countries with stronger employment protections fewer people will be affected, but the topic is about US workers.


I think ML use should be regulated until the economy has adapted, with UBI or something

You're putting the cart before the horse. ML is what will enable UBI. In the best of all possible worlds, we will look back on our present economy a hundred years from now and shudder, just as we currently do when the conversation turns to company towns and antebellum plantations.

That won't happen unless something changes, and... well, here we are, watching it change.

If zero-sum gamers in government don't "regulate" ML to death first, that is.


The joke photo is much closer to reality than many might believe, according to OpenAI at least. OpenAI employees have claimed they can search their whole database for strings the system generated when expanding bullet points, and find a "mirrored pair" of another user putting in the long-form text and receiving bullet points, in a very large percentage of samples. So the real joke here is that 1) OpenAI clearly spends an enormous amount of time just manually looking through all the text you give them, and 2) we're pushing tons of CO2 into the atmosphere in order to expand and contract text over and over with tons of compute cycles, essentially wasted on something nobody wants.


I don’t understand your comment. What is happening? So it copies answers?


I think the suggestion is that it's acting as a wildly inefficient broker between people who want their summaries turned into long form text and people who want their long form text turned into summaries!

It's an entertaining thought.


I was struck by how invested the author was in asserting that the “joke fails”. I thought that cartoon was the highlight of the piece.


In a strong sense, syntax and semantics are adjoint.

In a weaker sense, long emails and single bullet points may have adjunctions between them. (every single bullet point may be expanded to any of a set of long emails, but one suspects that many long emails are incapable of being summarised by single bullet points)

The cartoon is interesting in ways that https://news.ycombinator.com/item?id=35840240 fails to be.


Meanwhile, I still feel like I'm taking crazy pills. The prose these systems can generate is low-quality and incoherent. They are able to regurgitate textbook responses to simple questions but cannot display any sort of original reasoning.

Fluent content-free drivel is now essentially free, whatever that's worth.


Something in my gut tells me it’s not going to play out the way this guy thinks it will.


This applies to so many avenues these days…


I like to think about how we are now faced with changes not seen in 10,000 (ten thousand) years!

This is a play on a Chinese media quip attributed to Xi Jinping that says "changes not seen in the world in 100 years": https://www.strategictranslation.org/glossary/great-changes-...

We have just found (in the 20th century) something that I think is as BIG as writing itself.

Writing taken as the underlying technology behind (or under?) the concept of LAW itself (hence, a foundation for the notion of "rule of, by, from, LAW", and everything and anything founded on these notions has become a candidate for revision and re-evaluation).

All because we invented a way to write that can change itself; this is usually better known as computing. I write some words that say how some other words must change.

It really is all substitutions of arbitrary symbols (or sequences of symbols) for other symbols (or sequences thereof) all the way to the bottom. At the bottom there's a first number and a huge debate (and confusion) around whether this first number is one or zero (LOL).
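
A toy illustration of that substitution idea (my own minimal sketch, nothing from the article): a string-rewriting system where "computing" is just replacing sequences of symbols with other sequences until no rule applies:

    # Toy rewriting system: each rule substitutes one sequence of symbols
    # for another; "computation" is applying substitutions until none fit.
    RULES = [
        ("0+0", "0"),
        ("0+1", "1"),
        ("1+0", "1"),
    ]

    def rewrite(s: str) -> str:
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES:
                if lhs in s:
                    s = s.replace(lhs, rhs, 1)
                    changed = True
        return s

    print(rewrite("1+0+0"))  # -> "1"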


This was a fantastic read. I can see us getting here in a few decades of constant exposure to LLMs.

Couple this with some form of advanced VR, and human-to-human collaboration will make everyone look and sound either: a) the way they would LIKE to be presented to the world, or b) an overridden version of how the OBSERVER would prefer them to look and sound.

Either way, it's terrifyingly isolationist.


That’s what they call a “puff piece”

I found myself persuaded into the opposite point of view with the passage of each intellectually bankrupt paragraph.

Congratulations, I’m thoroughly dissuaded of any such conceits by the end of reading this, I want my 5 minutes back.

Tell me a chatbot wrote this without telling me a chatbot wrote this


Would you like to elaborate on what makes you think this is a puff piece? Or what you disagree on?


https://twitter.com/smdiehl/status/1621203955509334016

"There's a new kind of dystopia that's seeming increasingly plausible. A post-literate society where the written word has been debased by LLMs and where those who can focus and consume long form prose will increasingly be unable to effectively communicate with those who can't."


I half agree with the author, though I see LLMs more as harbingers of an upcoming language apocalypse than of a post-language age: the limits of their world are the limits of our language, and the faith and hype we are starting to put in them, combined with how deeply we depend on language, makes them a perfect tool to manipulate us at the deepest level. LLMs are just tools; if we do not see past language ourselves (and judging by what I've read and discussed in recent months, we generally do not), the post-language era will never happen.

Even the fact that the author held the conversations they had with an LLM higher than human conversation proves exactly this point.


If the core remains a human individual, the prediction will not come to fruition. As humans, we are only capable of comprehending a sequence of sounds and writing in a linear manner. This is the essence of our evolution, both a blessing and a curse. Even when we achieve AGI, we will still rely on communicating through singular, linear forms of sound and text, as this is the path we have traversed throughout our evolutionary history.


sometimes people can know just enough to make them stupider than they’d be if they knew much less


TikTok could be seen as that life. People communicate in dance.

One step further: how are humans going to exist when AI handles everything? Why communicate at all if days can be passed in a drug-induced stupor of bliss?


My favorite read in, I think, years. Great seed (prompt?) for a novel, lol. Better communication. Improved cognition. No need for paper clips!


Can humans emulate chatgpt?


I’m seeing plenty of that already.



