Twitter thread

The real question driving AI hype

This post reproduces a Twitter thread where I examine a fundamental question underlying a lot of the fervor around AI.

One fascinating part of this emerging era of human-like AI is learning what traits humans automatically assign to our new robot assistants.

In this case, the assumption is that an AI is a) competent at assessing creative work and b) completely "objective" in this assessment. 

This tweet is no longer available. It attached a TikTok video in which a person advocates that writers should use ChatGPT to critique their writing, ostensibly because it's more “objective”. https://twitter.com/JackHarbon/status/1648051301706522625

I'm going to keep reminding people that today's human-like AI chatbots don't "think" in any real sense. They make shit up in a very convincing way. It turns out a lot of people want that and don't care about the distinction.

You’re gonna keep seeing examples of LLMs supposedly doing amazing things. But it’s important to understand that they are not “smart”. They do not actually engage with real data. They will confidently make things up simply because someone may have written something similar in the past.

I've been thinking a lot about the need that is being served here. When we look at the incredible reaction to these new chatbots, I think we have to recognize that we've activated a very deep need in people. But what is it?

In this example, it's a reaction to a dynamic that writers are very familiar with and frustrated by. It's very hard to get your work seen and published, because other humans get to decide whether to pick it up or discard it. And other people's judgment can seem arbitrary and capricious.

That brings me back to what I think is the core need driving this fascination with human-like AI. It promises an answer to a fundamental question that is causing a lot of angst in our current moment.

What if you didn't need other humans in order to be successful?

What if there were a thing that knew how to communicate with humans? It's very accessible, and you don't have to be "technical" to use it. But it's not a human. It will always be ready and willing to engage with you. It doesn't get tired or irritable. It can't reject you.

Moreover, this robot assistant always talks about what you want to talk about. It doesn't deflect or change the subject. It doesn't decide to talk about itself instead. It can't "grin fuck" you instead of giving you the answer. It can't hit on you instead of giving you the answer.

When we start to understand these non-obvious qualities of human-like AI, we start to see the need that is being met. We can see how a lot of people might decide that it matters less that the thing is actually competent or accurate. It's still giving what needs to be gave.

Chatbots like ChatGPT are being used in professional contexts, but we are going to see this new breed of AI expand into many other areas as well. I've seen discussions about using them for therapy and for learning how to talk to girls. Literally anything humans want to do.

So that brings me back to the core question that I think is defining this space right now, amended a little to encompass the true scope of what we've unleashed.

What if you didn't need to interact with other humans in order to get what you want?

The "I don't want to interact with other people" thing has been building in our society for a while now. It's a deep topic, and I won't do it justice.

I do think this next iteration is gonna be a doozy. People are really ready to give power over to these machines.

I'm gonna end this thread by being clear about what I personally think of all this.

I think it's bad. Where I see us currently headed feels like a really bad idea to me. And I hope we find the strength to change the trajectory before too much damage is done.