Attack of the chatbots? Our writer eyes humanity’s guardian role.
Predictive chat generates responses to human input that can seem human, with implications some tech-watchers call “as big as the internet.” It will take responsible intent, not just regulation, to temper this AI innovation.
What is intelligence?
No matter how you’ve answered that question before, you’re likely to find yourself in even more conversations that reference the newest wrinkle in its artificial form: ChatGPT technology.
It’s a predictor, by definition, not a “thinker.” It showcases the responsive power of computer processing, not of sentience. But it’s showing up everywhere as more businesses apply it – doing work for student essayists, making companion apps appear more human. One major drawback: Its use can complicate the fight against misinformation.
They’re “experimenting with all sorts of stuff,” Laurent Belsie tells the Monitor’s “Why We Wrote This” podcast. “Stuff that is sometimes ready for prime time and sometimes isn’t.” Iterations will improve it, he says. (Several have arrived since this recent interview: This week Google rolled out its ChatGPT competitor, Bard, while Microsoft added AI art features to two browsers.)
Regulations will eventually add more guardrails. In his reporting, Laurent also notes humanity’s role as a shaper. Could predictive AI help us to hack, say, climate change?
“It all comes down to what the people are doing with the [technology],” Laurent says, from the programming and testing to its applications. Then there’s intent. “There are lots of temptations out there,” says Laurent. “But I’m hopeful that people in general, in business, will attempt to do the right thing.”
Episode transcript
Clay Collins: Artificial intelligence has captured the popular imagination for decades, and has found its way into daily life through many practical applications. Alexa and Siri are almost family. Autofill features abound. Customer service chatbots give us quick, basic answers. And it’s actually getting more lifelike.
Enter ChatGPT. Those last three letters stand for “generative pre-trained transformer.” This technology takes existing knowledge and information and reshapes and reuses it in responsive, predictive ways that feel sometimes eerily like original thinking. Some people are calling ChatGPT as big a deal as the internet.
[MUSIC]
Welcome to “Why We Wrote This.” I’m Clay Collins.
Senior economics writer Laurent Belsie, who was last on this show talking about smarter ways to work, has been watching tech and AI trends for a long time. He recently wrote about the need for guardrails around AI’s use. He joins us again to talk about that and more.
Welcome, Laurent!
Laurent Belsie: Nice to be here.
Collins: First of all, I know I’m talking to you and not to AI because I can see you in the studio. But before we get into this disruptive technology and how you approach reporting on it, I want you to tell listeners about the interesting way in which you filed your guardrails story to your good-natured editors.
Belsie: Sure. Well, there’s nothing like trying the product you’re writing about. What I did was, after gathering all my information and everything, I wrote my story as I would. And then I turned on ChatGPT, and I fed it my lede, the beginning paragraphs, because I wanted to make it somewhat like me. And then I fed it all the facts and said: “Use these facts.” And then I fed it all the quotes I used in the story. I said, “Use these quotes.” You wait about five to 10 seconds, and all of a sudden it starts writing. And it’s writing faster than I can type. And out comes this thing in like half a minute. It’s taken my story and turned it into a really abbreviated thing. So then I took both versions of the story, and sent them to the editors and said: “OK, you figure out which is my story, which is the ChatGPT’s.” Now in fairness, it was pretty obvious. The technology isn’t quite up to writing really sterling copy. It took out all the best quotes. The punchiest stuff, it just paraphrased. Bad move. But there was one paragraph that, I have to admit, it improved on. It made it snappier, shorter, and clearer. So the technology does show promise. We shouldn’t be too dismissive of the advances that are coming.
Collins: You wrote this story in part because of how buzzy this topic is. Bitcoin was buzzy too, but it felt like it was easy to sidestep if you weren’t that interested. This new surge in AI feels different. It’s selling cars in the metaverse. It’s creating visual art and (sometimes) passable writing. In fact, it’s just a kind of brilliant impressionist, right? Can I ask you to describe basically how it works?
Belsie: Yeah. Imagine very powerful computers with the latest chips that go blazingly fast. And then imagine feeding that machine information that would take up about a quarter of the shelving in the Library of Congress. What we call “generative AI” processes all that text, or images, or whatever it is, and then it generates an image or a text of its own. So rather than a “thinker,” most researchers think of it as a very, very good predictor: Because it’s got so much data, and has weighed that data and taught itself, it can create text that can fool people into thinking: “Oh! This was written by a person.”
Collins: Hmm. Its power as a helper – which you’ve described a little bit off mic – is the same thing that gives it power to do harm. So if you could talk a little bit about both the promise of predictive, generative, and other emerging strains of AI, and some of the perils.
Belsie: Yeah. We can look at anything from new scientific discoveries to the ability to predict all sorts of things. Maybe even figure out climate change. Who knows? And it can also help us at work. For you and me, it might involve gathering far more facts than we could, then processing those facts and presenting them to us, so that we can write our story. That can happen in all sorts of fields. It could take away the drudgery for a professor, for example, helping him rapidly keep up with the latest research.
That’s wonderful. But of course there’s the dark side of all this. And what we would be most worried about is fraud. You know, already people are fooled by emails asking them to hand over their personal information or whatever. Well, multiply that by a factor of 10, and imagine how sophisticated fraudsters could be if they knew far more about you and could instantly compress that information into something that looks very, very convincing.
Collins: Mm-hmm. I think you described it yesterday as feeling like a smart, articulate graduate assistant. Very persuasive, but of course also sometimes very wrong.
Belsie: Very wrong. And that’s the challenge, because many of these machines are being trained on the internet, which is great, because there’s a lot of valuable information there. Unfortunately, there’s a lot of misinformation there too. And so the machine can make mistakes, and has, embarrassingly, in the early versions that we’ve seen released.
Collins: Hmm. So how does the Monitor approach inform your reporting about a topic like this, that’s steeped in issues of ethics and trust? The arguments are highly charged, but your job is to be cool and constructive.
Belsie: Uh, yes. It’s exactly to be cool and constructive, and not be carried away by the hype, which there will be lots of from marketing departments, from, you know, any tech company. And just look at the promise and the need for guardrails. The challenge is that, as in any young industry, this one is moving rapidly forward. And it’s experimenting with all sorts of stuff. Stuff that is sometimes ready for prime time and sometimes isn’t. But in true Silicon Valley fashion, you put it out there, it breaks, and then you fix it. You get all sorts of feedback, and then you come up with another version that’s better. And that’s how the technology improves. Eventually, regulation catches up, but that takes months and years, sometimes decades, given all the inventiveness that is out there.
So in that interim, you want to make sure that companies are acting ethically. As one AI CEO told me, it all boils down to intention: whether your intention is to use this technology in the best possible way – to remove or alleviate bias, to educate people, or to show them increased possibilities for the financial decisions they have to make, for example. It all comes down to what people are doing with the machine, how they’re programming that machine, and how they’re testing it for even unintended errors.
Collins: Right. Market forces are going to create all kinds of demand for generative AI, and the dangers aren’t clear enough yet for lawmakers to formulate effective regulations. So what’s the next period going to look like? Is it chaos?
Belsie: Uh, it’s partially chaos. Yeah. Because all sorts of things are going to come out. We may have good intentions, but program for the wrong thing. But I’m hopeful that people in general, in business, will attempt to do the right thing. There are lots of temptations out there. There’s a lot of money at stake, because this is, you know, “the next big thing,” and possibly even more game-changing than the internet. We’ll have to see. And it will be tempting to take shortcuts. But enough people have seen the power of this technology that they, I think, have the ability to make the best decisions they can at the time, and correct errors as quickly as they can when those pop up.
Collins: Well, thank you, Laurent, for being here in real life to talk about this breakthrough technology. And I hope you’re right about your near-term prediction. Thanks so much.
Belsie: Happy to be here.
[MUSIC]
Collins: Thanks for listening. You can find more, including our show notes, with links to the stories discussed here at csmonitor.com/WhyWeWroteThis, or wherever you listen to podcasts. This episode was hosted by me, Clay Collins, and produced by Jingnan Peng. Tim Malone and Alyssa Britton were our engineers, with original music by Noel Flat. Produced by The Christian Science Monitor. Copyright 2023.