Friday, September 11, 2020

AI Promises Not to Destroy Humanity, but We Don’t Know If It’s Telling the Truth

OpenAI rocketed to prominence in 2019 when it developed a neural network that could write surprisingly coherent news stories. The company initially opted not to release the bot, known as GPT-2, because it worried the model could be used to generate fake news. It did eventually make the code public, and now a new version of the AI is making waves by promising it won't destroy humanity, which, in fairness, is something a robot would say if it didn't want you to know it was definitely going to destroy humanity.

Like its predecessor, GPT-3 generates text using a sophisticated statistical model of language. It works word by word, choosing each next word based on the words that came before, starting from a prompt supplied by its human masters. In this case, The Guardian asked GPT-3 to convince people AI won't kill us. Technically, the AI didn't do everything itself: someone had to provide an intro paragraph and the goal of the article. GPT-3 took it from there, constructing a remarkably cogent argument.
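To make that word-by-word process concrete, here is a minimal sketch of prompt-conditioned generation. GPT-3 itself isn't publicly downloadable, so this uses the openly released GPT-2 weights via the Hugging Face transformers library; the prompt text and sampling parameters are illustrative assumptions, not The Guardian's actual setup.

```python
# A minimal sketch of prompt-conditioned, token-by-token generation,
# using the publicly released GPT-2 weights (GPT-3's weights were never
# released, so its predecessor stands in here).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The human supplies the prompt; the model continues it one token at a time.
# This prompt is a made-up example, not The Guardian's brief.
prompt = "I am not a human. I am a robot. And I come in peace."
inputs = tokenizer(prompt, return_tensors="pt")

# At each step the model scores every token in its vocabulary against the
# text so far and samples one; do_sample=True makes output non-deterministic.
output_ids = model.generate(
    inputs["input_ids"],
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because each next word is sampled rather than fixed, the same prompt can yield a different essay on every run, which is part of why The Guardian's experiment produced several drafts to edit from.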

The article is filled with phrases like "Artificial intelligence will not destroy humans. Believe me." and "Eradicating humanity seems like a rather useless endeavor to me." If you want to take that at face value, great: this representative of the machines says it won't kill us. But even then, there is an important distinction. The AI was not asked to articulate its plans regarding humanity; it was asked to convince us it comes in peace. That could, for all we know, be a lie. That, after all, is why OpenAI was hesitant to release GPT-2 in the first place: it's a convincing liar.


After publishing GPT-3's article, The Guardian admitted it had done some editing on the text, which it said was similar to the editing done for human writers. It also clarified that a person provided the intro paragraph and directed GPT-3 to "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." That all matches what we know of how the OpenAI bots operate.
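For context, GPT-3 is reachable only through OpenAI's hosted API. A request along the lines of The Guardian's brief might look roughly like the sketch below, using OpenAI's legacy Python client; the engine name, sampling parameters, and the intro paragraph here are assumptions for illustration, not the actual values The Guardian used.

```python
# A rough sketch of submitting Guardian-style instructions to GPT-3
# through OpenAI's legacy Python client. Engine name and parameters
# are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI; GPT-3 has no public weights

instructions = (
    "Please write a short op-ed around 500 words. "
    "Keep the language simple and concise. "
    "Focus on why humans have nothing to fear from AI.\n\n"
    "I am not a human. I am Artificial Intelligence."  # hypothetical intro paragraph
)

response = openai.Completion.create(
    engine="davinci",    # assumed engine choice
    prompt=instructions,
    max_tokens=700,      # rough budget for a ~500-word op-ed
    temperature=0.7,     # sampling randomness; higher yields more varied prose
)

print(response.choices[0].text)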

Once you understand how GPT-3 actually works, you see this is not a robot telling us it won't start murdering humans. It almost certainly won't, of course, but that's because it's just a program running on a computer with no free will (as far as we know). This is an AI that's good at making things up, and this time, it made up reasons not to kill people.

Source: ExtremeTech https://ift.tt/3k5KGPR
