Wednesday, February 8, 2023

Google’s Bard Chatbot Will Take on ChatGPT, but Can We Trust Either One?

(Credit: Emily Dreibelbis/Woraphon Nusen/EyeEm/Getty Images/Google/OpenAI)
ChatGPT has been making headlines ever since it was opened up to the public late last year. This advanced conversational AI has been cited as a potential shift in how we access and consume information, leading Microsoft to invest billions in OpenAI, the creator of ChatGPT and other AI tools. Google is reportedly worried about the implications, and now it’s doing something about it. The company has announced Bard, its own conversational AI based on the LaMDA language engine. But are we ready for this brave new world of AI search results?

Google has been under pressure to develop a competitor to ChatGPT, which Microsoft plans to integrate with its Bing search engine. The idea is that an AI could give you an answer to your query rather than leaving you to dig around in the search results. Analysts have widely cited this as a threat to Google’s search dominance, but Google has a tool to fight back in the form of LaMDA. The company unveiled the model in 2021 but hasn’t made it available to the public. That will change with Bard.

To start, Google will make Bard available to a small group of “trusted testers,” after which it will open up to the public in the coming weeks. Internally, Google is telling staff it needs all hands on deck to test and improve Bard. Google says its latest AI advancements will be integrated with web search first, and Google can’t afford to screw that up.

This new generation of AI chatbots can do more than we would have thought possible just a few years ago, but they’re still far from perfect. They can accidentally plagiarize sources, and even more troubling, they don’t know what is true. OpenAI was apprehensive about releasing the GPT language model underlying ChatGPT a few years ago because of its ability to create convincing lies. AI doesn’t have an intuitive sense of what’s believable or true — at least not yet. Still, Google believes this technology is so powerful that it’s building it into its flagship product. So, to some degree, competing matters more than the truth.

Can you spot the mistake?

Google says that it’s aware of the tendency of AI to get caught up in lies. “We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information,” says Google CEO Sundar Pichai.

That might be a tall order, though. Even in Google’s own Bard demo, there are fundamental errors. In the image above, Bard claims that the Webb telescope took “the very first pictures of a planet outside our own solar system,” but that’s not true. Webb did directly image an exoplanet last year, but it was just the first time for Webb — the first direct imaging of exoplanets happened about 20 years ago. This is the kind of subtle misinformation that can crop up when an AI misunderstands the information it feeds on.

This isn’t the first time Google has gone all-in with an AI product — it launched Assistant in 2016 with the expectation that smart voice assistants would be the next big thing. However, both Google and Amazon have found it hard to make money with these products, and Amazon has gutted its Alexa group as a result. We’re about to find out if chatbots can do what voice control couldn’t. Hopefully, that doesn’t include lying or exterminating humanity.

Source: ExtremeTech https://ift.tt/NE78qrj
