Google CEO Sundar Pichai has responded to criticism of the company’s experimental AI chatbot, Bard, promising that Google will improve it soon. In an interview on The New York Times’ Hard Fork podcast, Pichai said: "Pretty soon, perhaps as this [podcast] goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities; be it in reasoning, coding, it can answer maths questions better. So you will see progress over the course of next week.”
He acknowledged that Bard runs on a "lightweight and efficient version of LaMDA," an AI language model designed primarily for dialog. “In some ways I feel like we took a souped-up Civic and put it in a race with more powerful cars,” the CEO said. PaLM, by contrast, is a more recent and much larger language model that, the company asserts, is more capable at tasks such as common-sense reasoning and coding problems.
Google opened Bard to the public on March 21st, but it received far less attention and acclaim than OpenAI’s ChatGPT and Microsoft’s Bing chatbot. The Verge's own tests found Bard less capable than its competitors: its replies were less fluent and imaginative, and they often failed to draw on reliable data sources. Pichai said: "To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well."
Pichai suggested that Bard’s limited capabilities were due, at least in part, to caution within Google. He also confirmed that he has been discussing the work with Google co-founders Larry Page and Sergey Brin (“Sergey has been hanging out with our engineers for a while now”), and that while he never issued the infamous "code red" to accelerate development, there were likely people in the company who "sent emails saying there is a code red."
During the interview, Pichai also addressed concerns about the rapid development of AI and its potential risks to society. He acknowledged that there was merit to concerns about AI safety and called for more debate on the subject. Pichai said that "AI is too important an area not to regulate," but suggested it was better to apply existing regulations in areas like privacy and healthcare than to create new laws targeting AI specifically.
While some specialists worry about immediate risks, such as chatbots' tendency to spread misinformation, others warn of more existential threats: systems so difficult to control that they could be used destructively. The CEO acknowledged that AI systems are becoming increasingly capable, and that it almost doesn’t matter whether they have reached artificial general intelligence (AGI). He stated: "Can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment."
In conclusion, Bard lagging behind its competitors isn't a big deal: artificial intelligence is not yet as smart as humans, and even larger, more mature systems make mistakes and produce inaccurate results, especially at first. As we wrote earlier in Microsoft Says Talking to Bing for Too Long can Cause it to Go Off the Rails, even a more mature system can sometimes be wrong as well.