Tech giants Microsoft and Google are racing to control generative artificial intelligence (AI), the technology behind their new chatbots, despite concerns from their own employees. In February 2023, Microsoft launched an AI chatbot on its Bing search engine, and Google followed suit with Bard about six weeks later. However, Microsoft and Google employees have raised concerns that generative AI can spread disinformation and degrade critical thinking, eroding the factual foundation of modern society.
The competition to build products with the new AI technology intensified after OpenAI, a start-up working with Microsoft, released ChatGPT in November 2022. The chatbot has captured the public imagination, with an estimated 100 million monthly users, leading Microsoft and Google to take greater risks with their ethical guidelines.
In an internal email, Sam Schillace, a technology executive at Microsoft, urged employees to prioritize speed over concerns about societal problems when building new AI technology. He argued that the first company to introduce a product is the long-term winner, even if the technology has ethical issues that can be fixed later.
However, last week over 1,000 researchers and industry leaders, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause in the development of the most powerful AI systems, saying the technology poses "profound risks to society and humanity."
Regulators have also threatened to intervene. The European Union has proposed laws to regulate the technology, while Italy temporarily banned ChatGPT last week. In the United States, President Biden expressed concern over the safety of AI, saying that tech firms have a responsibility to ensure their products are safe before making them available to the public.
Despite these concerns, Microsoft and Google say they have limited the scope of their chatbots' initial releases and built sophisticated filtering systems to screen out hate speech and obviously harmful content. Natasha Crampton, Microsoft's chief responsible AI officer, stated that Microsoft's commitment to responsible AI remains steadfast. Google released Bard only after years of internal debate over whether generative AI's benefits outweighed its risks; in 2020, some experts at the company considered its similar chatbot Meena too risky to release.
Researchers warn that tech giants are taking risks by releasing technology that they don't entirely understand. However, Microsoft and Google claim to have worked on artificial intelligence ethics for years and believe that their commitment to responsible technology will minimize any potential harm.
Google and Microsoft are both facing questions about the responsible development of AI. Google has come under fire for censorship of research: El Mahdi El Mhamdi, a part-time employee and university professor, claims that the company refused to allow him to publish a paper warning that larger AI models pose significant cybersecurity and privacy risks. Dr. El Mhamdi subsequently released the paper through École Polytechnique and resigned from Google, stating that the risks of modern AI outweigh the benefits.
Google’s top lawyer, Kent Walker, has met with research and safety executives to discuss the release of Google’s AI and urged the fast-tracking of AI projects, though some executives have said they will maintain safety standards. Jen Gennai, the director of Google’s Responsible Innovation group, has also been criticized for downplaying risks associated with the Bard chatbot. Two reviewers on Gennai’s team recommended blocking Bard’s release, but Gennai edited the document to remove the recommendation and soften the risks. The company has since released Bard as a limited experiment, with continued training, guardrails, and disclaimers intended to make the chatbot safer.
Meanwhile, Microsoft has faced criticism over the development of its chatbot, which was linked to Bing search results. The company’s Office of Responsible AI developed policies to ensure ethical practices, but five current and former employees said the guidelines were not consistently enforced or followed. Experts working on the chatbot were not given answers about the data OpenAI used to develop its systems, and some argued that integrating a chatbot into a search engine was a particularly bad idea, given the risk of presenting false information as fact.
Despite concerns about the responsible development of AI, both Google and Microsoft are pushing ahead with plans to integrate generative AI into their products. Google plans to integrate generative AI into its search engine, while Microsoft has invested $1 billion in OpenAI and has urged every product team to adopt AI.
In conclusion, both companies must be mindful of the risks associated with AI and ensure that their policies and guidelines are consistently enforced, so that the technology is developed and deployed responsibly.
Earlier, we wrote about Microsoft’s AI image generator, which is now available right in Edge’s sidebar, so you can read that story to stay informed about recent digital news.
You can also take a closer look at Meta’s new AI model that can isolate and mask objects within images, and try its demo version.
Finally, we suggest reading about the Bard AI chatbot’s upcoming upgrades and its new capabilities.