November 21, 2024

Cautionary Measures: Google Warns Employees on Chatbot Usage as Bard Goes Global


Alphabet Inc., the parent company of Google, is cautioning its employees about how they use chatbots, including its own program, Bard, even as it markets the chatbot globally. The company has advised employees not to enter confidential materials into AI chatbots, citing its long-standing policy on safeguarding information. The move comes after researchers found that the AI technology behind these chatbots can reproduce data it absorbed during training, creating a risk of leaks. Alphabet has also warned its engineers to avoid directly using computer code generated by chatbots.

Google’s Bard, along with ChatGPT, belongs to a growing class of human-sounding chatbot programs that use generative artificial intelligence to hold conversations with users and respond to a wide range of prompts. Human reviewers may read these chats, and the potential for the underlying models to reproduce absorbed data has raised further concerns about confidentiality.

Alphabet acknowledged that Bard can make undesired code suggestions but said it still helps programmers, adding that the company aims to be transparent about the limitations of its technology. The cautious approach reflects Google’s desire to avoid any harm to its business as it competes with ChatGPT, developed by OpenAI and backed by Microsoft Corp. At stake in the race between the tech giants are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.

Alphabet’s precautions also reflect what is becoming a security standard for corporations: warning personnel against using publicly available chat programs. Samsung, Amazon.com, and Deutsche Bank are among a growing number of businesses worldwide that have set up guardrails on AI chatbots. Apple did not respond to requests for comment but has reportedly adopted similar measures.

According to a survey conducted by networking site Fishbowl, approximately 43% of professionals were already using ChatGPT or other AI tools as of January, often without informing their employers. In February, Google instructed its staff involved in Bard’s testing not to provide internal information. Now, Bard is being rolled out in over 180 countries and 40 languages, positioned as a springboard for creativity. The warnings issued by Alphabet extend to the code suggestions generated by Bard.

The technology behind AI chatbots can draft emails, documents, and even software itself, promising significant time savings. That content, however, may include misinformation, sensitive data, or even copyrighted material. Google’s privacy notice, updated on June 1, explicitly instructs users not to include confidential or sensitive information in their Bard conversations.

To address these concerns, some companies have developed software solutions. Cloudflare, which defends websites against cyberattacks and offers other cloud services, markets a capability for businesses to tag certain data and restrict it from flowing externally. Google and Microsoft likewise offer conversational tools to business customers that carry a higher price tag but keep data out of public AI models. By default, both Bard and ChatGPT save users’ conversation history, which users can opt to delete.
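As a rough illustration of how such tag-and-restrict guardrails work, the sketch below scans outbound text for patterns labeled as sensitive and redacts them before a prompt reaches an external chatbot. This is a minimal sketch in Python; the pattern set, the redact_outbound function, and the placeholder format are hypothetical examples, not the actual implementations of Cloudflare, Google, or Microsoft, which rely on far richer classification.

```python
import re

# Hypothetical tag -> pattern map. A real deployment would use much richer
# classification (document labels, fingerprinting, entity recognition).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_outbound(prompt: str) -> tuple[str, list[str]]:
    """Replace tagged spans with placeholders before the prompt leaves
    the corporate network; return the cleaned text plus the tags found."""
    tags_found = []
    for tag, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            tags_found.append(tag)
            prompt = pattern.sub(f"[REDACTED:{tag}]", prompt)
    return prompt, tags_found

if __name__ == "__main__":
    text = "Summarize this: contact jane@corp.example, key sk-abcdef1234567890XY."
    cleaned, tags = redact_outbound(text)
    print(cleaned)  # the email and key appear only as placeholders
    print(tags)     # ['email', 'api_key']
```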

Alphabet’s cautionary stance also tracks the European Parliament’s June 14, 2023 vote to adopt its negotiating position on the EU AI Act. The act takes a risk-based approach to regulating AI, imposing stricter requirements on systems that pose higher risks to human rights and fundamental freedoms, including those used for social scoring, real-time biometric identification, and significant decisions affecting individuals.

Terry Jones
Technology/Digital Assets Desk
