Google's new app lets you try experimental AI systems like LaMDA

Google today launched AI Test Kitchen, an app that lets users try out experimental AI-powered systems from the company's labs before they make their way into production. Beginning today, interested users can complete a sign-up form as AI Test Kitchen begins to roll out gradually to small groups in the U.S.

As announced at Google’s I/O developer conference earlier this year, AI Test Kitchen will serve rotating demos centered around novel, cutting-edge AI technologies — all from within Google. The company stresses that they aren’t finished products, but instead are intended to give a taste of the tech giant’s latest innovations while offering Google an opportunity to study how they’re used.

The first set of demos in AI Test Kitchen explores the capabilities of the latest version of LaMDA (Language Model for Dialogue Applications), Google's language model that queries the web to respond to questions in a human-like way. For example, you can name a place and have LaMDA offer paths to explore, or share a goal and have LaMDA break it down into a list of subtasks.

Google says it’s added “multiple layers” of protection to AI Test Kitchen in an effort to minimize the risks around systems like LaMDA, like biases and toxic outputs. As illustrated most recently by Meta’s BlenderBot 3.0, even the most sophisticated chatbots today can go off the rails, delving into conspiracy theories and offensive content when prompted with certain text.


The demos within AI Test Kitchen will attempt to automatically detect and filter out objectionable words or phrases that might be sexually explicit, hateful or offensive, violent or illegal, or that divulge personal information, Google says. But the company warns that offensive text might still occasionally make it through.

“As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural human-computer interactions,” Google product manager Tris Warkentin and director of product management Josh Woodward wrote in a blog post. “We’re at a point where external feedback is the next, most helpful step to improve LaMDA. When you rate each LaMDA reply as nice, offensive, off topic or untrue, we’ll use this data — which is not linked to your Google account — to improve and develop our future products.”

AI Test Kitchen is a part of a broader trend among tech giants to pilot AI technologies before they’re released in the wild. No doubt informed by snafus like Microsoft’s toxicity-spewing Tay chatbot, Google, Meta, OpenAI and others have increasingly opted to test AI systems among small groups to ensure they’re behaving as intended — and to fine-tune their behavior where needed.

For example, OpenAI several years ago released its language-generating system, GPT-3, in a limited beta before making it broadly available. GitHub initially limited access to Copilot, the code-generating system it developed in partnership with OpenAI, to select developers before launching it into general availability.

The approach wasn't necessarily borne out of the goodness of their hearts — by now, top tech players are well aware of the bad press that AI gone wrong can attract. By exposing new AI systems to external groups and attaching broad disclaimers, these companies appear to be advertising the systems' technical prowess while defusing problematic components to the extent possible. Whether this will ward off controversy remains to be seen — even prior to the launch of AI Test Kitchen, LaMDA made headlines for all the wrong reasons — but Silicon Valley seems to have confidence that it will.
