Google expands Responsible GenAI Toolkit



Google has expanded its Responsible Generative AI Toolkit for building and evaluating open generative AI models, adding watermarking for AI-generated content along with prompt-refinement and debugging features. The new capabilities are designed to work with any large language model (LLM), Google said.

Announced October 23, the new capabilities support Google’s Gemma and Gemini models as well as other LLMs. Among the additions is SynthID watermarking for text, which lets AI application developers watermark and detect text produced by their generative AI product. SynthID Text embeds digital watermarks directly into AI-generated text. It is accessible through Hugging Face and the Responsible Generative AI Toolkit.

Also featured is a Model Alignment library that helps developers refine prompts with support from LLMs. Developers provide feedback on how they would like their model’s outputs to change, either “as a holistic critique or a set of guidelines.” They can then use Gemini or a preferred LLM to transform that feedback into a prompt that aligns model behavior with the application’s needs and content policies. The Model Alignment library can be installed from PyPI.
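The workflow described above — feedback in, revised prompt out — can be sketched in a few lines. This is a hypothetical illustration, not the Model Alignment library’s actual API: the `align_prompt` helper, the meta-prompt wording, and the `fake_llm` stub are all assumptions; in practice the `llm` callable would wrap Gemini or another model.

```python
from typing import Callable

def align_prompt(base_prompt: str, feedback: str,
                 llm: Callable[[str], str]) -> str:
    """Ask an LLM to fold developer feedback into a revised prompt.

    `llm` is any text-in/text-out callable (e.g. a wrapper around a
    Gemini API call); stubbed below so the sketch runs standalone.
    """
    meta = (
        "Rewrite the prompt below so the model's outputs satisfy the feedback.\n"
        f"Prompt: {base_prompt}\n"
        f"Feedback: {feedback}\n"
        "Return only the revised prompt."
    )
    return llm(meta)

def fake_llm(meta_prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return "You are a support bot. Keep answers under three sentences."

revised = align_prompt(
    base_prompt="You are a support bot.",
    feedback="Answers are too long.",
    llm=fake_llm,
)
```

Because the feedback is folded into the prompt rather than the model weights, the loop can be rerun whenever content policies change.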
