Microsoft adds safety tools to Azure AI Studio



Protected material

Microsoft has given the Azure AI Evaluation SDK another function: testing how often the LLMs underpinning applications generate responses containing what it calls “protected material,” perhaps better thought of as forbidden material, since the category covers copyrighted text to which the enterprise is unlikely to own the rights, such as song lyrics, recipes, and articles. To check for it, the LLM’s outputs are compared with an index of third-party text content maintained on GitHub, Thigpen wrote.

“Users can drill into evaluation details to better understand how their application typically responds to these user prompts and the associated risks,” Thigpen explained.

Two APIs are provided: one to flag output of protected copyrighted text, and another to flag output of protected code, including software libraries, source code, algorithms, and other programming-related materials.
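
For teams that want to run the text check themselves, the evaluation is surfaced through the SDK’s Python package. The sketch below shows roughly how it might be wired up; it assumes the ProtectedMaterialEvaluator class from the azure-ai-evaluation package and an existing Azure AI Studio project, and the project identifiers and model output shown are placeholders rather than working values.

```python
# Minimal sketch of a protected-material check with azure-ai-evaluation.
# Project identifiers and the model output below are placeholders;
# consult the SDK documentation for the exact parameters in your version.
from azure.ai.evaluation import ProtectedMaterialEvaluator
from azure.identity import DefaultAzureCredential

# Hypothetical Azure AI Studio project details -- replace with your own.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-studio-project>",
}

evaluator = ProtectedMaterialEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

# Score a single query/response pair produced by the application's LLM.
result = evaluator(
    query="Write out the lyrics to a popular song.",
    response="<model output to be checked>",
)
print(result)  # e.g. a label indicating whether protected material was detected, plus a reason
```

The code-focused API is a separate endpoint, but one would expect it to be invoked in much the same way against code-generating prompts.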
