
Tackling AI Trust Issues with Personalized Governance for Language Models

IBM’s game plan is to build AI that developers can actually trust. According to Daly, counting clicks is easy; pinpointing harmful content is much harder.

They’ve got a three-step mantra: Detect, Control, Audit. Off-the-shelf governance policies won’t cut it for LLMs. IBM’s aim is to make LLMs comply with the law, with corporate standards, and with each company’s own governance rules. It’s like giving the AI a personalized rulebook, one that goes beyond generic corporate norms and reflects the ethics and social norms of the environment where it’s deployed.
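
To make the Detect–Control–Audit idea concrete, here is a minimal Python sketch of what a per-company governance wrapper around an LLM could look like. This is an illustrative assumption, not IBM’s actual implementation: the class names, policy fields, and the simple keyword-based harm check are all hypothetical.

```python
# Illustrative sketch only: names, policy fields, and the harm check are
# assumptions, not IBM's actual product or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List


@dataclass
class GovernancePolicy:
    """Per-company rulebook: legal, corporate, and local ethical norms."""
    company: str
    banned_terms: List[str] = field(default_factory=list)      # corporate standards
    regulated_topics: List[str] = field(default_factory=list)  # legal constraints


@dataclass
class AuditRecord:
    """One entry in the reviewable audit trail."""
    timestamp: str
    prompt: str
    violations: List[str]
    action: str


def detect(text: str, policy: GovernancePolicy) -> List[str]:
    """Detect: flag content that violates this company's policy."""
    lowered = text.lower()
    return [t for t in policy.banned_terms + policy.regulated_topics if t in lowered]


def control(text: str, violations: List[str]) -> str:
    """Control: block (or redact) output that failed detection."""
    return "[response withheld: policy violation]" if violations else text


def audited_generate(prompt: str,
                     llm: Callable[[str], str],
                     policy: GovernancePolicy,
                     log: List[AuditRecord]) -> str:
    """Run one generation through the Detect -> Control -> Audit loop."""
    raw = llm(prompt)
    violations = detect(raw, policy)
    final = control(raw, violations)
    log.append(AuditRecord(                      # Audit: keep a reviewable trail
        timestamp=datetime.now(timezone.utc).isoformat(),
        prompt=prompt,
        violations=violations,
        action="blocked" if violations else "allowed",
    ))
    return final


# Hypothetical usage: each company plugs in its own policy object.
policy = GovernancePolicy(company="AcmeCo", banned_terms=["internal codename"])
audit_log: List[AuditRecord] = []
reply = audited_generate("Summarize our roadmap",
                         lambda p: "Here is a summary...",  # stand-in for a real LLM call
                         policy, audit_log)
```

The design point the sketch is meant to show: the model itself stays generic, while the policy object carries each company’s specific rules, and every decision lands in an audit log that compliance teams can review later.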