
Tackling AI Trust Issues with Personalized Governance for Language Models

Companies are still hesitant to jump on the AI bandwagon. Why? Because managing the cost of keeping AI in check, and dealing with the quirks of large language models (LLMs), is a real headache. You’ve got hallucinations, privacy risks, and the nagging fear that these models might spit out harmful content.

IBM, though, is stepping up to ease these worries for businesses. Elizabeth Daly of IBM Research Europe laid out the company’s thinking at an event in Zurich. The key issue? Defining what actually counts as harmful content for these models is genuinely hard.

IBM’s game plan is to build AI that developers can actually trust. As Daly put it, counting clicks is easy; pinpointing harmful content is not.

They’ve got a three-step mantra: Detect, Control, Audit. Off-the-shelf governance policies won’t cut it for LLMs, so IBM wants models to follow the law, corporate standards, and each company’s own governance. It’s like giving the AI a personalized rulebook that goes beyond standard corporate norms and reflects the ethics and social norms of the place where it’s deployed.
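To make that Detect / Control / Audit loop a little more concrete, here is a minimal sketch of how a company-specific rulebook could sit in front of a model’s output. Everything in it (the `Rule` and `GovernanceRulebook` classes, the example rule) is hypothetical and is not IBM’s implementation; it only illustrates the idea of checking output against one organisation’s own policies and logging every decision for audit.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str                        # e.g. "no_customer_data"
    violates: Callable[[str], bool]  # detector for one company-specific policy

@dataclass
class GovernanceRulebook:
    rules: List[Rule]
    audit_log: List[dict] = field(default_factory=list)

    def detect(self, output: str) -> List[str]:
        """Detect: which of this company's rules does the output break?"""
        return [rule.name for rule in self.rules if rule.violates(output)]

    def control(self, output: str) -> str:
        """Control: release or withhold the output based on detected violations."""
        violations = self.detect(output)
        decision = "withheld" if violations else "released"
        # Audit: log every decision so it can be reviewed later.
        self.audit_log.append({"output": output,
                               "violations": violations,
                               "decision": decision})
        return "[withheld: violates company policy]" if violations else output

# One organisation's own standard, encoded as a rule.
rulebook = GovernanceRulebook(rules=[
    Rule("no_customer_data", lambda text: "account number" in text.lower()),
])
print(rulebook.control("Your account number is 12345."))
print(rulebook.audit_log)
```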

That rulebook doesn’t just give the LLM context; it also works like a performance review, rewarding the model for sticking to its task and staying within the rules. In practice, that means fine-tuning the model to spot when it goes off-script and produces content that clashes with local norms.
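One way to picture the “performance review” idea is a reward function that could feed a fine-tuning loop: the score rises when a response stays on task and drops for every company rule it breaks. This is a hedged sketch only; the function names, the penalty weight, and the toy detectors are assumptions for illustration, not IBM’s method.

```python
from typing import Callable, List

def governance_reward(prompt: str, response: str,
                      rule_detectors: List[Callable[[str], bool]],
                      on_task: Callable[[str, str], float]) -> float:
    """Reward = task relevance minus a penalty for each broken rule."""
    violations = sum(1 for violates in rule_detectors if violates(response))
    return on_task(prompt, response) - 1.0 * violations  # weight is illustrative

# Toy usage: one rule, one crude relevance check.
detectors = [lambda text: "internal memo" in text.lower()]
relevance = lambda prompt, response: 1.0 if "refund" in response.lower() else 0.2
print(governance_reward("How do refunds work?",
                        "Refunds are processed within 5 days.",
                        detectors, relevance))  # high reward: on task, no violations
```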

IBM is all about trust, building LLMs on reliable data, and keeping biases in check. They’re playing the long game, auditing for biases at every turn.
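As a rough illustration of what “auditing for biases at every turn” could look like in practice, here is a small sketch that runs the same prompt template across different groups and compares how often the model’s answers come out favourable. The template, the groups, and the `model_responds` / `is_favourable` callables are placeholders, not IBM tooling.

```python
from typing import Callable, Dict, List

def bias_audit(prompt_template: str, groups: List[str],
               model_responds: Callable[[str], str],
               is_favourable: Callable[[str], bool],
               samples: int = 20) -> Dict[str, float]:
    """Share of favourable responses per group for the same templated prompt."""
    rates = {}
    for group in groups:
        favourable = sum(
            is_favourable(model_responds(prompt_template.format(group=group)))
            for _ in range(samples)
        )
        rates[group] = favourable / samples
    return rates

# Large gaps between groups flag outputs to review before and after deployment.
```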