It sounds like science fiction, but with all the hype around AI, the danger of LLM 'kidnapping' is worth talking about.
It goes like this - companies that can afford to build their own internal LLMs pour all of their valuable IP (intellectual property, covering everything from trade-secret designs to marketing campaigns and product strategies) into the model so it can generate relevant responses. Essentially, while it's still a non-sentient AI model, it's also a repository of the most valuable information the company has.
This makes it a fantastic target for a criminal attacker, or even an unethical competitor. If it can be poisoned to generate bad responses, cloned to give the attacker more insider knowledge than they know what to do with, or even somehow locked down for ransom, then it can do a huge amount of damage to the company.
I recently came across a security device that impressed me, which doesn't happen often. It wasn't because of some brand-new quantum dark-web AI firewall technology, as so many vendors advertise these days. Instead, it was because the company behind it has taken a very old idea, thought it through, and brought it bang up to date in a way that opens up a lot of possibilities for secure network topologies.
The approach is simple, and pretty much unbeatable (obviously nothing is 100% secure, but the attack vectors against a physical disconnect are vanishingly few). Essentially, the FireBreak (as opposed to firewall, of course) unplugs the network cable on command - or plugs it back in. It's not quite like having someone on-call in the data centre to manipulate cables on demand, but that's an easy way to think about it - except this 'person' can connect or disconnect the cables in a fraction of a second, in response to triggers from whatever mechanism you choose.
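To make the idea concrete, here's a minimal sketch of what 'pulling the plug' programmatically might look like. Everything in it is an assumption: the control address, the /connect and /disconnect endpoints, and the HTTP interface are all hypothetical stand-ins, since the FireBreak's actual control interface isn't described here.

```python
"""Minimal sketch of a 'pull the plug' control script.

Purely illustrative: it assumes a hypothetical device exposing a local
HTTP API with /disconnect and /connect endpoints. The real FireBreak's
interface may look nothing like this.
"""
import urllib.request

CONTROL_URL = "http://192.0.2.10"  # hypothetical device address (TEST-NET range)


def set_link(connected: bool) -> None:
    """Ask the break device to physically close or open the network path."""
    action = "connect" if connected else "disconnect"
    req = urllib.request.Request(f"{CONTROL_URL}/{action}", method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        if resp.status != 200:
            raise RuntimeError(f"{action} failed: HTTP {resp.status}")


if __name__ == "__main__":
    # Sever the LLM segment the moment an alert fires, then restore it
    # only after an operator gives the all-clear.
    set_link(False)  # physical disconnect - the model is now unreachable
    input("Link severed. Press Enter to reconnect... ")
    set_link(True)
```

The design point worth noticing is that the script only ever talks to the break device, never to the protected segment itself - once the link is physically open, no amount of software compromise on the LLM side can reach back out.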
More in the link: https://hackernoon.com/lock-up-your-llms-pulling-the-plug
Author: James Bore