The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission

Hardly a day goes by without a new proposal on how to regulate AI: research bodies, safety agencies, a body modelled on the International Atomic Energy Agency branded an “IAEA for AI” … the list keeps growing. All these suggestions reflect an urgent desire to do something, even if there is no consensus on what that “something” should be. There is certainly a lot at stake, from employment and discrimination to national security and democracy. But can political leaders actually develop the necessary policies when they know so little about AI?
This is not a cheap stab at the knowledge gaps of those in government. Even technologists have serious questions about the behaviour of large language models (LLMs). Earlier this year, Sam Bowman, a professor at NYU, published “Eight Things to Know about Large Language Models”, an eye-popping article revealing that these models often behave in unpredictable ways and that experts lack reliable techniques for steering them.
Such questions should give us serious pause. But instead of prioritising transparency, AI companies are shielding data and algorithmic settings as proprietary information protected by trade secrets. Proprietary AI is notoriously unintelligible, and growing ever more secretive, even as the power of these companies expands.