Of course, even NATO is doing Artificial Intelligence
Trouble is, with whom?
In case you missed it, NATO has an official Artificial Intelligence (AI) Strategy. With that document, NATO acknowledges that AI will likely affect its core mission in many ways, and therefore commits to collaboration and cooperation among Allies on any matters relating to AI for transatlantic defence and security.
Among other things, the Strategy aims to:
- encourage the development and use of AI in a responsible manner, according to principles defined in the strategy
- accelerate and mainstream AI adoption by enhancing interoperability within the Alliance
To accomplish this, NATO and Allies will also conduct regular high-level dialogues, engaging technology companies at a strategic political level both to stay informed about and to help shape the development of AI-enabled technologies, creating a common understanding of the opportunities and risks arising from AI. The Strategy explicitly says that the “private sector” that will be involved “includes Big Tech, start-ups, entrepreneurs and SMEs as well as risk capital (such as venture and private equity funds)”.
The devil is those tiny venture, interoperable details
A first crucial point in this whole NATO AI thing is “interoperability”.
Implementing anything involving AI, and testing that you got it right, surely requires co-processing an awful lot of data. In this case, quite sensitive data, since we are talking about NATO, not some lolcat image recognition tool. It will take a lot of trust to exchange sufficient quantities of sufficiently relevant, that is sufficiently sensitive, data to make this kind of stuff work and be really interoperable.
The second “sensitive” detail of the whole picture is the “private sector”. That Strategy aims to work by:
- sharing data with companies that already have way, way too much sensitive data in their hands, and that make money precisely by not being responsible, and by not being interoperable with third parties
- involving “risk capital”, that is gamblers whose sole reason to exist is to be the opposite of responsible, in order to make money as quickly as possible, with criteria for what counts as “acceptable risk” that are a tiny bit different, I’m told, from what is considered acceptable in military and geopolitical circles
Now, today’s venture capital and Big Tech are so messed up that a bit more contact with people trained to kill without being killed themselves first may make them a bit saner. Eventually. But don’t bet the farm on this, and let’s keep a very close watch on all sides and levels of these “responsible partnerships for interoperability”.