The chaos we have seen at OpenAI over the last few days has emphasised the risk that closed source, proprietary software and data pose to both users and commercial partners – i.e. Closed AI… as opposed to Open AI.
Nobody would wish the shambles of the last few days on their worst enemy. However, it is a great example of the real risk in *not* opening up AI. This is the risk we need to balance against the risks of opening up AI – primarily concerns about bad actors – and in reality those concerns also apply, albeit arguably on a lesser scale, to Closed AI.
The availability of source code and open data means that in genuinely open AI there is no lock-in: the available information allows seamless management of transfers like those that customers and partners of OpenAI have been panicking about.
Open source experts have constantly been explaining the benefits of opening up both AI software and the data on which it is trained. Openness brings transparency that allows trust and control: open data enables a deeper understanding of how and on what the AI has been trained and how it has made its decisions, while access to the software enables third-party collaboration, better innovation and the opportunity for competition.
We must remove reliance on a few major corporations whose internal activities – including Board and Senior Executive disagreements exactly like the embarrassing OpenAI debacle – can impact customers who are locked into closed source, proprietary tech, particularly AI.
If we do not do this for AI, our governments will be repeating the tech mistakes of the past couple of decades. This time, however, they will be making those mistakes knowingly and wilfully.
It must now be time for governments to wake up to reality: yes, they haven’t taken enough time over the last decade to understand open source software and open data, which are pervasive in tech today. But there are organisations like OpenUK, and people with deep expertise (not just the AI experts they’ve been consulting, who clearly haven’t understood it), who can and will now help them to understand the nuances and reality of ‘Open Source’. This would enable them to make an informed decision about risk, which is not currently happening.
The AI summit and a couple of recent papers by non-experts have been great examples of the lack of open source expertise in the room. Risk is not something to be shunned; it is something to be understood and calculated. If we want a future with well-managed risk, we need to see proper consultation of the Open Source Communities in the UK and beyond.