The Department for Science, Innovation & Technology has published details of the roundtable talks held on the first day of the artificial intelligence summit at Bletchley Park on November 1.
Experts on AI and representatives of leading AI nations and companies have been meeting at the AI summit in the UK to discuss the future of the technology and its potential benefits and risks. Eight roundtable discussions were held on the first day of the summit.
As chair of the summit, the UK government has published a summary of the points raised in each roundtable, which isn’t a policy paper but makes a useful guide to current thinking. In short, given such a rapidly developing sector, there’s a lot we don’t know, and international collaboration by all involved is essential to establish standards, mitigate risks and ensure we all share the benefits of AI.
Summaries of the eight roundtables follow.
Frontier AI systems (such as GPT-4 and its equivalents) make it slightly easier for less sophisticated ‘bad actors’ to carry out cyber-attacks or design biological and chemical weapons. Decisive, global action is urgently needed to better understand and act on these risks. Frontier AI companies, governments, and academic and civil society researchers need to work together – now.
The current abilities of frontier AI systems are far beyond what many predicted only a few years ago and new models must be tested rigorously with close attention paid to emerging risks.
Current AI systems require human prompting and are relatively easy to control but there are concerns that more sophisticated AI in the future will not need such supervision and might even evade human oversight and control.
Current, known frontier AI systems pose societal risks, threatening democracy, human rights, civil rights, fairness and equality. At the same time, AI offers opportunities to tackle global problems, such as strengthening democracy and addressing the climate crisis, but citizens need to be more involved in how AI is used.
There’s an urgent need for leading AI companies to put safety policies in place – and soon – but governments must also set standards and regulate them. Benchmarks will be needed, and reference was made to the recently announced AI Safety Institutes in both the UK and US.
Given the rapid development of AI, and the fact that the technology is effectively ‘borderless’, governments need to work together, for example by sharing knowledge and resources and agreeing standards.
Again, the theme here was international collaboration. It was felt that over the next year the priorities are to develop: a shared understanding of the capabilities (and risks) of frontier AI; a coordinated approach to safety research and evaluation; and international collaborations and partnerships to ensure benefits are shared and global inequalities are reduced.
We need better models than those available at present, and a great deal more research. AI vendors should be responsible for providing evidence that their systems are safe. The future of AI is also a ‘sociotechnical challenge’: geographic and linguistic inclusion is needed, and the public’s voice must be heard.
You can read the full roundtable chairs’ summaries for November 1.
In related news:
£37m for AI projects from farming and fashion to fire-fighting