South West AI Safety Summit Fringe event – presented by Tech Ethics Bristol

Tech Ethics Bristol is proud to represent the voices of the South West region by putting on its next event as part of the AI Fringe series.

Join them on Thursday 2 November, 6:00–8:30 PM, at the PwC Office, Bristol, for an in-person evening of discussion with a panel of experts exploring the main topics covered by the AI Safety Summit, the risks and benefits AI can bring, and what we can do to ensure the responsible development of this technology.

Featuring leading local voices in AI & Data from across the region.

This will be an intro to the AI Safety Summit and a panel discussion featuring the local thought leaders above.

Moderated by Karin Rudolph, CEO of Collective Intelligence and Co-Founder of Tech Ethics Bristol.

Brought to you by ADLIB & Collective Intelligence

Sponsored by PwC

 

Book your tickets here

 

What is the AI Safety Summit?

The AI Safety Summit is an upcoming event that aims to address the transformative potential and associated risks of Artificial Intelligence (AI). It will gather experts, government representatives, academics, and industry leaders to discuss the opportunities and dangers of AI, particularly focusing on “Frontier AI” – highly capable AI models with unpredictable and potentially risky capabilities.

The summit will centre on two major risk categories: misuse risks, where AI could be exploited for harmful purposes, and loss of control risks, which concern the challenge of aligning advanced AI systems with human values. The event seeks to foster international collaboration to mitigate these risks and promote best practices in AI development.

Initially introduced by Prime Minister Rishi Sunak in June during his visit to Washington for discussions with US President Joe Biden, the summit is designed to assemble government representatives, AI industry stakeholders, and researchers at Bletchley Park to deliberate on the risks and advancements in AI technologies. The goal is to explore ways to mitigate these risks through coordinated international efforts.

In March, the UK government released a white paper detailing its AI strategy, emphasizing a preference for avoiding what it referred to as “heavy-handed legislation.” Instead, it intends to task existing regulatory bodies with employing current regulations to ensure that AI applications adhere to established guidelines, rather than creating new laws. Regulatory bodies are anticipated to issue practical guidance to organizations in the coming months, including providing risk assessment templates and outlining how to implement the government’s safety, security, transparency, fairness, accountability, and redress principles.

The Department for Science, Innovation, and Technology expressed its eagerness to collaborate with global partners in addressing these concerns and making frontier AI safe. This collaboration aims to ensure that nations and their citizens can reap the benefits of AI both now and in the future. The department’s five objectives, which stem from initial stakeholder consultations and evidence-gathering, will shape the discussions during the summit.

 

What does the summit hope to achieve?

The UK government’s five summit objectives are as follows:

  1. Develop a common understanding of the risks posed by frontier AI and the need for proactive measures.
  2. Propose a framework for international collaboration on frontier AI safety, including support for national and international standards.
  3. Suggest appropriate actions for individual organizations to enhance frontier AI safety.
  4. Identify potential areas for collaboration in AI safety research, such as evaluating model capabilities and creating new governance standards.
  5. Highlight how ensuring the secure development of AI can enable its global use for positive purposes.

 

What is Frontier AI?

The term “frontier AI” is defined in a July 2023 academic paper by Anderljung et al. as “highly capable foundational models that could exhibit dangerous capabilities.” According to the paper’s authors, these foundational models fall under the category of generative AI and could cause significant physical harm or disrupt essential societal functions on a global scale through intentional misuse or accidents.

 

What is the AI Fringe?

The AI Fringe consists of a set of events held throughout London and the UK, serving as a complement to the UK Government’s AI Safety Summit. Its primary aim is to incorporate a wide and diverse range of perspectives into the conversation about safe and responsible AI, extending the discourse beyond the AI Safety Summit’s primary focus on Frontier AI safety. It’s important to note that the AI Fringe is an independent event separate from the AI Safety Summit.

OUR OBJECTIVES

  1. Facilitate the convergence of insights from industry, civil society, and academia regarding the safe and advantageous use of AI.
  2. Provide a platform that encourages participation from all communities, including those that have historically been underrepresented, in the ongoing discussion.
  3. Enhance comprehension of AI and its implications, enabling organizations to leverage its benefits effectively.

The AI Fringe will feature a series of panels, fireside conversations, and keynote presentations, all of which will explore various domains and applications of AI, address overarching challenges, and delve into the creation of a responsible AI ecosystem. These discussions will involve distinguished figures in AI discourse, as well as voices representing diverse communities.

 

Programme:

18:00: Doors open
18:10: Presentation by the Tech Ethics Bristol team
18:15–19:00: Panel discussion
19:00–19:30: Q&A
19:30–20:30: Networking and drinks, kindly sponsored by PwC

This special event is part of the AI Fringe, a series of events hosted across London and the UK to complement the UK Government’s AI Safety Summit by bringing a broad and diverse range of voices into the conversation.

Tickets are available below and are selling fast, so if you’re interested, sign up now!

AI Safety Summit: Fringe Event by Tech Ethics Bristol — AI Fringe

 


 

Written by Alex Cosgrove, Head of Data, Insight & Analytics