AI Ethics, Risks and Safety Conference – Meet Karin Rudolph

As a supporter of the upcoming Collective Intelligence AI Ethics, Risks and Safety Conference on Wednesday 15th May, we wanted to spend some time getting to know Karin Rudolph, founder of Collective Intelligence and a prominent figure in AI ethics and governance. In this conversation, we get straight to the point, discussing her insights and experiences and setting the stage for the practical discussions that await at the event. Let’s get started.


Tell us a bit about yourself

I am the founder of Collective Intelligence, a Bristol-based AI Ethics and Governance Consultancy.

We provide training and resources to help start-ups and SMEs embed ethics and a robust governance process into the design and development of technology.

I’m a regular speaker at universities and conferences and an active member of the tech community in Bristol and the South West of England.

I’m also the co-founder of Tech Ethics Bristol, a community of more than 800 members, and in the upcoming months, I’ll be launching a new initiative in the region to support businesses working in this field.


What’s your interest in AI Ethics, Safety & Risks?

I’ve been interested in the societal aspects of technology for a long time, and more recently, I have been focusing on risk management and safety, which are essential aspects of any AI governance framework.

Many of the discussions around these topics are still theoretical, which is not helpful for businesses trying to implement these tools.

My next project aims to provide a clear and pragmatic approach to these issues.


What’s the reason behind putting on this conference?

I decided to put on this conference after speaking with a variety of start-ups and SMEs in the South West. It became clear to me during those conversations that most professionals working in AI want to understand the potential risks of these technologies.

However, they often lack the resources and tools to identify and mitigate those risks.

It is crucial for everyone working in this field to fully understand the impact of upcoming regulations, how to implement standards, and how to carry out the risk and impact assessments that will be part of the process of developing these technologies.

This can be a daunting task for small companies, but it is a necessary step if we want to fully realise the huge potential benefits of AI technologies.


What are the main objectives of the event?

A big difference from other events is that the AI Ethics, Risks and Safety Conference will provide practical guidance and access to resources directly from the organisations developing these tools, such as The Alan Turing Institute, the British Standards Institution and the Department for Science, Innovation and Technology, among others.

It will also provide important updates on the regulatory landscape, which will have a big impact on a global scale.

And it will feature case studies and best practices from professionals applying these frameworks in real-world scenarios.

And it’s the first of its kind here in the South West!


Who is the event aimed at?

The event is aimed at professionals working in AI, including Data Scientists, Data Engineers, CEOs and Founders, Consultants, Risk Managers, Legal Experts, Researchers, Developers, Designers, and anyone interested in learning more about the development of responsible AI.


When is it and how can people buy tickets for the event?

The AI Ethics, Risks and Safety Conference is a full-day conference taking place on Wednesday 15th of May at the Watershed in Bristol.

 

View the full programme and tickets here

 

Written by Alex Cosgrove, Head of Data, Insight & Analytics