
CIFAR DLRL Summer School: Ethics in AI

I recently attended the CIFAR DLRL Summer School, which brings together participants to engage with foundational research developments in deep learning and reinforcement learning. Traditionally, the school is held every summer at MILA, but due to the ongoing COVID-19 pandemic, the organizers decided to hold it virtually this year.

The week featured talks, panel discussions, breakout sessions, and 1:1 chats with speakers. I thought all the talks and sessions were top-notch; they stood apart from typical conference or seminar talks by covering more ground and exploring multiple frontiers of research. A common thread woven through all the sessions was a willingness to engage in tempered speculation about future and evolving research directions. Naturally, it'd be difficult to succinctly summarize all the sessions. Instead, I'll focus on one particularly thought-provoking session: the "Ethics in AI" panel discussion between Yoshua Bengio and Doina Precup, moderated by Sasha Luccioni.


Context for Discussion

The fields of machine learning and, more broadly, artificial intelligence have made incredible progress in recent years! They've gone from being a pure research exercise in academic and industrial labs to percolating into many areas of society. A sampling of workshops from a conference I attended last year (NeurIPS 2019) is a case in point.

Since developments in the field now have real-world consequences, ethics must become a central part of the conversation.

Note: Any factual errors or omissions are solely my own responsibility and do not reflect the views of any speakers or participants. I'll try to minimize editing and avoid interpretation bias.


Q&A

Why are we thinking about ethics if we’re not philosophers?

How do we regulate AI without hurting its development, given that policymakers don't understand AI?

What difficulties do you see?

What does that mean for us? What should we prepare for?

How do you keep the big picture in mind when you’re working on fundamental research?

Might something like the Geneva Convention be necessary for AI?

Can we define morals and ethics as part of the objective function?

We aren't always aware of how AI is being used. How do we keep an eye on how things are being used in applications?

Since some fields (e.g., healthcare) already have regulation, how do we integrate AI into them?

Since our work impacts the general public, how do we reach out to them?

As grad students choosing research topics, how do we make sure our research won't be misused and will bring good into the world?

How do we make the social and ethical questions more relevant within our discussions?

How do we prevent ML from exploiting human weaknesses?

Ethics applies to every field, so why is it especially concerning for AI?

Further Reading

Grand Challenges of Robotics: Ethics and Security

The Moral Machine

The AI Cluster Steering Committee

Geneva Convention

Lethal Autonomous Weapons

Data Trust

Written August 14, 2020. Send feedback to @bhaprayan.
