<aside>
🧠 2021
Commissioned by the ICO, we researched the role of service and UX design in creating anti-discriminatory AI services in the context of data protection. Read more here.
</aside>
See also: https://nadiapiet.com/portfolio/ico-non-discriminatory-ai/
Non-Discriminatory AI & Data Rights by Design - Generative Research Project for the ICO - Nadia Piet
In January of 2021, the Information Commissioner’s Office (ICO) commissioned AIxDESIGN and Anti-Racist by Design to explore the role of service and UX design in creating non-discriminatory AI services. The aim was to bridge data rights and data protection and help designers and industry practitioners apply them.
The goal of the project was to inform a research report and surface:
1. examples of how discriminatory outcomes manifest in the design of AI services through the use of personal data
2. challenges product teams (in particular service and UX designers) face in creating AI-driven services that are non-discriminatory
3. recommendations for what the ICO can do to support them
Through participatory workshops, we gathered perspectives, case studies, and experiences directly from practitioners, which fed into the larger research project. Participants applied through the following open call:
<aside> 💬 “We’re hosting a series of generative workshops to collectively explore the role of design in creating non-discriminatory AI.
The research will help the UK’s ICO understand how (mis)use of personal data across the AI pipeline can create harms, the technical, social & organizational challenges in mitigating them, and the support product teams and designers might need.
Hosted by AIxDesign and ANTI, we’d like to invite you to come investigate and expand current & potential practices with us in these 3 interconnected sessions:”
</aside>
We hosted a series of generative research workshops with industry practitioners: people who build and design AI products & services in their day-to-day jobs. Together with Abdo Hassan, who moderated the conversations, we developed and facilitated 3 interconnected sessions:
1️⃣ In our first workshop, on 2 February, we set out to map harmful and discriminatory practices & outcomes throughout the AI pipeline, as well as current work on mitigating them, with a wide range of AI practitioners.