Focusing on ethical AI will unlock social and economic opportunity: Senator Colin Deacon

Imagine that you are about to begin an important university exam. You sit down in front of your computer, already stressed because all learning and evaluation now happen online due to COVID-19. Moments before the exam begins, the university’s proctoring software, which uses artificial intelligence (AI) algorithms to confirm each person’s identity, requests access to your webcam. It doesn’t recognize you.

This is what happened to Chelsea Okankwu immediately prior to an exam last December. The Globe and Mail reported that the software repeatedly rejected her image, requesting that she adjust her glasses (which she doesn’t wear) and add more light, among other suggestions. After 10 stressful minutes of futile effort, the fourth-year accounting student at Concordia University, who is Black and of African descent, finally wrote in the help window: “I’m a person of colour.” Moments later, she was allowed to begin the exam, but she did so while grappling with the web of negative emotions the experience had generated.

This is just one example of a troubling risk associated with AI — one that is becoming increasingly pervasive. Racial, gender and other biases can manifest themselves in what should otherwise be innocuous software, whether it is used for predictive policing, health-care diagnosis or automated hiring.

Biased AI algorithms exist because they are developed by people. People have biases and prejudices, and these risk being amplified when development teams are homogeneous. Too often, the result is programs whose very design overlooks the problems that can arise when they are applied to diverse populations. These programs are often “blind” to important data, including disaggregated data that reveals racial or gender differences. As a result, decades of systemic biases are codified, effectively digitizing sexism and racism.

How should government, business and society grapple with this emerging threat to equity?

I believe we should respond to biases in AI algorithms by being systemically inclusive. This challenge is also an opening to unlock significant social and economic opportunities by building ethical AI solutions right here in Canada.

Think about it. We are home to two of the world’s most respected AI researchers — Geoffrey Hinton and Yoshua Bengio — and Canada has become a top destination for AI talent. We have one of the most diverse populations on Earth. Together, these elements put Canada in a position to become a global leader in commercializing AI solutions.

Canada also developed the world’s first standard establishing minimum ethical protections in the design and use of automated decision systems. This National Standard (CAN/CIOSC 101:2019) was developed in 2019 by the CIO Strategy Council, which includes chief information officers (CIOs) from leading private, public and not-for-profit organizations. The standard is designed to apply to all public, private, not-for-profit and government entities, and to help those organizations align themselves with the OECD’s values-based principles for the stewardship of trustworthy AI.

This standard was informed by the Government of Canada’s Directive on Automated Decision-Making, which is paired with a useful Algorithmic Impact Assessment (AIA). The AIA tool helps categorize and manage ongoing risks by determining exactly what level of human intervention, peer review, monitoring and/or contingency planning an AI tool built to serve citizens will require.

This ongoing monitoring is important because, according to former Government of Canada CIO Alex Benay, “there are too many things we don’t know.”

“You don’t want to get to a position where we are relinquishing control of decisions to machines without knowing everything about them,” Benay, who co-founded the CIO Strategy Council, told Digital Magazine in 2019.

“The scary part for us around the world is that a lot of governments don’t seem to realize the level of automation of their own values that they are putting forward when dealing with potential vendors that have black box code or IP.”

AI algorithms are increasingly pervasive in our daily lives and are driving our digital transformation. Let’s become social and economic leaders: let’s embrace and prioritize a globally leading ethical AI standard, making it a competitive advantage in our digital sector. In doing so, we will prevent unacceptable and entirely avoidable situations like Okankwu’s from being repeated, and we will create more inclusive opportunities for all Canadians, both at home and around the globe.

Senator Colin Deacon represents Nova Scotia in the Senate.

This article appeared in the March 24, 2021 edition of The Hill Times.
