Privacy lawyer says Canada's proposed AI law is "fundamentally flawed"

He said it sets a dangerous precedent

By Mika Pangilinan

Canada’s proposed law to regulate the use of artificial intelligence (AI) has been dubbed “fundamentally flawed” by a privacy lawyer who testified before the House of Commons Industry Committee on Tuesday.

Barry Sookman, senior counsel at McCarthy Tétrault, argued that the Artificial Intelligence and Data Act (AIDA) fails to adequately shield the public from the potential risks associated with high-impact AI systems.

He pointed to the lack of a clear definition for the term “high-impact” and the absence of guiding principles on how AI systems will be regulated.

“We don’t know what the public will be protected from, how the regulations will affect innovation, or what the administrative monetary penalties will be,” Sookman said in his testimony.

“We know that fines for violating the regulations can reach $10 million or 3% of gross revenues, but we have no idea what the regulations will require that will trigger the mammoth fines against small and large businesses.”

AIDA is a component of Bill C-27, which also includes the proposed Consumer Privacy Protection Act (CPPA), a comprehensive overhaul of the federal privacy legislation governing the private sector.

According to IT World Canada, the government is open to amending AIDA but has yet to deliver precise wording, even as the committee’s deadline draws closer.

“AIDA sets a dangerous precedent”

Another point of contention raised by Sookman is how the proposed law concentrates regulatory authority in the hands of the Minister of Innovation. He said this centralized power structure undermines Parliamentary sovereignty, adding that “AIDA sets a dangerous precedent.”

Furthermore, Sookman expressed concern that the AI and Data Commissioner tasked with enforcing the legislation would report directly to the Minister of Innovation rather than being accountable to Parliament.

He ultimately argued that the current wording of AIDA “paves the way for a bloated and unaccountable bureaucracy” within the innovation department and described the legislation as “unintelligible.”

Sookman’s comments on AIDA come as a larger debate over the regulation of AI systems continues to unfold.

As more businesses adopt AI technology, experts have warned that analytical models could perpetuate algorithmic bias and deny people access to things like loans and insurance.

In a previous interview with Insurance Business, Definity senior actuarial analyst Elizabeth Bellefleur-MacCaul spoke of ethical concerns surrounding the amplification of bias in AI models.

To address this issue, Bellefleur-MacCaul said it’s important for businesses to understand the potential for bias.

“This would include the underlying assumptions in the data that we’re using, the tools that we are selecting, and the methodology related to predictive modelling, but also ensuring that we have a framework in place to ensure that once something has gone live, it’s not doing the opposite of what we’re intending,” she said.
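To make that post-deployment monitoring idea concrete, here is a minimal sketch of one way such a check might look in practice: comparing a model’s approval rates across applicant groups and flagging it for human review when the gap grows too large. The column names, threshold, and data below are hypothetical illustrations, not part of any framework Definity or Bellefleur-MacCaul has described.

```python
# Illustrative only: a simple post-deployment bias check.
# All column names, data, and the 20% threshold are assumptions for this sketch.
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "applicant_group",
                      outcome_col: str = "approved") -> float:
    """Return the gap between the highest and lowest approval rates
    across groups; a large gap is a signal to review the model."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Made-up sample of live decisions
live = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   1,   0,   1],
})

gap = approval_rate_gap(live)
if gap > 0.2:  # threshold chosen purely for illustration
    print(f"Approval-rate gap of {gap:.0%} exceeds threshold; review the model.")
else:
    print(f"Approval-rate gap of {gap:.0%} is within tolerance.")
```

A real framework would go further, for example by tracking such metrics over time and tying them to a documented review process, but the point of the sketch is simply that “ensuring it’s not doing the opposite of what we’re intending” can be made measurable.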
