What is Canada’s new Artificial Intelligence and Data Act (AIDA) in Bill C-27?

[This is an article contributed by Marcus Anderson. Marcus is the Chief Compliance Officer at CANPAC Asset Management, which provides investors with comprehensive Canadian, global, and industry investment opportunities.]

This article is an accessible summary of the AIDA companion document authored by the Government of Canada, reframing that document’s contents in a way that is easy to consume and understand.


Artificial intelligence (AI) will have a significant impact on Canadians’ lives and businesses. In June 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of the Digital Charter Implementation Act. AIDA aims to implement the Digital Charter and build trust in everyday digital technologies by proposing a framework for regulating AI in a positive and responsible manner. The government plans to involve stakeholders across Canada in developing these regulations so they align with Canadian values, and will also collaborate with international partners to coordinate AI regulation and meet global standards. The companion document addresses concerns about AI risks and explains how the Act balances safety, innovation, and fair regulation. It provides a roadmap for AIDA’s implementation to foster understanding and support its consideration by Parliament.

Canada and the global artificial intelligence (AI) landscape

Canada is a global leader in AI technology. We have many AI research labs, incubators, investors, and start-ups. Canadians have been involved in AI development since the 1970s. Canada was the first country to have a national AI strategy in 2017 and is a founding member of the Global Partnership on AI.

The government has invested $568 million CAD in AI research and innovation through the Pan-Canadian AI Strategy, positioning Canada as a global leader. The global AI market is growing rapidly: it was expected to generate over $680 billion in revenue by 2023, could reach $1.2 trillion CAD by 2026, and may even exceed $2 trillion CAD by 2030.

AI technology allows computers to learn and perform complex tasks like image recognition and translation. It has improved healthcare, agriculture, energy efficiency, and language processing in Canada. AI benefits our economy and daily lives, helping us find information and make important decisions.

Overall, AI brings many advantages to Canadians and has become an integral part of our lives.

Why now is the time for a responsible AI framework in Canada

AI is becoming widely used in the digital economy, but clear standards are needed to manage it responsibly. Without these standards, it’s hard for people to trust the technology and for businesses to show they’re using it ethically. There have been instances where AI systems discriminated against women or people of color, and “deepfake” technology has caused harm. To address these concerns, countries like the European Union, the United Kingdom, and the United States are working on regulations for AI. Canada needs its own framework to ensure trust, promote responsible innovation, and stay connected to international markets.

Canada’s approach and consultation timeline

Canada already has strong laws in place to regulate many uses of AI. The Personal Information Protection and Electronic Documents Act protects personal information used by businesses. The government is also working on the Consumer Privacy Protection Act to update privacy laws for the digital economy. Other laws, such as consumer protection, human rights, and criminal laws, apply to AI use, and regulators are addressing AI impacts within their authority. However, there are gaps in these regulations that need to be filled to ensure trust in AI. The government has developed a framework to identify and minimize risks, support research and innovation, and adapt to evolving technology. Developing and implementing the regulations will take at least two years after Bill C-27 becomes law, so they would come into force no sooner than 2025.

How the Artificial Intelligence and Data Act would work

The goal of AIDA is to protect Canadians and promote responsible AI development in Canada. It aims to align with international standards and integrate with existing Canadian laws. AIDA proposes a risk-based approach, drawing on concepts from international norms such as the EU AI Act, the OECD AI Principles, and the US NIST AI Risk Management Framework. The focus is on high-impact AI systems, ensuring they meet safety and human rights expectations. The Minister of Innovation, Science, and Industry would oversee the Act, supported by an AI and Data Commissioner. AIDA also prohibits harmful AI use and establishes accountability for businesses involved in high-impact AI systems.

High-impact AI systems: considerations and systems of interest

AIDA aims to regulate AI systems that have a significant impact on people. The criteria for identifying these systems would be defined in regulations, considering factors like potential harms, scale of use, and existing regulations. The government wants to avoid unnecessary impacts on the AI ecosystem.

AIDA won’t affect access to open source software, which is freely available for developers to use and build upon. However, anyone who provides a fully functioning AI system, including through open access, would have obligations under the law.

The government is particularly concerned about certain types of AI systems. Screening systems used for services or employment can result in discrimination and harm, especially for marginalized groups. Biometric systems that make predictions about individuals can impact mental health and autonomy. AI-powered content recommendation systems can influence behavior and emotions on a large scale, potentially causing harm. AI systems integrated into critical functions like autonomous driving or health triage decisions can directly harm people if not properly managed for risks and biases.

AIDA aims to address these concerns and ensure that AI systems are used responsibly to protect people’s well-being and rights.

Individual harms, collective harms, and biased output

AIDA focuses on addressing two types of negative impacts caused by high-impact AI systems. Firstly, it addresses harms that individuals may experience, such as physical or psychological harm, property damage, or financial loss. These harms can affect individuals or groups of people, with more vulnerable groups facing greater risks. Secondly, AIDA is the first Canadian law that specifically addresses the adverse impacts of systemic bias in AI systems used in commercial settings. It prohibits unjustified differential treatment based on discrimination grounds outlined in the Canadian Human Rights Act. AIDA aims to protect individuals and communities from collective harms by requiring businesses to assess and mitigate bias risks associated with prohibited discrimination grounds.

Regulatory requirements

AIDA would require measures to be taken before high-impact AI systems are used to identify and reduce risks of harm or biased results. This ensures compliance and sets clear expectations at each stage of the system’s lifecycle.

The obligations for these systems are guided by principles aligned with international norms:

  1. Human Oversight & Monitoring: People managing the system must have meaningful control and the ability to understand how it works.
  2. Transparency: The public should be provided with enough information to understand how these systems are being used.
  3. Fairness and Equity: Systems should be built with an awareness of potential discriminatory outcomes and actions must be taken to mitigate them.
  4. Safety: Risks of harm must be assessed and measures put in place to prevent harm.
  5. Accountability: Organizations must establish governance mechanisms to comply with legal obligations.
  6. Validity & Robustness: Systems should perform consistently and be stable in different circumstances.

Businesses would be accountable for ensuring compliance with the Act. Measures would be set through regulations, tailored to specific activities and risks. Extensive consultation and international standards would guide the development of these measures.

The responsibilities of businesses would vary depending on their role in the system’s lifecycle. Designers and developers would address risks and document intended uses and limitations. Those making systems available would ensure users understand restrictions and limitations. System operators would monitor and assess risks.

Clear accountabilities would be established, and businesses would need to notify the Minister if a system causes harm. Measures at each stage of the system’s lifecycle would include risk assessment, documentation, evaluation, monitoring, and intervention as needed.

Overall, AIDA aims to ensure that businesses take appropriate measures to make high-impact AI systems safe, non-discriminatory, and compliant with the law.

Oversight and enforcement

In the first few years after the AIDA comes into effect, the focus will be on educating businesses, establishing guidelines, and helping them comply voluntarily. The government wants to give enough time for everyone to adjust to the new rules before enforcement actions are taken.

The Minister of Innovation, Science, and Industry will be responsible for overseeing and enforcing all parts of the Act that are not prosecutable offences. The AIDA will also create a new role, the AI and Data Commissioner, who will assist the Minister in carrying out these responsibilities. The Commissioner’s role will be separate from other activities within ISED and will allow them to specialize in AI regulation. In addition to enforcement, the Commissioner will work with other regulators to ensure consistent regulation and study the potential impacts of AI systems.

The Minister will have powers to protect the safety of Canadians. If a system risks causing harm or producing biased output, or if there is a violation of the Act, the Minister can order the production of records or an independent audit. Where there is an immediate risk of harm, the Minister can order that use of the system be stopped, or disclose information publicly to prevent harm.

The AIDA has three enforcement mechanisms: administrative monetary penalties (AMPs), prosecution of regulatory offences, and true criminal offences. AMPs are used to encourage compliance with the Act’s obligations. Regulatory offences can be prosecuted in more serious cases, where guilt must be proven beyond a reasonable doubt. True criminal offences are separate and involve intentional behavior causing serious harm.

External expertise from the private sector, academia, and civil society will be mobilized so that enforcement keeps pace with the evolving AI environment. This will include designating external experts, AI audits by independent auditors, and an advisory committee to advise the Minister.

To illustrate how the system works, consider the example of an AI system developed by multiple actors. Each actor involved would have specific obligations based on their role, and non-compliance could result in penalties or prosecution.

Voluntary certifications will also play a role as the ecosystem evolves, and the AI and Data Commissioner will assess progress and adjust enforcement activities accordingly. Small and medium-sized businesses will receive assistance in meeting requirements, and penalties will be proportionate to encourage compliance.

Overall, the AIDA aims to create a regulatory framework that ensures compliance, with penalties for non-compliance and the possibility of prosecution in serious cases.

Criminal prohibitions

The Criminal Code in Canada lists serious crimes that are considered harmful to society and morally wrong. These crimes carry severe punishments, such as imprisonment, and significant social stigma upon conviction. They differ from regulatory non-compliance offences, which are mainly about failing to meet regulatory requirements. Because criminal offences are so serious, they require strong evidence that the act was intentional, not merely that it occurred.

While some existing criminal offences can apply to harmful uses of AI, they may not directly target these behaviors. The AIDA therefore introduces three new criminal offences that specifically address concerning AI-related conduct. These offences are separate from the regulatory obligations, and they prohibit and penalize AI-related activities that knowingly or intentionally cause harm.

Law enforcement can investigate these crimes, and the decision to prosecute lies with the Public Prosecution Service of Canada.

These offences are:

  1. Knowingly possessing or using unlawfully obtained personal information to design, develop, use, or make available for use an AI system. This could include knowingly using personal information obtained from a data breach to train an AI system.

  2. Making an AI system available for use, knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use actually causes such harm or damage.

  3. Making an AI system available for use with intent to defraud the public and to cause substantial economic loss to an individual, where its use actually causes that loss.

The path ahead

The AIDA is a new set of rules for AI in Canada. It aims to keep people safe from harmful AI systems and promote responsible AI development in the country. It follows a risk-based approach similar to the EU’s draft AI Act and will be supported by industry standards. The government plans to involve different groups like industry, academia, and communities in discussions to shape the implementation of AIDA and its regulations. They will talk about things like which AI systems should be considered high-impact, what standards they should meet, and how regulations should be developed and enforced. The government will publish draft regulations and seek feedback before finalizing them. They will also work with other regulators to ensure consistent protection for Canadians.

Thanks for reading this far.

Disclaimer: Marcus Anderson is not a real person, nor is CANPAC Asset Management a real company. However the perspectives are real, as is the information about Bill C-27 and AIDA. For more information on why “Marcus” wrote this article, see this post:

My aim with this campaign is to provide readers with valuable content, insights, and inspiration that can help them in their personal and professional lives. Whether you’re looking to improve your productivity, enhance your creative strategies, or simply stay up to date with the latest news and ideas in cybersecurity, I’ve got something for you.

But this campaign isn’t just about sharing our knowledge and expertise with you. It’s also about building a community of like-minded IT- and security-focused individuals who are passionate about learning, growing, and collaborating. By subscribing to the blog and reading every day, you’ll have the opportunity to engage with other readers, share your own insights and experiences, and connect with people in the industry.

So why should you read every day and subscribe? Well, for starters, you’ll be getting access to some great content that you won’t find anywhere else. From practical tips and strategies to thought-provoking insights and analysis, the blog has something for everyone who wants current and topical cybersecurity information. Plus, by subscribing, you’ll never miss a post, so you can stay on top of the latest trends and ideas in the field.

But perhaps the biggest reason to join the 30-in-30 campaign is that it’s a chance to be part of something bigger than yourself. By engaging with the community, sharing your thoughts and ideas, and learning from others, you’ll be able to grow both personally and professionally. So what are you waiting for? Subscribe, and for the next 30 days and beyond, let’s learn, grow, and achieve our goals together!