AI in the Canadian Workplace: Compliance Risks Employers Face in 2026
Employers hoping for regulatory clarity from Ottawa on artificial intelligence will need to keep waiting. Bill C-27, the Digital Charter Implementation Act, 2022, which contained the Artificial Intelligence and Data Act (AIDA), died on the Order Paper when the 44th Parliament, 1st Session, was prorogued on January 6, 2025, and it was never reintroduced. As of April 2026, Canada has no federal AI-specific legislation in force. The federal government has signalled possible future privacy or AI reform, but nothing has been tabled.

Had AIDA become law, it would have created two tiers of penalties for non-compliant AI systems: up to $10 million or 3% of gross global revenue for general offences, and up to $25 million or 5% of gross global revenue for more serious offences prosecuted on indictment. These figures illustrate the scale of regulation that was contemplated, and they may resurface in future proposals.

Because Bill C-27 would also have replaced PIPEDA with the Consumer Privacy Protection Act (CPPA), its failure means PIPEDA remains Canada's governing federal private-sector privacy statute. The Office of the Privacy Commissioner of Canada (OPC) has issued guidance confirming that organizations using AI must comply with existing privacy principles, including transparency, safeguards, fairness, and accuracy, under PIPEDA and applicable provincial privacy laws such as Quebec's Law 25 (fully implemented September 2024), British Columbia's PIPA, and Alberta's PIPA. Excessive data collection when training or operating AI systems increases privacy risk under all of these statutes.

Ontario is currently the most significant provincial jurisdiction for AI-specific workplace obligations. Effective January 1, 2026, amendments to the Employment Standards Act, 2000 introduced through the Working for Workers Acts require employers with 25 or more employees to disclose the use of artificial intelligence in publicly advertised job postings when AI is used to screen, assess, or select applicants.
The 25-employee threshold is measured on the day the posting is published. This obligation applies only to qualifying publicly advertised job postings, not to all recruitment activity generally.

Ontario also has an electronic monitoring disclosure requirement that has been in effect since October 11, 2022. Employers with 25 or more employees must maintain a written policy disclosing whether and how they electronically monitor employees. Importantly, this provision does not restrict monitoring itself; it is purely a disclosure obligation. AI-powered productivity tracking, keystroke logging, and similar surveillance tools are not prohibited, but employees must be informed.

Together, these two requirements mean Ontario employers deploying AI in hiring or workplace monitoring face concrete compliance duties today. Failure to include required AI disclosures in job postings or to maintain an electronic monitoring policy could expose employers to complaints under the ESA.

One of the most significant, and often underestimated, compliance risks of workplace AI is human rights liability. AI systems that produce discriminatory outcomes, even unintentionally, may create exposure under provincial and federal human rights legislation. The Canadian Human Rights Commission has explicitly identified AI bias and algorithmic discrimination as a serious concern. If an AI hiring tool disproportionately screens out candidates based on race, gender, disability, or other protected grounds, the employer, not the software vendor, bears the legal risk.
Because Canada does not yet have a federal AI statute mandating specific technical safeguards, employers should treat the following as recommended governance practices and risk controls:

- Conduct regular bias audits on AI tools used in hiring, performance evaluation, and termination decisions. These audits help surface discriminatory outcomes before they become human rights complaints.
- Maintain an AI inventory documenting every AI system used in employment decisions, including vendor details, data inputs, and decision logic.
- Ensure meaningful human review of consequential AI-driven decisions. Automated outputs should inform, not replace, human judgment on hiring, discipline, and accommodation.
- Review the privacy impact of AI data collection practices under PIPEDA and applicable provincial privacy laws to avoid excessive data collection.

These practices are not universally mandated by statute across Canada today, but they represent prudent steps to mitigate legal exposure. With no federal AI-specific statute in force, employers using AI must instead manage risk under the existing privacy, human rights, and employment standards frameworks described above.
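To make the bias-audit recommendation concrete, the sketch below computes per-group selection rates for a hypothetical AI screening tool and flags groups whose rate falls below four-fifths of the highest group's rate. The data, group labels, and the 0.8 threshold (borrowed from the US "four-fifths rule" heuristic) are illustrative assumptions only; no Canadian statute prescribes this particular test, and a flagged disparity is a prompt for legal review, not a finding of discrimination.

```python
# Minimal, hypothetical disparate-impact check for an AI screening tool.
# Inputs, group names, and the 0.8 threshold are illustrative assumptions.
from collections import Counter

# Hypothetical screening outcomes: (applicant_group, passed_ai_screen)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
passed = Counter(group for group, ok in outcomes if ok)

# Selection rate per group: share of that group's applicants the tool advanced.
rates = {g: passed[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest  # impact ratio relative to the best-performing group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this toy data set, group_b's selection rate is one third of group_a's, so it would be flagged for review. A real audit would also control for job-related qualifications and involve counsel before conclusions are drawn.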