July 18, 2019
By Maria Aurora Nunez, B.A. (Hon.), J.D., Human Rights/Health Lawyer and Policy Advisor at York University
Ready or not, the use of artificial intelligence (AI) and robots is increasingly becoming a part of our society. This is particularly true in the health care system. In 2018, Humber River Hospital became the first hospital in Canada to recruit a humanoid robot, Pepper, which can greet visitors, provide directions and entertain guests.1 However, robots and AI can do much more. Surgeons can use robotic systems, like the da Vinci, to operate with the utmost precision through minimal incisions, resulting in less patient bleeding, faster healing times and a reduced risk of infection. Sanitizing robots can move autonomously to the emptied hospital rooms of discharged patients and, within minutes, kill microorganisms with high-intensity ultraviolet light, thereby lowering the risk of hospital-acquired infections. Hyper-realistic robots can simulate patients to assist doctors with their clinical training.2 Even in remote communities in Canada, robots can help to save lives.3 Software developers can train an AI to perform tasks (e.g. diagnose diseases) and to learn which treatment has the best chance of success for an individual patient, based on available information. “Machine learning” can advance to the point that eventually even the developers do not know how the system arrives at its output, only that the output appears to be highly accurate.
The technological changes in health care require the legal profession to answer associated questions. For example:
• Does current legislation sufficiently protect patient privacy in the era of big data?
• How does the use of AI and robotics impact a doctor’s standard of care, particularly if an AI-derived and doctor-derived diagnosis or treatment conclusion differ?
• Can a doctor be liable for following an AI-derived recommendation if he or she cannot explain how it was reached? Does trusting the performance of AI fall below the standard of care expected of a reasonable physician?
• Can a doctor be liable for refusing to follow an AI-derived recommendation, which may help to save a patient’s life, or for failing to use available AI tools at all? After all, a benefit of AI is that it can perform cognitive functions (e.g. problem solving, recognition and data processing) beyond human capabilities.
• If a robot and/or AI tool errs, then who (e.g. a software developer, doctor and/or hospital) should be legally responsible?
• Can a doctor and hospital be liable for using an AI tool that may be discriminatory?4
• To obtain informed consent, must doctors tell patients of potential legal risks, specifically that a warrant in a criminal investigation (or a motion for production in tort law) can be made for information related to the patient’s biometric data?5
Currently, Canada has no legal framework for regulating AI.6 In 2019, the Canadian Institutes of Health Research reported that “legislation may need to be developed to accommodate the unique requirements to verify and validate AI systems.”7 That same year, the Government of Canada’s Directive on Automated Decision-Making (ADM) took effect, which sets out minimum requirements for federal government departments that wish to use ADM technology.8 However, further research on digital rights, like that underway by the Law Commission of Ontario, is needed.9 Canada is ready for clarity on AI.
1. Humber River Hospital, “Humber becomes the first hospital in Canada to recruit a humanoid robot” (15 October 2018), online: <https://www.hrh.ca/2018/10/15/humber-becomes-the-first-hospital-in-canada-to-recruit-a-humanoid-robot/>.
2. Zachary Tomlinson, “15 Medical Robots that are Changing the World” (11 October 2018), online: Interesting Engineering <https://interestingengineering.com/15-medical-robots-that-are-changing-the-world>.
3. Ivar Mendez, “How Robots are Helping Doctors Save Lives in the Canadian North” (11 December 2018), online: The Conversation <https://theconversation.com/how-robots-are-helping-doctors-save-lives-in-the-canadian-north-104462>.
4. See e.g. Ewert v. Canada, 2018 SCC 30, [2018] 2 SCR 165 (concern of bias in incarceration assessment tools).
5. Dylan Roskams-Edris, “The Eye Insider: Remote Biosensing Technologies in Healthcare and the Law” (2018) 27 Dal J Leg Stud 59. See also Lee-Ann Conrod, “Smart Devices in Criminal Investigations: How Section 8 of the Canadian Charter of Rights and Freedoms can better protect privacy in the search of technology and seizure of information” (2019) 24 Appeal 115.
6. Chris Reynolds, “Canada lacks laws to tackle problems posed by artificial intelligence: Experts” (19 May 2019), online: The Canadian Press <https://globalnews.ca/news/5293400/canada-ai-laws/>.
7. Canadian Institutes of Health Research, “Introduction of Artificial Intelligence and Machine Learning in Medical Devices” (last modified 10 May 2019), online: <http://www.cihr-irsc.gc.ca/e/51459.html>.
8. Government of Canada, “Directive on Automated Decision-Making” (2019), online: <https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592>.
9. Law Commission of Ontario, “Digital Rights” (2019), online: <https://www.lco-cdo.org/en/our-current-projects/law-reform-and-technology/>.