Baker Tilly’s Insight on How AI Is Revolutionizing the Healthcare and Life Sciences Industry

Mar 25, 2024 9:00 AM ET

Authored by Arun Parekkat

The use of artificial intelligence (AI) in life sciences, including applications such as machine learning (ML), in which software is trained to form its own decision-making criteria from previous examples of a particular task, has the potential to transform how we improve human health and conduct medical research. According to the Artificial Intelligence Report 2023 prepared by Stanford University, medical and healthcare was the AI focus area that attracted the most investment in 2022, at $6.1 billion.

To better understand the potential for how AI can revolutionize the life sciences industry, let’s first explore the concept of AI.

What is AI? A definitional treatment 

AI is a term that most of us are now familiar with, but its interpretation varies widely. At a high level, the term “artificial intelligence” encompasses the use of technology to perform tasks typically associated only with human beings, such as learning and decision making. This spans everything from “strong AI,” or Artificial General Intelligence (AGI), in which a machine would have intelligence equivalent to a human’s, to “weak AI,” the version we are most familiar with from voice assistants and driverless cars, in which software is trained to perform focused, specific tasks.

According to the AI Act, the EU's 2021 regulatory framework for AI, AI is defined as "software that is developed with techniques and approaches that can generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." Annex I of the act outlines approaches such as ML, explicit logic-based approaches, and more general statistical techniques.

The current U.S. Food and Drug Administration (FDA) definition of AI describes it as the “science and engineering of making intelligent machines, especially intelligent computer programs.” Under this definition, AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and ML. The FDA also states that ML is a subset technique of AI that can be used to design and train software algorithms to learn from and act on data. Software developers can use ML to create an algorithm that is ‘locked,’ so that its function does not change, or ‘adaptive,’ so that its behavior can change over time based on new data.
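To make the locked versus adaptive distinction concrete, the following minimal Python sketch contrasts the two behaviors. The class names, threshold rule, and update step are illustrative assumptions, not the logic of any regulated product.

```python
class LockedClassifier:
    """Decision rule frozen at release: same input always gives same output."""

    def __init__(self, threshold: float):
        self.threshold = threshold  # fixed after validation and approval

    def predict(self, biomarker: float) -> bool:
        return biomarker > self.threshold


class AdaptiveClassifier(LockedClassifier):
    """Decision rule that can shift as labeled post-market data arrives."""

    def __init__(self, threshold: float, step: float = 0.2):
        super().__init__(threshold)
        self.step = step

    def update(self, biomarker: float, true_label: bool) -> None:
        predicted = self.predict(biomarker)
        if predicted and not true_label:    # false positive: raise threshold
            self.threshold += self.step
        elif true_label and not predicted:  # false negative: lower threshold
            self.threshold -= self.step


locked = LockedClassifier(threshold=1.0)
adaptive = AdaptiveClassifier(threshold=1.0)
adaptive.update(biomarker=1.2, true_label=False)   # field data shifts the rule
print(locked.predict(1.1), adaptive.predict(1.1))  # True False: same input, diverging behavior
```

The locked version returns identical outputs for identical inputs indefinitely, while the adaptive version's decision boundary moves as it ingests post-market data, which is precisely why the two attract different regulatory scrutiny.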

What does AI mean in medical technology (medtech) and in pharmaceutical terms? 

Given the significant expenditure associated with drug development and delivery for burgeoning global populations, it is unsurprising that AI is sought as a tool to increase productivity and efficiency in healthcare. We must go back to 1995 to find the first ML technology approved by the FDA. Since then, more than 500 AI-led software medical devices have gained 510(k) clearance, aiding tasks from image analysis and the diagnosis of diseases such as cancer to optimizing the delivery of surgery and post-operative care for orthopedic patients receiving an implant such as an artificial hip.

Within pharmaceutical research and development, AI is being used in multiple instances across the discovery and development pipeline. The first drugs to have been developed “in silico,” or by computer, are now entering human clinical trials. In its broadest sense, AI is being used to improve the identification of candidate molecules and to aid in the recruitment and retention of patients for Phase I to III clinical trials. For marketed drugs, AI technologies such as large language model (LLM) chatbots are being used as symptom assessment tools to improve awareness of rare diseases among the public and primary care providers.

Managing the risk of AI – an assessment

While the promises of AI in life sciences are undeniable, potential issues and ethical considerations warrant close attention. Data privacy and security are key concerns, as AI relies on vast amounts of sensitive patient data; ensuring the confidentiality and protection of this information is paramount. The potential for data breaches or unauthorized access poses significant risks to patient privacy and could erode public trust in AI-driven healthcare solutions.

Another ethical challenge pertains to the "black box" nature of some AI algorithms, especially given how important it is in the life sciences industry to be able to consistently explain the clinical benefit and value of a product with clear, understandable evidence. Complex ML models may arrive at conclusions without providing transparent explanations for their decisions. In the medical world, where accountability and transparency are crucial, potential biases and a lack of interpretability pose ethical problems. Clinicians and regulators need to understand how AI arrives at its conclusions to ensure patient safety and maintain high ethical standards.
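One illustration of how a black box can be probed is permutation importance, a model-agnostic technique that shuffles one input feature at a time and measures how much predictive accuracy degrades. The sketch below is a simplified Python example under stated assumptions (the toy model, data, and function names are invented for illustration), not a validated clinical explainability method.

```python
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=10, seed=0):
    """Score each feature by the accuracy drop when it is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_predict(X) == y)  # accuracy with intact inputs
    importances = []
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, col])        # destroy this feature's signal
            drops.append(baseline - np.mean(model_predict(X_perm) == y))
        importances.append(float(np.mean(drops)))
    return importances                         # bigger drop = more important

# Toy "black box" that in fact depends only on feature 0
toy_model = lambda X: (X[:, 0] > 0.5).astype(int)
X = np.random.default_rng(1).random((200, 3))
y = toy_model(X)
print(permutation_importance(toy_model, X, y))  # feature 0 dominates
```

A larger accuracy drop when a feature is scrambled suggests the model leans on that feature, giving clinicians and reviewers at least a coarse, evidence-based account of what drives a prediction.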

Proposals for addressing data privacy concerns in AI for healthcare and life sciences include de-identification of data, together with controlling data access based on patient consent and tracking usage purposes over time. Additionally, strategies such as encryption, differential privacy (sharing group attributes without revealing individual ones), federated learning (avoiding centralized data aggregation), and data minimization (limiting personal data based on application scope) may also address data privacy concerns.
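As one concrete example, differential privacy is often implemented with the Laplace mechanism: a noisy version of an aggregate statistic is released so that group-level insights can be shared without exposing any individual record. Below is a minimal Python sketch; the epsilon value, dataset, and function name are assumptions for illustration.

```python
import numpy as np

def dp_count(flags, epsilon=1.0, seed=None):
    """Differentially private count of True entries in `flags`.

    Adding or removing one patient changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    rng = np.random.default_rng(seed)
    return sum(flags) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: share how many patients in a cohort carry a biomarker flag
has_marker = [True, False, True, True, False, True, False, True]
print(dp_count(has_marker, epsilon=0.5))  # noisy, privacy-preserving count
```

The released count is useful for population-level analysis, yet no single patient's presence or absence can be confidently inferred from it, which is the group-versus-individual trade-off described above.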

A key development has been the European Union's AI Act, a first-of-its-kind regulatory framework in which AI systems are analyzed and classified according to the risk they pose to users. It creates a risk pyramid, with an outright ban on certain AI applications, stringent requirements for AI systems classified as high risk, and a more limited set of (transparency) requirements for lower-risk AI applications. The stated goal of the EU's AI Act is “a balanced and proportionate approach limited to the minimum necessary requirements to address the risks linked to AI without unduly constraining technological development.”
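The tiered structure can be pictured as a simple classification, sketched below in Python. The tier summaries and the example mappings are simplified assumptions for orientation only, not legal classifications under the act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements: conformity assessment, human oversight"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of use cases to tiers, for illustration only
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI software in a medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} - {tier.value}")
```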

Risk assessments in AI-enabled healthcare products 

While there is considerable interest from clinicians and regulators in understanding to what degree AI is being utilized within a product, particularly where it concerns the long-term efficacy and accuracy of ‘adaptive’ technologies, fundamental product development and engineering principles apply equally to devices that use ‘true’ AI and to those that do not.

Assessing whether a product uses AI as defined by the various regulations in place and under consideration across the globe, and determining the measures needed to robustly assess its readiness to meet market approval requirements, will need to cover the underlying software prediction models, the data used to train and validate them, and the way the product is delivered to relevant end users (whether healthcare professionals (HCPs) or individual patients) within a clearly described patient journey. Recent FDA guidance has provided greater clarity around how manufacturers can safely build adaptive products that have the capacity to learn and improve as they are exposed to increasing volumes of data once placed on the market.

Developers can take comfort in the fact that clear intended-use definitions, robust evidence-based methodologies, and total product lifecycle approaches (including Agile) applied during development and post-launch will remain the cornerstones of the regulatory standard.

The task of identifying and assessing risks that could arise specifically from AI technologies is not trivial, and a follow-up paper will explore this domain in more detail, with examples that can help manufacturers ensure safety and efficacy, as well as benefits unique to this technology such as personalization and autonomy, and mitigations for bias, including healthcare inequalities.

For more insights, visit Baker Tilly’s healthcare & life sciences page.