Begin with the basics of artificial intelligence.

Artificial intelligence (AI) is an emerging field that uses computers to emulate human intelligence behaviors such as decision-making and problem-solving.

This short educational video defines the principles of AI, including machine learning and deep learning methods.

All videos are best viewed with sound on and in full-screen mode.

Learn more about how AI works - (07:51)

Learn AI in simple terms through a series of short videos.

AI introduction and how it works - (01:00)

Explaining artificial neural networks - (00:52)

Cloud computing and the power of AI - (00:44)

AI as a supportive tool in healthcare - (00:40)


Demystifying artificial intelligence in healthcare

The application of AI in healthcare has many potential benefits for physicians, patients, and healthcare systems alike. Learning the terminology below can help identify suitable applications of AI.

A computer algorithm is a sequence of instructions provided to solve a class of problems or perform a computation.
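To make this concrete, here is a short sketch in Python (a hypothetical, illustrative example, not taken from any clinical product) of an algorithm in this sense: a fixed sequence of instructions that performs one computation, in this case a body mass index calculation.

    def body_mass_index(weight_kg: float, height_m: float) -> float:
        """A simple algorithm: a fixed sequence of instructions that
        computes body mass index from weight and height."""
        if height_m <= 0:
            raise ValueError("height must be a positive number")
        return weight_kg / (height_m ** 2)

    print(round(body_mass_index(70.0, 1.75), 1))  # prints 22.9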

Artificial intelligence (AI) is an overarching term for intelligent machines or technologies that can emulate the functions of the human brain and replicate human capabilities such as decision-making, problem-solving, reasoning, visual perception, and speech recognition. AI has the ability to learn from situations by deriving patterns or features from data. Machine learning (ML), neural networks, and deep learning (DL) are all subsets of AI.

Machine learning (ML) is one of the most exciting and promising areas of AI. A subset of AI, ML employs algorithms that learn from data to make predictions or decisions, and its performance improves with experience. ML gives computers the ability to learn without being explicitly programmed. ML algorithms can be developed to be "locked," so that their function does not change, or "adaptive," so that their performance can adapt over time based on new inputs.
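The distinction between "locked" and "adaptive" algorithms can be illustrated with a minimal sketch. The example below assumes the scikit-learn Python library and uses made-up toy data rather than clinical data; it illustrates the concept only and does not describe any particular product.

    # Minimal sketch: a "locked" model is trained once and then frozen,
    # while an "adaptive" model keeps learning as new batches of data arrive.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 4))                        # 200 toy cases, 4 features
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy labels

    # "Locked": fit once; its function does not change afterward.
    locked_model = LogisticRegression().fit(X_train, y_train)

    # "Adaptive": can continue to learn from new inputs over time.
    adaptive_model = SGDClassifier()
    adaptive_model.partial_fit(X_train, y_train, classes=[0, 1])

    X_new = rng.normal(size=(50, 4))                           # new data arrives later
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    adaptive_model.partial_fit(X_new, y_new)                   # performance adapts

    print(locked_model.predict(X_new[:5]), adaptive_model.predict(X_new[:5]))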

Deep learning (DL) is a specialized subset of ML, using multi-layered (sometimes 100+ layers) deep neural networks to build algorithms that teach systems to perform tasks on their own, based on large sets of data. DL is one type of ML algorithm and therefore a subset of ML.
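As a rough illustration of "multi-layered," the sketch below stacks several hidden layers into a single neural network classifier. It again assumes scikit-learn and toy data; real deep learning systems use far larger networks trained on far larger datasets.

    # Rough sketch of a multi-layer ("deep") neural network on toy data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 10))                        # 500 toy samples, 10 features
    y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)

    # Three stacked hidden layers of 64 units each; the network learns its own
    # internal representations of the data rather than relying on hand-coded rules.
    deep_net = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500, random_state=0)
    deep_net.fit(X, y)
    print("training accuracy:", deep_net.score(X, y))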

Training data is labeled data used to teach AI or machine learning algorithms to make proper decisions. The data must be robust to provide the most suitable outcomes for AI in clinical practice. The training data should reflect real-life scenarios and contain enough variability to ensure the viability of the AI being developed.
 

For example, for a visual recognition problem, the training data or training set must properly represent all the variability that may be encountered. This can include different perspectives on the subject, illumination, deformation, occlusion of the object, background clutter, and variation within the object class. When the training data is robust, it increases the likelihood of the AI algorithm reaching a suitable solution.
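One common way to build in such variability is data augmentation: generating rotated, mirrored, re-lit, or cropped copies of each training image. The sketch below assumes the Pillow imaging library and uses a blank placeholder in place of a real training image.

    # Sketch of data augmentation to add variability to a visual recognition training set.
    from PIL import Image, ImageEnhance, ImageOps

    original = Image.new("RGB", (224, 224), color=(120, 120, 120))  # placeholder image

    augmented = [
        original.rotate(15),                                   # different viewing angle
        ImageOps.mirror(original),                             # horizontal flip
        ImageEnhance.Brightness(original).enhance(1.4),        # brighter illumination
        ImageEnhance.Brightness(original).enhance(0.6),        # dimmer illumination
        original.crop((20, 20, 200, 200)).resize((224, 224)),  # partial, occlusion-like view
    ]
    print(f"1 original image expanded into {len(augmented)} varied training examples")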

Natural language processing (NLP) is a subfield of AI concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. NLP is used to comprehend speech or text and extract its meaning. The result is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. NLP can then accurately extract the information and insights contained in the documents, as well as categorize and organize the documents themselves.
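As a very small illustration of the document-categorization side of NLP, the sketch below converts free text into numeric features and learns to sort new notes into categories. It assumes scikit-learn and uses invented snippets of text rather than real clinical notes; production clinical NLP systems are far more sophisticated.

    # Tiny sketch of NLP-style document categorization on invented text snippets.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    notes = [
        "colonoscopy revealed a small polyp in the ascending colon",
        "polyp removed during screening colonoscopy, no complications",
        "patient reports chest pain and shortness of breath on exertion",
        "ecg shows changes consistent with prior myocardial infarction",
    ]
    labels = ["gastroenterology", "gastroenterology", "cardiology", "cardiology"]

    # Turn unstructured text into numeric features, then learn to categorize it.
    vectorizer = TfidfVectorizer()
    classifier = MultinomialNB().fit(vectorizer.fit_transform(notes), labels)

    new_note = ["follow-up after polyp removal at last colonoscopy"]
    print(classifier.predict(vectorizer.transform(new_note)))  # expected: gastroenterology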

 

NLP systems can be used to uncover opportunities hidden in unstructured data to optimize the patient experience, reduce costs, and improve care outcomes. This capability is valuable across various use cases in healthcare today:

  • Speech recognition
  • Improvement in clinical documentation
  • Data mining research
  • Computer-assisted coding
  • Automated reporting


As AI algorithms for NLP improve, use cases are emerging that will have an impact:

  • Clinical trial matching
  • Prior authorization
  • Clinical decision support
  • Risk adjustment
  • Population health management and analytics


Healthcare organizations can use NLP to transform the way they deliver care and manage solutions. Organizations can use ML in healthcare to improve provider workflows and patient outcomes.

The definitions the FDA adheres to are as follows: a radiological CADe device is “intended to identify, mark, highlight or otherwise direct attention to portions of an image […] that may reveal abnormalities during interpretation of images by the clinician.” A CADx device is “intended to provide information beyond identifying […] abnormalities, such as an assessment of disease.” Whenever software is not intended to highlight an abnormality, it is considered neither a CADe nor a CADx device. For example, segmentation of brain structures is not considered CADe, whereas the detection of a tumor candidate is. An algorithm that adds information on tumor grade would make it a CADx device.

The advancement of AI in healthcare can complement physician decision-making. Healthcare data is often unstructured, and there is more of it than any individual can source and assess. The use of AI can improve efficiencies, streamline processes and information sharing, and enhance decisions, all in support of patient care.

AI in healthcare is exciting because of the benefits it can offer in enhancing the physician’s ability to care for the patient. However, it is important that enthusiasm does not turn into misguided use of information. AI is not ready to work on its own; AI and physicians must work together to gain the greatest benefits for improving patient care. Unrealistic predictions can create a false sense of what AI can do as a complement to clinical practice. There are precise methods for testing, and specific areas to evaluate, when assessing the suitability of AI for clinical application. Here are some of the limitations of AI:
 

  • Explainability and transparency – Machine learning, and in particular deep learning, can act as a black box, which can make it difficult to understand how the AI system arrived at a decision. Explainability is a process that allows end users of the application to describe what the AI model is doing to reach a decision, which helps them better understand its expected impact. There is ongoing effort to improve the transparency of AI algorithms in clinical practice (a small illustrative sketch follows this list).
  • Bias – The purpose of the AI solution may warrant evaluating the data sources to eliminate bias that could interfere with the results. Considerable research is still needed to determine the implications of potential bias in the training data compared to real life.
  • Cost – Training deep learning models typically requires either purchasing expensive, powerful computational resources or renting them from cloud providers.
  • Regulatory – The regulatory landscape for AI-based products and services is still evolving. For example, the FDA is looking to develop a regulatory framework to support the iterative nature of AI-based software, while still ensuring the continued safety and effectiveness of AI solutions.
  • Privacy – It is important to use only anonymized or deidentified data, with appropriate patient consent, and to comply with applicable laws and regulations. Identifying patient-level information is not necessary for AI development, so there is no obstacle to removing it from visibility. Privacy and security are important principles in AI-enabled therapies: respecting and protecting the personal and sensitive information of users, patients, clinicians, and partners throughout the total product lifecycle.
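As a small illustration of the explainability idea in the first bullet above, the sketch below uses permutation feature importance, one simple technique for estimating how much each input feature contributes to a trained model's predictions. It assumes scikit-learn and made-up data; explainability for clinical-grade AI requires considerably more than this.

    # Sketch of one simple explainability technique: permutation feature importance.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 4))                  # 4 hypothetical input features
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle one feature at a time and measure how much performance drops;
    # a large drop suggests the model relies heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, importance in zip(["feature_0", "feature_1", "feature_2", "feature_3"],
                                result.importances_mean):
        print(f"{name}: {importance:.3f}")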


There are many areas to evaluate when considering the application of AI in healthcare. The points above are some considerations for identifying appropriate solutions that can benefit the physician, the healthcare system, and patient outcomes.

It is important to use high-quality, varied data within AI algorithms to help ensure they align with real-life use cases. The data should be tested properly to validate the accuracy of the outcomes.
 

Questions that should be considered include:

  • Will the product/customer/patient benefit from the application of AI for your use case?
  • Can you collect enough data to support the performance that you need?
  • How has the data been tested to prove a desired outcome? 
  • Is your data deidentified/anonymized?
  • Do you have patient consent for research and development?
  • What are the benefits of your AI application?
  • What risks does your AI application introduce?
  • How does your AI interact with the physician?
  • How does the physician interact with the AI application?

Considerations for suitable artificial intelligence in healthcare

A detailed review of these considerations will support trust in an AI application and further confirm that it has value for clinical practice. AI will not replace physician decision-making; it is intended to enhance it.

The World Health Organization (WHO) issued its first global report about AI in health. Learn about the six guiding principles:

  • Protecting human autonomy
  • Promoting human well-being and the public interest
  • Ensuring transparency, explainability, and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable

The U.S. Food and Drug Administration (FDA) and others have issued a report with 10 guiding principles for Good Machine Learning Practice (GMLP). The principles from the report include:

  1. Multi-disciplinary expertise is leveraged throughout the total product life cycle
  2. Good software engineering and security practices are implemented
  3. Participants and data sets are representative of the intended patient population
  4. Training data sets are independent of test sets
  5. Selected reference datasets are based upon best available methods
  6. Model design is tailored to the available data and reflects the intended use of the device
  7. Focus is placed on the performance of the human-AI team
  8. Testing demonstrates device performance during clinically relevant conditions
  9. Users are provided clear, essential information
  10. Deployed models are monitored for performance and re-training risks are managed

More ways to learn about AI and connect


Physicians’ perspectives

Find articles and clinical evidence about AI in GI.


The future is now — AI in gastroenterology

The GI Genius™ system introduces new possibilities in the field of endoscopy.


Sign up

Subscribe to receive updates on the latest advancements and physician perspectives of AI in healthcare.