White Paper

Current Approaches to Artificial Intelligence in Medical Imaging

by Kris Dickie, VP of Research and Development


There has been a tremendous amount of discussion and research around the development of artificial intelligence (AI) and machine learning (ML), a subset of AI. With respect to medicine and medical imaging, AI has been around for years, helping to provide more automated approaches by which humans interact with data and images. Depending on the application, AI is currently used to sort data, apply transformations that aid interpretation and diagnosis, and even to perform actual diagnosis and detection. Clarius has built AI into its products from the very beginning, both to deliver the best possible image quality and to reduce the number of operations required to attain it.


Machine Learning

Machine learning is a subset of AI and builds on the foundational work of developing neural networks within a computing engine. Traditional AI relies on predefined logic that follows an “if this, then that” model, and its complexity is limited only by the number of pathways that engineers and scientists can create within a given problem definition. With ML, the system is trained on data sets that provide enough variability to account for as many scenarios as possible, essentially creating an exponential number of pathways compared to a traditional AI approach. The machine thus learns from the variability provided. For ML to be applied to real-world medical imaging, ideally tens of thousands of images need to be captured and analyzed for the machine to build a model that is accurate enough to provide a useful AI tool.

Clarius does not currently use machine learning or deep learning techniques in its products; however, it has been investing in machine learning projects and is collecting and analyzing data to help launch the next generation of automation and detection inside a handheld ultrasound.

AI Automates TGC Control

Clarius uses automated AI to make it easier than ever for a medical professional to use an ultrasound system. It is one of the few ultrasound companies to build a real-time, truly automated time-gain-compensation (TGC) engine. Most ultrasound devices require the user to adjust up to eight individual TGC controls to optimize the gain, or amplification level, as a function of depth along the image. As the user adjusts the imaging depth, frequency, and other parameters, it is very likely that the gain will need to be re-tuned to provide the best interpretation. Beyond parameter adjustments, every time the user changes the scanning plane, differences in tissue attenuation force the user to adjust the TGC once again.

Clarius’ automated AI approach removes the requirement for the user to interact with the TGC control, while still allowing manual adjustment if desired. By analyzing every single ultrasound image captured during scanning, at rates of up to 30 frames per second, the gain can be tuned with a precision as fine as 1 mm, all in real time, with instant feedback. The AI uses a region-based histogram analysis to determine the difference between the optimal gain and what is currently being acquired. Some important functions of the AI are:

  • Use of the analog gain components to take advantage of their increased signal-to-noise ratio
  • Detection of fluid-filled structures to avoid over-gaining anechoic regions
  • Regional contact analysis to prevent poor probe contact from skewing gain calculations
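The region-based approach described above can be sketched in a few lines of Python. This is an illustrative toy, not Clarius’ implementation: the zone count, histogram statistic, fluid threshold, and dB step limit are all assumed parameters chosen for the example.

```python
import numpy as np

def auto_tgc(image, target_mean=128.0, n_zones=8, max_step_db=1.0):
    """Illustrative region-based automatic TGC.

    Splits the image into depth zones, compares each zone's brightness
    histogram against a target level, and returns a per-zone gain
    correction in dB. All thresholds here are assumptions.
    """
    rows_per_zone = image.shape[0] // n_zones
    corrections = []
    for z in range(n_zones):
        zone = image[z * rows_per_zone:(z + 1) * rows_per_zone, :]
        # Histogram analysis: use the median of the brightness
        # distribution so isolated bright reflectors don't skew the gain.
        hist, edges = np.histogram(zone, bins=64, range=(0, 255))
        cdf = np.cumsum(hist) / hist.sum()
        median = edges[np.searchsorted(cdf, 0.5)]
        # Skip near-anechoic (fluid-filled) zones to avoid over-gaining.
        if median < 10:
            corrections.append(0.0)
            continue
        # Convert the brightness ratio to a dB adjustment, clamped so
        # the gain changes smoothly from frame to frame.
        delta_db = 20.0 * np.log10(target_mean / max(median, 1.0))
        corrections.append(float(np.clip(delta_db, -max_step_db, max_step_db)))
    return corrections
```

Running this per frame and feeding the corrections back into the analog gain stages would nudge each depth zone toward uniform brightness, which is the behavior shown in Fig. 1.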

Fig. 1

Uniform brightness controlled throughout entire image.


Fig. 2

Survey phase


Fig. 3

Needle enhancement phase

AI Simplifies Needle Enhancement

Many point-of-care ultrasound systems use a form of needle enhancement (NE) that relies on a set of angled beams run in parallel with the standard greyscale image. These beams highlight the needle: any ultrasound beam bouncing off a strongly reflective surface at a perpendicular angle will produce the bright structure shown in the image. More advanced systems can then, through image processing and filtering, isolate the needle to hide strong reflections coming from other tissue structures. The main problem with this technique is the use of a single angle that must be controlled by the user; with more angles, the frame rate would slow down dramatically, as would the computational resources needed for any post-processing filtering. Since one hand must hold the ultrasound scanner while the other navigates the needle, it becomes quite difficult to adjust the angle on the ultrasound device to keep the beams perpendicular to the needle, unless the angle never changes or another operator is in the room.

Clarius has developed a special AI that allows users to maintain high frame rates without having to control the needle enhancement angle. By interleaving detector frames at multiple angles with greyscale imaging, each angled ultrasound image is analyzed in real time for the needle signature, which comprises a multitude of parameters that help differentiate the needle from standard tissue acting as a bright reflector. Once the needle has been detected, the system locks onto that angle and processes the image to show the needle as a highlighted line overlaid on top of the greyscale image. If the user significantly changes the insertion angle, or if the needle falls out of plane with the ultrasound image, Clarius AI recognizes the change and returns to detector mode.
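The survey-then-lock behavior described above can be modeled as a small state machine. The sketch below is a simplification under stated assumptions: the angle set, the thresholds, and the single-row brightness heuristic in `needle_score` are all invented stand-ins for the real multi-parameter needle signature.

```python
import numpy as np

# Candidate steering angles (degrees) interleaved with greyscale frames.
# The angle set and both thresholds are illustrative assumptions.
ANGLES = [-30, -20, -10, 10, 20, 30]
LOCK_THRESHOLD = 0.6   # minimum score to lock onto an angle
DROP_THRESHOLD = 0.3   # score below which the lock is released

def needle_score(frame):
    """Toy needle-signature score: fraction of above-threshold energy
    concentrated in the brightest row, a stand-in for a real
    multi-parameter signature test."""
    bright = frame > 200
    if bright.sum() == 0:
        return 0.0
    row_counts = bright.sum(axis=1)
    return float(row_counts.max() / bright.sum())

class NeedleEnhancer:
    """Survey candidate angles; lock onto the one with the strongest
    needle signature; return to survey mode if the signature fades."""
    def __init__(self):
        self.locked_angle = None

    def process(self, frames_by_angle):
        if self.locked_angle is not None:
            # Lock phase: track only the locked angle.
            score = needle_score(frames_by_angle[self.locked_angle])
            if score < DROP_THRESHOLD:
                self.locked_angle = None  # needle left the imaging plane
            return self.locked_angle
        # Survey phase: score every interleaved detector frame.
        scores = {a: needle_score(f) for a, f in frames_by_angle.items()}
        best = max(scores, key=scores.get)
        if scores[best] >= LOCK_THRESHOLD:
            self.locked_angle = best
        return self.locked_angle
```

In use, the survey phase (Fig. 2) corresponds to `locked_angle` being `None`, and the enhancement phase (Fig. 3) to a locked angle whose frame is overlaid on the greyscale image.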

Thanks to AI, Clarius users can simply press one button to access a unique needle enhancement tool, which leaves the physician's hands free to perform the procedure.

Calculating Heart Rate Using AI

To measure heart rate in most ultrasound systems, it is extremely common for users to enable M-Mode to visualize one beam over time and then make a time measurement within a spectrum. This calculated time can then be converted to a heart rate measurement. While this method is not overly complicated, it does require the user to perform the following steps:

  1. Enable M-Mode
  2. Place a gate at the correct point on the image
  3. Pause the scan
  4. Enable measurement tools
  5. Place two cursors on the spectrum

To help speed up the workflow when only a basic heart-rate calculation is required, Clarius created an AI tool that automatically calculates the heart rate from the greyscale image without using M-Mode. Motion is analyzed within a set of regions of the image, and the standard deviation for each region is calculated. When the standard deviation, filtered over time, shows a periodic pattern, a Fourier transform can be applied to find the peak frequency of the signal and convert it to a heart rate measurement. Data is acquired over several seconds before any information is displayed, and averaging is applied to account for the lower frame rate of greyscale imaging versus M-Mode, such as 30 frames per second versus 200 M-Mode lines per second.
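The pipeline described above — regional standard deviations as a motion signal, then a Fourier transform to find the dominant rhythm — can be sketched as follows. The 4×4 region grid and the 40–200 bpm search band are assumptions for the example, not published Clarius parameters.

```python
import numpy as np

def estimate_heart_rate(frames, fps=30, min_bpm=40, max_bpm=200):
    """Illustrative heart-rate estimation from greyscale frames.

    For each frame, the standard deviation over a grid of image regions
    captures motion; the dominant frequency of that signal over time,
    found with an FFT, is converted to beats per minute.
    """
    signal = []
    for frame in frames:
        # Split the frame into a 4x4 grid and use the mean regional
        # standard deviation as a per-frame motion statistic.
        h, w = frame.shape
        stds = [frame[i*h//4:(i+1)*h//4, j*w//4:(j+1)*w//4].std()
                for i in range(4) for j in range(4)]
        signal.append(np.mean(stds))
    signal = np.asarray(signal) - np.mean(signal)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict the peak search to physiologically plausible rates.
    band = (freqs * 60 >= min_bpm) & (freqs * 60 <= max_bpm)
    if not band.any():
        return None
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0  # Hz -> beats per minute
```

Acquiring several seconds of frames before calling such a function is what gives the FFT enough frequency resolution to separate, say, 110 bpm from 120 bpm at a 30 fps greyscale frame rate.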

Currently, Clarius’ Heart Rate AI works on adult cardiac images, with obstetric cardiac calculations currently being developed.


Fig. 4

Heart Rate Monitor


Fig. 5

Not scanning → Power Saver on


Fig. 6

Scanning → Power Saver off

Managing Power with AI

Clarius scanners are powered by a rechargeable lithium-ion battery that provides up to one hour of continuous scanning time. Although every scanner comes with two batteries, optimizing battery life is still a priority. Clarius incorporates a powerful AI algorithm to manage the power supply to the scanner. For example, when a user puts the scanner down on a table while it is in an imaging state, the AI analyzes multiple scanner inputs, such as motion sensors and the image itself, to determine whether a lower frame rate state should be engaged. If the user then starts imaging a patient, the AI re-engages the high frame rate state. If the scanner is left in the lower frame rate state for too long, it pauses imaging, at which point the user must manually resume imaging through the freeze button within the Clarius App.
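The behavior above amounts to a three-state machine driven by motion and image activity. The sketch below is a hypothetical model: the state names, timing thresholds, and input signals are assumptions, since the actual timings are not published.

```python
# Illustrative thresholds; the real timings are assumptions here.
LOW_RATE_AFTER_S = 3.0    # idle time before dropping the frame rate
PAUSE_AFTER_S = 30.0      # additional low-rate time before pausing

class PowerManager:
    """Toy power-saving state machine: FULL -> LOW -> PAUSED, driven
    by motion-sensor and image-activity inputs. Once PAUSED, only an
    explicit unfreeze (the freeze button in the app) resumes imaging."""
    def __init__(self):
        self.state = "FULL"
        self.idle_since = None

    def update(self, now, in_motion, image_changing):
        active = in_motion or image_changing
        if self.state == "PAUSED":
            return self.state  # requires manual unfreeze
        if active:
            self.state = "FULL"
            self.idle_since = None
            return self.state
        if self.idle_since is None:
            self.idle_since = now
        idle = now - self.idle_since
        if idle >= LOW_RATE_AFTER_S + PAUSE_AFTER_S:
            self.state = "PAUSED"
        elif idle >= LOW_RATE_AFTER_S:
            self.state = "LOW"
        return self.state

    def unfreeze(self):
        """Manual resume, e.g. the freeze button in the Clarius App."""
        self.state = "FULL"
        self.idle_since = None
```

Fusing the motion sensors with the image-change signal matters here: a scanner resting on a table produces neither, while a stationary probe held on a patient still produces image changes and therefore stays at the full frame rate.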