Adaptive Ultrasound Imaging with Physics-Informed NV-Raw2Insights-US AI
- The AI model learns directly from raw ultrasound signals, moving beyond the limits of the traditional beamforming pipeline.
- Estimating the speed of sound enables adaptive image focusing, improving the accuracy of personalized medical imaging.
- Complex computation is reduced to a single AI inference pass, significantly improving processing efficiency.

- [Introduction](http://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging#introduction "Introduction")
- [Raw2Insights](http://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging#raw2insights "Raw2Insights")
- [Deployment](http://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging#deployment "Deployment")
- [System Capabilities](http://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging#system-capabilities "System Capabilities")
- [Closing Perspective](http://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging#closing-perspective "Closing Perspective")

## Introduction
Ultrasound is one of the most widely used medical imaging modalities due to its safety, real-time capability, portability, and low cost. For decades, ultrasound images have been formed using a hand-engineered reconstruction pipeline that compresses rich raw sensor measurements into a final image while also making simplifying assumptions about physics, including a constant speed of sound throughout the body.
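The "simplifying assumptions" mentioned above are easiest to see in the delay-and-sum step of conventional beamforming, where every travel-time calculation uses one fixed speed of sound (typically 1540 m/s). A minimal NumPy sketch of that step; the array geometry, sampling rate, and signals are illustrative, not parameters of any particular scanner:

```python
import numpy as np

def delay_and_sum(channel_data, element_x, focus_x, focus_z, fs, c=1540.0):
    """Conventional delay-and-sum beamforming for a single focal point.

    channel_data: (n_elements, n_samples) raw RF traces
    element_x:    (n_elements,) lateral element positions in meters
    c:            assumed constant speed of sound (m/s) -- the
                  simplification the text refers to
    """
    n_elements, n_samples = channel_data.shape
    # Two-way travel time: transmit down to the focus, echo back to each element
    tx_time = focus_z / c
    rx_time = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2) / c
    sample_idx = np.round((tx_time + rx_time) * fs).astype(int)
    valid = sample_idx < n_samples
    # Coherently sum the delayed samples across the aperture -> one pixel value
    return channel_data[np.arange(n_elements)[valid], sample_idx[valid]].sum()

# Toy example: 64-element, 2 cm aperture sampled at 40 MHz
rng = np.random.default_rng(0)
data = rng.standard_normal((64, 4096))
elems = np.linspace(-0.01, 0.01, 64)
pixel = delay_and_sum(data, elems, focus_x=0.0, focus_z=0.02, fs=40e6)
```

If the true speed of sound in the tissue differs from `c`, every `sample_idx` is slightly wrong and the sum is no longer coherent, which is exactly the defocusing that adaptive sound-speed estimation aims to correct.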
In the era of AI and foundation models, a natural question emerges: can we move beyond the traditional beamforming pipeline, learn directly from raw ultrasound sensor data, and make use of information that is normally discarded during reconstruction? And if so, what new capabilities does that unlock?
NVIDIA and researchers from Siemens Healthineers teamed up to find answers to these questions. The result of this work is a reconstruction model we are releasing called NV-Raw2Insights-US.
## Raw2Insights

At its core, ultrasound is not an image—it’s sound. What clinicians ultimately see on the screen is a reconstructed picture built from millions of tiny echoes returning from the body. But in that reconstruction process, much of the original signal—the richness of how sound actually moved through tissue—is simplified or lost.
Our approach starts earlier. Instead of working from finished images, NV-Raw2Insights-US learns directly from the raw signals captured by the ultrasound probe—the closest representation of how sound truly interacts with the body. This allows the model to “listen” more carefully and understand how each patient uniquely shapes those sound waves. Our vision is to enable end-to-end AI for ultrasound imaging, and this is the first step towards that vision. We call this class of models Raw2Insights.

In this first Raw2Insights application, we estimate the speed of sound for adaptive image focusing. The result is a system that can generate a personalized map of sound speed for each patient—and use it to correct the image in real time. What once required complex, time-consuming computation is now performed in a single AI pass. This is the shift from _raw ultrasound channel data_ to _actionable insight_: an AI system that doesn’t just process ultrasound images, but actively understands and adapts to the physics of each individual patient.
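The error that personalized sound-speed correction removes can be quantified with the focusing-delay formula: if the scanner assumes 1540 m/s but the tissue's true average speed is, say, 1480 m/s, each element's receive delay is off by hundreds of nanoseconds, which defocuses the image. A hypothetical sketch with illustrative numbers (the actual model's output format is not specified here):

```python
import numpy as np

def focusing_delay(element_x, focus_x, focus_z, c):
    """Receive delay (seconds) from a focal point back to each array
    element for a given speed of sound c (m/s)."""
    return np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2) / c

elems = np.linspace(-0.01, 0.01, 64)                  # 64-element aperture (m)
nominal = focusing_delay(elems, 0.0, 0.03, 1540.0)    # textbook assumption
patient = focusing_delay(elems, 0.0, 0.03, 1480.0)    # e.g. a per-patient estimate

# Per-element delay error when the assumed speed is wrong; errors of this
# size across the aperture are what blur the focus.
error_ns = (patient - nominal) * 1e9
print(f"max delay error: {error_ns.max():.0f} ns")
```

Replacing the constant 1540 m/s with the model's patient-specific estimate drives this error toward zero, which is what "adaptive image focusing" means in practice.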
## Deployment
Raw ultrasound channel data is typically not easily accessible on clinical-grade ultrasound scanners because of its high bandwidth. Holoscan Sensor Bridge (HSB) is open-source FPGA IP developed by NVIDIA that enables high-bandwidth, low-latency data transfer to the GPU via RDMA over Converged Ethernet (RoCE). An Altera Agilex-7 FPGA development kit paired with NVIDIA Holoscan Sensor Bridge enables raw ultrasound channel data streaming from the DisplayPort outputs of an ACUSON Sequoia ultrasound scanner. We call this technology Data over DisplayPort. HSB then packetizes the data and transmits it over Ethernet to NVIDIA IGX for data collection and AI inference. This demonstrates how modern high-performance computing capacity can be integrated with existing scanner architectures using high-bandwidth DisplayPort outputs.
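To see why raw channel data demands a high-bandwidth path like Data over DisplayPort, a rough back-of-the-envelope data-rate estimate helps; all parameters below are assumptions for illustration, not published Sequoia specifications:

```python
# Illustrative raw channel-data rate; every parameter here is an
# assumption, not an ACUSON Sequoia specification.
n_channels = 128        # simultaneous receive channels
fs_hz = 40e6            # ADC sampling rate per channel
bits_per_sample = 16    # ADC resolution

rate_gbps = n_channels * fs_hz * bits_per_sample / 1e9
print(f"~{rate_gbps:.0f} Gbit/s of raw channel data")
```

Even under these modest assumptions the stream is on the order of tens of gigabits per second—far beyond typical device interfaces, which is why the data is normally discarded after on-scanner beamforming rather than exported.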
We deploy NV-Raw2Insights-US using **NVIDIA Holoscan**, an edge AI sensor processing platform designed for high-performance, real-time workloads on systems such as **NVIDIA IGX Thor** and **NVIDIA DGX Spark**.
Once the data is in GPU memory, NV-Raw2Insights-US runs accelerated inference on a **Blackwell-class GPU**, producing a patient-specific sound-speed estimate. This estimate is streamed back to the ultrasound scanner, enabling improved focus in the live imaging stream.
## System Capabilities
This demonstration architecture provides flexibility across both development and deployment:
- **Software-Only Integration:** NVIDIA acceleration of existing medical devices is possible with software-only modifications using Data over DisplayPort.
- **Software-Defined Ultrasound:** This software-defined approach enables continuous improvement through software updates.
- **Modular Expansion:** With raw ultrasound channel data already in GPU memory, new AI models can be integrated seamlessly.
## Closing Perspective
By shifting ultrasound intelligence from traditional algorithms to an AI-driven Raw2Insights pipeline, we unlock a scalable path to AI-native imaging. Learning directly from raw ultrasound channel data rather than reconstructed images, NV-Raw2Insights-US reduces errors introduced by traditional assumptions and effectively adapts imaging for each patient.
This architecture not only improves image clarity today, but also establishes a modular foundation for the next generation of AI-powered diagnostic systems. You can get started developing on top of NV-Raw2Insights-US here (GitHub / Model Weights / Dataset).
#### References
1. "Ultrasound Autofocusing: Common Midpoint Phase Error Optimization via Differentiable Beamforming," _IEEE Transactions on Medical Imaging_, Vol. 45, Issue 2, Feb. 2026. https://ieeexplore.ieee.org/document/11154013
2. "Investigating Pulse-Echo Sound Speed Estimation in Breast Ultrasound with Deep Learning," _arXiv:2302.03064_, 2023. https://arxiv.org/abs/2302.03064
3. NVIDIA Holoscan SDK Documentation, https://developer.nvidia.com/holoscan-sdk
#### Acknowledgement
This project was conducted in close collaboration with Siemens Healthineers; we are grateful for their support, including the direct collaboration of Ismayil Guracar and Rickard Loftman of the AI & Advanced Platforms group.
This technology is under investigational development and not cleared for or available for sale in the U.S. or other countries. Its future availability cannot be guaranteed.