GRCVS: A GPU-Based, Real-Time Cardiac
Visualization System
Shui Yu, Xiaoqing Liang, Kuanquan Wang, Yongfeng Yuan
School of Computer Science and Technology
Harbin Institute of Technology
Harbin, China
yushi@hit.edu.cn, lxq.first@163.com, wangkq@hit.edu.cn, yongfeng.yuan@hit.edu.cn
Abstract—Visualization is very useful for the study of the heart, but displaying the heart in detail is still challenging because of its complex structure and the expensive computation involved. Very few systems have been developed specifically for this task. In this paper, we design and implement a GPU-based, real-time cardiac visualization system (GRCVS). The illumination model and the context-preserving model are integrated into the system, so that it generates high-quality images while preserving important boundary information. The rendering process is programmed with CUDA, which allows the final image to be viewed in real time. The system also provides strong interactivity: it supplies clear and concise interfaces for setting the parameters that control the image appearance, and the user can observe the rendered images quickly and conveniently through simple operations. The system has proved valuable for the study of the heart in the biomedical field.
Keywords—visualization system; illumination model; context-preserving model; cardiac visualization; CUDA
I. INTRODUCTION
Heart disease is one of the most serious diseases threatening human life, and many studies have been carried out to reveal its mechanisms. Visualization, which provides visual and intuitive results, plays a crucial role in these studies. However, observing detailed structural information in real time is still challenging with existing cardiac visualization systems. Because of the heart's complex layered structures and the expensive computational cost, very few visual platforms have been developed to address these difficulties.
Vassilios developed a system named Virtual Heart for clinical skills training in interventional cardiology and electrophysiology [1], but this system rendered the cardiac structure with surface rendering and cannot display the spatial relationships between different tissues. Zhang proposed a GPU-based heart illustration platform [2] and later introduced another system named G-heart [3], which helps to analyze 3D cardiac medical data and to assess electrophysiological simulation data. Kharche proposed a high-performance computing and visualization application for a 3D virtual human atrium [4], but its hardware requirements are relatively high. Wang et al. presented a system based on a simplified LH histogram transfer function design method to visualize multi-boundary and electrophysiology simulation data interactively [5], but it was CPU-based and not real-time. Zhang et al. provided a visualization system to display dynamic 4D real-time multidetector computed tomography (MDCT) images [6] and later extended the system to 4D cardiac MRI and ultrasound images [7]. Khalifa et al. presented a CAD system for analyzing cardiac first-pass MR images [8]. Nemanja et al. implemented a system to process and visualize ECG signals on a smartphone [9], but the emphasis of Khalifa's and Nemanja's work is on functional rather than anatomical data. Besides these systems, other work has been built on VTK (the Visualization Toolkit) because of its rich set of plugins [10][11], but such systems are very complex and not open to users, and optimizing the existing plugins is very difficult.
In this paper, we design and implement an interactive, real-time visual platform for cardiac visualization. The platform realizes the ray-casting algorithm on the GPU and applies the Blinn-Phong shading model to highlight regions of interest. It also supplies interactive ways to construct the color and lighting transfer functions. Furthermore, an improved context-preserving model is implemented to exhibit internal structure information while retaining the external contour, and to generate cutting-plane display effects.
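To make the combination of GPU ray casting and Blinn-Phong shading concrete, the following CUDA sketch marches one ray per pixel through the volume texture, shades each sample with a Blinn-Phong term computed from the local gradient, and composites the samples front to back. The kernel signature, the texture layout, the shading coefficients, and the simplified axis-aligned ray setup are illustrative assumptions, not the exact implementation used in GRCVS.

// Minimal CUDA sketch of GPU ray casting with Blinn-Phong shading (illustrative).
#include <cuda_runtime.h>
#include "helper_math.h"   // float3/float4 operators from the CUDA samples (assumed available)

#define MAX_STEPS 512

__device__ float3 gradientAt(cudaTextureObject_t vol, float3 p, float3 d)
{
    // Central differences approximate the normal used for shading.
    return make_float3(
        tex3D<float>(vol, p.x + d.x, p.y, p.z) - tex3D<float>(vol, p.x - d.x, p.y, p.z),
        tex3D<float>(vol, p.x, p.y + d.y, p.z) - tex3D<float>(vol, p.x, p.y - d.y, p.z),
        tex3D<float>(vol, p.x, p.y, p.z + d.z) - tex3D<float>(vol, p.x, p.y, p.z - d.z));
}

__device__ float blinnPhong(float3 n, float3 l, float3 v)
{
    // Ambient + diffuse + specular, using the Blinn-Phong half vector.
    float3 h = normalize(l + v);
    float diff = fmaxf(dot(n, l), 0.0f);
    float spec = powf(fmaxf(dot(n, h), 0.0f), 32.0f);   // shininess assumed
    return 0.2f + 0.6f * diff + 0.4f * spec;             // coefficients assumed
}

__global__ void rayCastKernel(uchar4* image, int w, int h,
                              cudaTextureObject_t vol, cudaTextureObject_t tf,
                              float3 lightDir, float3 voxelSize)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // One ray per pixel. For brevity the ray marches along +z in texture space;
    // the real system would derive the ray from the interactive camera.
    float3 pos  = make_float3((x + 0.5f) / w, (y + 0.5f) / h, 0.0f);
    float3 step = make_float3(0.0f, 0.0f, 1.0f / MAX_STEPS);
    float3 view = make_float3(0.0f, 0.0f, -1.0f);
    float3 lit  = normalize(lightDir);
    float4 dst  = make_float4(0.0f);

    for (int i = 0; i < MAX_STEPS && dst.w < 0.95f; ++i, pos += step) {
        float  s   = tex3D<float>(vol, pos.x, pos.y, pos.z);  // sample the scalar field
        float4 src = tex1D<float4>(tf, s);                    // color/opacity transfer function
        float3 n   = normalize(gradientAt(vol, pos, voxelSize));
        float  sh  = blinnPhong(n, lit, view);
        src.x *= sh; src.y *= sh; src.z *= sh;

        // Front-to-back alpha compositing with early ray termination (dst.w test above).
        dst.x += (1.0f - dst.w) * src.w * src.x;
        dst.y += (1.0f - dst.w) * src.w * src.y;
        dst.z += (1.0f - dst.w) * src.w * src.z;
        dst.w += (1.0f - dst.w) * src.w;
    }
    image[y * w + x] = make_uchar4((unsigned char)(__saturatef(dst.x) * 255.0f),
                                   (unsigned char)(__saturatef(dst.y) * 255.0f),
                                   (unsigned char)(__saturatef(dst.z) * 255.0f), 255);
}

In the full system the ray setup would follow the interactive camera, and the sample opacity would additionally be modulated by the context-preserving model discussed later.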
The rest of the paper is organized as follows. Section II describes the overall workflow of our system; Section III discusses the Blinn-Phong model and the lighting transfer function; after that, we introduce our improved context-preserving model and the workflow in CUDA (Compute Unified Device Architecture). The results and discussion are presented in Section IV. The conclusions of this paper and some future work are given in Section V.
II. ARCHITECTURE OVERVIEW
The overall workflow of our system is shown in Fig. 1. First, all 2D slices of the original data to be visualized are loaded into the system. After preprocessing, they are assembled into 3D volume data, which can be further used to produce additional 2D slices according to a user-specified view direction. Second, the 3D volume data is rendered directly on the GPU with the ray-casting method. Finally, interactive means for configuring the transfer functions are provided during the ray-casting process.
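As a rough host-side counterpart to the workflow in Fig. 1, the sketch below stacks the preprocessed slices into a 3D CUDA array wrapped in a texture object and relaunches the ray-casting kernel from the previous sketch whenever the view or a transfer function changes. The function names, block sizes, and data layout are assumptions made for illustration, not the system's actual code.

// Host-side sketch of the pipeline: upload the volume once, then re-render on demand.
#include <cuda_runtime.h>
#include <vector>

// Ray-casting kernel from the earlier sketch (signature assumed).
__global__ void rayCastKernel(uchar4* image, int w, int h,
                              cudaTextureObject_t vol, cudaTextureObject_t tf,
                              float3 lightDir, float3 voxelSize);

cudaTextureObject_t uploadVolume(const std::vector<float>& voxels, int nx, int ny, int nz)
{
    // 1) Copy the stacked 2D slices (nx*ny*nz scalars) into a 3D CUDA array.
    cudaArray_t arr;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaExtent extent = make_cudaExtent(nx, ny, nz);
    cudaMalloc3DArray(&arr, &desc, extent);

    cudaMemcpy3DParms copy = {};
    copy.srcPtr   = make_cudaPitchedPtr((void*)voxels.data(), nx * sizeof(float), nx, ny);
    copy.dstArray = arr;
    copy.extent   = extent;
    copy.kind     = cudaMemcpyHostToDevice;
    cudaMemcpy3D(&copy);

    // 2) Wrap the array in a texture object with trilinear filtering and
    //    normalized coordinates, as sampled by the ray-casting kernel.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;

    cudaTextureDesc tex = {};
    tex.addressMode[0] = tex.addressMode[1] = tex.addressMode[2] = cudaAddressModeClamp;
    tex.filterMode       = cudaFilterModeLinear;
    tex.readMode         = cudaReadModeElementType;
    tex.normalizedCoords = 1;

    cudaTextureObject_t volTex = 0;
    cudaCreateTextureObject(&volTex, &res, &tex, nullptr);
    return volTex;
}

// Called whenever the view direction or a transfer function changes.
void renderFrame(uchar4* d_image, int w, int h,
                 cudaTextureObject_t volTex, cudaTextureObject_t tfTex,
                 float3 lightDir, float3 voxelSize)
{
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    rayCastKernel<<<grid, block>>>(d_image, w, h, volTex, tfTex, lightDir, voxelSize);
    cudaDeviceSynchronize();
}

Because the volume stays resident in GPU memory and interactive edits touch only the small 1D transfer-function texture, changes to the transfer functions can be reflected in the rendered image immediately, which is what keeps the interaction real-time.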
Some techniques of computer graphics and visualization
are integrated into our system. As shown in Fig. 1, the context-
preserving model and the illumination model are presented as
This work was supported by the National Natural Science Foundation of China (No. 61173086).