SpatialVisVR: An Immersive, Multiplexed Medical Image Viewer With Contextual Similar-Patient Search

Jai Prakash Veerla²,⁵, Partha Sai Guttikonda²,⁵, Amir Hajighasemi²,⁵, Jillur Rahman Saurav²,⁵, Aarti Darji²,⁵,
Cody T Reynolds²,⁵, Mohamed Mohamed²,⁵, Mohammad S Nasr²,⁵, Helen H Shang²,⁴, Jacob M Luber²,⁵
²Department of Computer Science, University of Texas at Arlington, USA
⁵Multi-Interprofessional Center for Health Informatics, University of Texas at Arlington, USA
⁴Ronald Reagan UCLA Medical Center, USA
Email: jacob.luber@uta.edu
Abstract

In contemporary pathology, multiplexed immunofluorescence (mIF) and multiplex immunohistochemistry (mIHC) present both significant opportunities and challenges. These methodologies shed light on intricate tumor microenvironment interactions, emphasizing the need for intuitive visualization tools to analyze vast biological datasets effectively. As electronic health records (EHR) proliferate and physicians face increasing information overload, the integration of advanced technologies becomes imperative. SpatialVisVR is a versatile VR platform tailored for comparing medical images, adaptable to embedded hardware to preserve data privacy. Clinicians can capture pathology slides in real time via mobile devices, leveraging SpatialVisVR's deep learning pipeline to match and display similar mIF images. The interface supports the manipulation of up to 100 multiplexed protein channels, thereby assisting immuno-oncology decision-making. Ultimately, SpatialVisVR aims to streamline diagnostic processes, advocating for a comprehensive and efficient approach to immuno-oncology research and treatment.

Index Terms:
Machine Learning, Deep Learning, Human Computer Interaction, Computational Biology, Multiplexed Imaging, Pathology, Virtual Reality, Multi-omics, Proteomics, Immuno-oncology, Visualization, Large biological datasets, H&E, CODEX, Bioinformatics
Figure 1: An overview of the SpatialVisVR pipeline: 1) A medical professional uses the mobile app to take a photo of a pathology image (this could be on a screen, in a textbook, etc.). 2) The edge-detection algorithm captures relevant patches from the slide and streams them to the similar-patient search. 3) The similar-patient search, which operates by compressing pathology image patches into latent spaces with a variational autoencoder (VAE) and then performing dynamic time warping on these latent spaces, retrieves clinically useful multiplexed proteomics images that are similar to the query image. Both machine learning steps are modular, and inference can occur on an ARM Cortex or NVIDIA Jetson Nano platform within hospitals. 4) The Unity app in VR streams the original slide the pathologist captured, an interactive viewer where they can select similar multiplexed slides, a comparison viewer to compare between patients, and an interactive viewer to add and subtract multiplexed protein markers from the image.

I Introduction

Multiplexed immunofluorescence (mIF) and multiplex immunohistochemistry (mIHC) are revolutionizing the realm of pathology, especially in the field of immuno-oncology, by providing spatial context for proteomic measurements in whole slide images [1]. They offer unparalleled insights into the tumor microenvironment and various immune populations [2]. Notably, our SpatialVisVR platform focuses on visualizing CODEX images, which represent state-of-the-art multiplexed tissue imaging technology [3]. However, these advanced mIF and mIHC images, showcasing approximately 60 markers, present unique visualization challenges [4]. With approximately 20,000 stainable proteins and each laboratory opting for different markers, standardized visualization becomes challenging. Moreover, most multiplex viewers available today are restricted to desktop or web platforms and typically involve subscription fees [5, 6, 7].

Meanwhile, machine learning approaches play a pivotal role in various areas, ranging from theoretical work on fairness defects [8] to practical applications [9, 10, 11]. Additionally, virtual reality (VR) is emerging as a transformative tool for data visualization. In the biomedical arena, applications such as visualizing nucleotide sequences and protein structures through BioVR are gaining traction [12].

Incorporating Virtual Reality (VR) technology like SpatialVisVR into contemporary pathology can greatly improve the diagnostic abilities of pathologists. This draws on the concept of embodied cognition, which emphasizes the need to physically interact with data for cognitive growth and comprehension. VR allows pathologists to easily traverse and edit complex mIF and mIHC datasets. This interactive experience surpasses conventional viewing methods, enabling clinicians to spatially and dynamically investigate tumor microenvironments using up to 100 multiplexed protein channels, improving both their comprehension of complex biological relationships and their decision-making in immuno-oncology. Through SpatialVisVR, pathologists can not only visually perceive but also actively engage with the data, producing a deeper cognitive involvement with the information that is crucial for precise diagnosis and efficient treatment planning. This method, backed by cognitive research, highlights that cognition is not solely a mental process but also a physical one in which action improves comprehension [13], making VR a valuable tool for pathologists addressing the intricacies of contemporary diagnostic procedures.

To facilitate multimodal search, SpatialVisVR utilizes Multimodal Pathology Image Retrieval (MPIR) [14], allowing users to efficiently compare medical images and promptly retrieve and evaluate related mIF images from a large database. This feature offers pathologists a comprehensive perspective on similar patients, assisting in the detection of patterns, abnormalities, and significant biomarkers that are crucial for diagnosis and treatment planning in immuno-oncology. Pathologists can use this contextual similarity search to optimize their decision-making, drawing knowledge from a wider range of cases and enhancing diagnostic precision. This method also simplifies the research process, saving time and resources that would otherwise be spent manually seeking out pertinent cases.

In the context of the increasing use of mIF, the dearth of publicly available mIF data becomes evident, especially when compared to the vast archive of traditional H&E-stained images on which many tasks have been performed [15, 16]. Addressing this disparity, our app, SpatialVisVR, developed in Unity, implements multimodal search using a machine learning approach to visualize mIF images similar to a more standard, inputted H&E slide, providing useful proteomic context for tasks such as selecting cancer immunotherapy treatments. Using a mobile device, a user can capture an H&E slide, which initiates segmentation to differentiate tissue from the background. The tool then cross-references its database to pinpoint analogous tissues, allowing juxtaposition with mIF-detailed tissues of similar cancer categories. As we advance the tool, we are working towards ensuring its compatibility with lightweight platforms, especially the Jetson Nano. Our aspiration is to offer resource-efficient in-house imaging suitable even for clinics with restricted resources, while emphasizing data confidentiality and minimizing transfer risks.

Figure 2: SpatialVisVR interface: (Left) H&E slide visualization; (Center) multi-channel CODEX ome.tiff viewer with navigation tools; (Right) the top five CODEX images analogous to the H&E slide, aiding pathologists in diagnostic workflows.

Driven by these trends, our research synergizes VR and mIF to redefine diagnostic paradigms and fortify immuno-oncology research, with a focus on comprehensive quantitative and spatial image examination. In alignment with our mission, we intend to incorporate enhanced quantitative diagnostic tools and delve deeper into cell segmentation or interactions within the cellular environment for more profound insights.

Key contributions of our endeavor include:

  • Pioneering VR and mIF Fusion with SpatialVisVR: To our knowledge, SpatialVisVR is the first VR app geared towards this specialized task, crafting a sophisticated interface for in-depth multiplexed image exploration.

  • On-the-Go Mobile Imaging with SpatialVisVR: Addressing the need for a bridge between traditional H&E images and mIF data, our software empowers users to effortlessly capture, segment, and cross-reference H&E slides. Weighing only 53 MB, it serves dual roles as both a multiplexed image viewer and an image search engine.

  • Striving Towards an Efficient Pipeline: In our journey towards fully embracing lightweight platforms, we are making strides with the Jetson Nano. Our vision is to optimize in-house imaging for all clinics, particularly those with restricted resources, without compromising patient data privacy.

Following this introduction, Section II discusses related work, Section III details our methodologies, Section IV discusses limitations of the work, Section V concludes the paper, and Section VI discusses future directions for this work.

Figure 3: ResNet-50-based detection with a segmentation head for H&E segmentation from the captured image.

II Current State of the Field

In the burgeoning field of medical research, emerging technologies like Virtual Reality (VR) and Augmented Reality (AR) are gaining prominence due to their potential to revolutionize clinical practice and medical education. SpatialVisVR sits at the intersection of multiplexed imaging, machine learning in pathology, and the immersive capabilities of VR and AR, highlighting the advancements and challenges in these interconnected domains.

Histopathology has traditionally relied on H&E stained imaging for cellular and tissue structure visualization [17]. However, modern techniques like CODEX have driven a paradigm shift, enabling visualization of over 60 protein markers within a single tissue site, thus enhancing cellular-level understanding and facilitating sophisticated data amalgamation in complex fields such as oncology [4, 18, 19].

The research conducted by Veerla et al. [20] highlighted the intricate nature of analyzing extensive biological datasets, leading us to recognize the significance of efficient visualization tools. Their research emphasized the necessity of strong visualization methods to assist in the examination of complex transcriptome and proteome data, which influenced our strategy in creating tools such as SpatialVisVR.

The complexity of navigating through advanced imaging data, particularly whole-slide images (WSIs), which can be up to 50k by 50k pixels in size, presents a unique challenge [21]. Emerging solutions such as Mistic, Viv, and Minerva address this challenge by providing intuitive navigation and efficient data access for multiplexed imaging, utilizing client-side GPU rendering and cloud-supported guided analyses [6, 5].

VR and AR play a pivotal role in modern medical imaging, with applications ranging from virtual surgeries to diagnostic advancements [22, 23]. Innovations like ConfocalVR and ExMicroVR showcase VR’s potential in rendering cellular structures, while AR continues to evolve in therapeutic and educational domains despite challenges in achieving optimal visual fidelity [7, 24, 25].

The integration of traditional histopathology, advanced multiplexed imaging, and immersive tools like VR and AR heralds a new era in pathology imaging. Platforms such as Minerva and Avivator lead the way in rendering multiplexed visuals with unparalleled detail, envisioning a future where VR and AR significantly enhance the user experience in exploring both traditional and multiplexed images [5, 26].

III Methods

Figure 4: Overview of the search engine: 1) a multimodal VAE architecture compresses H&E and mIF images; 2) dynamic time warping integrates the latent spaces of mIF and H&E patches; 3) cosine similarity and centroid-based indexing compare latent spaces across image modalities; 4) ranked-choice voting aggregates patch-level similarity into slide-level similarity.

This section outlines the methodologies we used, including object detection, hardware setups, and Multimodal Pathology Image Retrieval (MPIR) integration, for enhanced pathology data analysis and visualization, linking computational techniques with tangible user interactions.

III-A SpatialVisVR App

III-A1 VR Application:

We utilized VR to visualize CODEX images as intricate 3D channel stacks. On the server side, we implemented Flask to cache user-selected CODEX images in RAM for faster data retrieval. We created APIs that return channel names and the nth channel of a specified CODEX image, enabling dynamic rendering in VR. We also facilitated interactive color selection via APIs: user-selected channels are processed server-side according to color and threshold parameters before being sent to VR for 3D arrangement and visualization, where Unity shaders enhance clarity and color vibrancy.
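
To make this design concrete, below is a minimal Flask sketch of such a channel-serving API; the route names, cache structure, file path, and color/threshold handling are illustrative assumptions, not the exact production interface.

```python
# Illustrative sketch of the server-side channel API (endpoint names,
# cache layout, and parameters are assumptions, not the production code).
import io

import numpy as np
import tifffile
from flask import Flask, jsonify, request, send_file

app = Flask(__name__)
CACHE = {}  # slide_id -> np.ndarray of shape (channels, H, W), kept in RAM


def load_slide(slide_id):
    """Load a CODEX ome.tiff into RAM once and reuse it for later requests."""
    if slide_id not in CACHE:
        CACHE[slide_id] = tifffile.imread(f"/data/codex/{slide_id}.ome.tiff")
    return CACHE[slide_id]


@app.route("/slides/<slide_id>/channels")
def channel_names(slide_id):
    stack = load_slide(slide_id)
    # Real channel names would come from OME metadata; the count stands in here.
    return jsonify({"n_channels": int(stack.shape[0])})


@app.route("/slides/<slide_id>/channel/<int:n>")
def channel_image(slide_id, n):
    stack = load_slide(slide_id)
    threshold = float(request.args.get("threshold", 0.0))
    color = request.args.get("color", "ffffff")  # hex RGB chosen in the VR UI
    chan = stack[n].astype(np.float32)
    chan = (chan - chan.min()) / (chan.max() - chan.min() + 1e-8)  # normalize to [0, 1]
    chan[chan < threshold] = 0.0  # apply the user-selected threshold
    rgb = np.array([int(color[i:i + 2], 16) / 255.0 for i in (0, 2, 4)])
    img = (chan[..., None] * rgb * 255).astype(np.uint8)  # tint the channel
    buf = io.BytesIO()
    tifffile.imwrite(buf, img)
    buf.seek(0)
    return send_file(buf, mimetype="image/tiff")
```

Caching decoded stacks in RAM trades memory for latency, which matters when the headset repeatedly requests different channels of the same slide.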

III-A2 GUI Overview:

SpatialVisVR, designed for pathologists, focuses on providing a seamless user experience that utilizes spatially multiplexed data to improve the diagnostic workflow, as showcased in Fig. 2. The interface comprises three tailored windows, each serving a distinct purpose.

The left window visualizes H&E slides through the edge detection of real-time captures, featuring an ’Update’ button to initiate searches in the third window and a primary viewing area for real-time image processing.

The central window, serving as the primary viewer, facilitates the exploration of CODEX ome.tiff images, allowing users to view up to seven channels simultaneously for understanding cellular interactions. Key UI elements include a sliding carousel for channel selection, a dropdown for accessing stored CODEX images, and a navigation bar for channel management, with each channel color-coded for differentiation.
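
As a hypothetical illustration of what the central viewer displays, the sketch below composites several color-coded channels into a single RGB view; the channel colors and the additive blend are illustrative choices, not the app's exact rendering.

```python
# Hypothetical compositing of color-coded CODEX channels into one RGB view.
import numpy as np


def composite(channels, hex_colors):
    """channels: list of 2D float arrays in [0, 1]; hex_colors: matching hex strings."""
    h, w = channels[0].shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for chan, color in zip(channels, hex_colors):
        rgb = np.array([int(color[i:i + 2], 16) / 255.0 for i in (0, 2, 4)])
        out += chan[..., None] * rgb  # additive blending of tinted channels
    return np.clip(out, 0.0, 1.0)


# e.g., three markers tinted red, green, and blue:
rng = np.random.default_rng(0)
view = composite([rng.random((64, 64)) for _ in range(3)], ["ff0000", "00ff00", "0000ff"])
print(view.shape)  # (64, 64, 3)
```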

The right window displays search results for similar CODEX images to an H&E slide, showing the five most similar CODEX images and aiding in further exploration.

Overall, the multi-window setup in SpatialVisVR enhances pathologists' workflow and diagnostic decision-making by providing diverse data examination, critical insights, and a more efficient diagnostic process. An overview of the interface is depicted in Fig. 2.

III-A3 API Implementation:

For communication between the server-side application and VR, we deployed a server-client model using RESTful APIs [27]. We set up a Flask application on the server end and used Unity to process API call data, enhancing data visualization with shaders. To improve user experience and address VR headset limitations, we shifted resource-intensive tasks to the server. The server, equipped with a 4TB SSD, hosted the CODEX datasets, and each data transfer was limited to 200 MB to maintain VR responsiveness.
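
On the client side, the Unity app issues plain HTTP requests against these endpoints. The following hypothetical Python snippet mirrors the same calls for testing the server outside the headset; the URL, slide ID, and query parameters match the sketch above and are placeholders.

```python
# Hypothetical client-side calls mirroring what the Unity app requests over REST.
import requests

base = "http://server.local:5000"  # placeholder LAN address of the Flask server
meta = requests.get(f"{base}/slides/slide_001/channels", timeout=10).json()
print("channels available:", meta["n_channels"])

# Fetch channel 3 tinted green with a 0.2 intensity threshold.
resp = requests.get(
    f"{base}/slides/slide_001/channel/3",
    params={"color": "00ff00", "threshold": 0.2},
    timeout=30,
)
with open("channel_3.tiff", "wb") as f:
    f.write(resp.content)
```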

III-B ML Backbone

III-B1 Object Detection Implementation

The object detection system utilized a ResNet-50 backbone implemented in PyTorch, wherein the classification layer was replaced with a segmentation head composed of a Conv2D layer, a ReLU layer, and a ConvTranspose layer. This configuration processed the ResNet-50 output through a 3x3 convolution operation and scaled the feature maps to a 2x640x640 output using ConvTranspose2D, thereby distinguishing between pathology-slide and non-pathology-slide pixels. The architecture is depicted in Fig. 3.
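
A minimal PyTorch sketch of this architecture follows; the intermediate channel width and the transposed-convolution kernel and stride are assumptions chosen to reproduce the stated 2x640x640 output, not values reported above.

```python
# Sketch of the ResNet-50 detector with a segmentation head, as described above;
# kernel sizes, strides, and channel widths are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class SlideSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the final conv stage; drop avgpool and fc.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(2048, 256, kernel_size=3, padding=1),  # 3x3 conv on backbone features
            nn.ReLU(inplace=True),
            # Upsample the 20x20 feature maps back to 640x640
            # (2 classes: pathology-slide vs non-slide pixels).
            nn.ConvTranspose2d(256, 2, kernel_size=32, stride=32),
        )

    def forward(self, x):
        return self.head(self.backbone(x))  # (B, 2, 640, 640) logits


model = SlideSegmenter()
logits = model(torch.randn(1, 3, 640, 640))
print(logits.shape)  # torch.Size([1, 2, 640, 640])
```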

During the training phase, image preprocessing included Resize, RandomHorizontalFlip, RandomInvert, RandomRotation, GaussianBlur, and ColorJitter transforms to augment model robustness. For prediction, preprocessing steps involved resizing the images to 640x640, applying padding, and subsequent resizing to rectify bounding area inconsistencies.
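
Assembled with torchvision, the training-time pipeline might look as follows; the specific magnitudes (rotation degrees, blur kernel size, jitter strengths, flip probability) are assumptions, since only the transform types are stated above.

```python
# The listed training augmentations assembled with torchvision;
# all magnitudes below are illustrative assumptions.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((640, 640)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomInvert(p=0.2),
    transforms.RandomRotation(degrees=15),
    transforms.GaussianBlur(kernel_size=5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])
```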

Performance was influenced by various aspects of photo setup such as lighting conditions, angle of capture, and focus. Consistent lighting conditions resulted in superior outcomes, while shadows or inconsistent lighting led to suboptimal results. The system performed optimally at 90-degree angles, with acceptable outcomes at shallow angles. However, steep angles or out-of-focus photographs resulted in inconsistent detection and bounding, impacting the overall output quality.

III-B2 MPIR Implementation

We utilized Multimodal Pathology Image Retrieval (MPIR) to retrieve CODEX images similar to a given H&E image slide [14]. The process involved encoding images into lower-dimensional latent vectors using a variational autoencoder (VAE). This VAE, comprising an encoder and a decoder, mapped input data to a probabilistic latent space and generated data samples from these latent variables, aiding in dimensionality reduction and data reconstruction [28, 29].
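
The following is a compact sketch of such a patch-level VAE: an encoder producing the mean and log-variance of the latent distribution, the reparameterization trick, and a decoder, trained with the standard reconstruction-plus-KL loss. The patch size, layer widths, and latent dimension are illustrative assumptions.

```python
# Compact VAE sketch for image patches; sizes are illustrative assumptions.
import torch
import torch.nn as nn


class PatchVAE(nn.Module):
    def __init__(self, in_channels=3, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),          # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (B, C, 64, 64) image patches
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(self.fc_dec(z)), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kld
```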

Initially, we trained the VAE on H&E images, followed by separate training on seven specific channels of CODEX images selected for their relevance in indicating tumor and immune cell interactions. Preprocessing included selecting useful patches from high-resolution images for training. During testing, we obtained latent vectors from CODEX and H&E image patches, using the dynamic time warping algorithm [30] for batch-effect correction. We then employed cosine similarity and a search algorithm to identify the top five most similar CODEX patches for each H&E image patch. Results across different channels were consolidated using a ranked-choice voting algorithm to retrieve the most similar CODEX slides for the H&E input. This algorithm was also applied to camera-captured H&E images after edge-detection processing.
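
A sketch of the retrieval and aggregation step follows: cosine similarity ranks CODEX patches for each H&E query patch, and a Borda-style ranked-choice vote rolls patch-level hits up to slide level. The DTW batch-correction step is omitted here, and the voting weights are assumptions.

```python
# Cosine-similarity retrieval plus a Borda-style ranked-choice vote;
# the weighting scheme is an illustrative assumption.
from collections import defaultdict

import numpy as np


def cosine_sim(a, b):
    """Cosine similarity between rows of a (queries) and rows of b (database)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def retrieve_slides(he_latents, codex_latents, codex_slide_ids, k=5):
    scores = cosine_sim(he_latents, codex_latents)  # (n_he, n_codex)
    votes = defaultdict(float)
    for row in scores:
        top = np.argsort(row)[::-1][:k]              # top-k patches per query patch
        for rank, idx in enumerate(top):
            votes[codex_slide_ids[idx]] += k - rank  # 1st place earns k points
    return sorted(votes, key=votes.get, reverse=True)[:k]


# Toy usage with random latents spread across three CODEX slides.
rng = np.random.default_rng(0)
he = rng.normal(size=(10, 128))
codex = rng.normal(size=(30, 128))
slide_ids = [f"codex_{i % 3}" for i in range(30)]
print(retrieve_slides(he, codex, slide_ids))
```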

To test the MPIR model, we used two H&E image inputs: the actual image and the output of the edge-detection algorithm. For the actual image, we obtained the top five similar CODEX slides, and a different set was identified for the edge-detected image, with each slide assigned a voting number representing its similarity rank.

Figure 5: Hardware Overview: Smartphones for H&E capture, Jetson Nano and MacBook for detection, DGX Server, Precision tower for MPIR training, and Meta Quest Pro for visualization.

III-C Hardware Setup

We utilized various hardware components for this project. iOS and Android smartphones were employed to capture H&E slide images in real-time, functioning as IP Webcams over a Local Area Network (LAN). The OpenCV Python library facilitated video stream interfacing due to its GStreamer integration and object detection capabilities.
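
As a rough sketch, interfacing with such a phone stream in OpenCV can be as simple as the following; the stream URL is a placeholder for the phone's LAN address, and the call into the slide-detection model is elided.

```python
# Minimal sketch of reading a phone's IP-webcam stream with OpenCV.
import cv2

STREAM_URL = "http://192.168.1.42:8080/video"  # placeholder IP-webcam endpoint

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Each frame would be passed to the slide-detection model here.
    cv2.imshow("H&E capture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```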

For testing the object detection model, we utilized a Jetson Nano 2GB Developer Kit to ensure real-time performance. Deployment tasks were carried out using an Apple 13-inch MacBook Air with an M1 chip and 16GB RAM. Post-object detection, a DGX Server equipped with eight NVIDIA A100 GPUs was used for testing and training the MPIR algorithm. The execution phase was then shifted to the MacBook Air for practical purposes.

During each inference run with our model, the top 5 candidates identified by the MPIR’s ranked choice voting algorithm were transmitted to a Meta Quest Pro headset via a Unity Development Environment on a Dell Precision 5820 Tower Workstation for final visualization. After viewing the visualization, the physicians on our team offered valuable feedback on the user interface design. Looking ahead, our future work aims to fully deploy the pipeline exclusively to Jetson Nano, ARM Cortex, and any compatible VR headset. The hardware setup is depicted in Fig. 5, providing an overview of the different components utilized in this research.

III-D Data Used

We trained the VAE model using H&E and mIF images from CODEX technology, which included colorectal cancer samples from 35 patients with over 50 protein markers. These images were captured using a Keyence BZ-X710 microscope [31]. Additionally, we visualized 109 stitched CODEX images from NIH HuBMAP, representing various organs, in VR [32]. For object detection, we utilized a dataset comprising around 500 H&E slides from three universities and 195 H&E slides from The Cancer Genome Atlas for breast cancer. These slides were digitized using Aperio or Ventana scanners [33].

IV Discussion

SpatialVisVR represents a significant advancement in the visualization of multiplexed immunohistochemistry slides using virtual reality. However, the transfer of large multiplexed imaging datasets through APIs imposes a considerable burden on the machines processing these images, potentially affecting performance and usability. Additionally, while VR offers an immersive experience, prolonged use can lead to discomfort, although this is mitigated by user-centered design tailored to pathologists' needs. Moreover, deploying such technology in resource-constrained settings poses challenges due to high computational demands. Furthermore, the Multimodal Pathology Image Retrieval (MPIR) engine [14], a core feature of SpatialVisVR, requires high-quality H&E images as input, since lower-resolution images may not yield good search results. As the first system of its kind to search and query multiplexed proteomics imaging such as CODEX from H&E-stained images, its similarity-search results may not always align with clinical expectations, owing to the limited data available for training. Future iterations of the platform will need to focus on optimizing data handling, improving search accuracy, and enhancing user comfort to extend its applicability and effectiveness in diverse clinical environments.

V Conclusion

Traditional pathology primarily relies on immunohistochemistry staining for diagnostic or prognostic evaluations. Multiplex immunohistochemistry staining has enriched insights into spatial cellular interactions, especially in oncology, elucidating tumor microenvironments (TME) [4, 34, 31]. However, the 2D nature of multiplexed images constrains spatial exploration to the x and y axes. Recent attempts at 3D reconstruction of multiplexed images will advance spatial understanding of disease processes, yet 2D interfaces for visualization remain a challenge [35].

Our VR platform addresses this by enabling interactive visualization of multiplexed immunohistochemistry stains that are contextual to the environment of existing H&E slides, facilitating intuitive exploration for researchers and clinicians. This capability enhances the analysis of spatial features in medical imaging and provides a more interactive environment for navigating whole slide images in the context of clinically richer multiplexed immunofluorescent ones, aiding in better diagnostic and prognostic evaluations.

The incorporation of Virtual Reality (VR) technology into digital pathology enables pathologists not only to visually perceive the data but also to actively engage with it, leading to better cognitive involvement with the information. This ability to involve cognition through action while analyzing data cannot be achieved with the traditional way of interacting via 2D displays. Furthermore, our platform facilitates slide comparison and retrieval, directing pathologists towards potential diagnoses or treatment responses based on observations from similar patients by other clinicians.

VI Future Directions

We aim to enhance the resolution of multiplexed slides in VR to display distinct cellular interactions. This advancement could enable manual manipulation of cellular interactions and prediction of downstream effects within VR.

Integration of features for swift extraction of diagnostic or prognostic information is also envisioned. For instance, facilitating quick calculations of HER2 expression in 3D to ascertain candidacy for HER2-directed therapies. We plan to broaden slide comparison and retrieval capabilities by expanding our spatial atlas of multiplex slides with contributions from end-users.

Transitioning towards exploring entire tumors in 3D, beyond individual multiplexed slides, will allow easy visualization of protein expression heterogeneity like HER2 or PDL1 in VR. This could uncover missed treatment opportunities from single biopsy slides, potentially impacting treatment avenues for millions globally.

VII Demo Video, Code and Data Availability

A demonstration video summarizing the method's potential benefits for pathologists is available in the GitHub repository. A version of our code sufficient to recreate our entire pipeline is available on GitHub at: https://github.com/jaiprakash1824/SpatialVisVR. All the multiplexed imaging datasets used are available from the NIH HuBMAP consortium at: https://hubmapconsortium.org.

Acknowledgment

This work was supported by a University of Texas System Rising STARs Award (J.M.L.) and the CPRIT First Time Faculty Award (J.M.L.).

References

  • [1] Paul W Harms, Timothy L Frankel, Myrto Moutafi, Arvind Rao, David L Rimm, Janis M Taube, Dafydd Thomas, May P Chan, and Liron Pantanowitz, “Multiplex immunohistochemistry and immunofluorescence: a practical update for pathologists,” Modern Pathology, vol. 36, no. 7, pp. 100197, 2023.
  • [2] Darci Phillips, Christian M Schürch, Michael S Khodadoust, Youn H Kim, Garry P Nolan, and Sizun Jiang, “Highly multiplexed phenotyping of immunoregulatory proteins in the tumor microenvironment by codex tissue imaging,” Frontiers in Immunology, vol. 12, pp. 687673, 2021.
  • [3] Yury Goltsev, Nikolay Samusik, Julia Kennedy-Darling, Salil Bhate, Matthew Hale, Gustavo Vazquez, Sarah Black, and Garry P Nolan, “Deep profiling of mouse splenic architecture with codex multiplexed imaging,” Cell, vol. 174, no. 4, pp. 968–981, 2018.
  • [4] Sarah Black, Darci Phillips, John W Hickey, Julia Kennedy-Darling, Vishal G Venkataraaman, Nikolay Samusik, Yury Goltsev, Christian M Schürch, and Garry P Nolan, “Codex multiplexed tissue imaging with dna-conjugated antibodies,” Nature protocols, vol. 16, no. 8, pp. 3802–3835, 2021.
  • [5] John Hoffer, Rumana Rashid, Jeremy L Muhlich, Yu-An Chen, Douglas Peter William Russell, Juha Ruokonen, Robert Krueger, Hanspeter Pfister, Sandro Santagata, and Peter K Sorger, “Minerva: a light-weight, narrative image browser for multiplexed tissue images,” Journal of open source software, vol. 5, no. 54, 2020.
  • [6] Trevor Manz, Ilan Gold, Nathan Heath Patterson, Chuck McCallum, Mark S Keller, Bruce W Herr, Katy Börner, Jeffrey M Spraggins, and Nils Gehlenborg, “Viv: multiscale visualization of high-resolution multiplexed bioimaging data on the web,” Nature Methods, vol. 19, no. 5, pp. 515–516, 2022.
  • [7] Caroline Stefani, Adam Lacy-Hulbert, and Thomas Skillman, “Confocalvr: immersive visualization for confocal microscopy,” Journal of molecular biology, vol. 430, no. 21, pp. 4028–4035, 2018.
  • [8] Verya Monjezi, Ashutosh Trivedi, Gang Tan, and Saeid Tizpaz-Niari, “Information-theoretic testing and debugging of fairness defects in deep neural networks,” in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023, pp. 1571–1582.
  • [9] Sara Shomal Zadeh, Meisam Khorshidi, Farhad Kooban, et al., “Concrete surface crack detection with convolutional-based deep learning models,” arXiv preprint arXiv:2401.07124, 2024.
  • [10] M Moein Esfahani and Hossein Sadati, “Application of transfer learning in optimized filter-bank regularized csp to classification of eeg signals with small dataset,” in 2022 30th International Conference on Electrical Engineering (ICEE). IEEE, 2022, pp. 963–967.
  • [11] Soroush Zare, Mohammad Reza Hairi Yazdi, Mehdi Tale Masouleh, Dan Zhang, Sahand Ajami, and Amirhossein Afkhami Ardekani, “Experimental study on the control of a suspended cable-driven parallel robot for object tracking purpose,” Robotica, vol. 40, no. 11, pp. 3863–3877, 2022.
  • [12] Jimmy F Zhang, Alex R Paciorkowski, Paul A Craig, and Feng Cui, “Biovr: a platform for virtual reality assisted biological data integration and visualization,” BMC bioinformatics, vol. 20, pp. 1–10, 2019.
  • [13] Scott R. Klemmer, Björn Hartmann, and Leila Takayama, “How bodies matter: five themes for interaction design,” in Proceedings of the 6th Conference on Designing Interactive Systems, New York, NY, USA, 2006, DIS ’06, p. 140–149, Association for Computing Machinery.
  • [14] Amir Hajighasemi, MD Saurav, Mohammad S Nasr, Jai Prakash Veerla, Aarti Darji, Parisa Boodaghi Malidarreh, Michael Robben, Helen H Shang, and Jacob M Luber, “Multimodal pathology image search between h&e slides and multiplexed immunofluorescent images,” arXiv preprint arXiv:2306.06780, 2023.
  • [15] Helen H Shang, Mohammad Sadegh Nasr, Jai Prakash Veerla, Parisa Boodaghi Malidarreh, MD Saurav, Amir Hajighasemi, Manfred Huber, Chace Moleta, Jitin Makker, and Jacob M Luber, “Histopathology slide indexing and search: Are we there yet?,” arXiv preprint arXiv:2306.17019, 2023.
  • [16] Michael Robben, Amir Hajighasemi, Mohammad Sadegh Nasr, Jai Prakesh Veerla, Anne M Alsup, Biraaj Rout, Helen H Shang, Kelli Fowlds, Parisa Boodaghi Malidarreh, Paul Koomey, et al., “The state of applying artificial intelligence to tissue imaging for cancer research and early detection,” arXiv preprint arXiv:2306.16989, 2023.
  • [17] Babak Ehteshami Bejnordi, Geert Litjens, Nadya Timofeeva, Irene Otte-Höller, André Homeyer, Nico Karssemeijer, and Jeroen AWM Van Der Laak, “Stain specific standardization of whole-slide histopathological images,” IEEE transactions on medical imaging, vol. 35, no. 2, pp. 404–415, 2015.
  • [18] Kevin M Boehm, Pegah Khosravi, Rami Vanguri, Jianjiong Gao, and Sohrab P Shah, “Harnessing multimodal data integration to advance precision oncology,” Nature Reviews Cancer, vol. 22, no. 2, pp. 114–126, 2022.
  • [19] Anika Cheerla and Olivier Gevaert, “Deep learning with multimodal representation for pancancer prognosis prediction,” Bioinformatics, vol. 35, no. 14, pp. i446–i454, 2019.
  • [20] Jai Prakash Veerla, Jillur Rahman Saurav, Michael Robben, and Jacob M Luber, “Analyzing lack of concordance between the proteome and transcriptome in paired scrna-seq and multiplexed spatial proteomics,” arXiv preprint arXiv:2307.00635, 2023.
  • [21] Yukako Yagi, Shigeatsu Yoshioka, Hiroshi Kyusojin, Maristela Onozato, Yoichi Mizutani, Kiyoshi Osato, Hiroaki Yada, Eugene J Mark, Matthew P Frosch, and David N Louis, “An ultra-high speed whole slide image viewing system,” Analytical Cellular Pathology, vol. 35, no. 1, pp. 65–73, 2012.
  • [22] Mohd Javaid and Abid Haleem, “Virtual reality applications toward medical field,” Clinical Epidemiology and Global Health, vol. 8, no. 2, pp. 600–605, 2020.
  • [23] Navid Farahani, Robert Post, Jon Duboy, Ishtiaque Ahmed, Brian J Kolowitz, Teppituk Krinchai, Sara E Monaco, Jeffrey L Fine, Douglas J Hartman, and Liron Pantanowitz, “Exploring virtual reality technology and the oculus rift for the examination of digital pathology slides,” Journal of pathology informatics, vol. 7, no. 1, pp. 22, 2016.
  • [24] Zhangyu Cheng, Caroline Stefani, Thomas Skillman, Aleksandra Klimas, Aramchan Lee, Emma F DiBernardo, Karina Mueller Brown, Tatyana Milman, Yuhong Wang, Brendan R Gallagher, et al., “Micromagnify: A multiplexed expansion microscopy method for pathogens and infected tissues,” Advanced Science, p. 2302249, 2023.
  • [25] Chiara Innocente, Luca Ulrich, Sandro Moos, and Enrico Vezzetti, “Augmented reality: Mapping methods and tools for enhancing the human role in healthcare hmi,” Applied Sciences, vol. 12, pp. 4295, 04 2022.
  • [26] Mark S Keller, Ilan Gold, Chuck McCallum, Trevor Manz, Peter V Kharchenko, and Nils Gehlenborg, “Vitessce: a framework for integrative visualization of multi-modal and spatially-resolved single-cell data,” 2021.
  • [27] Andy Neumann, Nuno Laranjeiro, and Jorge Bernardino, “An analysis of public rest web service apis,” IEEE Transactions on Services Computing, vol. 14, no. 4, pp. 957–970, 2021.
  • [28] Mohammad Sadegh Nasr, Amir Hajighasemi, Paul Koomey, Parisa Boodaghi Malidarreh, Michael Robben, Jillur Rahman Saurav, Helen H Shang, Manfred Huber, and Jacob M Luber, “Clinically relevant latent space embedding of cancer histopathology slides through variational autoencoder based image compression,” in 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023, pp. 1–5.
  • [29] Neel R Vora, Amir Hajighasemi, Cody T Reynolds, Amirmohammad Radmehr, Mohamed Mohamed, Jillur Rahman Saurav, Abdul Aziz, Jai Prakash Veerla, Mohammad S Nasr, Hayden Lotspeich, et al., “Real-time diagnostic integrity meets efficiency: A novel platform-agnostic architecture for physiological signal compression,” arXiv preprint arXiv:2312.12587, 2023.
  • [30] H. Sakoe and S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26, no. 1, pp. 43–49, 1978.
  • [31] Christian M Schürch, Salil S Bhate, Graham L Barlow, Darci J Phillips, Luca Noti, Inti Zlobec, Pauline Chu, Sarah Black, Janos Demeter, David R McIlwain, et al., “Coordinated cellular neighborhoods orchestrate antitumoral immunity at the colorectal cancer invasive front,” Cell, vol. 182, no. 5, pp. 1341–1359, 2020.
  • [32] Michael P. Snyder, Shin Lin, Amanda Posgai, Mark Atkinson, Aviv Regev, Jennifer Rood, et al., “The human body at cellular resolution: the NIH human biomolecular atlas program,” Nature, vol. 574, no. 7777, pp. 187–192, 2019.
  • [33] Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie Shih, John Tomaszewski, Anant Madabhushi, and Fabio González, “High-throughput adaptive sampling for whole-slide histopathology image analysis (hashi) via convolutional neural networks: Application to invasive breast cancer detection,” PloS one, vol. 13, no. 5, pp. e0196828, 2018.
  • [34] Mayar Allam, Thomas Hu, Jeongjin Lee, Jeffrey Aldrich, Sunil S Badve, Yesim Gökmen-Polar, Manali Bhave, Suresh S Ramalingam, Frank Schneider, and Ahmet F Coskun, “Spatially variant immune infiltration scoring in human cancer tissues,” NPJ Precision Oncology, vol. 6, no. 1, pp. 60, 2022.
  • [35] Jia-Ren Lin, Shu Wang, Shannon Coy, Yu-An Chen, Clarence Yapp, Madison Tyler, Maulik K Nariya, Cody N Heiser, Ken S Lau, Sandro Santagata, et al., “Multiplexed 3d atlas of state transitions and immune interaction in colorectal cancer,” Cell, vol. 186, no. 2, pp. 363–381, 2023.