NSF Workshop on Networking and Systems Challenges in Immersive Computing

March 31, 2025 - April 1, 2025, Arlington, VA, USA

Monday (Room 125/126, March 31, 2025)
Time Activity
7:30 AM - 9:00 AM Registration and Welcome Coffee
9:00 AM - 9:15 AM Opening Remarks by Workshop Organizers and NSF Program Directors
9:15 AM - 10:00 AM Keynote 1
Title: Immersive Reality in Medicine: Advancing Medical Education and Training with Precision Virtual Environments
Speaker: Amitabh Varshney, University of Maryland

Abstract: The demand for high-precision virtual environments is burgeoning across diverse fields, such as education, simulation, training, performance, and entertainment. In this talk, I will discuss the role of virtual environments in medical education for surgical training and physician assistant programs. Traditional cadaver-based medical training is resource-constrained, resulting in observational education rather than hands-on experiential learning. This directly impacts rural communities, smaller medical facilities, and developing nations. I will introduce HoloCamera, a state-of-the-art volumetric capture system with 300 high-resolution RGB cameras designed to create cinematic-quality high-precision virtual environments. We use advanced implicit neural representations, including Gegenbauer polynomial-based transformations and progressive multi-scale networks, to handle the immense data from such dense light fields, enhancing efficiency and realism in VR simulations. These technologies address the computational challenges of processing 4K imagery and reduce rendering times while mitigating aliasing concerns through foveated rendering. In this talk, I will also give an overview of the use of virtual environments for education in the Physician Assistant program at the University of Maryland, Baltimore, showcasing how immersive computing enables scalable VR training programs for medicine. We are on the cusp of using virtual environments to make high-fidelity medical training more accessible and practical, transforming medical education globally. To conclude the talk, I will discuss the future of precision virtual environments and their wide-ranging applications.

Bio: Amitabh Varshney is Dean of the College of Computer, Mathematical and Natural Sciences and Professor of Computer Science at the University of Maryland at College Park. Varshney is currently exploring applications of virtual and augmented reality in several domains, including education, healthcare, and telemedicine. His research focuses on the applications of immersive visualization in engineering, science, and medicine. He has worked on several research areas, including visual saliency, summarization of large visual datasets, and visual computing for big data. He has served in various roles in the IEEE Visualization and Graphics Technical Committee, including as its Chair, 2008–2012. He received the IEEE Visualization Technical Achievement Award in 2004. He is a Fellow of IEEE and a member of the IEEE Visualization Academy.

10:00 AM - 10:45 AM Invited Talk Session (Chair: Songqing Chen, George Mason University)
System Challenges to Immersive Telepresence with All-Day Smart Glasses
Henry Fuchs, University of North Carolina at Chapel Hill

Abstract: Our everyday prescription eyeglasses are starting to be enhanced with cameras, displays, speakers, microphones, and other sensors. This suite of technologies, together with integrated AI assistants, will enable immersive telepresence as well as many other new capabilities in our digital and physical worlds. Major system challenges remain in displays, vision algorithms, tracking, interaction, assistance, and privacy. Key to widespread adoption will be developing efficient power strategies and compute offloading to allow the glasses to be used all day without recharging. While the current pace of development is encouraging, the history of unfulfilled predictions about such smart glasses should be a sobering reminder that the challenges are often more difficult than anticipated.

Bio: Henry Fuchs (PhD, Utah, 1975) is the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill, where he leads UNC's Graphics and Virtual Reality Research Group. He has been active in 3D computer graphics and computer vision since the 1970s, with contributions spanning rendering algorithms (BSP trees), high-performance graphics hardware (Pixel-Planes), the office of the future, virtual and augmented reality, telepresence, and medical applications. He is a member of the US National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, a fellow of the ACM, a Life Fellow of the IEEE, a recipient of the ACM SIGGRAPH Steven Anson Coons Award, and holds an honorary doctorate from TU Wien, the Vienna University of Technology.

OpenFlame: A Federated Naming Infrastructure for Spatial Applications
Srinivasan Seshan, Carnegie Mellon University

Abstract: Spatial applications are difficult to develop and deploy due to the lack of an effective spatial naming system that resolves real-world locations to names. Today, spatial naming systems come in the form of digital map services from companies like Google and Apple. These maps and the location-based services provided on top of these maps are primarily controlled by a few large corporations and mostly cover outdoor public spaces. In this talk, we present a case for a federated approach to spatial naming. Federation allows disparate parties to manage and serve their own maps of physical regions, enabling essential features such as scalable map management and access control for map data. These features are necessary for scaling coverage to provide detailed map information for all popular outdoor and indoor spaces. The talk will also explore several design challenges associated with the federated approach, including re-architecting how services such as address-to-location mapping, location-based search, and routing are implemented.
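As a rough illustration of the federated approach described above, the sketch below resolves a spatial name by walking a hierarchy of independently managed region servers, loosely analogous to DNS delegation. The class names, the name hierarchy, and the example campus map are hypothetical assumptions for illustration only, not OpenFlame's actual design or API.

```python
# Hypothetical sketch of federated spatial name resolution via hierarchical
# delegation. Names, classes, and coordinates below are illustrative
# assumptions, not OpenFlame's actual design.
from dataclasses import dataclass, field

@dataclass
class RegionServer:
    """A party that manages the map of one region and may delegate sub-regions."""
    name: str
    locations: dict = field(default_factory=dict)   # leaf name -> coordinates
    children: dict = field(default_factory=dict)    # sub-region name -> RegionServer

    def resolve(self, parts: list[str]):
        head, *rest = parts
        if not rest:
            return self.locations.get(head)            # leaf lookup within this region
        child = self.children.get(head)
        return child.resolve(rest) if child else None  # delegate to the sub-region's server

# Example hierarchy: a campus operator serves its own building maps.
campus = RegionServer("campus.example")
building = RegionServer("library", locations={"room-204": (38.88, -77.10)})
campus.children["library"] = building

print(campus.resolve(["library", "room-204"]))  # (38.88, -77.1)
```

The point of the sketch is the control structure: each region answers only for names it owns and delegates the rest, so map management and access control stay with the party that operates each space.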

Bio: Srinivasan Seshan is currently the Joseph F. Traub Professor of Computer Science and Computer Science Department Head at Carnegie Mellon University. His research interests include network protocols, mobile computing, distributed applications, and system support for AR/VR. More info: http://www.cs.cmu.edu/~srini.

Immersive Computing via Named, Secured Data
Lixia Zhang, University of California, Los Angeles

Abstract: Immersive computing integrates digital information and content into users' physical environment, where all devices, big and small, need to be seamlessly and securely interconnected via various communication media. This short talk articulates the advantages and feasibility of moving networking from the existing TCP/IP network model in a data-centric direction and explores a roadmap for future research.

Bio: Lixia Zhang is a professor in the Computer Science Department of UCLA. She received her PhD in computer science from MIT, and worked as a member of the research staff at Xerox Palo Alto Research Center before she joined UCLA. She holds the Jonathan B. Postel Chair in Computer Systems, is a fellow of ACM and IEEE, and is the recipient of the ACM SIGCOMM Lifetime Achievement Award and IEEE Internet Award. Since 2010, she has been leading the effort on the design and development of Named Data Networking, a new Internet protocol architecture (https://named-data.net/).

10:45 AM - 11:15 AM Morning Break
11:15 AM - 12:15 PM Breakout Sessions: Group I
Networked Systems for Immersive Computing
Discussion Lead: Ashutosh Dhekne (Georgia Institute of Technology)
Scribe: Jaehong Kim (Carnegie Mellon University) & Yasra Chandio (University of Massachusetts Amherst)
Research & Innovation Platforms, Benchmarking, and Testbeds in XR
Discussion Lead: Hongwei Zhang (Iowa State University) & Yao Liu (Rutgers University)
Scribe: Jiayi Meng (The University of Texas at Arlington) & Yongjie Guan (The University of Maine)
AI and Machine Learning for Immersive Experiences
Discussion Lead: Jacob Chakareski (New Jersey Institute of Technology)
Scribe: Qiao Jin (Carnegie Mellon University) & Xueyu Hou (The University of Maine)
12:15 PM - 1:45 PM Lunch and Networking
1:45 PM - 2:30 PM Keynote 2
Title: Unlocking the Potential of Immersive Computing: An End-to-End Systems Approach
Speaker: Sarita Adve, University of Illinois Urbana-Champaign

Abstract: Immersive computing has the potential to transform most industries and human activities. Delivering on this potential, however, requires bridging an orders-of-magnitude gap between the power, performance, and quality-of-experience attributes of current and desirable immersive systems. With a number of conflicting requirements (a power budget of hundreds of milliwatts, latency constraints of milliseconds, and unbounded compute to realize realistic sensory experiences), no silver bullet is available. Further, the true goodness metric of such systems must measure the subjective human experience within the immersive application. This talk calls for an integrative research agenda that drives codesigned end-to-end systems, including hardware, system software, network, AI models, and applications, spanning the user device, edge, and cloud, with metrics that reflect the immersive human experience. I will discuss work pursuing such an approach as part of the IMMERSE Center for Immersive Computing, which brings together immersive technologies, applications, and human experience, and the ILLIXR (ILLinois eXtended Reality) open-source end-to-end XR system and research testbed designed to democratize XR systems research. I will focus on our work to offload compute-intensive XR components to remote servers over wireless networks as a concrete example underscoring the importance of end-to-end systems research driven by user experience and device power constraints.

Bio: Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign where she directs IMMERSE, the Center for Immersive Computing. Her research interests span the system stack, ranging from hardware to applications, with a current focus on extended reality (XR). Her group released the ILLIXR (Illinois Extended Reality) testbed, an open-source XR system and research testbed, and launched the ILLIXR consortium to democratize XR research, development, and benchmarking. Her work on the data-race-free, Java, and C++ memory models forms the foundation for memory models used in most hardware and software systems today. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM, IEEE, and AAAS, and a recipient of the ACM/IEEE-CS Ken Kennedy award. As ACM SIGARCH chair, she co-founded the CARES movement, winner of the Computing Research Association (CRA) distinguished service award, to address discrimination and harassment in Computer Science research events. She has also received University and College awards for graduate mentoring, leadership in diversity, equity, and inclusion, and regularly appears on the campus list of excellent teachers. She received her PhD from the University of Wisconsin-Madison and her B.Tech. from the Indian Institute of Technology, Bombay.

2:30 PM - 3:30 PM Panel
Title: Thriving Together: Tackling the Core Networking and Systems Challenges and Growing the XR Community
Moderator: Maria Gorlatova, Duke University 
Panelists:
Henry Fuchs, University of North Carolina at Chapel Hill
Tian Guo, Worcester Polytechnic Institute
Bin Li, Pennsylvania State University
Brendan David-John, Virginia Tech
3:30 PM - 4:00 PM Afternoon Break
4:00 PM - 4:45 PM Invited Talk Session (Chair: Mallesham Dasari, Northeastern University)
Toward Secure Immersive Computing: AR/VR Security and Privacy Study
Yingying (Jennifer) Chen, Rutgers University

Abstract: Immersive computing is transforming traditional computing paradigms by integrating emerging technologies across diverse domains, including Augmented/Virtual Reality (AR/VR), Internet of Things (IoT), Artificial Intelligence (AI), and NextG networking. While these advancements enable a wide range of innovative applications, they also introduce significant security and privacy risks, such as sensitive data leakage and stealthy cyber threats. Ensuring the trustworthiness of immersive computing has become a critical challenge for the safe deployment of future applications. In this talk, I will first examine the increasingly popular face-mounted AR/VR devices and show that a broad range of sensitive user information, ranging from a user's identity and gender to vital signs and body fat percentage, can be derived via motion sensors embedded in VR headsets, posing severe privacy risks. To protect the security and privacy of AR/VR users, voice authentication has emerged as a promising technology. An authentication mechanism leveraging voice biometrics can be applied to voice commands used to access sensitive data or control AR/VR programs. We introduce the first spoofing-resistant and text-independent speech authentication system for AR/VR headsets. The system captures facial geometry deformations during speech, referred to as visemes, the facial counterparts of phonemes, by leveraging minute facial vibrations on the headset. It can be seamlessly integrated into mainstream headsets to secure voice inputs, such as those used in voice dictation, navigation, and app control, achieving transparent and passive user authentication.

Bio: Yingying (Jennifer) Chen is a Professor and Department Chair of Electrical and Computer Engineering at Rutgers University. Her research areas include mobile computing, IoT, AI security, and smart healthcare. More info: http://www.winlab.rutgers.edu/~yychen/.

Benchmarks and Network Support for Virtual Reality Applications
Sonia Fahmy, Purdue University

Abstract: We explore networked virtual reality applications and discuss the components of benchmarks for evaluating these applications. We also present measurement results over Wi-Fi networks and demonstrate the impact of Wi-Fi control parameters on application performance.

Bio: Sonia Fahmy is a professor of Computer Science at Purdue University. She received the National Science Foundation CAREER award in 2003. She is a fellow of the IEEE.

Enhancing Security and Privacy in Augmented Reality - Through the Lens of Eye Tracking
Bo Ji, Virginia Tech

Abstract: Augmented Reality (AR) devices distinguish themselves from other mobile devices by providing an immersive and interactive experience. The ability of these devices to collect information presents both challenges and opportunities for improving existing security and privacy techniques in this domain. In this talk, I will discuss how readily available eye-tracking sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. Our research has produced three new systems, BystandAR, ShouldAR, and GazePair, that leverage the user's eye gaze to improve security and privacy expectations in or with AR. As these devices grow in power and number, such solutions are necessary to prevent the perception and privacy failures that hindered earlier devices. This work is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful AR devices.

Bio: Bo Ji is an Associate Professor of Computer Science and a College of Engineering Faculty Fellow at Virginia Tech. His research interests include interdisciplinary intersections of computing and networking systems, artificial intelligence and machine learning, security and privacy, and extended reality. More info: https://people.cs.vt.edu/boji.

4:45 PM - 5:45 PM Breakout Session Reports and Discussion: Group I
5:45 PM - 7:00 PM Poster/Demo Session (Poster size limitation: 36" × 48")
7:00 PM Dinner
Tuesday (Room 125/126, April 1, 2025)
Time Activity
7:30 AM - 9:00 AM Morning Coffee and Networking
9:00 AM - 9:45 AM Keynote 3
Title: User-centered Design of Augmented Reality Smartglasses
Speaker: Thad Starner, Georgia Institute of Technology

Abstract: Augmented reality smartglasses are being used in industry for tasks such as remote expert consulting and order picking. However, unlike the smartphone or smartwatch, AR smartglasses have not yet transitioned to everyday consumer electronics. In the speaker's opinion, AR smartglasses must first look like regular eyeglasses before they can achieve "ubiquity." The user's self-perception while wearing AR smartglasses is a crucial constraint on their design and places strong limitations on heat, power, weight, networking, and user interfaces. This talk will present results from recent user studies that provide practical guidelines on placement of the virtual image in the user's visual field, field of view, lens tint, nose weight, interface control, network throughput, inclusion of cameras, and several other issues that augmented reality smartglasses designers ignore at their own peril.

Bio: Thad Starner is a Professor of Computing at Georgia Tech and a staff research scientist at Google DeepMind. In 1990, Starner coined the term "augmented reality" to describe the types of interfaces he envisioned for the future. In 1997, he was a founder of the annual ACM International Symposium on Wearable Computers (ISWC). From 2010 to 2018, Dr. Starner was a Technical Lead on Google's Glass, which Time Magazine named one of the "50 Most Influential Gadgets of All Time." Professor Starner was named an ACM Fellow in 2025 and was inducted into the CHI Academy in 2017 and AWE's XR Hall of Fame in 2024. He has over 100 issued United States utility patents and 500 publications on wearables, artificial intelligence, and interfaces.

9:45 AM - 10:45 AM Breakout Sessions: Group II
Edge and Cloud Computing for XR Applications
Discussion Lead: Jiayi Meng (The University of Texas at Arlington)
Scribe: Jaehong Kim (Carnegie Mellon University) & Eman Ramadan (University of Minnesota-Twin Cities)
Privacy, Security, and Trust in XR Systems
Discussion Lead: Ming Li (University of Texas at Arlington)
Scribe: Xiaokuan Zhang (George Mason University) & Brendan David-John (Virginia Tech)
Human-Centered Design and User Experience in XR
Discussion Lead: Mallesham Dasari (Northeastern University) & Justin Chan (Carnegie Mellon University)
Scribe: Qiao Jin (Carnegie Mellon University) & Yasra Chandio (University of Massachusetts Amherst)
10:45 AM - 11:15 AM Morning Break
11:15 AM - 12:00 PM Invited Talk Session (Chair: Maria Gorlatova, Duke University)
Computation-Communication Trade-Offs in Next Generation Wireless AR/VR Systems
Jacob Chakareski, New Jersey Institute of Technology

Abstract: This talk will highlight several recent studies that exploited computation-communication trade-offs and machine learning to advance emerging AR/VR systems. First, I will outline a multi-user mobile VR system that integrates dual connectivity (sub-6 GHz and 60 GHz) and edge computing to stream high-quality immersive content at 8K spatial and 120 fps temporal resolution (ACM TOMM 2023, IEEE TMM 2024). Second, I will outline BONES, a near-optimal neural-enhanced control-theoretic streaming system that enhances delivered lower-quality content through neural computation at the client (SIGMETRICS 2024). Third, I will highlight an AI-augmented immersive computing system for distributed decision making and processing of elastic VR tasks over emerging wireless networks (IEEE TMM 2025, IEEE TMC 2025). Lastly, I will present a brief demo of our AI+VR system for automated low-vision rehab (NIH R01 project).

Bio: Jacob Chakareski is an associate professor in the College of Computing at NJIT, where he holds the Panasonic Chair of Sustainability and directs the Lab for AI-Enabled Wireless XR Systems and Societal Applications. He organized the first NSF visioning workshop on future AR/VR communications and network systems in 2018. His research interests include NextG wireless XR systems, physics-aware machine learning systems, AI-enabled 5G edge computing networks, optical and millimeter wave wireless networking, multi-connectivity scalable streaming, and societal applications. His research has been supported by the NSF, NIH, AFRL, Adobe, Tencent Research, NVIDIA, Intel, and Microsoft. For further information, please visit http://www.jakov.org.

Interactive Perception & Graphics for a Universally Accessible Metaverse
Ruofei Du, Google

Abstract: The emerging revolution of generative AI and spatial computing will fundamentally change the way we work and live. However, making generative AI and spatial computing useful in our daily lives remains a challenge. In this talk, we will delve into a series of innovations in interactive graphics that aim to make both the virtual metaverse and the physical world universally accessible. In log-rectilinear 360° video streaming, we introduce a new transformation that leverages summed-area tables, foveation, and standard video codecs for foveated 360° video streaming in VR headsets with eye tracking. In MonoAvatars, we present a novel method that represents avatars as 3DMM-anchored neural radiance fields with volumetric rendering for XR applications. In FaceFolds, we further stream avatars as meshed radiance manifolds for efficient volumetric rendering of dynamic faces. Finally, we present ChatDirector, a video conferencing system with space-aware scene rendering and speech-driven layout transition. We conclude the talk with highlights of Android XR, offering a visionary glimpse into the future of a universally accessible metaverse.
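As a side note on one building block mentioned above, the snippet below shows how a summed-area table (integral image) supports constant-time area averaging, which is the kind of cheap coarse filtering of peripheral regions that foveated streaming pipelines rely on. This is a generic sketch under that assumption, not the actual log-rectilinear transformation or streaming pipeline from the talk; the frame and block sizes are arbitrary.

```python
# Minimal illustration of a summed-area table (integral image).
# Generic sketch only; not the log-rectilinear streaming pipeline itself.
import numpy as np

def summed_area_table(img: np.ndarray) -> np.ndarray:
    """sat[y, x] = sum of img[:y+1, :x+1], built with two cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat: np.ndarray, y0: int, x0: int, y1: int, x1: int) -> float:
    """Mean of img[y0:y1, x0:x1] from at most four table lookups (O(1))."""
    total = sat[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= sat[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((y1 - y0) * (x1 - x0))

# Example: average a peripheral block of a synthetic luminance frame in O(1).
frame = np.random.rand(1080, 1920).astype(np.float64)
sat = summed_area_table(frame)
print(box_mean(sat, 0, 0, 64, 64))   # coarse peripheral block via the table
print(frame[0:64, 0:64].mean())      # same value, computed directly
```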

Bio: Ruofei Du serves as Interactive Perception & Graphics Lead / Manager at Google, where he works on creating novel interactive technologies for virtual and augmented reality. His research focuses on interactive perception, computer graphics, and human-computer interaction. He serves on the program committees of ACM CHI and UIST and is an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology. He holds 6 US patents and has published over 40 peer-reviewed publications in top venues of HCI, computer graphics, and computer vision, including CHI, SIGGRAPH, UIST, TVCG, CVPR, and ICCV. His work has won multiple Best Paper Awards. Dr. Du holds a Ph.D. and an M.S. in Computer Science from the University of Maryland, College Park, and a B.S. from the ACM Honored Class, Shanghai Jiao Tong University. Website: https://duruofei.com

TVMC: Time-Varying 4D Mesh Compression
Mallesham Dasari, Northeastern University

Abstract: Streaming high-fidelity 3D content requires tens of gigabits per second (post compression), far exceeding current Internet capabilities. As immersive AR/VR applications grow, efficient 3D representations are critical for real-time streaming. Among various options, meshes offer the best balance of quality and efficiency, yet there is limited work on compressing time-varying meshes (TVMs), which change dynamically in structure and topology. In this talk, I will introduce TVMC, a novel compression method that significantly reduces bandwidth requirements for real-time streaming of large-scale 3D scenes.

Bio: Mallesham Dasari is an Assistant Professor at Northeastern University and the Director of the Spatial Intelligence Research Group (sinrg.org). His research interests span Spatial Intelligence, AR/VR, Computer Networks, and Mobile and Wearable computing. More info at https://mallesham.com

12:00 PM - 1:00 PM Breakout Session Reports and Discussion: Group II
1:00 PM - 2:00 PM Box Lunch and Networking
2:00 PM Closing Remarks