7:30 AM - 9:00 AM |
Registration and Welcome Coffee |
9:00 AM - 9:15 AM |
Opening Remarks by Workshop Organizers and NSF Program Directors |
9:15 AM - 10:00 AM |
Keynote 1
Title: Immersive Reality in Medicine: Advancing Medical Education and Training with Precision Virtual Environments
Speaker: Amitabh Varshney, University of Maryland
Abstract: The demand for high-precision virtual environments is burgeoning across diverse fields, such as education, simulation, training, performance, and entertainment. In this talk, I will discuss the role of virtual environments in medical education for surgical training and physician assistant programs. Traditional cadaver-based medical training is resource-constrained, resulting in observational education rather than hands-on experiential learning. This directly impacts rural communities, smaller medical facilities, and developing nations. I will introduce HoloCamera, a state-of-the-art volumetric capture system with 300 high-resolution RGB cameras designed to create cinematic-quality, high-precision virtual environments. We use advanced implicit neural representations, including Gegenbauer polynomial-based transformations and progressive multi-scale networks, to handle the immense data from such dense light fields, enhancing efficiency and realism in VR simulations. These technologies address the computational challenges of processing 4K imagery, reduce rendering times, and mitigate aliasing through foveated rendering. I will also give an overview of the use of virtual environments for education in the Physician Assistant program at the University of Maryland, Baltimore, showcasing how immersive computing enables scalable VR training programs for medicine. We are on the cusp of using virtual environments to make high-fidelity medical training more accessible and practical, transforming medical education globally. To conclude the talk, I will discuss the future of precision virtual environments and their wide-ranging applications.
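As a rough, hypothetical illustration of the foveated-rendering idea mentioned in the abstract (not HoloCamera's actual implementation), the sketch below assigns a shading rate that falls off with angular distance from the gaze point; the foveal radius, decay constant, and rate floor are all assumptions chosen for readability.

```python
import math

def shading_rate(pixel_ecc_deg: float,
                 full_res_radius_deg: float = 5.0,
                 min_rate: float = 0.125) -> float:
    """Fraction of full shading resolution for a pixel at a given
    eccentricity (degrees from the gaze point). Inside the foveal
    radius we shade at full rate; outside, the rate decays smoothly.
    All constants are illustrative, not from the talk."""
    if pixel_ecc_deg <= full_res_radius_deg:
        return 1.0
    # Exponential falloff toward a floor: a coarser periphery saves
    # compute and, with proper filtering, helps mask aliasing.
    decay = math.exp(-(pixel_ecc_deg - full_res_radius_deg) / 10.0)
    return max(min_rate, decay)

for ecc in (0, 5, 15, 30, 60):
    print(f"{ecc:3d} deg -> {shading_rate(ecc):.3f} of full rate")
```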
Bio: Amitabh Varshney is Dean of the College of Computer, Mathematical, and Natural Sciences and Professor of Computer Science at the University of Maryland, College Park. He is currently exploring applications of virtual and augmented reality in several areas, including education, healthcare, and telemedicine, with a research focus on immersive visualization in engineering, science, and medicine. He has worked on several research areas, including visual saliency, summarization of large visual datasets, and visual computing for big data. He has served in various roles in the IEEE Visualization and Graphics Technical Committee, including as its Chair (2008–2012). He received the IEEE Visualization Technical Achievement Award in 2004. He is a Fellow of IEEE and a member of the IEEE Visualization Academy.
|
|
10:00 AM - 10:45 AM |
Invited Talk Session (Chair: Songqing Chen, George Mason University)
System Challenges to Immersive Telepresence with All-Day Smart Glasses
Speaker: Henry Fuchs, University of North Carolina at Chapel Hill
Abstract: Our everyday prescription eyeglasses are starting to be enhanced with cameras, displays, speakers, microphones, and other sensors. This suite of technologies, together with integrated AI assistants, will enable immersive telepresence as well as many other new capabilities in our digital and physical worlds. Major system challenges remain in displays, vision algorithms, tracking, interaction, assistance, and privacy. Key to widespread adoption will be developing efficient power strategies and compute offloading that allow the glasses to be used all day without recharging. While the current pace of development is encouraging, the history of unfulfilled predictions about such smart glasses is a sobering reminder that the challenges are often more difficult than anticipated.
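To make the power/offloading trade-off concrete, here is a minimal sketch with invented numbers for compute energy, radio power, and link rate: it compares the battery cost of running a vision task on the glasses against the cost of shipping the input to a remote server.

```python
def local_energy_j(cycles: float, joules_per_cycle: float = 1e-9) -> float:
    """Energy to run the task on the glasses' SoC (illustrative figure)."""
    return cycles * joules_per_cycle

def offload_energy_j(payload_bits: float,
                     link_bps: float = 50e6,
                     radio_watts: float = 0.3) -> float:
    """Energy to ship the input over the radio; remote compute costs the
    device's battery nothing, but the radio is expensive per bit."""
    return (payload_bits / link_bps) * radio_watts

def should_offload(cycles: float, payload_bits: float) -> bool:
    return offload_energy_j(payload_bits) < local_energy_j(cycles)

# Example: ~16 Mbit of compressed frame data vs 5 GCycles of vision work.
print(should_offload(cycles=5e9, payload_bits=16e6))
```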
Bio: Henry Fuchs (PhD, Utah, 1975) is the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill, where he leads UNC's Graphics and Virtual Reality Research Group. He has been active in 3D computer graphics and computer vision since the 1970s, with contributions to rendering algorithms (BSP trees), high-performance graphics hardware (Pixel-Planes), the office of the future, virtual and augmented reality, telepresence, and medical applications. He is a member of the US National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, a fellow of the ACM, a Life Fellow of the IEEE, a recipient of the ACM SIGGRAPH Steven Anson Coons Award, and holder of an honorary doctorate from TU Wien, the Vienna University of Technology.
|
OpenFlame: A Federated Naming Infrastructure for Spatial Applications
Speaker: Srinivasan Seshan, Carnegie Mellon University
Abstract: Spatial applications are difficult to develop and deploy due to the lack of an effective spatial naming system that resolves real-world locations to names. Today, spatial naming systems come in the form of digital map services from companies like Google and Apple. These maps and the location-based services provided on top of these maps are primarily controlled by a few large corporations and mostly cover outdoor public spaces. In this talk, we present a case for a federated approach to spatial naming. Federation allows disparate parties to manage and serve their own maps of physical regions, enabling essential features such as scalable map management and access control for map data. These features are necessary for scaling coverage to provide detailed map information for all popular outdoor and indoor spaces. The talk will also explore several design challenges associated with the federated approach, including re-architecting how services such as address-to-location mapping, location-based search, and routing are implemented.
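To make the federation idea concrete, here is a hypothetical DNS-style resolver sketch: each organization serves the map shard for its own region, and a spatial name is resolved by walking delegations to the longest matching prefix. The names, table, and API below are invented for illustration; they are not OpenFlame's actual interface.

```python
# Hypothetical delegation table: each authority manages its own subtree,
# analogous to DNS zones. All names are invented.
DELEGATIONS = {
    "us": "root-mapserver.example",
    "us/pa/pittsburgh": "city-mapserver.example",
    "us/pa/pittsburgh/cmu": "cmu-mapserver.example",  # campus serves its own maps
}

def resolve(spatial_name: str) -> str:
    """Return the server responsible for the longest matching prefix,
    mimicking prefix-based delegation in a federated namespace."""
    parts = spatial_name.split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/".join(parts[:i])
        if prefix in DELEGATIONS:
            return DELEGATIONS[prefix]
    raise KeyError(f"no authority found for {spatial_name}")

print(resolve("us/pa/pittsburgh/cmu/gates-hall/floor3"))  # -> cmu-mapserver.example
```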
Bio: Srinivasan Seshan is currently the Joseph F. Traub Professor of Computer Science and Computer Science Department Head at Carnegie Mellon University. His research interests include network protocols, mobile computing, distributed applications, and system support for AR/VR. More info: http://www.cs.cmu.edu/~srini.
|
Immersive Computing via Named, Secured Data
Speaker: Lixia Zhang, University of California, Los Angeles
Abstract: Immersive computing integrates digital information and content into users' physical environment where all devices, big and small, need to be seamlessly and securely interconnected via various communication media. This short talk articulates the advantages and feasibilities of moving networking from the existing TCP/IP network model to a data-centric direction and explores a roadmap for future research.
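A minimal sketch of the data-centric model the talk advocates, using a generic HMAC stand-in rather than NDN's actual signature and packet formats: consumers fetch immutable, named, signed data and verify it regardless of which device or cache served it.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # stand-in for real per-producer signing keys

def sign(name: str, content: bytes) -> bytes:
    # Security binds to the data itself (name + content), not to the
    # channel it traveled over.
    return hmac.new(SHARED_KEY, name.encode() + content, hashlib.sha256).digest()

def verify(name: str, content: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(name, content), sig)

# Any cache or nearby peer can serve this packet; the consumer verifies
# it no matter where it came from.
store = {}
name = "/room42/headset7/mesh/seq=9"
content = b"...mesh bytes..."
store[name] = (content, sign(name, content))

data, sig = store[name]
assert verify(name, data, sig)
print("verified", name)
```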
Bio: Lixia Zhang is a professor in the Computer Science Department of UCLA. She received her PhD in computer science from MIT, and worked as a member of the research staff at Xerox Palo Alto Research Center before she joined UCLA. She holds the Jonathan B. Postel Chair in Computer Systems, is a fellow of ACM and IEEE, and is the recipient of the ACM SIGCOMM Lifetime Achievement Award and IEEE Internet Award. Since 2010, she has been leading the effort on the design and development of Named Data Networking, a new Internet protocol architecture (https://named-data.net/).
|
|
10:45 AM - 11:15 AM |
Morning Break |
11:15 AM - 12:15 PM |
Breakout Sessions: Group I
Networked Systems for Immersive Computing
Discussion Lead: Ashutosh Dhekne (Georgia Institute of Technology)
Scribe: Jaehong Kim (Carnegie Mellon University) & Yasra Chandio (University of Massachusetts Amherst)
|
Research & Innovation Platforms, Benchmarking, and Testbeds in XR
Discussion Lead: Hongwei Zhang (Iowa State University) & Yao Liu (Rutgers University)
Scribe: Jiayi Meng (The University of Texas at Arlington) & Yongjie Guan (The University of Maine)
|
AI and Machine Learning for Immersive Experiences
Discussion Lead: Jacob Chakareski (New Jersey Institute of Technology)
Scribe: Qiao Jin (Carnegie Mellon University) & Xueyu Hou (The University of Maine)
|
|
12:15 PM - 1:45 PM |
Lunch and Networking |
1:45 PM - 2:30 PM |
Keynote 2
Title: Unlocking the Potential of Immersive Computing: An End-to-End Systems Approach
Speaker: Sarita Adve, University of Illinois Urbana-Champaign
Abstract: Immersive computing has the potential to transform most industries and human activities. Delivering on this potential, however, requires bridging an orders-of-magnitude gap between the power, performance, and quality-of-experience attributes of current and desirable immersive systems. Given a number of conflicting requirements (a power budget of hundreds of milliwatts, latency constraints of a few milliseconds, and effectively unbounded compute to realize realistic sensory experiences), no silver bullet is available. Further, the true goodness metric of such systems must measure the subjective human experience within the immersive application. This talk calls for an integrative research agenda that drives codesigned end-to-end systems, including hardware, system software, network, AI models, and applications, spanning the user device, edge, and cloud, with metrics that reflect the immersive human experience. I will discuss work pursuing such an approach as part of the IMMERSE Center for Immersive Computing, which brings together immersive technologies, applications, and human experience, using the ILLIXR (ILLinois eXtended Reality) open-source end-to-end XR system and research testbed designed to democratize XR systems research. I will focus on our work to offload compute-intensive XR components to remote servers over wireless networks as a concrete example underscoring the importance of end-to-end systems research driven by user experience and device power constraints.
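As a back-of-the-envelope illustration of the constraints the abstract cites (the budget and stage timings below are assumptions, not ILLIXR measurements), offloading a stage only helps if network round trip plus remote compute still fits the motion-to-photon budget:

```python
MOTION_TO_PHOTON_BUDGET_MS = 20.0  # a commonly cited XR target; assumed here

def fits_budget(track_ms, render_ms, display_ms,
                net_rtt_ms=0.0, remote_speedup=1.0):
    """End-to-end latency when the render stage is (optionally) offloaded
    to a server `remote_speedup` times faster than the device."""
    total = track_ms + render_ms / remote_speedup + net_rtt_ms + display_ms
    return total, total <= MOTION_TO_PHOTON_BUDGET_MS

# Local rendering blows the budget; offloading over a fast link fits it.
print(fits_budget(track_ms=2, render_ms=18, display_ms=4))        # (24.0, False)
print(fits_budget(track_ms=2, render_ms=18, display_ms=4,
                  net_rtt_ms=8, remote_speedup=4.0))              # (18.5, True)
```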
Bio: Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign where she directs IMMERSE, the Center for Immersive Computing. Her research interests span the system stack, ranging from hardware to applications, with a current focus on extended reality (XR). Her group released the ILLIXR (Illinois Extended Reality) testbed, an open-source XR system and research testbed, and launched the ILLIXR consortium to democratize XR research, development, and benchmarking. Her work on the data-race-free, Java, and C++ memory models forms the foundation for memory models used in most hardware and software systems today. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM, IEEE, and AAAS, and a recipient of the ACM/IEEE-CS Ken Kennedy award. As ACM SIGARCH chair, she co-founded the CARES movement, winner of the Computing Research Association (CRA) distinguished service award, to address discrimination and harassment in Computer Science research events. She has also received University and College awards for graduate mentoring, leadership in diversity, equity, and inclusion, and regularly appears on the campus list of excellent teachers. She received her PhD from the University of Wisconsin-Madison and her B.Tech. from the Indian Institute of Technology, Bombay.
|
|
2:30 PM - 3:30 PM |
Panel
Title: Thriving Together: Tackling the Core Networking and Systems Challenges and Growing the XR Community
Moderator: Maria Gorlatova, Duke University
Panelists:
Henry Fuchs, University of North Carolina at Chapel Hill |
Tian Guo, Worcester Polytechnic Institute |
Bin Li, Pennsylvania State University |
Brendan David-John, Virginia Tech |
|
|
3:30 PM - 4:00 PM |
Afternoon Break |
4:00 PM - 4:45 PM |
Invited Talk Session (Chair: Mallesham Dasari, Northeastern University)
Toward Secure Immersive Computing: AR/VR Security and Privacy Study
Speaker: Yingying (Jennifer) Chen, Rutgers University
Abstract: Immersive computing is transforming traditional computing paradigms by integrating emerging technologies across diverse domains, including Augmented/Virtual Reality (AR/VR), Internet of Things (IoT), Artificial Intelligence (AI), and NextG networking. While these advancements enable a wide range of innovative applications, they also introduce significant security and privacy risks, such as sensitive data leakage and stealthy cyber threats. Ensuring the trustworthiness of immersive computing has become a critical challenge for the safe deployment of future applications. In this talk, I will first examine the increasingly popular face-mounted AR/VR devices and show that a broad range of sensitive user information, ranging from the user's identity and gender to vital signs and body fat percentage, can be derived via motion sensors embedded in VR headsets, posing severe privacy risks. To protect the security and privacy of AR/VR users, voice authentication has emerged as a promising technology. This authentication mechanism, which leverages voice biometrics, can be applied to voice commands that access sensitive data or control AR/VR programs. We introduce the first spoofing-resistant and text-independent speech authentication system for AR/VR headsets. The system captures facial geometry deformations during speech, referred to as visemes (the facial counterparts of phonemes), by leveraging the minute facial vibrations sensed by the headset. It can be seamlessly integrated into mainstream headsets to secure voice inputs, such as those used in voice dictation, navigation, and app control, achieving transparent and passive user authentication.
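A schematic of the matching step such a system might use, with every detail invented for illustration: compare live viseme features captured on the headset against the enrolled user's template and accept the voice command only above a similarity threshold. A replayed audio-only spoof would lack the matching facial vibrations.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.9  # illustrative operating point

def authenticate(live_viseme_features, enrolled_template) -> bool:
    """Accept only if facial-vibration (viseme) features match the
    enrolled user; audio replayed from a speaker produces none."""
    return cosine(live_viseme_features, enrolled_template) >= THRESHOLD

print(authenticate([0.9, 0.1, 0.4], [0.88, 0.12, 0.42]))  # True
```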
Bio: Yingying (Jennifer) Chen is a Professor and Department Chair of Electrical and Computer Engineering at Rutgers University. Her research areas include mobile computing, IoT, AI security, and smart healthcare. More info: http://www.winlab.rutgers.edu/~yychen/.
|
Benchmarks and Network Support for Virtual Reality Applications
Speaker: Sonia Fahmy, Purdue University
Abstract: We explore networked virtual reality applications and discuss the components of benchmarks for evaluating these applications. We also present results from measurements over Wi-Fi networks and demonstrate the impact of Wi-Fi control parameters on application performance.
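In the spirit of the benchmarking theme, here is a small sketch (with an invented toy trace) that reduces per-frame latency samples to the tail statistics VR evaluations typically report:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

frame_latency_ms = [9.8, 10.2, 11.0, 9.9, 31.5, 10.1, 10.4]  # toy trace
print("p50 =", percentile(frame_latency_ms, 50), "ms")
print("p99 =", percentile(frame_latency_ms, 99), "ms")
# Tail frames (e.g., the 31.5 ms spike) are what users perceive as
# judder, so benchmarks weight them more heavily than the mean.
```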
Bio: Sonia Fahmy is a professor of Computer Science at Purdue University. She received the National Science Foundation CAREER award in 2003. She is a fellow of the IEEE.
|
Enhancing Security and Privacy in Augmented Reality - Through the Lens of Eye Tracking
Speaker: Bo Ji, Virginia Tech
Abstract: Augmented Reality (AR) devices distinguish themselves from other mobile devices by providing an immersive and interactive experience. The ability of these devices to collect information presents both challenges and opportunities for improving existing security and privacy techniques in this domain. In this talk, I will discuss how readily available eye-tracking sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. Our research has produced three new systems, BystandAR, ShouldAR, and GazePair, which leverage the user's eye gaze to improve security and privacy expectations in or with AR. As these devices grow in power and number, such solutions are necessary to prevent the perception and privacy failures that hindered earlier devices. This work is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful AR devices.
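A conceptual sketch of gaze-based protection in the style of these systems (not their actual implementation): regions outside the user's gaze, where bystanders are likely to appear, are redacted before the frame leaves the device. The frame format and radius are illustrative.

```python
def redact_outside_gaze(frame, gaze_xy, radius=80):
    """Zero out pixels outside a disc around the gaze point.
    `frame` is a 2D list of pixel values; details are illustrative."""
    gx, gy = gaze_xy
    out = []
    for y, row in enumerate(frame):
        out.append([
            px if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2 else 0
            for x, px in enumerate(row)
        ])
    return out

frame = [[1] * 8 for _ in range(8)]
masked = redact_outside_gaze(frame, gaze_xy=(4, 4), radius=2)
print(masked[4])  # pixels near the gaze survive; the rest are zeroed
```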
Bio: Bo Ji is an Associate Professor of Computer Science and a College of Engineering Faculty Fellow at Virginia Tech. His research interests include interdisciplinary intersections of computing and networking systems, artificial intelligence and machine learning, security and privacy, and extended reality. More info: https://people.cs.vt.edu/boji.
|
|
4:45 PM - 5:45 PM |
Breakout Session Reports and Discussion: Group I |
5:45 PM - 7:00 PM |
Poster/Demo Session (Poster size limitation: 36" × 48") |
7:00 PM |
Dinner |