KeckCAVES
Project Overview
The KeckCAVES was a unique visualization collaboration that developed software for interacting with three-dimensional data in real time, allowing researchers to move, rotate, color, and otherwise manipulate datasets with ease using a wide range of visualization and interaction hardware. The software runs on anything from standard desktop computers to fully immersive virtual reality systems such as CAVEs and VR headsets. At the DataLab, we continue to advance data visualization research and to develop new tools that address new types of data and data science problems for an expanding group of users.
The KeckCAVES started as a collaboration between geologists and computer scientists focused on co-developing visualization techniques to improve scientific interpretation of complicated data and model results. It has expanded over the years and now provides an environment for collaborative research, teaching, and mentoring in the use of interactive visualization methods for understanding data and computational models from any area of inquiry.
For more than a decade, the core facility of the KeckCAVES has been the CAVE. This system consists of three walls and a floor with stereoscopic displays providing full 3D images, head tracking to render correct stereo for the tracked viewer, and a set of tracked input devices for in-depth interaction with the visualizations. This platform was used to develop many pieces of research and educational software, which have since been made compatible with Linux, Unix, and macOS desktops, 3D TV-based “mini-CAVEs,” GeoWalls, and other systems.
In recent years, the KeckCAVES has fully embraced modern commodity VR headsets, moving towards a decentralized form of operation where multiple users in the same physical space—or at multiple remote locations—can jointly analyze data using off-the-shelf VR hardware.
Several of the software tools that began development in the KeckCAVES have found wider use, such as the Augmented Reality (AR) Sandbox, 3DVisualizer, and Crusta.
Products Developed in KeckCAVES
Augmented Reality Sandbox
The Augmented Reality (AR) Sandbox project combines 3D visualization applications with a hands-on physical sandbox exhibit to teach earth science concepts. The AR Sandbox allows users to create topography models by shaping real sand, which is then augmented in real time with an elevation color map, topographic contour lines, and simulated flowing water. The system teaches geographic, geologic, and hydrologic concepts such as how to read topographic maps, the meaning of contour lines, watersheds, catchment areas, and levees.
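As a rough sketch of the core idea, the following Python code converts a depth image to sand elevation and renders it with an elevation color map and contour lines. A synthetic depth frame stands in for a live depth camera stream, and the names and constants are illustrative assumptions; the actual AR Sandbox processes a live depth stream on the GPU.

```python
# Minimal sketch of the AR Sandbox display idea, using a synthetic depth
# frame in place of a live depth camera. The names and the floor_depth
# constant are illustrative assumptions, not the real system.
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for one frame from an overhead depth camera (meters from the
# camera to the sand surface; smaller values mean taller sand).
depth_frame = 1.2 - 0.15 * np.abs(
    np.sin(np.linspace(0, 3, 240))[:, None]
    * np.cos(np.linspace(0, 4, 320))[None, :]
)

# Convert camera depth to sand elevation above the sandbox floor.
floor_depth = 1.2  # assumed camera-to-empty-sandbox distance
elevation = floor_depth - depth_frame

fig, ax = plt.subplots()
ax.imshow(elevation, cmap="terrain")                 # elevation color map
ax.contour(elevation, levels=np.arange(0.0, 0.16, 0.01),
           colors="black", linewidths=0.5)           # contour lines
ax.set_title("Sandbox elevation with topographic contours")
plt.show()
```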
The project seeks to raise public awareness, understanding, and stewardship of freshwater lake ecosystems and earth science processes using immersive 3D visualizations of lake and watershed processes. The final product is self-contained and ideal as a hands-on exhibit in science museums or as a classroom tool for Earth science education at all levels.
The code for the AR Sandbox has been made available to anyone under the free and open-source GNU General Public License, and detailed plans and installation instructions are provided so others can take advantage of this educational tool. See the AR Sandbox project page to learn more.
The AR Sandbox is the result of an NSF-funded project on informal science education for freshwater lake and watershed science, developed by the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES) together with the UC Davis Tahoe Environmental Research Center, Lawrence Hall of Science, and ECHO Lake Aquarium and Science Center.
3DVisualizer
3DVisualizer is a highly interactive application for the visualization and analysis of 3D multivariate gridded data, such as that produced by finite element method (FEM) simulations, confocal microscopy, serial sectioning, computerized axial tomography (CAT) scans, or magnetic resonance imaging (MRI) scans.
3DVisualizer uses carefully optimized algorithms and data structures to support the high frame rates required for immersion and the real-time feedback required for interactivity. As an application developed for VR from the ground up, 3DVisualizer realizes benefits that usually cannot be achieved by software initially developed for the desktop and later ported to VR. 3DVisualizer can also be used on desktop computers running the Linux operating system with a similar level of real-time interactivity, bridging the “software gap” between desktop and VR that has been an obstacle to the adoption of VR methods in the geosciences.
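As a rough illustration of one operation commonly applied to such gridded data, the sketch below extracts an isosurface from a synthetic 3D scalar grid. It uses scikit-image's marching cubes purely as a stand-in; this is not 3DVisualizer's own implementation, which relies on optimized C++ algorithms to sustain VR frame rates.

```python
# Sketch of isosurface extraction from a 3D scalar grid, one operation a
# tool like 3DVisualizer performs interactively. scikit-image stands in
# for 3DVisualizer's own optimized algorithms here.
import numpy as np
from skimage import measure

# Synthetic scalar field sampled on a 64^3 lattice.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
scalar = x**2 + y**2 + z**2  # distance-squared from the grid center

# Extract the isosurface at a chosen scalar value (a sphere of radius 0.7).
verts, faces, normals, values = measure.marching_cubes(scalar, level=0.49)
print(f"isosurface: {len(verts)} vertices, {len(faces)} triangles")
```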
While many of the capabilities of 3DVisualizer are already available in other desktop software packages, several features distinguish 3DVisualizer:
- 3DVisualizer can be used in many environments, including GeoWalls, CAVEs, and modern VR headsets, as well as on desktop computers.
- In non-desktop environments, the user interacts with the data set directly using a wand or other input device instead of working indirectly via dialog boxes or text input.
- On desktops, 3DVisualizer provides real-time interaction with very large data sets that cannot easily be viewed or manipulated in other software packages.
LiDAR Viewer
LiDAR (Light Detection and Ranging) uses a laser and a sensor to measure the distance to, and the color of, the surfaces that reflect the laser. Running a LiDAR scanner across a scene produces a sparse representation of its surfaces as clouds of points in space. These point clouds typically need significant post-processing to be converted into recognizable 3D geometry. LiDAR Viewer instead displays LiDAR point clouds directly as navigable, interactive 3D environments in VR. LiDAR Viewer can also analyze data; for example, an interactive selection tool can be used to mark the subset of LiDAR points defining a single feature, and algorithms can then derive equations describing the shape of that feature.
LiDAR Viewer uses special data structures to interactively visualize and analyze LiDAR data sets of almost unlimited size, with the current in-house record at 11.3 billion data points. We implemented a brush-based selection tool that allows a user to select points by touching them with a “brush” connected to a VR input device or to a mouse and keyboard. The software currently supports fitting selected point sets to line, plane, sphere, or cylinder shapes. It also contains a simple color mapping algorithm that visualizes each point's distance from a reference plane, for creating quick visual elevation maps. More recently, we added point-based illumination, which allows points to be lit as if they were full 3D surfaces. This utility can handle multiple light sources in real time with little impact on performance.
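A minimal sketch of two of these ideas, brush-based selection and distance-from-plane color mapping, assuming a small in-memory point cloud; the real viewer streams points from out-of-core spatial data structures far too large for memory, and all names below are illustrative:

```python
# Sketch of brush-based selection and reference-plane color mapping on an
# in-memory point cloud; LiDAR Viewer itself streams points from
# out-of-core spatial data structures. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(42)
points = rng.uniform(0.0, 100.0, size=(1_000_000, 3))  # stand-in cloud

# Brush selection: mark all points within `radius` of the brush position,
# which would track a VR input device or the mouse cursor.
brush_pos = np.array([50.0, 50.0, 50.0])
radius = 5.0
selected = np.linalg.norm(points - brush_pos, axis=1) <= radius
print(f"brush selected {selected.sum()} points")

# Elevation-style coloring: signed distance of every point from a
# reference plane given by a point on the plane and a unit normal.
plane_point = np.array([0.0, 0.0, 40.0])
plane_normal = np.array([0.0, 0.0, 1.0])  # must be unit length
distance = (points - plane_point) @ plane_normal
# Normalize distances to [0, 1] for lookup in a color map.
t = (distance - distance.min()) / (distance.max() - distance.min())
```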
Crusta
Crusta is an interactive viewer and mapping tool for local, regional, or global high-resolution topography data, typically combined with aerial or satellite imagery. Unlike most other GIS software, Crusta uses a virtual globe to support mapping directly on the 3D surface, without the need for map projections. Crusta uses a multi-resolution representation that supports interactive work on globe-sized data sets with resolutions reaching a meter or smaller.
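To illustrate why a virtual globe needs no map projection, the sketch below places geodetic coordinates directly in 3D space by converting latitude, longitude, and elevation to Earth-centered Cartesian coordinates on the standard WGS84 ellipsoid; Crusta's actual multi-resolution tiling scheme is considerably more involved.

```python
# Sketch of placing data directly on a 3D globe: geodetic coordinates are
# converted to Earth-centered, Earth-fixed (ECEF) Cartesian coordinates on
# the WGS84 ellipsoid, so no 2D map projection is ever involved.
import numpy as np

WGS84_A = 6378137.0             # semi-major axis (m)
WGS84_F = 1.0 / 298.257223563   # flattening
E2 = WGS84_F * (2.0 - WGS84_F)  # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, elev_m):
    """Convert geodetic latitude/longitude/elevation to ECEF meters."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    x = (n + elev_m) * np.cos(lat) * np.cos(lon)
    y = (n + elev_m) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + elev_m) * np.sin(lat)
    return np.array([x, y, z])

print(geodetic_to_ecef(38.54, -121.76, 16.0))  # approximate Davis, CA
```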
Crusta was developed by a computer science graduate student working closely with geology students who defined the needs, tested the code, and suggested additional features. Crusta allows for the representation of high-resolution surface data, visualization of those data on a virtual globe, and efficient exploration and annotation using real-time interactive tools.
Crusta was used prominently in UC Davis's scientific response to the 2010 Haiti earthquake and in the evaluation of potential landing sites for NASA's Curiosity Mars rover mission.
Remote Collaboration
To support collaborative scientific exploration by teams of scientists using VR headsets that occlude their vision and thus prevent direct person-to-person interaction, KeckCAVES has been developing a collaboration framework that puts multiple users, each using or wearing their own set of VR equipment, into the same virtual space where they can work on 3D scientific data together. The same framework also supports remote collaboration, bringing scientists from multiple locations together into a “holographic” meeting. Scientists can work “cross-platform” across CAVEs, VR headsets, and other VR systems.
This framework has already been used for telemedicine applications connecting multiple physicians, or a physician and their patient, and for a virtual theater performance in which actors in different locations performed together.
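As a rough, hypothetical illustration of the kind of state such a framework must exchange (this is not the actual KeckCAVES protocol), each client might periodically broadcast the poses of its user's head and input devices so remote participants can be drawn as avatars in the shared space:

```python
# Hypothetical sketch of a collaboration state update; the real KeckCAVES
# framework's protocol is not reproduced here. Each client broadcasts its
# user's head and device poses so others can render them as avatars.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Pose:
    position: tuple     # (x, y, z) in shared navigation coordinates
    orientation: tuple  # unit quaternion (x, y, z, w)

@dataclass
class ClientUpdate:
    client_id: str
    timestamp: float
    head: Pose
    devices: list       # one Pose per tracked input device

update = ClientUpdate(
    client_id="cave-1",
    timestamp=time.time(),
    head=Pose((0.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0)),
    devices=[Pose((0.3, 1.2, -0.2), (0.0, 0.0, 0.0, 1.0))],
)
# Serialize for transmission; a real system would use a compact binary
# format and interpolate between updates to hide network jitter.
print(json.dumps(asdict(update)))
```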
DataLab Contact
- Oliver Kreylos (technical lead)