Although VR applications have different purposes and do not typically work alike, they share common ground when performing basic tasks (analogous to how most desktop user interface systems provide a more or less consistent “look and feel”). All VR applications listen only to the CAVE wand – the keyboard and mouse are completely ignored, except for closing an application as described below – and all interactions are triggered by pressing single buttons or button combinations on the wand, or by pushing the wand joystick. The current applications have no need for clicking any of the buttons; performing a task typically involves pressing a button until something happens, and then releasing it again.
The VR programming toolkit used to write all VR applications is quite flexible in what a user has to do to perform certain tasks (and almost everything can be changed while an application is running), but all applications start up with a default configuration, which is described in the following sections and illustrated in Figure 1.
Basic interactions with VR applications fall into the following categories:
Navigation is the process of changing how the “real” coordinate system of the CAVE relates to the “virtual” coordinate system of the 3D environment displayed by a VR application. In other words, navigation allows a user to walk/fly through a 3D environment, or to pick up and move an entire environment. In the default configuration, VR applications offer two different modes of navigation.
This mode allows a user to pick up a 3D environment by pressing the yellow (bottom-left) wand button, and move it by translating and/or rotating the wand while the button is pressed. This is much more difficult to explain in detail than to actually use.
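The underlying math of this "grab" navigation can be sketched as follows: at button press, the application records the environment's pose relative to the wand, and while the button is held it re-applies that relative pose to the wand's current transform. This is a minimal illustration of the technique, not the toolkit's actual API; transforms are represented as 4x4 homogeneous matrices.

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix (helper for the example)."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

class GrabNavigator:
    """On grab, lock the navigation transform to the wand so the whole
    environment follows wand motion while the button is held."""
    def __init__(self):
        self.nav = np.eye(4)    # "navigation" transform: virtual -> real
        self._delta = None      # environment pose relative to the wand

    def button_down(self, wand):
        # Remember where the environment sits relative to the wand.
        self._delta = np.linalg.inv(wand) @ self.nav

    def wand_moved(self, wand):
        if self._delta is not None:
            self.nav = wand @ self._delta   # environment follows the wand

    def button_up(self):
        self._delta = None      # environment stays where it was released
```

Because the relative pose is captured once at press time, any combination of wand translation and rotation carries the environment along rigidly, which is exactly why the interaction feels like physically holding the model.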
In addition to translating and rotating, it is also possible to uniformly scale the 3D environment by pushing the red (top-left) wand button while the yellow button is already pressed (this is easiest when pressing the yellow button with the back of one's thumb, and then pushing the red button with the top of the thumb). While both buttons are pressed, moving the wand in the direction it is pointing will shrink the 3D environment, while moving the wand in the opposite direction will enlarge it. The center point of scaling is the tip of the wand. This can lead to a confusing effect when trying to scale a 3D object that is relatively far away from the wand: when enlarging, the model will not only become bigger, but also move away from the wand position, and the effect of enlarging will be offset by the model disappearing in the distance (this is unintuitive because objects in the real world can typically not be scaled). To avoid this confusing effect, it is best first to move the object to be enlarged to the wand position (poking it with the wand), and then to scale it.
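The scaling behavior, including the "runaway" effect described above, follows directly from scaling about a fixed center: every point moves along the line connecting it to the wand tip. The sketch below assumes an exponential mapping from wand displacement to scale factor so that repeated motions compose multiplicatively; the `rate` constant is an illustrative tuning value, not one taken from the toolkit.

```python
import numpy as np

def scale_about_point(points, center, factor):
    """Uniformly scale points about a center (the wand tip):
    p' = c + factor * (p - c)."""
    points = np.asarray(points, dtype=float)
    return center + factor * (points - center)

def scale_factor(displacement, pointing_dir, rate=0.5):
    """Hypothetical mapping: moving the wand along its pointing direction
    shrinks the environment (factor < 1), moving against it enlarges
    (factor > 1). Exponential so successive motions compose cleanly."""
    along = np.dot(displacement, pointing_dir)
    return np.exp(-rate * along)
```

Note how a point far from the center moves a long way when enlarged (`c + 2(p - c)` doubles its distance from the wand tip), which is precisely why distant objects appear to flee when scaled up.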
This mode allows a user to fly through larger 3D environments by pushing the wand joystick. If the joystick is pushed towards the front of the wand, the user will fly in the direction the wand is pointing, with a velocity proportional to how much the joystick is pushed. Pulling the joystick towards the back will fly backwards. Pushing the joystick left or right will turn around the wand's up axis, with an angular velocity proportional to how much the joystick is pushed. The flying velocity is typically quite fast, so flying is less useful when examining smaller models, and more useful to navigate expansive environments (such as terrain maps at large scales).
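A single frame of this flying mode amounts to integrating a velocity proportional to the joystick deflection. The sketch below is a per-frame update under assumed tuning constants (`max_speed`, `max_turn` are illustrative, not values from the toolkit): the forward/backward axis translates along the wand's pointing direction, and the left/right axis turns about the vertical axis.

```python
import numpy as np

def fly_step(position, yaw, pointing_dir, joystick_y, joystick_x,
             dt, max_speed=5.0, max_turn=1.0):
    """One frame of joystick flying. joystick_y in [-1, 1] flies along
    the wand's pointing direction; joystick_x in [-1, 1] turns about
    the up axis. Both velocities are proportional to deflection."""
    position = position + pointing_dir * (joystick_y * max_speed * dt)
    yaw = yaw + joystick_x * max_turn * dt
    return position, yaw
```

With `max_speed` fixed, covering fine detail on a small model requires very gentle joystick deflections, which is one way to see why flying suits expansive environments better than close-up inspection.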
Program control allows a user to change the behavior or look of a VR application – it is essentially a fancy way to refer to using an application's main menu. The VR toolkit provides a 2.5D user interface and menu system (essentially flat menus and dialogs just as on the desktop, but floating in space) that are used just as if the CAVE wand were a regular mouse.
Pressing the blue (top-right) wand button will bring up an application's main menu; when first opened, it will appear floating several inches in front of the wand, facing the user. While the blue button is pressed, the VR toolkit will display a red laser ray coming from the tip of the wand. This ray is used to select entries from the menu just as in desktop programs. If the blue button is released while the red ray is hitting a menu entry, the function associated with that entry will be executed. To close the main menu without executing a function, one simply points the ray away from the menu and releases the blue button.
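Since the menus are flat panels floating in space, selecting an entry with the red ray reduces to a ray-plane intersection followed by a 2D hit test against the menu's entries. The function below sketches the geometric core under that assumption; it is not the toolkit's actual selection code.

```python
import numpy as np

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Intersect the selection ray with the (flat) menu plane.
    Returns the hit point, or None if the ray is parallel to the
    plane or points away from it."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to menu plane
    t = np.dot(plane_point - origin, plane_normal) / denom
    if t <= 0:
        return None                      # menu is behind the wand
    return origin + t * direction
```

Releasing the button with the hit point inside an entry's rectangle triggers that entry; releasing with no hit (or a hit outside the menu) closes the menu without executing anything, matching the behavior described above.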
Although the menu functions provided by VR applications vary widely (depending, of course, on each application's purpose), there are some typical menu entries shared between applications. Most applications have a Center Display menu entry that will reset the application's coordinate system to its state at the start of the program.
Locating is the process of executing a program function at a particular point in space (for example, evaluating a volumetric function or creating a streamline, slice, or contour surface). What happens when locating depends on the application, but a common factor is that locating typically starts when some button is pushed, is continually updated while that button is pressed, and stops when the button is released. For example, when the locator function is to create a slice, the slice will be created at the wand's position when the button is pushed, will move along with the wand while the button is pressed, and then stay fixed once the button is released.
Note: The default environment configuration does not map any wand buttons to locating; to do so, a locator tool has to be created (see below).
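The press/update/release lifecycle described above can be sketched as a small state machine. The class names and methods here are illustrative only (they are not the toolkit's API); the example models the slice-locator behavior from the text.

```python
class SliceLocator:
    """Sketch of the locator lifecycle for a slicing function:
    the slice appears at the wand on button press, follows the wand
    while pressed, and stays fixed on release."""
    def __init__(self):
        self.slice_pos = None    # where the slice currently sits
        self._active = False

    def button_down(self, wand_pos):
        self._active = True
        self.slice_pos = wand_pos       # slice created at the wand

    def wand_moved(self, wand_pos):
        if self._active:
            self.slice_pos = wand_pos   # slice follows the wand

    def button_up(self):
        self._active = False            # slice stays where released
```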
Dragging is the process of picking up a single object or collection of objects in a VR application, and moving them in a fashion analogous to navigation. Similar to locating, dragging is initiated when some button is pushed. At this point, the VR application selects which object(s) a user wants to drag (typically by selecting the object that is poked by the wand, or is inside some region of space around the wand), and remembers the relative position of that object to the wand. Then, when the wand is rotated and/or translated while the button is pressed, that object will follow along. Dragging stops when the button is released.
The difference between locating and dragging is sometimes subtle; in general, the distinction is that locating performs a function at a position in space, whereas dragging selects an object (such as an atom in a molecular dynamics simulation) and performs a function on that object (typically moving it).
Note: The default environment configuration does not map any wand buttons to dragging; to do so, a dragging tool has to be created (see below).
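The first step of dragging, deciding which object the user wants, is commonly done by picking the object nearest the wand within some small region of space. The sketch below illustrates that selection step under assumed names and an illustrative default radius; it is not the toolkit's actual picking code.

```python
import numpy as np

def pick_object(wand_pos, objects, radius=0.1):
    """Select the object to drag: the object whose position is nearest
    the wand, within a small region around it. `objects` maps names to
    3D positions; returns the chosen name, or None if nothing is close
    enough. The radius is an illustrative default, not a toolkit value."""
    best, best_d = None, radius
    for name, pos in objects.items():
        d = np.linalg.norm(np.asarray(pos, dtype=float) - wand_pos)
        if d <= best_d:
            best, best_d = name, d
    return best
```

Once an object is picked, the drag itself works like grab navigation restricted to that object: its pose relative to the wand is recorded at press time and re-applied as the wand moves, until the button is released.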
As mentioned above, the relation between buttons on the wand and program functions such as navigation or locating is not fixed and can be changed according to a user's needs during the application's runtime. The basic idea is that each button (or joystick axis) on an input device such as the wand can be connected to a tool that will perform a certain function when that button is pressed (or the joystick axis is pushed). Tools not only define what action happens when a particular button is pressed, but also how that action happens. For example, there are several different navigation tools a user can select from. All of them navigate, but they do it in different ways; which tool works best is sometimes dependent on what a user wants to do.
When a user pushes a button that currently has no tool assigned to it (in the default environment, these are the green (bottom-right) button, the trigger button on the wand's handle, and the button that is activated when the joystick is pushed down), the tool selection menu will appear. This works the same way as an application's main menu; a user can select a tool from any of the listed categories and map it to the pressed button using the red selection ray. If a tool was selected, and matches the pressed button(s) (some tools require more than one button), the tool will be created. Henceforth, pushing that button will activate the tool. For example, a user can map a locator tool (such as the 6-DOF Locator) to the trigger button and subsequently use the trigger to evaluate 3D functions or create slices or contour lines.
To release a mapped tool (so it can be replaced with a different one), a user has to press the tool's button while the wand is inside the tool destruction zone, which in the current default environment is an invisible sphere about one foot to the right of the user's head. Moving the wand into the zone, pushing, briefly holding, and then releasing a button will release the tool mapped to that button, such that the next press of that button outside the zone will display the tool selection menu again.
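The button-to-tool binding logic of the last two paragraphs amounts to a lookup table with three cases at button press: unmapped button opens the tool selection menu, mapped button outside the destruction zone activates its tool, and mapped button inside the zone releases the binding. The sketch below models just that dispatch; names, return values, and the zone radius are illustrative assumptions, not the toolkit's API.

```python
import numpy as np

class ToolManager:
    """Sketch of button-to-tool mapping with a spherical tool
    destruction zone."""
    def __init__(self, zone_center, zone_radius=0.3):
        self.bindings = {}                            # button -> tool
        self.zone_center = np.asarray(zone_center, dtype=float)
        self.zone_radius = zone_radius

    def assign(self, button, tool):
        """Bind a tool chosen from the tool selection menu."""
        self.bindings[button] = tool

    def press(self, button, wand_pos):
        """Dispatch a button press; returns (action, tool)."""
        dist = np.linalg.norm(np.asarray(wand_pos, dtype=float)
                              - self.zone_center)
        if button in self.bindings:
            if dist <= self.zone_radius:
                return ('released', self.bindings.pop(button))
            return ('activate', self.bindings[button])
        return ('show_tool_menu', None)
```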
Figure 1: Diagram of the functions mapped to the CAVE wand's buttons and joystick in the default configuration. The hotspot is the point at which applications consider the wand to be pointing. It is also marked by a grey cone displayed in the CAVE; normally, the grey cone's tip should appear exactly at the position of the small nose on the front of the wand (maybe a couple of millimeters off).
Some more complex applications have multiple different functions that can be mapped to locator or dragging tools. For example, a locator tool could be used to create slices, contour surfaces, or streamlines. Typically, applications handle this by offering a setting in their main menu which will influence how future tools are created. For example, a visualization application might offer creating slices and streamlines. When a user creates a locator tool, that tool takes the current function selected in the application. If the selected function is changed afterwards, the previously created tool typically does not change functions, but newly created tools will. (Some applications offer a menu entry to override the functions of existing tools.) This approach allows a user to create different tools performing different functions and map them to different buttons, to be able to use them at the same time. The typical process of mapping an application function to a wand button is thus: