Vrui can record user interactions during a session, and such a recording can be played back later in the same environment or, with some care, in a different one. Recording a session is somewhat like recording a movie, but it works differently; the distinction is similar to that between a MIDI music file and a WAV sound file. A MIDI file captures a performance, i.e., which keys on an instrument were pressed, when, and for how long, whereas a WAV file captures the resulting sound. MIDI files are much more flexible: they can be edited (Vrui recordings, however, cannot), and their playback parameters can be changed; for example, a MIDI recording of a piano performance can be played back using guitar sounds. Put differently, WAV samples a sound wave, whereas MIDI samples a musician.
Technical: Session recording works by recording all input to a Vrui application, i.e., timer ticks, random numbers, and input device states (positions/orientations and button/valuator events). If a Vrui application is deterministic (which it remains even when using random numbers, since those are recorded as part of the input stream), then feeding the same input stream to it multiple times will result in the same behaviour every time.
Note: If recording/playback does not work with a particular Vrui application, that usually means there is something wrong with the application; if playback doesn't work, the application will probably not run on a cluster either, since Vrui's cluster distribution mechanism uses the same basic idea as session recording. The only exceptions are properly multithreaded applications where application state is changed from a background thread and later used for interaction in the foreground thread. Such applications cannot be recorded reliably even when they are correct; use a video camera or frame grabbing utility instead.
Like everything else, session recording/playback is configured via Vrui's configuration file, Vrui.cfg.
To enable recording, set the inputDeviceDataSaver tag in an environment's root section to a valid unique section name, such as InputDeviceDataSaver. Then create a section of that same name under the environment's root section, and insert the following tag/value pairs:
inputDeviceDataFileName: Set this to the name of the data file you want to create. I recommend using a .dat extension (although Vrui doesn't care). Vrui will automatically insert a 4-digit number before the extension to create unique file names. This is the only required tag, and the generated file is everything that's needed to play back a recorded session.
soundFileName: Vrui can optionally capture a sound file to go along with a recorded session. This is useful to record voice-overs directly during a session. Set this tag to the sound file name you want to create. The file will be saved in WAV format, and must have a .wav extension. Just like input device data file names, Vrui will insert a 4-digit number before the extension to create unique file names.
sampleResolution: Defines the sound file's sample resolution in bits per sample. This should be set to 16, because the default of 8 bits per sample sounds awful on most consumer-level sound hardware.
numChannels: Defines the number of channels in the sound file (1: mono, 2: stereo). This should usually be set to 1, since most microphones only record mono anyway.
sampleRate: Defines the sound sampling rate in Hz. This should be set to either 44100 or 48000, depending on the computer's sound hardware. 44100 is generally a safe choice, but some sound hardware can only record at 48000 Hz, and requesting any other sample rate will then involve a resampling step that is best avoided.
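Putting the above tags together, a minimal recording setup might look like the following sketch (the environment root section name and file paths are placeholders to be adapted to your installation):

```
section MyEnvironment	# environment's root section; name is a placeholder
	inputDeviceDataSaver InputDeviceDataSaver
	
	section InputDeviceDataSaver
		inputDeviceDataFileName Recording/InputDeviceData.dat	# Vrui appends a 4-digit number before the extension
		soundFileName Recording/SoundData.wav	# optional voice-over recording
		sampleResolution 16
		numChannels 1
		sampleRate 44100
	endsection
endsection
```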
Additionally, you can throttle the Vrui application to a maximum frame rate during recording. This is useful when recording a session in a fast environment (like the CAVE) and then playing it back in a slow environment (such as a laptop). If the recording consistently has a higher frame rate than the playback environment can sustain, playback will run slower than real time, and the synchronization between visuals and a recorded sound track will drift. To throttle, add a maximumFrameRate <max frame rate in Hz> tag to the environment's root section. This tag should be removed or commented out for normal use.
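For example, to cap the recording environment at 60 frames per second (the environment root section name and the chosen rate are placeholders; pick a rate the intended playback environment can sustain):

```
section MyEnvironment	# environment's root section; name is a placeholder
	maximumFrameRate 60.0	# remove or comment out for normal use
endsection
```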
The result of recording a session is an input device data file, typically with a .dat extension, and (optionally) a sound file, with a .wav extension. To play back a recorded session, Vrui needs to be configured for playback, and then the same application that was used during recording needs to be started with the exact same command line arguments. In playback mode, Vrui will read input device and timing data from the recorded data file instead of the real devices, and as a result, the application will exhibit the exact same behaviour as during recording.
To enable playback, set the inputDeviceAdapterNames tag in an environment's root section to the name of a valid playback input device adapter section, for example PlaybackAdapter. Then create a section of that same name under the environment's root section, and insert the following tag/value pairs:
inputDeviceDataFileName: The full name of the input device data file to be played back.
synchronizePlayback: If this tag is set to true, Vrui will attempt to play the session back at exactly the speed at which it was recorded. This leads to good synchronization with a recorded sound track, unless the playback environment is consistently slower than the recording environment; if that is the case, use the maximumFrameRate tag during recording as described above. If synchronizePlayback is set to false, Vrui will play the session back as fast as possible (which can be very fast).
soundFileName: The full name of the sound track file that was recorded alongside the input device data file.
These are the most important playback settings; for additional options see the Vrui configuration file reference.
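The playback settings above can be sketched as follows (the adapter section name and file names are placeholders; the inputDeviceAdapterType tag is what selects Vrui's playback adapter):

```
section MyEnvironment	# environment's root section; name is a placeholder
	inputDeviceAdapterNames (Playback)
	
	section Playback
		inputDeviceAdapterType Playback
		inputDeviceDataFileName Recording/InputDeviceData0000.dat	# file created during recording
		soundFileName Recording/SoundData0000.wav	# optional matching sound track
		synchronizePlayback true	# play back at recorded speed
	endsection
endsection
```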
It is typically safest to play back a recorded session in the same environment where it was recorded, making no other changes to the Vrui.cfg configuration file than the ones listed above. However, with care it is possible to transfer a recording. This can be very useful to record a demo in a fully immersive environment, such as a CAVE, and then play it back in a “lesser” environment, such as a laptop computer at a conference or during a talk.
To pull this off, the playback environment has to be configured exactly like the recording environment, with only the screen and window sections adapted to the playback environment. This is crucial: since playback is based on input device data, any mismatch between the two environments will cause the same input device events to lead to different application behaviour, and playback will diverge very quickly.
It is recommended to copy the original environment's configuration file to the playback environment, and then make only the minimal changes necessary to get proper rendering in the playback environment. In general, this means creating a new “fake” viewer and screen/window, and directing Vrui to render to the new window instead; the playback patch file below shows this in detail.
The settings for recording and playback are rather self-contained, and ideally suited to be put in patch configuration files. For example, here is a complete patch configuration file to record sessions in the CAVE:
section Vrui
	section "caveman.geology.ucdavis.edu"
		inputDeviceDataSaver InputDeviceDataSaver
		maximumFrameRate 60.0
		
		section InputDeviceDataSaver
			inputDeviceDataFileName /scratch_data/Recording/InputDeviceData.dat
			soundFileName /scratch_data/Recording/SoundData.wav
			sampleResolution 16
			numChannels 1
			sampleRate 44100
		endsection
	endsection
endsection
Assuming that this file is saved as ~/Vrui.recording.cfg, a session can be recorded by running:
<application name> <application arguments> -mergeConfig ~/Vrui.recording.cfg
The matching patch file to play back a previously recorded session looks like this:
section Vrui
	section "caveman.geology.ucdavis.edu"
		inputDeviceAdapterNames (Playback)
		viewerNames (FakeCAVEViewer, PlaybackViewer, ConsoleViewer)
		
		section Playback
			inputDeviceAdapterType Playback
			inputDeviceDataFileName /scratch_data/Recording/InputDeviceData-MA0003.dat
			#soundFileName /scratch_data/Recording/SoundData0016.wav
			synchronizePlayback true
			quitWhenDone true
			device1GlyphType Cone
			device3GlyphType Cone
		endsection
		
		section FakeCAVEViewer
			name FakeCAVEViewer
			headTracked true
			headDevice Head
			headDeviceTransformation translate (-6.0, -110.0, 30.0)
			viewDirection (0.0, 1.0, 0.0)
			monoEyePosition (0.0, -2.0, -1.5)
			leftEyePosition (0.0, -2.0, -1.5)
			rightEyePosition (0.0, -2.0, -1.5)
			headLightEnabled true
			headLightPosition (0.0, -2.0, -1.5)
			headLightDirection (0.0, 1.0, 0.0)
			headLightColor (0.4, 0.4, 0.4)
			headLightSpotCutoff 180.0
			headLightSpotExponent 0.0
		endsection
		
		section PlaybackViewer
			name CAVEViewer
			headTracked false
			headDevice Head
			headDeviceTransformation translate (0.0, -148.0, 48.0)
			viewDirection (0.0, 1.0, 0.0)
			monoEyePosition (0.0, 0.0, 0.0)
			leftEyePosition (0.0, 0.0, 0.0)
			rightEyePosition (0.0, 0.0, 0.0)
			headLightEnabled true
			headLightPosition (0.0, 0.0, 0.0)
			headLightDirection (0.0, 1.0, 0.0)
			headLightColor (0.6, 0.6, 0.6)
			headLightSpotCutoff 180.0
			headLightSpotExponent 0.0
		endsection
	endsection
endsection
If this file is saved as ~/Vrui.playback.cfg, a session can be played back using the command line
<application name> <application arguments> -mergeConfig ~/Vrui.playback.cfg
where application name and application arguments are exactly the same as those used during recording.
The two viewer sections in the above patch file are used to play back the session from a different point of view. Vrui's main viewer is set to the head tracker from the recording, which is necessary to guarantee correct playback, but the CAVE walls are drawn from the second viewer's point of view, which is set to a fixed position outside the CAVE. This allows playing back a session for a larger audience.
Other common playback tricks are to attach the ConsoleViewer to the tracked head position, to see a first-person view in the console window and capture a movie, or to use both a playback input device adapter and a real input device adapter, which lets a head-tracked user explore a captured session. In the latter case, only the head device should be enabled, to prevent interactions from the viewing user from throwing off session playback.