Direct manipulation in VC is performed through sensors. Sensors are used to actively or passively select objects in a scene, or to drag them from one location to another. A sensor influences an object when the sensor geometry intersects the object's bounding box. An object that is selected or dragged by a sensor receives messages from that sensor, which can be used to implement reactive behaviors to sensor-based interactions. Sensors can also be attached to scene graphs in order to give objects an active role in the scene.
Two selection mechanisms through sensors are available in VC. Passive selection relies on touchSensors. There are four default touchSensors: mouse, head, hand, and laser. The availability of these sensors depends on the interaction environment. In 2D interaction environments, the only default touchSensor is the mouse. Activation of the mouse as a touchSensor is triggered by pressing the opening parenthesis key '(' and stopped by pressing ')'. It can also be preset through the mouse_passive_selection attribute of the window layout element in the configuration file.
Active selection is only available through the mouse. It is triggered by pressing the mouse button defined by the mouse_active_selection attribute of the window layout element in the configuration file.
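Both attributes can thus be preset in the configuration file. The following is a minimal sketch, assuming a window layout element named window; the attribute values shown (true and right) are illustrative assumptions, not the only possible ones:

<window mouse_passive_selection="true" mouse_active_selection="right">
  ...
</window>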
Both active and passive selections rely on axis-aligned bounding boxes that are attached to media objects and recomputed after each modification of their geometry. The receptivity of objects to passive selection, active selection, and dragging is controlled respectively by the testable, interactable, and draggable boolean attributes of scene graph nodes. In both selection modes, the selected object (if any) is the closest receptive object whose bounding box is intersected by a ray cast from the eye of the observer through the extremity of the pointing device.
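For illustration, receptivity could be declared on a node as follows. This sketch uses the three boolean attributes described above; the node id and its content are hypothetical:

<node id="movable_object" testable="true" interactable="true" draggable="true">
  ...
</node>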
Both selections are associated with automatic message emission to the selected object. Passive selection and deselection through the mouse are associated with the messages sensor-mouse-in and sensor-mouse-out, and passive selection and deselection through the wand with sensor-laser-in and sensor-laser-out. The messages for passive selection through user-defined touchSensors are defined by the values of the attributes sensor-touchSensor-in and sensor-touchSensor-out. Active selection and deselection through the mouse are associated with the messages sensor-mouse-catch and sensor-mouse-end-catch, and active selection and deselection through the wand with sensor-laser-catch and sensor-laser-end-catch.
Passive selection automatically draws the bounding box of the selected object. Response to sensor selection is defined in scripts through a message_event trigger:
<node id="quad_back_wall_envelope" testable="true" interactable="false"> <script id="script_back_wall"> <command> <trigger type="message_event" value="sensor-touchSensor-in" /> <action> <set_sound_attribute_value operator="="> <sound begin="now" end="100000" /> </set_sound_attribute_value> <set_internal_state value="A_ongoing" /> <target type="single_node" value="#quad back wall envelope" /> </action> <action> <send_message_udp value="source play" /> <target type="single_host" value="spatializer" /> </action> </command> <command> <trigger type="message_event" value="sensor-touchSensor-out" state="A_ongoing" bool_operator="==" /> <action> <set_sound_attribute_value operator="="> <sound end="now" /> </set_sound_attribute_value> <set_internal_state value="A_ongoing" /> <target type="single_node" value="#quad back wall envelope" /> </action> <action> <send_message_udp value="source stop" /> <target type="single_host" value="spatializer" /> </action> </command> </script> <sound id="explosion_1" ... > </sound> <subdivision id="envelope_back" ... > ... </subdivision> </node>
Active selection (see Section 2.6.1) can also be used to drag an object by moving the sensor while the object is selected.
Dragging is only available through the mouse. It is triggered by pressing the right mouse button and moving the mouse while the button is held down. When an object is selected and the pointing device is moved, the selected object moves in conjunction with the pointer, in a plane perpendicular to the view vector.
The receptivity of objects to active selection and dragging is controlled by the interactable and draggable boolean attributes of scene graph nodes.
Dragging is associated with automatic message emission to the dragged object. During dragging with the mouse, a sensor-mouse-drag message is sent to the dragged object at each frame.
Response to dragging by the mouse sensor is defined in scripts through a message_event trigger:
<command>
  <trigger type="message_event" value="sensor-mouse-drag" />
  <repeatAction begin="1" end="{#nbVoiles}" step="1" id="n">
    <action>
      <send_message value="update_phys"/>
      <target type="single_node" value="#phys({$pyramid:n})"/>
    </action>
  </repeatAction>
</command>
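Such a command would typically be embedded in the script of a draggable node, in the manner of the first example; the node id and contents in the following sketch are hypothetical:

<node id="dragged_pyramid" interactable="true" draggable="true">
  <script id="script_drag">
    <!-- commands such as the sensor-mouse-drag handler above -->
    ...
  </script>
  ...
</node>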