Interaction Transformation Nodes


Touch Translation: Single-touch gesture to translate an object along an axis.
Touch Rotation: Single-touch gesture to rotate an object around an axis.
Touch Transformation: Two-touch gesture to translate, rotate, and scale at the same time.

These three interaction nodes form the set of transformation gestures. They operate in an object-based fashion: to activate them, a touch has to hit a mesh/font/volume that is part of the subtree spanned by the interaction node. At first glance it might seem an unnecessary duplication to have dedicated translation and rotation nodes when Touch Transformation can handle all of these transformations. However, each node is custom-tailored to specific use cases and achieves different effects.

The transformation nodes are derived from Touch Button and as such inherit the same properties and events to detect whether a touch is hovering inside the active area of the node or actually pressed. These can, for example, be used to add custom visual feedback when the user activates the transformation node.


All of the transformation nodes have in common that the values they generate are computed by intersecting a viewing ray (starting at the camera and going through the touch position) with the X/Y-plane of the coordinate system active at the node's location in the hierarchy, and then mapping this value. The underlying meshes/fonts/volumes are only used to decide which interaction node is activated and do not affect the value generation.
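The ray/plane intersection described above can be sketched in a few lines. This is a generic, language-agnostic illustration of the math (here in Python), not the actual Ventuz implementation; the camera position, touch ray, and plane values are made-up example numbers.

```python
def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Return the point where the ray origin + t*direction hits the plane,
    or None when the ray is (nearly) parallel to it."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray runs (almost) parallel to the plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = dot(plane_normal, diff) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin; the viewing ray goes through the touch position on the
# near plane; the node's local X/Y-plane here sits at z = -5 with normal (0, 0, 1).
hit = intersect_ray_plane((0, 0, 0), (0.2, 0.1, -1.0), (0, 0, -5), (0, 0, 1))
# The (x, y) part of the hit point is the value that is then mapped.
```

When the ray direction is nearly parallel to the plane the intersection point shoots off toward the vanishing point, which is why the mapping becomes unreliable there (see the warning in the Touch Translation section below).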

In addition, all three nodes have an inherent behavior and a simplified physics simulation. There is no collision detection, but objects can have a certain inertia which keeps them moving after the user has released the touch. By using the Limit input properties, movement in individual directions can be restricted artificially.

Touch Translation

Use for: Sliders, 2D positioning...

A single touch is used to translate the object in the X/Y direction. This is usually used with the X- and Y-axis aligned to the screen, but rotated versions are also possible in order to move objects back and forth in 3D space.

Be careful to have the vanishing points of both axes lie outside the area visible to the camera, as the mapping calculation close to those points is error-prone.



A simple slider can be built by using the Limit input properties to restrict movement in one direction.
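Conceptually, such a slider reduces to clamping the mapped value to the track and suppressing the other axis. The sketch below is a hypothetical illustration of that idea; the function and parameter names are not part of the Ventuz API.

```python
def clamp(value, lo, hi):
    """Restrict a value to the range [lo, hi]."""
    return max(lo, min(hi, value))

def slider_update(mapped_x, mapped_y, track_min=0.0, track_max=1.0):
    """Horizontal slider sketch: the Y limit removes vertical movement
    entirely, while the mapped X value is clamped to the track's range."""
    return clamp(mapped_x, track_min, track_max), 0.0
```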

Touch Rotation

Use for: Slot machine style wheels, volume knob style elements, steering wheels, ...

The most important properties for Touch Rotation are the Rotation Axis and the Mapping mode. The on-axis mapping mode assumes that the user roughly looks along the rotation axis onto the top of the object (e.g. a steering wheel or the dial plate of an antique telephone). In this mode, the node will try to follow the touch as closely as possible in order to keep the same point on the surface under the touch position at all times. This makes it possible to do very precise rotations, even up to the point where one can do a complete revolution. However, this mode can feel weird if the touch changes its distance from the rotation axis.


The off-axis mapping mode assumes that the rotation axis is at roughly a 90 degree angle to the viewing direction (e.g. a slot machine). When rotating, the object will only stick to the touch position up to a certain point. As the touch gets closer to the boundary of the object, a different mapping is used so that moving the touch further along will keep the object rotating at a constant speed. While one loses the feeling of the object sticking exactly to the finger, this mode feels much nicer when it comes to making an object spin.

Both modes work completely differently, and which one is more suitable depends on the specific use case.
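The two mappings can be sketched as follows. This is an illustrative approximation of the described behavior, not the actual Ventuz code: on-axis rotation follows the change in polar angle of the touch around the axis, while the off-axis sketch clamps the effective drag distance so that motion past a (here assumed) falloff radius yields a constant rotation rate.

```python
import math

def on_axis_delta(prev_touch, cur_touch, center):
    """On-axis mapping: keep the grabbed surface point under the finger by
    rotating by the change in polar angle around the rotation axis."""
    a0 = math.atan2(prev_touch[1] - center[1], prev_touch[0] - center[0])
    a1 = math.atan2(cur_touch[1] - center[1], cur_touch[0] - center[0])
    d = a1 - a0
    # wrap to (-pi, pi] so successive deltas accumulate into full revolutions
    return (d + math.pi) % (2 * math.pi) - math.pi

def off_axis_delta(dx, radius, falloff=0.8):
    """Off-axis mapping sketch: near the axis the object sticks to the touch
    (angle = arc length / radius); past `falloff * radius` the effective drag
    is clamped, so dragging further keeps the object spinning at a constant
    rate. `falloff` is an assumed tuning constant."""
    effective = min(abs(dx), falloff * radius)
    return math.copysign(effective / radius, dx)
```

Note how the on-axis delta is independent of the touch's distance from the axis, which is exactly why it allows precise full revolutions but feels odd when that distance changes mid-gesture.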

Touch Transformation

Use for: Moving picture metaphor, scaling objects, ...

The Touch Transformation node combines the most common one and two finger gestures to interact with an object. A single touch can be used to translate an object. Two touches can be used to rotate the object (by doing a circular motion), scale it (pinch/stretch), and translate it (by moving all touches in the same direction).
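The standard way to decompose a two-touch gesture into these three components is to track the midpoint (translation), the angle between the touches (rotation), and the distance between them (scale). The sketch below shows this math generically; it is not the Ventuz implementation.

```python
import math

def two_touch_transform(p0_old, p1_old, p0_new, p1_new):
    """Decompose the motion of two touches into translate / rotate / scale:
    translation follows the midpoint, rotation the angle of the vector
    between the touches, and scale the change in their distance (pinch)."""
    def mid(a, b): return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    def vec(a, b): return (b[0] - a[0], b[1] - a[1])
    v_old, v_new = vec(p0_old, p1_old), vec(p0_new, p1_new)
    m_old, m_new = mid(p0_old, p1_old), mid(p0_new, p1_new)
    translation = (m_new[0] - m_old[0], m_new[1] - m_old[1])
    rotation = math.atan2(v_new[1], v_new[0]) - math.atan2(v_old[1], v_old[0])
    scale = math.hypot(*v_new) / math.hypot(*v_old)
    return translation, rotation, scale
```

For example, two touches pinching from a distance of 2 units down to 1 unit around a fixed midpoint yield zero translation, zero rotation, and a scale factor of 0.5.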

Due to the many degrees of freedom of this node, there are only a few practical use cases for it. Typically, a user will use the Limit properties to restrict movement in one or the other direction. In most cases, however, one should try to avoid the Transformation node in favor of the dedicated Translation or Rotation nodes.


Customized Motion

Each of the transformation nodes automatically performs the transformation implied by the gesture (e.g. a translation node moves the object without an explicit Axis node in front of it). To customize the way objects behave, this implicit motion can be disabled by unchecking BehaviorMotion. When BehaviorMotion is disabled, the node still does all of its calculations and provides the results as output properties. These can be manually bound to an Axis node, and by using Math Effect nodes like Decay or Dampener, different interaction results can be achieved.
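The kind of effect such a node inserts between the gesture output and the Axis is typically a form of exponential smoothing. The sketch below illustrates that idea generically; it is not the implementation of the Ventuz Decay or Dampener nodes, and the parameter semantics are assumed.

```python
def damper(current, target, smoothing, dt):
    """Frame-rate-independent exponential damping toward a target value:
    `smoothing` is the assumed fraction of the remaining distance still
    left after one second (0 snaps instantly, values near 1 feel sluggish).
    Called once per frame with the elapsed time `dt` in seconds."""
    return target + (current - target) * smoothing ** dt
```

Feeding the node's output property through such a stage before it reaches the Axis makes the object trail the finger smoothly instead of following it rigidly.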

Physics Simulation

All transformation nodes contain a simplified physics model to simulate effects like inertia or other post-gesture motion. Subtle effects like these are crucial to convey realism or "weight" in a MultiTouch scene.

As stated before, the physics model inside Ventuz is a simplified one, custom-tailored for MultiTouch purposes. It is not designed to handle collisions, gravity, or other factors expected in a full-fledged physics simulation.
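Inertia of this kind usually boils down to integrating the last gesture velocity and bleeding it off with a friction term each frame. The following is an assumed, generic sketch of such a model, not the Ventuz simulation; the friction constant and time step are made-up example values.

```python
import math

def coast(position, velocity, friction, dt, steps):
    """After the touch is released, keep integrating the last gesture
    velocity and decay it exponentially; larger `friction` stops sooner."""
    for _ in range(steps):
        position += velocity * dt
        velocity *= math.exp(-friction * dt)
    return position, velocity

# Release at 2 units/s, simulate two seconds at 60 fps:
pos, vel = coast(0.0, 2.0, friction=3.0, dt=1 / 60, steps=120)
```

The object drifts on for a short distance and the velocity asymptotically approaches zero, which is the "weight" effect described above.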


The physics simulation is enabled by default for all transformation nodes. To disable it, uncheck the BehaviorApply property. This is independent of whether or not the object will move (see BehaviorMotion) and only prevents the physics simulation from influencing the values generated by the mapping calculation.

To modify the physics properties of a node, click the Edit Behavior button at the bottom of the Property Editor.


There are three forces that can be applied to the object moved by any of the transformation nodes:


All these parameters are based on the node's own mapping system instead of real-world parameters like screen distance in centimeters. While this is often unintuitive, it is the only plausible solution that is both screen-size independent and capable of handling motion that is not aligned with the screen plane.

The Machine Configuration has two options to adapt the physics behavior parameters to changing screen sizes. As long as Scale Thresholds with Window Size is active, Ventuz will do its best to internally scale all parameters such that the user experience stays the same. For more explicit control, there is an additional scaling factor that is applied to all parameters except tick spacing.
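The effect of such scaling can be pictured as follows. This is a hypothetical sketch under the assumption that parameters are tuned at some reference window width and rescaled linearly; the function name, the reference width, and the linear model are illustrative, not Ventuz internals.

```python
def scale_parameter(value, reference_width, current_width, global_scale=1.0):
    """Rescale a physics parameter tuned at `reference_width` pixels to the
    current window width so gestures feel identical; `global_scale` stands
    in for the additional explicit factor that is applied to all parameters
    except tick spacing."""
    return value * (current_width / reference_width) * global_scale
```

For instance, a threshold tuned at a 1920-pixel-wide window would double when the same scene runs in a 3840-pixel-wide window.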