Hiroo Iwata (Japan)
Lecture: Device Art: Why a Doctor of Engineering Launched a Movement in Art?
27.10.2009. from 19:00 to 21:00
Conference hall of Zagreb Center for Independent Culture and Youth
Device Art: Why a Doctor of Engineering Launched a Movement in Art?
1. Introduction - Why Art?
This paper presents work carried out in a project to develop interface devices for embodied sensation, including finger/hand haptics, the sense of locomotion and vestibular sensation. It is well known that the sense of touch is indispensable for understanding the real world. The last decade has seen significant advances in the development of haptic interfaces. However, methods for implementing haptic devices are still in a phase of trial and error, and compared to visual and auditory displays, haptic displays have not entered everyday life. In order to overcome this limitation, we have exhibited interface devices as art works. Art works are exposed to the public and evaluated by a wide variety of people, which makes exhibition an effective way to polish interactive technologies. This artistic activity led to the national project “Expressive Science and Technology for Device Art”, funded by the Japan Science and Technology Agency.
It is well known that the sense of touch is indispensable for understanding the real world. The use of force feedback to enhance computer-human interaction has often been discussed. A haptic interface is a feedback device that generates sensation to the skin and muscles, including the sense of touch, weight and rigidity. Compared to ordinary visual and auditory sensations, haptics is difficult to synthesize. Visual and auditory sensations are gathered by specialized organs, the eyes and ears. On the other hand, the sensation of force can occur at any part of the human body, and is therefore inseparable from actual physical contact. These characteristics lead to many difficulties when developing a haptic interface.
We have been developing various haptic interfaces since 1986. Through this research we found that the effect of a haptic interface cannot be conveyed adequately through published papers alone. For this reason, since 1990 we have devoted our efforts to practical demonstrations. Coincidentally, a demonstration session open to submissions from around the world was launched in 1994 at SIGGRAPH, the premier worldwide event for computer graphics. Our submissions were accepted for 14 consecutive years, an unbeaten record.
In the course of continuing to present our demonstrations, we started to wonder whether there was some way of spreading our work further afield, beyond the world of academic meetings. It was at this point that we discovered the presentation format of art. There is an interactive art category at Ars Electronica, the most famous festival of media art, and we won a prize with our first submission. This was the launching point for our artistic activities. Art works are exposed to the public and evaluated by a wide variety of people, which is an effective way to polish interactive technologies.
2. Sensory modes and Interface Devices
Sensory modes are classified into seven categories. Table 1 shows these modes, the role of each, and the existing interface devices corresponding to each mode. The visual, auditory, olfactory, vestibular and taste senses are gathered by specialized sense organs: the eyes, ears, nose, semicircular canals, and tongue, respectively.
Haptics is composed of proprioception and skin sensation. Proprioception is mediated by the mechanoreceptors of the skeletal articulations and muscles. There are three types of joint position receptors; these receptors detect contact forces applied by obstacles in the environment. Skin sensation is derived from the mechanoreceptors and thermoreceptors of the skin; the sense of touch is evoked by these receptors. The mechanoreceptors of the skin are classified into four types: Merkel disks, Ruffini capsules, Meissner corpuscles, and Pacinian corpuscles, which detect the edges of objects, skin stretch, velocity, and vibration, respectively.
Acceleration generates not only vestibular sensation but also forces acting on the whole body; it is thus related to proprioception. Similarly, taste is not gathered only by the chemical receptors of the tongue: it also involves food texture and the vibration of biting. Proprioception, skin sensation, taste and vestibular sensation are therefore all related to the activity of the human body. We call them “embodied sensation”. Interface devices for these sensory modes are still at an early stage, and our research focuses on new interface devices for embodied sensation.
3. Haptic Interface for Finger/Hand Manipulation
3.1 Desktop Force Display
Our research into haptic interfaces started in 1986. The first step was the use of an exoskeleton. In the field of robotics, exoskeletons have often been used as master-manipulators for teleoperation, and virtual reality systems of the 1980s employed conventional master-manipulators. However, most master-manipulators entail a large amount of hardware and therefore a high cost, which restricts their application areas. Compact hardware is needed for use in human-computer interaction. We therefore proposed the concept of the desktop force display, and the first prototype was developed in 1989. The device is a compact exoskeleton for desktop use. Current commercial haptic interfaces, such as the PHANToM, have a desktop configuration.
We demonstrated these haptic interfaces to a number of people, and found that some of them were unable to fully experience virtual objects through synthesized haptic sensation. There seem to be two reasons for this. Firstly, these haptic interfaces only allow the user to touch the virtual object at a single point or a group of points. These contact points are not spatially continuous, due to the hardware configuration of the haptic interface: the user feels a reaction force through a grip or thimble. Exoskeletons provide more contact points, but these are achieved with Velcro bands attached to specific parts of the user's fingers, and they are not continuous. These devices therefore cannot recreate the natural interaction of manual manipulation in the real world.
The second reason why users fail to perceive the sensation relates to the combination of visual and haptic displays. A visual image is usually combined with a haptic interface by means of a conventional monitor or projection screen. The user thus receives visual and haptic sensations through different displays and has to integrate the visual and haptic images in his/her brain. Some users, especially elderly people, have difficulty with this integration.
Considering these problems, we designed a new configuration of visual/haptic display, named FEELEX. The device is composed of a flexible screen, an array of actuators, and a projector. The flexible screen is deformed by the actuators to simulate the shape of virtual objects, and an image of the virtual objects is projected onto its surface. Deformation of the screen converts the 2D image from the projector into a solid image. This configuration enables users to touch the image directly with any part of their hands. The actuators are equipped with force sensors that measure the force applied by the user, and the hardness of the virtual object is determined by the relationship between the measured force and the deformation of the screen at that position. If the virtual object is soft, a large deformation is produced by a small applied force.
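As a concrete illustration, the force-to-deformation mapping described above can be sketched as a per-actuator virtual spring. This is a minimal sketch under stated assumptions: the function names, the stiffness values and the deflection limit are illustrative, not the actual FEELEX parameters.

```python
# Sketch of the force-to-deformation control loop of a flexible-screen
# haptic display. All numbers are illustrative assumptions.

def target_deflection(force_n: float, stiffness_n_per_mm: float,
                      max_deflection_mm: float = 20.0) -> float:
    """Virtual-spring model: a soft object (low stiffness) yields a large
    deflection for a small applied force; a hard one barely moves."""
    deflection = force_n / stiffness_n_per_mm
    return min(deflection, max_deflection_mm)

def update_actuators(forces, stiffness_map):
    """One control cycle: read each actuator's force sensor and command
    the corresponding screen deflection."""
    return [target_deflection(f, k) for f, k in zip(forces, stiffness_map)]

# A 1x4 strip of actuators: soft material on the left, rigid on the right.
forces = [2.0, 2.0, 2.0, 2.0]       # newtons, from the force sensors
stiffness = [0.5, 0.5, 4.0, 4.0]    # N/mm, per-actuator virtual stiffness
print(update_actuators(forces, stiffness))  # → [4.0, 4.0, 0.5, 0.5]
```

With a uniform 2 N push, the "soft" cells deflect 4 mm while the "hard" cells move only 0.5 mm, which is the behaviour the paragraph above describes.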
We developed a virtual Anomalocaris as content for the FEELEX. Anomalocaris is an animal thought to have lived during the Cambrian period. Figure 1 shows an image of the Anomalocaris projected onto the flexible screen. The creature appears to move depending on the force applied by the user; if the user pushes its head, it gets angry and struggles. We prepared sixteen patterns of motion, four of which represent the state of anger. The motion of the Anomalocaris is generated by combining these patterns.
This content was selected for long-term exhibition at the Ars Electronica Center (Linz, Austria) in 1999.
4. Vestibular Interface
Vestibular sensation is typically displayed using motion platforms. A motion platform generates synthetic stimulation of the semicircular canals and otolith organs. The device also accelerates the whole body, so it creates muscle sensation as well.
We use motion platforms for displaying the sense of motion. A typical system is an art work named Cross-active System, a modified virtual reality system in which the motion input of one participant provides sensory feedback to the other. The system is composed of a motion platform and a micro video camera with a position sensor. One participant sits on the motion platform while the other holds the camera. The camera image is shown to the participant on the motion platform, and the platform moves according to the data from the position sensor attached to the camera. Thus, any slight motion of the camera causes a large motion of the participant on the platform. This system creates an unusual form of communication between the two participants by dividing the sensory feedback from their motion input. The work won an Honorary Mention at the Prix Ars Electronica 96. Figure 2 shows the exhibition at the Ars Electronica Festival.
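The amplification from camera motion to platform motion described above can be sketched as a simple gain with saturation. The gain value and the platform's tilt limit here are assumptions for illustration, not the installation's actual parameters.

```python
# Sketch of the Cross-active System's motion mapping: a small tilt of the
# hand-held camera's position sensor becomes a large tilt of the motion
# platform. GAIN and LIMIT_DEG are illustrative assumptions.

GAIN = 8.0        # amplification from camera motion to platform motion
LIMIT_DEG = 15.0  # assumed mechanical tilt limit of the platform

def platform_command(camera_tilt_deg: float) -> float:
    """Map a camera tilt reading to a platform tilt command, clamped to
    the platform's mechanical range."""
    cmd = GAIN * camera_tilt_deg
    return max(-LIMIT_DEG, min(LIMIT_DEG, cmd))

print(platform_command(1.0))  # → 8.0: a 1-degree camera tilt is amplified
print(platform_command(5.0))  # → 15.0: large inputs saturate at the limit
```

The saturation step matters in practice: without it, the camera-holding participant could command motions beyond the platform's mechanical range.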
5. Locomotion Interface
5.1 Virtual Perambulator
In most applications of virtual environments, such as training or visual simulation, users need a good sensation of locomotion. We have been developing prototype interface devices for walking since 1988. It has often been suggested that the best locomotion mechanism for virtual worlds would be walking: it is well known that the sense of distance and orientation while walking is much better than while riding in a vehicle. However, the proprioceptive feedback of walking is not provided in most applications of virtual environments.
A possible method for locomotion in virtual space is a hand controller, but in terms of natural interaction the physical exertion of walking is essential. The project had two objectives. The first was to create a sense of walking while the walker's position is maintained in the physical world. The second was to allow the walker to change direction with his/her feet.
To realize these functions, the user of the Virtual Perambulator wore a parachute-like harness and omni-directional roller skates. Figure 6 shows an overall view of the device. The trunk of the walker was fixed to the framework of the system by the harness. An omni-directional sliding device allows direction changes by the feet: we developed a specialized roller skate equipped with four casters that enables two-dimensional motion, so the walker can freely move his/her feet in any direction. The motion of the feet is measured by an ultrasonic range detector, and from this measurement an image of the virtual space is displayed in the head-mounted display in correspondence with the walker's motion. The direction of locomotion in virtual space is determined by the direction of the walker's step.
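The last step, deriving the locomotion direction from the walker's step, can be sketched as follows. The vector arithmetic is one plausible implementation under stated assumptions; the function names and coordinate conventions are not from the original system.

```python
# Sketch: heading in virtual space follows the direction of the measured
# step, as in the Virtual Perambulator. Coordinates are on the floor plane;
# names and conventions are illustrative assumptions.

import math

def step_heading_deg(prev_foot_xy, curr_foot_xy):
    """Heading of the step in degrees (0 = +x axis), from two successive
    foot-position measurements."""
    dx = curr_foot_xy[0] - prev_foot_xy[0]
    dy = curr_foot_xy[1] - prev_foot_xy[1]
    return math.degrees(math.atan2(dy, dx))

def advance(pos_xy, heading_deg, step_length):
    """Move the viewpoint in virtual space along the step direction."""
    rad = math.radians(heading_deg)
    return (pos_xy[0] + step_length * math.cos(rad),
            pos_xy[1] + step_length * math.sin(rad))

h = step_heading_deg((0.0, 0.0), (0.0, 0.3))  # a step straight along +y
print(h)                                       # → 90.0
print(advance((0.0, 0.0), h, 0.6))             # viewpoint advances along +y
```

Because the heading is recomputed every step, turning the feet immediately turns the direction of travel in the virtual space, which is the second objective stated above.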
We improved the harness and sliding device of the Virtual Perambulator and demonstrated it at SIGGRAPH '95 (Los Angeles, USA, 1995).
5.2 Robot Tile
From the results of our research into locomotion interfaces, we determined that an infinite surface is the ideal device for creating a sense of walking. In 2004 we proposed a new locomotion interface named “Robot Tile”. The device employs a group of omni-directional movable tiles: each tile is equipped with a holonomic mechanism that achieves omni-directional motion, and an infinite surface is simulated by circulating the movable tiles.
The major innovation of this work is a new method of creating an infinite floor. The easiest way to realize an infinite floor is a treadmill; however, it is not easy to produce omni-directional walking with a treadmill. A motion foot-pad for each foot is an alternative, able to simulate omni-directional walking as well as walking on uneven surfaces. The major limitation of this method is that great accuracy is required for the foot-pad to trace the walker; in practice, the walker has to be careful about mis-tracing by the foot-pad.
The Robot Tile is a new method with advantages over both the treadmill and the foot-pad. It creates an omni-directional infinite surface using a group of movable tiles. The combination of tiles provides a sufficient area for walking, so precise tracing of foot position is not required.
The motion of the feet is measured by position sensors. The tiles move opposite to the measured direction of the walker, so that the motion of each step is canceled. This computer-controlled motion of the floor fixes the walker's position in the real world. The circulation of the tiles can cancel the walker's displacement in an arbitrary direction, so the walker can freely change direction while walking. Figure 9 shows an overall view of the prototype Robot Tile.
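The cancellation principle can be sketched in a few lines: the tiles are commanded with the negative of the walker's measured velocity, and the cancelled motion is integrated into the virtual viewpoint instead. The proportional scheme and all names here are illustrative assumptions, not the actual Robot Tile controller.

```python
# Sketch of the Robot Tile cancellation loop: tiles move opposite to the
# walker's measured velocity, so the walker stays in place in the real
# world while advancing in virtual space. Illustrative assumptions only.

def control_step(foot_velocity_xy, virtual_position_xy, dt):
    """One control cycle: command the tile velocity that cancels the
    walker's motion, and add the cancelled motion to the viewpoint."""
    tile_velocity = (-foot_velocity_xy[0], -foot_velocity_xy[1])
    virtual_position = (virtual_position_xy[0] + foot_velocity_xy[0] * dt,
                        virtual_position_xy[1] + foot_velocity_xy[1] * dt)
    return tile_velocity, virtual_position

# Walker steps forward at 0.8 m/s along +y for one 10 ms cycle.
tile_v, virt_p = control_step((0.0, 0.8), (0.0, 0.0), 0.01)
print(tile_v)   # the tiles slide backward under the foot at 0.8 m/s
print(virt_p)   # the viewpoint advances by about 8 mm instead
```

Because the cancellation is computed per axis, the same loop handles a change of walking direction with no special case, which matches the omni-directional behaviour described above.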
Locomotion interfaces often require bulky hardware, because they have to carry the whole body of the user, and such hardware is not easy to reconfigure to improve performance or add new functions. For these reasons, the Robot Tile has scalable hardware: it is easy to install, and its performance can be improved by upgrading the actuators of each tile. Moreover, it has the potential to create an uneven surface by mounting an up-and-down mechanism on each tile.
6. Immersive Display for Sense of Locomotion
6.1 Ensphered Vision
Ensphered Vision is an image display system with a wide-angle spherical screen. A sphere is the ideal shape for a screen that covers the human visual field, since the distance between the eyes and the screen remains constant while the viewer rotates his/her head. We use a single projector and a convex mirror to display a seamless image. The optical system employs two mirrors: a plane mirror and a spherical convex mirror. The spherical convex mirror scatters the light from the projector across the spherical screen, so the image totally surrounds the viewer; the viewing angle is much larger than that of a dome screen with a fish-eye lens. The plane mirror bends the light so that the viewer can see the image from the center of the spherical screen. This optical configuration enables a seamless wide-angle image in a small space: whereas CAVE-like screens or dome screens require large spaces for installation, Ensphered Vision can be built in a very limited space. The wearable dome screen is realized using this technology.
6.2 Floating Eye
Floating Eye is an interactive installation that separates vision from the body (Figure 4). The participant can only see a wide-angle image floating in the air; he/she cannot see the real scene. The wide-angle image is taken by a specialized camera head mounted on an airship and displayed in a wearable dome screen, which is realized using the Ensphered Vision technology.
Ensphered Vision has an advantage in displaying wide-angle video images. We developed a camera head using a spherical convex mirror, which captures a wide-angle image with a single video camera. The camera records a predistorted image; the spherical convex mirror is designed to minimize the distortion of the image on the spherical screen. In the Floating Eye installation, the camera head is attached to an airship and equipped with a wireless transmitter, so the image is transmitted to the wearable dome screen.
The camera head is designed to capture a look-down image from the sky, so the participant can see his/her own body in the captured image. This configuration simulates an out-of-body experience. The airship can be maneuvered by towing its string, and the participant can walk around while watching him/herself as well as the surrounding scene. However, even a slight wind will disturb the airship, so the participant is forced to interact with the atmosphere. The installation evokes a new style of self-recognition and a new relationship between human beings and the atmosphere.
7. Starting up Device Art Project
The Device Art project is funded by Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency. Hiroo Iwata leads the project, whose formal title is “Expressive Science and Technology for Device Art”. The name and concept of Device Art came about during the process of its creation in 2004. The goal of the project is to systematize the technologies found in Device Art and to establish reasonable methods for evaluating the works. To achieve this goal, a new framework named “Gadgetrium”, composed of a laboratory, an exhibition room and a venture business, was constructed. In 2008, we opened a permanent exhibition space, the “Device Art Gallery,” in the National Museum of Emerging Science and Innovation (Miraikan) in Tokyo. We believe the technology will advance and be refined with the help of audience feedback and participation. Collaborators on this project are: Hideyuki Ando, Masahiko Inami, Hiroo Iwata, Machiko Kusahara, Ryota Kuwakubo, Sachiko Kodama, Novmichi Tosa, Kazuhiko Hachiya, Taro Maeda, and Hiroaki Yano.
Device Art is a new form of art that displays the essence of technology through the use of new materials and mechatronic devices. The concept challenges the traditional paradigm of art by fostering the convergence of technology, art and design.
Device Art possesses three main characteristics:
(1) The device itself is the content. The mechanism represents the theme of the piece; content and tool are no longer separable.
(2) Artworks are often playful and can sometimes be commercialized into devices or gadgets for use in everyday life.
(3) Refined design and playful features are traced back to the Japanese tradition of appreciating tools and materials. Traditional Japanese culture, such as tea ceremony or flower arrangement, uses sophisticated devices. These devices are the roots of Device Art.
These characteristics are not familiar in Western art, and this novelty has drawn worldwide attention to Device Art. Significant advances have been observed in the interactive art of Japan over the last few decades, including the innovative interface devices used by Device Artists. The concept of Device Art arose when the latest technologies were fused with the traditional Japanese view of art as an inextricable part of life. We hope Device Art will provide a model for understanding what it means to live in a world full of new technology.
Visual and auditory displays have a history of more than one hundred years and are widely used in everyday life. By contrast, most haptic interfaces are still confined to laboratories, and very few haptic interface applications are used in the information media.
The history of media technology may provide a hint here. It is well known that Gutenberg is the father of the print medium; yet he did not invent the printing press itself, which had been developed by many before him. The reason he has remained in history lies in his content and fonts. Something similar may apply to the haptic interface: we will have to build many trial-and-error applications in order to find a killer configuration for the haptic interface. Device Art can provide such applications for the haptic interface, and thus give it a chance of achieving popularity.