Papers
2015
TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week
Hiroshi Ishii, Daniel Leithinger, Sean Follmer, Amit Zoran, Philipp Schoessler, and Jared Counts, “TRANSFORM: Embodiment of ‘Radical Atoms’ at Milano Design Week,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.
DOI: http://dx.doi.org/10.1145/2702613.2702969
TRANSFORM fuses technology and design to celebrate the transformation from a piece of static furniture to a dynamic machine driven by streams of data and energy. TRANSFORM aims to inspire viewers with unexpected transformations, as well as the aesthetics of a complex machine in motion. This paper describes the concept, engine, product, and motion design of TRANSFORM, which was first exhibited at LEXUS DESIGN AMAZING 2014 MILAN in April 2014.
THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens
Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2015. THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘15). ACM, New York, NY, USA, 89-96.
DOI: http://dx.doi.org/10.1145/2677199.2680584
The huge influx of mobile display devices is transforming computing into multi-device interaction, demanding a fluid mechanism for using multiple devices in synergy. In this paper, we present a novel interaction system that allows a collocated large display and a small handheld device to work together. The smartphone acts as a physical interface for near-surface interactions on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. As a result, the smartphone can directly interact with data displayed on the host computer, with precisely aligned visual feedback from both devices. The possible interactions are described and classified in a framework, which we exemplify on the basis of several implemented applications. Finally, we present a technical evaluation and describe how our system is unique compared to other existing near-surface interaction systems. The proposed technique can be implemented on existing devices without the need for additional hardware, promising immediate integration into existing systems.
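The core tracking idea, displaying a 2D color pattern that the phone's back-facing camera decodes into a screen position, can be illustrated with a toy sketch. This is not the authors' implementation; it assumes, hypothetically, that the host screen tints each grid cell so that the cell's (x, y) index is recoverable from a single sampled RGB color.

```python
# Hypothetical sketch of color-pattern position encoding (not the
# paper's actual algorithm). Assumption: each grid cell's color
# directly encodes its cell index, with one channel as a parity check.

def encode_cell(x: int, y: int) -> tuple:
    """Color the host screen displays for grid cell (x, y); 0 <= x, y < 256."""
    return (x, y, (x ^ y) & 0xFF)  # blue channel doubles as a parity check

def decode_sample(rgb: tuple) -> tuple:
    """Recover the cell index from a color sampled by the phone camera."""
    r, g, b = rgb
    if b != (r ^ g) & 0xFF:
        raise ValueError("parity mismatch: noisy camera sample")
    return (r, g)
```

A real system would additionally need robustness to camera color distortion, sub-cell interpolation for precise alignment, and patterns unobtrusive to the user, which is where the engineering contribution of the paper lies.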
Cord UIs: Controlling Devices with Augmented Cables
Philipp Schoessler, Sang-won Leigh, Krithika Jagannath, Patrick van Hoof, and Hiroshi Ishii. 2015. Cord UIs: Controlling Devices with Augmented Cables. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘15). ACM, New York, NY, USA.
DOI: http://dx.doi.org/10.1145/2677199.2680601
Cord UIs are sensorial augmented cords that allow for simple metaphor-rich interactions to interface with their connected devices. Cords offer a large underexplored space for interactions as well as unique properties and a diverse set of metaphors that make them potentially interesting tangible interfaces. We use cords as input devices and explore different interactions like tying knots, stretching, pinching and kinking to control the flow of data and/or electricity. We also look at ways to use objects in combination with augmented cords to manipulate data or certain properties of a device. For instance, placing a clamp on a cable can obstruct the audio signal to the headphones. To test and evaluate our ideas, we built five working prototypes that showcase some of the interactions described in this paper as well as special materials such as piezo copolymer cables and stretchable cords.
Sticky Actuator: Free-Form Planar Actuators for Animated Objects
Niiyama, R., Sun, X., Yao, L., Ishii, H., Rus, D., and Kim, S. Sticky Actuator: Free-Form Planar Actuators for Animated Objects. International Conference on Tangible, Embedded, and Embodied Interaction (TEI), ACM Press (2015), 77–84.
DOI: http://dx.doi.org/10.1145/2677199.2680600
We propose soft planar actuators enhanced by free-form fabrication that are suitable for making everyday objects move. The actuator consists of one or more inflatable pouches with an adhesive back. We have developed a machine for the fabrication of free-form pouches; squares, circles and ribbons are all possible. The deformation of the pouches can provide linear, rotational, and more complicated motion corresponding to the pouch’s geometry. We also provide both a manual and a programmable control system. In a user study, we organized a hands-on workshop of actuated origami for children. The results show that the combination of the actuator and classic materials can enhance rapid prototyping of animated objects.
Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging
Viirj Kan, Katsuya Fujii, Judith Amores, Chang Long Zhu Jin, Pattie Maes, and Hiroshi Ishii. 2015. Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘15). ACM, New York, NY, USA, 619-624.
DOI: http://dx.doi.org/10.1145/2677199.2688816
Wearable commodities can extend beyond the temporal span of a particular community event, offering omnipresent vehicles for producing icebreaking interaction opportunities. We introduce a novel platform, which generates social affordances to facilitate community organizers in aggregating social interaction among unacquainted, collocated members beyond initial hosted gatherings. To support these efforts, we present functional work-in-progress prototypes for Social Textiles, wearable computing textiles which enable social messaging and peripheral social awareness on non-emissive, digitally linked shirts. The shirts serve as catalysts for different social depths as they reveal common interests (mediated by community organizers), based on the physical proximity of users. We provide three key scenarios, which demonstrate the user experience envisioned with our system. We present a conceptual framework, which shows how different community organizers across domains such as universities, brand communities and digital self-organized communities can benefit from our technology.
bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces
Lining Yao, Jifei Ou, Chin-Yi Cheng, Helene Steiner, Wen Wang, Guanyun Wang, and Hiroshi Ishii. 2015. bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ‘15). ACM, New York, NY, USA, 1-10.
DOI: http://dx.doi.org/10.1145/2702123.2702611
Through scientific research in collaboration with biologists, we found that natto cells can contract and expand with changes in relative humidity. In this paper, we first describe the scientific discovery of natto cells as a biological actuator. Next, we expand on the technological developments that enable the translation between the nanoscale actuators and macroscale interface design: the development of the composite biofilm, the development of responsive structures, the control setup for actuating biofilms, and a simulation and fabrication platform. Finally, we provide a variety of application designs, with and without computer control, to demonstrate the potential of our bioactuators. Through this paper, we intend to encourage the use of natto cells and our platform technologies for the design of shape changing interfaces and, more generally, the use and research of biological materials in HCI.
TRANSFORM as Adaptive and Dynamic Furniture
Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp Schoessler, Amit Zoran and Hiroshi Ishii, “TRANSFORM as Adaptive and Dynamic Furniture,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.
DOI: http://dx.doi.org/10.1145/2702613.2732494
TRANSFORM is an exploration of how shape display technology can be integrated into our everyday lives as interactive, shape changing furniture. These interfaces not only serve as traditional computing devices, but also support a variety of physical activities. By creating shapes on demand or by moving objects around, TRANSFORM changes the ergonomics, functionality and aesthetic dimensions of furniture. The video depicts a story with various scenarios of how TRANSFORM shape shifts to support a variety of use cases in the home and in the work environment: It holds and moves objects like fruits, game tokens, office supplies and tablets; creates dividers on demand; and generates interactive sculptures to convey messages and audio.
Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick
Ken Nakagaki, Chikara Inamura, Pasquale Totaro, Thariq Shihipar, Chantine Akiyama, Yin Shuang and Hiroshi Ishii, “Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.
DOI: http://dx.doi.org/10.1145/2702613.2732712
We use sticks as tools for a variety of activities, everything from conducting music to playing sports or even engaging in combat. However, these experiences are inherently physical and are poorly conveyed through traditional digital mediums such as video. Linked-Stick is a shape-changing stick that can mirror the movements of another person’s stick-shaped tool. We explore how this can be used to experience and learn music, sports and fiction in a more authentic manner. Our work attempts to expand the ways in which we interact with and learn to use tools.
2014
Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration
Daniel Leithinger, Sean Follmer, Alex Olwal, and Hiroshi Ishii. 2014. Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST ‘14). ACM, New York, NY, USA, 461-470.
DOI: http://dx.doi.org/10.1145/2642918.2647377
We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users’ body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
AnnoScape: Remote Collaborative Review Using Live Video Overlay in Shared 3D Virtual Workspace
Austin Lee, Hiroshi Chigira, Sheng Kai Tang, Kojo Acquah, and Hiroshi Ishii. 2014. AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI ‘14). ACM, New York, NY, USA, 26-29.
DOI: http://dx.doi.org/10.1145/2659766.2659776
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop image on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace, individually or jointly, by moving tangible handles; to snap into a shared viewpoint; and to generate a live video overlay of freehand annotations from the desktop surface onto the system’s virtual viewports, which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape’s potential to facilitate effective communication during remote 3D data reviews.
T(ether): Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation
David Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin, and Hiroshi Ishii. 2014. T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI ‘14). ACM, New York, NY, USA, 90-93.
DOI: http://dx.doi.org/10.1145/2659766.2659785
T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users’ heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand’s position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)’s potential for embodied viewport control and 3D modeling interactions.
bioPrint: An Automatic Deposition System for Bacteria Spore Actuators
Jifei Ou, Lining Yao, Clark Della Silva, Wen Wang, and Hiroshi Ishii. 2014. bioPrint: an automatic deposition system for bacteria spore actuators. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 121-122.
DOI: http://dx.doi.org/10.1145/2658779.2658806
We propose an automatic deposition method for bacteria spores, which deform thin soft materials under environmental humidity change. We describe the process of two-dimensional printing of the spore solution as well as a design application. This research intends to contribute to the understanding of how to control and pre-program the transformation of future interfaces.
THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens
Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2014. THAW: tangible interaction with see-through augmentation for smartphones on computer screens. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 55-56.
DOI: http://dx.doi.org/10.1145/2658779.2659111
In this paper, we present a novel interaction system that allows a collocated large display and small handheld devices to seamlessly work together. The smartphone acts both as a physical interface and as an additional graphics layer for near-surface interaction on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. The proposed technique can be implemented on existing devices without the need for additional hardware.
Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion
Xiao Xiao, Basheer Tome, and Hiroshi Ishii. 2014. Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion. In Proceedings of the 14th International Conference on New Interfaces for Musical Expression (NIME ‘14). Goldsmiths University of London. London, UK.
We present Andante, a representation of music as animated characters walking along the piano keyboard that appear to play the physical keys with each step. Based on a view of music pedagogy that emphasizes expressive, full-body communication early in the learning process, Andante promotes an understanding of music rooted in the body, taking advantage of walking as one of the most fundamental human rhythms. We describe three example visualizations on a preliminary prototype as well as applications extending our examples for practice feedback, improvisation and composition. Through our project, we reflect on some high level considerations for the NIME community.
jamSheets: Thin Interfaces with Tunable Stiffness Enabled by Layer Jamming
Jifei Ou, Lining Yao, Daniel Tauber, Jürgen Steimle, Ryuma Niiyama, and Hiroshi Ishii. 2014. jamSheets: thin interfaces with tunable stiffness enabled by layer jamming. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI ‘14). ACM, New York, NY, USA, 65-72.
DOI: http://dx.doi.org/10.1145/2540930.2540971
This work introduces layer jamming as an enabling technology for designing deformable, stiffness-tunable, thin sheet interfaces. Interfaces that exhibit tunable stiffness properties can yield dynamic haptic feedback and shape deformation capabilities. In comparison to particle jamming, layer jamming allows for constructing thin and lightweight form factors of an interface. We propose five layer structure designs and an approach that composites multiple materials to control the deformability of the interfaces. We also present methods to embed different types of sensing and pneumatic actuation layers on the layer-jamming unit. Through three application prototypes we demonstrate the benefits of using layer jamming in interface design. Finally, we provide a survey of materials that have proven successful for layer jamming.
Weight and Volume Changing Device with Liquid Metal Transfer
Ryuma Niiyama, Lining Yao, and Hiroshi Ishii. 2014. Weight and volume changing device with liquid metal transfer. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI ‘14). ACM, New York, NY, USA, 49-52.
DOI: http://dx.doi.org/10.1145/2540930.2540953
This paper presents a weight-changing device based on the transfer of mass. We chose liquid metal (Ga-In-Tin eutectic) and a bi-directional pump to control the mass that is injected into or removed from a target object. The liquid metal has a density of 6.44 g/cm³, about six times that of water, and is thus suitable for effective mass transfer. We also combine the device with a dynamic volume-changing function to achieve programmable mass and volume at the same time. We explore three potential applications enabled by weight-changing devices: density simulation of different materials, miniature representation of planets with scaled size and mass, and motion control by changing gravity force. This technique opens up a new design space in human-computer interactions.
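The density figure quoted in the abstract makes the design rationale easy to verify with a one-line calculation; the helper below is illustrative only, not code from the paper.

```python
# Volume of liquid a pump must transfer to achieve a desired mass change.
# Density of the Ga-In-Tin eutectic is 6.44 g/cm^3, per the abstract.

GALINSTAN_DENSITY = 6.44  # g/cm^3
WATER_DENSITY = 1.0       # g/cm^3, for comparison

def volume_for_mass(mass_g: float, density_g_cm3: float) -> float:
    """Volume in cm^3 that must be pumped for a given mass change in grams."""
    return mass_g / density_g_cm3

# A 100 g change needs about 15.5 cm^3 of liquid metal, versus 100 cm^3
# of water: the ~6x density advantage that motivates the material choice.
```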
Integrating Optical Waveguides for Display and Sensing on Pneumatic Soft Shape Changing Interfaces
Lining Yao, Jifei Ou, Daniel Tauber, and Hiroshi Ishii. 2014. Integrating optical waveguides for display and sensing on pneumatic soft shape changing interfaces. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 117-118.
DOI: http://dx.doi.org/10.1145/2658779.2658804
We introduce the design and fabrication process of integrating optical fiber into pneumatically driven soft composite shape changing interfaces. Embedded optical waveguides can provide both sensing and illumination, and add one more building block to the design of soft pneumatic shape changing interfaces.
2013
inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation
Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii. 2013. inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST ‘13). ACM, New York, NY, USA.
DOI: http://dx.doi.org/10.1145/2501988.2502032
Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
Lining Yao, Anthony DeVincenzi, Anna Pereira, and Hiroshi Ishii. 2013. FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing. In Proceedings of the 1st symposium on Spatial user interaction (SUI ‘13). ACM, New York, NY, USA, 73-76.
DOI: http://dx.doi.org/10.1145/2491367.2491377
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users’ focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays
Daniel Leithinger, Sean Follmer, Alex Olwal, Samuel Luescher, Akimitsu Hogge, Jinha Lee, and Hiroshi Ishii. 2013. Sublimate: state-changing virtual and physical rendering to augment interaction with shape displays. In Proceedings of the 2013 ACM annual conference on Human factors in computing systems (CHI ‘13). ACM, New York, NY, USA, 1441-1450.
DOI: http://dx.doi.org/10.1145/2470654.2466191
Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.
Beyond Visualization – Designing Interfaces to Contextualize Geospatial Data
Samuel Luescher
The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context to support data analysis and knowledge building using part-digital, part-physical rendering.
The main contribution of this thesis is a concrete application scenario and initial prototype of a “Designed Environment” where we can explore the relationship between the surface of Japan’s islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the “Spatial Web”, bringing us one step closer to a tangible “dashboard of the Earth.”
synchroLight: Three-dimensional Pointing System for Remote Video Communication
Jifei Ou, Sheng Kai Tang, and Hiroshi Ishii. 2013. synchroLight: three-dimensional pointing system for remote video communication. In CHI ‘13 Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘13). ACM, New York, NY, USA, 169-174.
DOI: http://dx.doi.org/10.1145/2468356.2468387
Although the image quality and transmission speed of current remote video communication systems have vastly improved in recent years, their interactions still remain detached from the physical world. This causes frustration and lowers working efficiency, especially when both sides are referencing physical objects and space. In this paper, we propose a remote pointing system named synchroLight that allows users to point at remote physical objects with synthetic light. The system extends the interaction of existing remote pointing systems from two-dimensional surfaces to three-dimensional space. The goal of this project is to approach a seamless experience in video communication.
exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality
Shunichi Kasahara, Ryuma Niiyama, Valentin Heun, and Hiroshi Ishii. 2013. exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality. In Proceedings of the seventh international conference on Tangible, embedded, and embodied interaction (TEI ‘13). ACM, Barcelona, Spain.
DOI: http://dx.doi.org/10.1145/2460625.2460661
As domestic robots and smart appliances become increasingly common, they require a simple, universal interface to control their motion. Such an interface must support simple selection of a connected device, highlight its capabilities, and allow for intuitive manipulation. We propose “exTouch”, an embodied, spatially-aware approach to touching and controlling devices through an augmented-reality-mediated mobile interface. The “exTouch” system extends the user’s touchscreen interactions into the real world by enabling spatial control over the actuated object. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object. We demonstrate that the system can be used for applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room.
MirrorFugue III: Conjuring the Recorded Pianist
Xiao Xiao, Anna Pereira and Hiroshi Ishii. 2013. MirrorFugue III: Conjuring the Recorded Pianist. In Proceedings of 13th conference on New Interfaces for Musical Expression (NIME ‘13). KAIST. Daejeon, South Korea.
The body channels rich layers of information when playing music, from intricate manipulations of the instrument to vivid personifications of expression. But when music is captured and replayed across distance and time, the performer’s body is too often trapped behind a small screen or absent entirely.
This paper introduces MirrorFugue III, an interface to conjure the recorded performer by combining the moving keys of a player piano with life-sized projection of the pianist’s hands and upper body. Inspired by reflections on a lacquered grand piano, our interface evokes the sense that the virtual pianist is playing the physically moving keys.
Through MirrorFugue III, we explore the question of how to viscerally simulate a performer’s presence to create immersive experiences. We discuss design choices, outline a space of usage scenarios and report reactions from users.
PneUI: Pneumatically Actuated Soft Composite Materials for Shape Changing Interfaces
Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, and Hiroshi Ishii. 2013. PneUI: pneumatically actuated soft composite materials for shape changing interfaces. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST ‘13). ACM, New York, NY, USA, 13-22.
DOI: http://dx.doi.org/10.1145/2501988.2502037
This paper presents PneUI, an enabling technology to build shape-changing interfaces through pneumatically-actuated soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. This is enabled by the composites’ multi-layer structures with different mechanical or electrical properties. The shape changing states are computationally controllable through pneumatics and pre-defined structure. We explore the design space of PneUI through four applications: height changing tangible phicons, a shape changing mobile, a transformable tablet case and a shape shifting lamp.
2012
Towards Radical Atoms – Form-giving to Transformable Materials
Dávid Lakatos, Hiroshi Ishii. 2012. Towards Radical Atoms — Form-giving to Transformable Materials. In Proceedings of the 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia.
DOI: http://dx.doi.org/10.1109/CogInfoCom.2012.6422023
Form, as the externalization of an idea, has been present in our civilization for several millennia. Humans have used their hands and tools to directly manipulate and alter/deform the shape of physical materials. Concurrently, we have been inventing tools in the digital domain that allow us to freely manipulate digital information. The next step in the evolution of form-giving is toward shape-changing materials, with tight coupling between their shape and an underlying digital model. In this paper we compare approaches for interaction design of these shape-shifting entities that we call Radical Atoms. We use three projects to elaborate on appropriate interaction techniques for both the physical and the virtual domains.
Second surface: multi-user spatial collaboration system based on augmented reality
Shunichi Kasahara, Valentin Heun, Austin S. Lee, and Hiroshi Ishii. 2012. Second surface: multi-user spatial collaboration system based on augmented reality. In SIGGRAPH Asia 2012 Emerging Technologies (SA ‘12). ACM, New York, NY, USA, Article 20, 4 pages.
DOI: http://dx.doi.org/10.1145/2407707.2407727
An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interaction with user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place three-dimensional drawings, texts, and photos relative to such objects and share these expressions with any other person who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into physical space. Our system can provide an alternate reality that generates playful and natural interaction in an everyday setup.
Jamming User Interfaces: Programmable Particle Stiffness and Sensing for Malleable and Shape-Changing Devices
Sean Follmer, Daniel Leithinger, Alex Olwal, Nadia Cheng, and Hiroshi Ishii. 2012. Jamming user interfaces: programmable particle stiffness and sensing for malleable and shape-changing devices. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST ‘12). ACM, New York, NY, USA, 519-528.
DOI: http://dx.doi.org/10.1145/2380116.2380181
Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming’s potential in HCI, including applications for tabletops, tablets and for portable shape-changing mobile devices.
Point and share: from paper to whiteboard
Misha Sra, Austin Lee, Sheng-Ying Pao, Gonglue Jiang, and Hiroshi Ishii. 2012. Point and share: from paper to whiteboard. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ‘12). ACM, New York, NY, USA, 23-24.
DOI: http://doi.acm.org/10.1145/2380296.2380309
Traditional writing instruments have the potential to enable new forms of interaction and collaboration through digital enhancement. This work specifically enables the user to utilize pen and paper as input mechanisms for content to be displayed on a shared interactive whiteboard. We introduce a pen cap with an infrared LED, an actuator, and a switch.
Pointing the pen cap at the whiteboard allows users to select and position a “canvas” on the whiteboard to display handwritten text, while the actuator enables resizing the canvas and the text. It is conceivable that anything one can write on paper, anywhere, could be displayed on an interactive whiteboard.
rainBottles: gathering raindrops of data from the cloud
Jinha Lee, Greg Vargas, Mason Tang, and Hiroshi Ishii. 2012. rainBottles: gathering raindrops of data from the cloud. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA ‘12). ACM, New York, NY, USA, 1901-1906.
DOI: http://doi.acm.org/10.1145/2212776.2223726
This paper introduces a design for a new way of managing the flow of information in the age of overflow. The device, rainBottles, collects virtual data and converts it into a virtual liquid that fills up specially designed glass bottles. The bottles then serve as an ambient interface displaying the quantity of information in a queue as well as a tangible controller for opening the applications associated with the data in the bottles. With customizable data relevance metrics, the bottles can also serve as filters by letting less relevant data overflow out of the bottle.
Point-and-Shoot Data
Stephanie Lin, Samuel Luescher, Travis Rich, Shaun Salzberg, and Hiroshi Ishii. 2012. Point-and-shoot data. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA ‘12). ACM, New York, NY, USA, 2027-2032.
DOI: http://doi.acm.org/10.1145/2223656.2223747
We explore the use of visible light as a wireless communication medium for mobile devices. We discuss the advantages of a human-perceptible communication medium with regard to user experience and create tools for direct manipulation of the communication channel.
KidCAD: digitally remixing toys through tangible tools
Sean Follmer and Hiroshi Ishii. 2012. KidCAD: digitally remixing toys through tangible tools. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ‘12). ACM, New York, NY, USA, 2401-2410.
DOI: http://doi.acm.org/10.1145/2208276.2208403
Children have great facility in the physical world, and can skillfully model in clay and draw expressive illustrations. Traditional digital modeling tools have focused on mouse, keyboard, and stylus input. These tools can be complicated, making it difficult for young users to easily and quickly create exciting designs. We seek to bring physical interaction to digital modeling, allowing users to use existing physical objects as tangible building blocks for new designs. We introduce KidCAD, a digital clay interface for children to remix toys. KidCAD allows children to imprint 2.5D shapes from physical objects into their digital models by deforming a malleable gel input device, deForm. Users can mash up existing objects, edit, sculpt, or draw new designs on a 2.5D canvas using physical objects, hands, and tools, as well as 2D touch gestures. We report on a preliminary user study with 13 children, ages 7 to 10, which provides feedback for our design and helps guide future work in tangible modeling for children.
People in books: using a FlashCam to become part of an interactive book for connected reading
Sean Follmer, Rafael (Tico) Ballagas, Hayes Raffle, Mirjana Spasojevic, and Hiroshi Ishii. 2012. People in books: using a FlashCam to become part of an interactive book for connected reading. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW ‘12). ACM, New York, NY, USA, 685-694.
DOI: http://doi.acm.org/10.1145/2145204.2145309
We introduce People in Books with FlashCam technology, a system that supports children and long-distance family members in acting as characters in children’s storybooks while they read stories together over a distance. By segmenting the video chat streams of the child and remote family member from their background surroundings, we create the illusion that the child and adult reader are immersed among the storybook illustrations. The illusion of inhabiting a shared story environment helps remote family members feel a sense of togetherness and encourages active reading behaviors for children ages three to five. People in Books is designed to fit into families’ traditional reading practices, such as reading ebooks on couches or in bed via netbook or tablet computers. To accommodate this goal, we implemented FlashCam, a computationally cost-effective and physically small background-subtraction system for mobile devices that allows users to move locations and change lighting conditions while they engage in background-subtracted video communications. A lab evaluation compared People in Books with a conventional remote reading application. Results show that People in Books motivates parents and children to be more performative readers and encourages open-ended play beyond the story, while creating a strong sense of togetherness.
Radical Atoms: Beyond Tangible Bits, Toward Transformable Materials
Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. interactions 19, 1 (January 2012), 38-51.
DOI: http://doi.acm.org/10.1145/2065327.2065337
“Radical Atoms” is our vision of human interaction with future dynamic materials that are computationally reconfigurable.
“Radical Atoms” was created to overcome the fundamental limitations of its precursor, the “Tangible Bits” vision. Tangible Bits – the physical embodiment of digital information and computation – was constrained by the rigidity of “atoms” in comparison with the fluidity of bits. This makes it difficult to represent fluid digital information in traditionally rigid physical objects, and inhibits dynamic tangible interfaces from being able to control or represent computational inputs and outputs.
In order to augment the vocabulary of Tangible User Interfaces (TUIs), we use dynamic representations such as co-located projections or “digital shadows”. However, the physical objects on the tabletop stay static and rigid. To overcome these limitations, we began to experiment with a variety of actuated and kinetic tangibles, which can transform their physical positions or shapes as an additional output modality beyond the traditional manual input mode of TUIs.
Our vision of “Radical Atoms” is based on hypothetical, extremely malleable and reconfigurable materials that can be described by real-time digital models so that dynamic changes in digital information can be reflected by a dynamic change in physical state and vice-versa. Bidirectional synchronization is key to making Radical Atoms a tangible but dynamic representation & control of digital information, and enabling new forms of Human Computer Interaction.
In this article, we review the original vision and limitations of Tangible Bits and introduce an array of actuated/kinetic tangibles that emerged in the past 10 years of the Tangible Media Group’s research to overcome the issue of atoms’ rigidity. Then we illustrate our vision of interactions with Radical Atoms, which do not exist today but may be invented in the next 100 years by atom hackers (material scientists, self-organizing nano-robot engineers, etc.), and speculate on new interaction techniques and applications that would be enabled by Radical Atoms.
MirrorFugue2: Embodied Representation of Recorded Piano Performances
Xiao Xiao and Hiroshi Ishii. 2012. MirrorFugue2: Embodied Representation of Recorded Piano Performances. In Extended Abstracts of the 2012 international conference on Interactive Tabletops and Surfaces (ITS ‘12). ACM, New York, NY, USA.
We present MirrorFugue2, an interface for viewing recorded piano playing in which video of the hands and upper body of a performer is projected on the surface of the instrument at full scale. Rooted in the idea that a performer’s body plays a key role in channeling musical expression, we introduce an upper body display, extending a previous prototype that demonstrated the benefits of a full-scale hands display for pedagogy.
We describe two prototypes of MirrorFugue2 and discuss how the interface can benefit pedagogy, watching performances and collaborative playing.
2011
PingPong++: Community Customization in Games and Entertainment
Xiao Xiao, Michael S. Bernstein, Lining Yao, David Lakatos, Lauren Gust, Kojo Acquah, and Hiroshi Ishii. 2011. PingPong++: community customization in games and entertainment. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE ‘11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 24, 6 pages.
DOI: http://doi.acm.org/10.1145/2071423.2071453
In this paper, we introduce PingPong++, an augmented ping pong table that applies Do-It-Yourself (DIY) and community contribution principles to the world of physical sports and play. PingPong++ includes an API for creating new visualizations, easily re-creatable hardware, an end-user interface for those without programming experience, and a crowd data API for replaying and remixing past games. We discuss a range of contribution domains for PingPong++ and share the design, usage, feedback, and lessons for each domain. We then reflect on our process and outline a design space for community-contributed sports.
ZeroN: Mid-air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation
Jinha Lee, Rehmi Post, and Hiroshi Ishii. 2011. ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 327-336.
DOI: http://doi.acm.org/10.1145/2047196.2047239
This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three-dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To accomplish this, we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology, interaction scenarios and challenges, discuss initial observations, and outline future development.
Rope Revolution: Tangible and Gestural Rope Interface for Collaborative Play
Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Rope Revolution: tangible and gestural rope interface for collaborative play. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE ‘11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 11, 8 pages.
DOI: http://doi.acm.org/10.1145/2071423.2071437
In this paper we describe Rope Revolution, a rope-based gaming system for collaborative play. After identifying popular rope games and activities around the world, we developed a generalized tangible rope interface that includes a compact motion-sensing and force-feedback module that can be used for a variety of rope-based games. Rope Revolution is designed to foster both co-located and remote collaborative experiences by using actual rope to connect players in physical activities across virtual spaces. Results from this study suggest that a tangible user interface with rich metaphors and physical feedback helps enhance the gaming experience, in addition to helping remote players feel connected across distances. We use this design as an example to motivate discussion on how to take advantage of the various physical affordances of common objects to build a generalized tangible interface for remote play.
Sourcemap: eco-design, sustainable supply chains, and radical transparency
Leo Bonanni. 2011. Sourcemap: eco-design, sustainable supply chains, and radical transparency. XRDS 17, 4 (June 2011), 22-26.
DOI: http://doi.acm.org/10.1145/1961678.1961681
Industry and consumers need tools to help make decisions that are good for communities and for the environment.
Duet for Solo Piano: MirrorFugue for Single User Playing with Recorded Performances
Xiao Xiao and Hiroshi Ishii. 2011. Duet for solo piano: MirrorFugue for single user playing with recorded performances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1285-1290.
DOI: http://doi.acm.org/10.1145/1979742.1979762
MirrorFugue is an interface that supports symmetric, real-time collaboration on the piano using spatial metaphors to communicate the hand gesture of collaborators. In this paper, we present an extension of MirrorFugue to support single-user interactions with recorded material and outline usage scenarios focusing on practicing and self-reflection. Based on interviews with expert musicians, we discuss how single-user interactions on MirrorFugue relate to larger themes in music learning and suggest directions for future research.
Multi-Jump: Jump Roping Over Distances
Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Multi-jump: jump roping over distances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1729-1734.
DOI: http://doi.acm.org/10.1145/1979742.1979836
Jump roping, a game in which one or more people twirl a rope while others jump over the rope, promotes social interaction among children while developing their coordination skills and physical fitness. However, the traditional game requires that players be in the same physical location. Our ‘Multi-Jump’ jump-roping game platform builds on the traditional game by allowing players to participate remotely by employing an augmented rope system. The game involves full-body motion in a shared game space and is enhanced with live video feeds, player rewards and music. Our work aims to expand exertion interface gaming, or games that deliberately require intense physical effort, with genuine tangible interfaces connected to real-time shared social gaming environments.
Direct and Gestural Interaction with Relief: A 2.5D Shape Display
Daniel Leithinger, David Lakatos, Anthony DeVincenzi, Matthew Blackshaw, and Hiroshi Ishii. 2011. Direct and gestural interaction with relief: a 2.5D shape display. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 541-548.
DOI: http://doi.acm.org/10.1145/2047196.2047268
Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of free hand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.
Kinected Conference: Augmenting Video Imaging with Calibrated Depth and Audio
Anthony DeVincenzi, Lining Yao, Hiroshi Ishii, and Ramesh Raskar. 2011. Kinected conference: augmenting video imaging with calibrated depth and audio. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (CSCW ‘11). ACM, New York, NY, USA, 621-624.
DOI: http://doi.acm.org/10.1145/1958824.1958929
The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to engage in videoconferencing. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. In this paper we present a system for enhancing distance-based communication by augmenting the traditional video conferencing system with additional attributes beyond two-dimensional video. We explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location. We discuss specific scenarios that explore features such as synthetic refocusing, gesture-activated privacy, and spatiotemporal graphic augmentation.
Shape-changing Interfaces
Marcelo Coelho and Jamie Zigelbaum. 2011. Shape-changing interfaces. Personal Ubiquitous Comput. 15, 2 (February 2011), 161-173.
DOI: http://dx.doi.org/10.1007/s00779-010-0311-y
The design of physical interfaces has been constrained by the relative akinesis of the material world. Current advances in materials science promise to change this. In this paper, we present a foundation for the design of shape-changing surfaces in human-computer interaction. We provide a survey of shape-changing materials and their primary dynamic properties, define the concept of soft mechanics within an HCI context, and describe a soft mechanical alphabet that provides the kinetic foundation for the design of four design probes: Surflex, SpeakCup, Sprout I/O, and Shutters. These probes explore how individual soft mechanical elements can be combined to create large-scale transformable surfaces, which can alter their topology, texture, and permeability. We conclude by providing application themes for shape-changing materials in HCI and directions for future work.
MirrorFugue: Communicating Hand Gesture in Remote Piano Collaboration
Xiao Xiao and Hiroshi Ishii. 2010. MirrorFugue: communicating hand gesture in remote piano collaboration. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (TEI ‘11). ACM, New York, NY, USA, 13-20.
DOI: http://doi.acm.org/10.1145/1935701.1935705
Playing a musical instrument involves a complex set of continuous gestures, both to play the notes and to convey expression. To learn an instrument, a student must learn not only the music itself but also how to perform these bodily gestures. We present MirrorFugue, a set of three interfaces on a piano keyboard designed to visualize the hand gestures of a remote collaborator. Based on their spatial configurations, we call our interfaces Shadow, Reflection, and Organ. We describe the configurations and detail studies of our designs on synchronous, remote collaboration, focusing specifically on remote lessons for beginners. Based on our evaluations, we conclude that displaying the to-scale hand gestures of a teacher at the locus of interaction can improve remote piano learning for novices.
Recompose: Direct and Gestural Interaction with an Actuated Surface
Matthew Blackshaw, Anthony DeVincenzi, David Lakatos, Daniel Leithinger, and Hiroshi Ishii. 2011. Recompose: direct and gestural interaction with an actuated surface. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1237-1242.
DOI: http://dx.doi.org/10.1145/1979742.1979754
In this paper we present Recompose, a new system for manipulation of an actuated surface. By collectively utilizing the body as a tool for direct manipulation alongside gestural input for functional manipulation, we show how a user is afforded unprecedented control over an actuated surface. We describe a number of interaction techniques exploring the shared space of direct and gestural input, demonstrating how their combined use can greatly enhance creation and manipulation beyond unaided human capability.
deForm: An Interactive Malleable Surface for Capturing 2.5D Arbitrary Objects, Tools and Touch
Sean Follmer, Micah Johnson, Edward Adelson, and Hiroshi Ishii. 2011. deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 527-536.
DOI: http://doi.acm.org/10.1145/2047196.2047265
We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications.
2010
Virtual Guilds: Collective Intelligence and the Future of Craft
Bonanni, L. and Parkes, A. 2010. Virtual Guilds: Collective Intelligence and the Future of Craft. The Journal of Modern Craft 3, 2 (July 2010), 179-190.
DOI: http://dx.doi.org/10.2752/174967810X12774789403564
In its most basic definition, craft refers to the skilled practice of making things, which is shaped as much by technological advancements as by cultural practices. Richard Sennett discusses the nature of craftsmanship as an enduring, basic human impulse, the desire to do a job well for its own sake.1 This encompasses a much broader context than skilled labor and promotes an objective standard of excellence which incorporates shapers of culture, policy, and technology as craftsmen. The emerging nature of craft is transdisciplinary in its formation and must consider how emerging materials, processes and cultures influence the objects we make and how the processes of design and production can be used to reflect new social values and to change cultural practices. In order to re-think the kind of objects we make, it is necessary to rethink the way we craft our objects.
Digital technologies and media are defining a new sort of craft, seamlessly blending technology, design, and production into the post-industrial landscape. As an early pioneer in redefining craftsmanship to include digital processes, Malcolm McCullough explored the computer as a craft medium inviting interpretation and subtleties, with the combined skill sets of the machine and the human (both mind and hands) providing a structured system of transformations resulting in a crafted object.2 The nature of digital technologies also allows craft to evolve into a form which is decentralized and distributed, and can give rise to excellence through a collective desire and a combined multiplicity of knowledge through community.
Craft is inherently a social activity, shaped by communal resources and motivations. The collective approach of craft communities – or guilds – is characterized by the master-apprentice model, where practitioners devote significant time passing on their skills to the next generation. The open source software movement embodies the communal character and the highly skilled practices of craft guilds. Until recently skilled handicraft relied on hands-on teaching and access to local physical resources. Mass media and the internet make it possible to transmit skills and resources to isolated individuals, making possible entirely new kinds of distributed craft communities. These “Virtual Guilds” form at the margins of established domains, extending the reach of specialized knowledge and technology.
Virtual Guilds benefit from the free exchange of expert information to bring about innovation in sometimes neglected domains. The growth of open-source software projects provides the model by which dispersed, collective innovation becomes possible in other domains. Shared resources maintained by a socially motivated community form the backbone of these non-commercial efforts. Digital channels of communication can extend this free exchange of information to the domain of craft, so that specialized designs and processes can be shared among a wide audience. Online distribution provides access to rare materials and tools and provides a market for craft products.
Several Virtual Guilds exist today, and they are contributing important inventions and new domains to often neglected markets. These communities of skilled practitioners are characterized by their marginal nature, where the free and open exchange of ideas is carried forward for collective benefit. At the same time, the popularity of Virtual Guilds and the commercial success of their inventions endanger the free exchange of information on which they are built. The survival of collective craft communities is important to under-served groups and for technological innovation, so it is essential that more practitioners engage in collective action. The new generation of digital design and fabrication tools lays the groundwork for more skilled craftspeople to collectively expand on their practice.
Construction by replacement: a new approach to simulation modeling
James Hines, Thomas Malone, Paulo Gonçalves, George Herman, John Quimby, Mary Murphy-Hoye, James Rice, James Patten, and Hiroshi Ishii. 2010. Construction by replacement: a new approach to simulation modeling. System Dynamics Review (July 2010).
DOI: http://dx.doi.org/10.1002/sdr.437
Simulation modeling can be valuable in many areas of management science, but it is often costly, time consuming, and difficult to do. To reduce these problems, system dynamics researchers have previously developed standard pieces of model structure, called molecules, that can be reused in different models. However, the models assembled from these molecules often lacked feedback loops and generated few, if any, insights. This paper describes a new and more promising approach to using molecules in system dynamics modeling. The heart of the approach is a systematically organized library (or taxonomy) of predefined model components, or molecules, and a set of software tools for replacing one molecule with another. Users start with a simple generic model and progressively replace parts of the model with more specialized molecules from a systematically organized library of predefined components. These substitutions either create a new running model automatically or request further manual changes from the user. The paper describes our exploration using this approach to construct system dynamics models of supply chain processes in a large manufacturing company. The experiment included developing an innovative “tangible user interface” and a comprehensive catalog of system dynamics molecules. The paper concludes with a discussion of the benefits and limitations of this approach.
Small Business Applications of Sourcemap: A Web Tool for Sustainable Design and Supply Chain Transparency
Bonanni, L., Hockenberry, M., Zwarg, D., Csikszentmihalyi, C., and Ishii, H. 2010. Small business applications of sourcemap: a web tool for sustainable design and supply chain transparency. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ‘10. ACM, New York, NY, 937-946.
DOI: http://doi.acm.org/10.1145/1753326.1753465
This paper introduces sustainable design applications for small businesses through the Life Cycle Assessment and supply chain publishing platform Sourcemap.org. This web-based tool was developed through a year-long participatory design process with five small businesses in Scotland and in New England. Sourcemap was used as a diagnostic tool for carbon accounting, design and supply chain management. It offers a number of ways to market sustainable practices through embedded and printed visualizations. Our experiences confirm the potential of web sustainability tools and social media to expand the discourse and to negotiate the diverse goals inherent in social and environmental sustainability.
Beyond: collapsible tools and gestures for computational design
Jinha Lee and Hiroshi Ishii. 2010. Beyond: collapsible tools and gestures for computational design. In Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems (CHI EA ‘10). ACM, New York, NY, USA, 3931-3936.
DOI: http://doi.acm.org/10.1145/1753846.1754081
Since the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. In this paper, we present Beyond, an interface for 3-D design where users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto the screen, giving users the sense that they are inserting the tools into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward and more accessible to general users by extending physical affordances to the digital space beyond the computer screen.
Tangible Interfaces for Art Restoration
Bonanni, L., Seracini, M., Xiao, X., Hockenberry, M., Costanzo, B.C., Shum, A., Teil, R., Speranza, A., and Ishii, H. 2010. Tangible Interfaces for Art Restoration. International Journal of Creative Interfaces and Computer Graphics 1, 54-66.
DOI: 10.4018/jcicg.2010010105
URL: http://www.igi-global.com/bookstore/article.aspx?titleid=41711
Few people experience art the way a restorer does: as a tactile, multi-dimensional and ever-changing object. The authors investigate a set of tools for the distributed analysis of artworks in physical and digital realms. Their work is based on observation of professional art restoration practice and the rich data available through multi-spectral imaging. The article presents a multidisciplinary approach to developing interfaces usable by restorers, students and amateurs. Several interaction techniques were built using physical metaphors to navigate the layers of information revealed by multi-spectral imaging, prototyped using single- and multi-touch displays. The authors built modular systems to accommodate the technical needs and resources of various institutions and individuals, with the aim of making high-quality art diagnostics possible on different hardware platforms, and of making rich diagnostic and historic information about art available for education and research through a cohesive set of web-based tools instantiated in physical interfaces and public installations.
Relief: A Scalable Actuated Shape Display
Leithinger, D. and Ishii, H. 2010. Relief: a scalable actuated shape display. In Proceedings of the Fourth international Conference on Tangible, Embedded, and Embodied interaction (Cambridge, Massachusetts, USA, January 24 – 27, 2010). TEI ‘10. ACM, New York, NY, 221-222.
DOI: http://doi.acm.org/10.1145/1709886.1709928
Relief is an actuated tabletop display that can render and animate three-dimensional shapes with a malleable surface. It allows users to experience and form digital models, such as geographical terrain, in an intuitive manner. The tabletop surface is actuated by an array of 120 motorized pins, which are controlled with a low-cost, scalable platform built upon open-source hardware and software tools. Each pin can be addressed individually and senses user input such as pulling and pushing.
g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface.
Jamie Zigelbaum, Alan Browning, Daniel Leithinger, Olivier Bau, and Hiroshi Ishii. 2010. g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI ‘10). ACM, New York, NY, USA, 261-264.
DOI: http://dl.acm.org/citation.cfm?id=1709939
In this paper we present g-stalt, a gestural interface for interacting with video. g-stalt is built upon the g-speak spatial operating environment (SOE) from Oblong Industries. The version of g-stalt presented here is realized as a three-dimensional graphical space filled with over 60 cartoons. These cartoons can be viewed and rearranged along with their metadata using a specialized gesture set. g-stalt is designed to be chirocentric, spatiotemporal, and telekinetic.
Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections.
Cati Vaucelle. Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.
DOI: http://hdl.handle.net/1721.1/61936
Tangible Media as an area has not explored how the tangible handle is more than a marker or place-holder for digital data. Tangible Media can do more. It has the power to materialize and redefine our conception of space and content during the creative process. It can vary from an abstract token that represents a movie to an anthropomorphic plush that reflects the behavior of a sibling during play. My work begins by extending tangible concepts of representation and token-based interactions into movie editing and play scenarios. Through several design iterations and research studies, I establish tangible technologies to drive visual and oral perspectives along with finalized creative works, all during a child’s play and exploration.
I define the framework, Gesture Object Interfaces, expanding on the fields of Tangible User Interaction and Gesture Recognition. Gesture is a mechanism that can reinforce or create the anthropomorphism of an object. It can give the object life. A Gesture Object is an object in hand while doing anthropomorphized gestures. Gesture Object Interfaces engender new visual and narrative perspectives as part of automatic film assembly during children’s play. I generated a suite of automatic film assembly tools accessible to diverse users. The tools that I designed allow for capture, editing and performing to be completely indistinguishable from one another. Gestures integrated with objects become a coherent interface on top of natural play. I built a distributed, modular camera environment and gesture interaction to control that environment. The goal of these new technologies is to motivate children to take new visual and narrative perspectives.
In this dissertation I present four tangible platforms that I created as alternatives to the usual fragmented and sequential capturing, editing and performing of narratives available to users of current storytelling tools. I developed Play it by Eye, Frame it by hand, a new generation of narrative tools that shift the frame of reference from the eye to the hand, from the viewpoint (where the eye is) to the standpoint (where the hand is). In Play it by Eye, Frame it by Hand environments, children discover atypical perspectives through the lens of everyday objects. When using Picture This!, children imagine how an object would appear relative to the viewpoint of the toy. They iterate between trying and correcting in a world of multiple perspectives. The results are entirely new genres of child-created films, where children finally capture the cherished visual idioms of action and drama. I report my design process over the course of four tangible research projects that I evaluate during qualitative observations with over one hundred 4- to 14-year-old users. Based on these research findings, I propose a class of moviemaking tools that transform the way users interpret the world visually, and through storytelling.
WOW pod
Vaucelle, C., Shada, S., and Jahn, M. 2010. WOW pod. In CHI ‘10 Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ‘10. ACM, New York, NY, 4813-4816.
DOI: http://doi.acm.org/10.1145/1753846.1754237
WOW Pod is an immersive architectural solution for the advanced massive online role-playing gamer that provides and anticipates all life needs. Inside, the player finds him/herself comfortably seated in front of the computer screen with easy-to-reach water, pre-packaged food, and a toilet conveniently placed underneath a built-in throne.
OnObject: programming of physical objects for gestural interaction
Keywon Chung. OnObject: programming of physical objects for gestural interaction. Thesis (M.S.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.
DOI: http://hdl.handle.net/1721.1/61943
Tangible User Interfaces (TUIs) have fueled our imagination about the future of computational user experience by coupling physical objects and activities with digital information. Despite their conceptual popularity, TUIs are still difficult and time-consuming to construct, requiring custom hardware assembly and software programming by skilled individuals. This limitation makes it impossible for end users and designers to interactively build TUIs that suit their context or embody their creative expression. OnObject enables novice end users to turn everyday objects into gestural interfaces through the simple act of tagging. Wearing a sensing device, a user adds a behavior to a tagged object by grabbing the object, demonstrating a trigger gesture, and specifying a desired response. Following this simple Tag-Gesture-Response programming grammar, novice end users are able to transform mundane objects into gestural interfaces in 30 seconds or less. Instead of being exposed to low-level development tasks, users can focus on creating an enjoyable mapping between gestures and media responses. The design of OnObject introduces a novel class of Human-Computer Interaction (HCI): gestural programming of situated physical objects. This thesis first outlines the research challenge and the proposed solution. It then surveys related work to identify the inspirations for, and differentiations from, existing HCI and design research. Next, it describes the sensing and programming hardware and the gesture event server architecture. Finally, it introduces a set of applications created with OnObject and reports observations from sessions with participating users.
2009
Trackmate: Large-Scale Accessibility of Tangible User Interfaces
Adam Kumpf. Trackmate: Large-Scale Accessibility of Tangible User Interfaces. Thesis (M.S.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009.
There is a long history of Tangible User Interfaces (TUI) in the community of human-computer interaction, but surprisingly few of these interfaces have made it beyond lab and gallery spaces. This thesis explores how the research community may begin to remedy the disconnect between modern TUIs and the everyday computing experience via the creation and dissemination of Trackmate, an accessible (both ubiquitous and enabling) tabletop tangible user interface that scales to a large number of users with minimal hardware and configuration overhead. Trackmate is entirely open source and designed: to be community-centric; to leverage common objects and infrastructure; to provide a low floor, high ceiling, and wide walls for development; to allow user modifications and improvisation; to be shared easily via the web; and to work alongside a broad range of existing applications and new research interface prototypes.
Wetpaint: Scraping Through Multi-Layered Images
Bonanni, L., Xiao, X., Hockenberry, M., Subramani, P., Ishii, H., Seracini, M., and Schulze, J. 2009. Wetpaint: scraping through multi-layered images. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ‘09. ACM, New York, NY, 571-574.
DOI: http://doi.acm.org/10.1145/1518701.1518789
A work of art rarely reveals the history of creation and interpretation that has given it meaning and value. Wetpaint is a gallery interface based on a large touch screen that allows curators and museumgoers to investigate the hidden layers of a painting, and in the process contribute to the pluralistic interpretation of the piece, both locally and online. Inspired by traditional restoration and curatorial methods, we have designed a touch-based user interface for exhibition spaces that allows “virtual restoration” by scraping through the multi-spectral scans of a painting, and “collaborative curation” by leaving voice annotations within the artwork. The system functions through an online social image network for flexibility and to support rich and collaborative commentary for local and remote visitors.
Burn Your Memory Away: One-time Use Video Capture and Storage Device to Encourage Memory Appreciation
Chi, P., Xiao, X., Chung, K., and Chiu, C. 2009. Burn your memory away: one-time use video capture and storage device to encourage memory appreciation. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ‘09. ACM, New York, NY, 2397-2406.
DOI: http://doi.acm.org/10.1145/1520340.1520342
Although modern ease of access to technology enables many of us to obsessively document our lives, much of the captured digital content is often disregarded and forgotten on storage devices, with no concerns of cost or decay. Can we design technology that helps people better appreciate captured memories? What would people do if they only had one more chance to relive past memories? In this paper, we present a prototype design, PY-ROM, a matchstick-like video recording and storage device that burns itself away after being used. This encourages designers to consider lifecycles and human-computer relationships by integrating physical properties into digitally augmented everyday objects.
Stress OutSourced: A Haptic Social Network via Crowdsourcing
Chung, K., Chiu, C., Xiao, X., and Chi, P. 2009. Stress outsourced: a haptic social network via crowdsourcing. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ‘09. ACM, New York, NY, 2439-2448.
DOI: http://doi.acm.org/10.1145/1520340.1520346
Stress OutSourced (SOS) is a peer-to-peer network that allows anonymous users to send each other therapeutic massages to relieve stress. By applying the emerging concept of crowdsourcing to haptic therapy, SOS brings physical and affective dimensions to our already networked lifestyle while preserving the privacy of its members. This paper first describes the system and its three unique design choices: its privacy model, its combination of mobility and scalability, and its approach to affective communication for an impersonal crowd; it contrasts these choices with other efforts in their respective areas. Finally, this paper describes future work and opportunities in the area of haptic social networks.
Some Challenges for Designers of Shape Changing Interfaces
Zigelbaum, J. and Labrune, J.B. 2009. Some Challenges for Designers of Shape Changing Interfaces. CHI 2009 Workshop on Transitive Materials.
In this paper we describe some challenges we find in the design of shape-changing user interfaces, drawing on our own work and our thoughts on the current state of the art in HCI. Due to the large set of possibilities for shape-changing materials, we are faced with a too-large constraint system. Without a good understanding and the beginning of a standardization or physical language for shape change, it will be hard to design interactions that make sense beyond very limited, one-off applications. We are excited by the challenge that this poses to researchers and look forward to understanding how to use programmable and shape-changing materials in the future.
Fusing Computation into Mega-Affordance Objects
Chung, K. and Ishii, H. 2009. Fusing Computation into Mega-Affordance Objects. CHI 2009 Workshop on Transitive Materials.
In this paper, I present the concept of “Mega-Affordance Objects” (MAOs). An MAO is a common object with a primitive form factor that exhibits multiple affordances and can perform numerous improvised functions in addition to its original one. In order to broaden the reach of Tangible User Interfaces (TUIs) and create compelling everyday applications, I propose applying computational power to Mega-Affordance Objects that are highly adaptable and frequently used. This approach will leverage the capabilities of smart materials and contribute to the principles of Organic User Interface (OUI) design.
Spime Builder: A Tangible Interface for Designing Hyperlinked Objects
Bonanni, L., Vargas, G., Chao, N., Pueblo, S., and Ishii, H. 2009. Spime builder: a tangible interface for designing hyperlinked objects. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 263-266.
DOI: http://doi.acm.org/10.1145/1517664.1517719
Ubiquitous computing is fostering an explosion of physical artifacts that are coupled to digital information – so-called Spimes. We introduce a tangible workbench that allows for the placement of hyperlinks within physical models to couple physical artifacts with located interactive digital media. A computer vision system allows users to model three-dimensional objects and environments in real-time using physical materials and to place hyperlinks in specific areas using laser pointer gestures. We present a working system for real-time physical/digital exhibit design, and propose the means for expanding the system to assist Design for the Environment strategies in product design.
Proverbial Wallet: Tangible Interface for Financial Awareness
Kestner, J., Leithinger, D., Jung, J., and Petersen, M. 2009. Proverbial wallet: tangible interface for financial awareness. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 55-56.
DOI: http://doi.acm.org/10.1145/1517664.1517683
We propose a tangible interface concept for communicating personal financial information in an ambient and relevant manner. The concept is embodied in a set of wallets that provide the user with haptic feedback about personal financial metrics. We describe how such feedback can inform purchasing decisions and improve general financial awareness.
Stop-Motion Prototyping for Tangible Interfaces
Bonanni, L. and Ishii, H. 2009. Stop-motion prototyping for tangible interfaces. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 315-316.
DOI: http://doi.acm.org/10.1145/1517664.1517729
Stop-motion animation brings the constraints of the body, space and materials into video production. Building on the tradition of video prototyping for interaction design, stop motion is an effective technique for concept development in the design of Tangible User Interfaces. This paper presents a framework for stop-motion prototyping and the results of two workshops based on stop-motion techniques including pixillation, claymation and time-lapse photography. The process of stop-motion prototyping fosters collaboration, legibility and rapid iterative design in a physical context that can be useful to the early stages of tangible interaction design.
Design of Haptic Interfaces for Psychotherapy
Vaucelle, C., Bonanni, L., and Ishii, H. 2009. Design of haptic interfaces for therapy. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ‘09. ACM, New York, NY, 467-470.
DOI: http://doi.acm.org/10.1145/1518701.1518776
Touch is fundamental to our emotional well-being. Medical science is starting to understand and develop touch-based therapies for autism spectrum, mood, anxiety and borderline disorders. Based on the most promising touch therapy protocols, we present the first devices that simulate touch through haptics to bring relief and assist clinical therapy for mental health. We present several haptic systems that enable medical professionals to facilitate collaboration between patients and doctors, potentially paving the way for a new form of non-invasive treatment that could be adapted from use in care-giving facilities to public use. We developed these prototypes working closely with a team of mental health professionals.
Play-it-by-eye! Collect Movies and Improvise Perspectives with Tangible Video Objects.
Vaucelle, C. and Ishii, H. 2009. Play-it-by-eye! Collect movies and improvise perspectives with tangible video objects. In Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), Special Issue: Tangible Interaction, 23, 305–316. Cambridge University Press.
We present an alternative video-making framework for children with tools that integrate video capture with movie production. We propose different forms of interaction with physical artifacts to capture storytelling. Play interactions as input to video editing systems assuage the interface complexities of film construction in commercial software. We aim to motivate young users in telling their stories, extracting meaning from their experiences by capturing supporting video to accompany their stories, and driving reflection on the outcomes of their movies. We report on our design process over the course of four research projects that span from a graphical user interface to a physical instantiation of video. We interface the digital and physical realms using tangible metaphors for digital data, providing a spontaneous and collaborative approach to video composition. We evaluate our systems during observations with 4- to 14-year-old users and analyze their different approaches to capturing, collecting, editing, and performing visual and sound clips.
Cost-effective Wearable Sensor to Detect EMF
Vaucelle, C., Ishii, H., and Paradiso, J. A. 2009. Cost-effective wearable sensor to detect EMF. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI’09. ACM, New York, NY, 4309-4314.
DOI: http://doi.acm.org/10.1145/1520340.1520658
In this paper we present the design of a cost-effective wearable sensor to detect and indicate the strength and other characteristics of the electric field emanating from a laptop display. Our Electromagnetic Field Detector Bracelet can provide an immediate awareness of electric fields radiated from an object used frequently. Our technology thus supports awareness of ambient background emanation beyond human perception. We discuss how detection of such radiation might help to “fingerprint” devices and aid in applications that require determination of indoor location.
2008
Sculpting Behavior: A Tangible Language for Hands-On Play and Learning
Hayes Raffle. Sculpting Behavior: A Tangible Language for Hands-On Play and Learning. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008.
DOI: http://hdl.handle.net/1721.1/44912
For over a century, educators and constructivist theorists have argued that children learn by actively forming and testing – constructing – theories about how the world works. Recent efforts in the design of “tangible user interfaces” (TUIs) for learning have sought to bring together interaction models like direct manipulation and pedagogical frameworks like constructivism to make new, often complex, ideas salient for young children. Tangible interfaces attempt to eliminate the distance between the computational and physical world by making behavior directly manipulable with one’s hands. In the past, systems for children to model behavior have been either intuitive-but-simple (e.g. curlybot) or complex-but-abstract (e.g. LEGO Mindstorms). In order to develop a system that supports a user’s transition from intuitive-but-simple constructions to constructions that are complex-but-abstract, I draw upon constructivist educational theories, particularly Bruner’s theories of how learning progresses through enactive, then iconic, and then symbolic representations.
This thesis presents an example system and set of design guidelines to create a class of tools that helps people transition from simple-but-intuitive exploration to abstract-and-flexible exploration. The Topobo system is designed to facilitate mental transitions between different representations of ideas, and between different tools. A modular approach, with an inherent grammar, helps people make such transitions. With Topobo, children use enactive knowledge, e.g. knowing how to walk, as the intellectual basis to understand a scientific domain, e.g. engineering and robot locomotion. Queens, backpacks, Remix and Robo add various abstractions to the system, and extend the tangible interface. Children use Topobo to transition from hands-on knowledge to theories that can be tested and reformulated, employing a combination of enactive, iconic and symbolic representations of ideas.
SpeakCup: Simplicity, BABL, and Shape Change
Zigelbaum, J., Chang, A., Gouldstone, J., Monzen, J. J., and Ishii, H. 2008. SpeakCup: simplicity, BABL, and shape change. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ‘08. ACM, New York, NY, 145-146.
DOI: http://doi.acm.org/10.1145/1347390.1347422
In this paper we present SpeakCup, a simple tangible interface that uses shape change to convey meaning in its interaction design. SpeakCup is a voice recorder in the form of a soft silicone disk with embedded sensors and actuators. Advances in sensor technology and material science have provided new ways for users to interact with computational devices. Rather than issuing commands to a system via abstract and multi-purpose buttons, the door is open for more nuanced and application-specific approaches. Here we explore the coupling of shape and action in an interface designed for simplicity while discussing some questions that we have encountered along the way.
Picture This! Film assembly using toy gestures
Vaucelle, C. and Ishii, H. 2008. Picture this!: film assembly using toy gestures. In Proceedings of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY, 350-359.
DOI: http://doi.acm.org/10.1145/1409635.1409683
We present Picture This!, a new input device embedded in children’s toys for video composition. It offers a new form of interaction for capturing children’s storytelling with physical artifacts. It functions as a video and storytelling performance system in that children craft videos with and about character toys as the system analyzes their gestures and play patterns. Children’s favorite props alternate between characters and cameramen in a film. As they play with the toys to act out a story, they conduct film assembly. We position our work as ubiquitous computing that supports children’s tangible interaction with digital materials. During user testing, we observed children ages 4 to 10 playing with Picture This!. We assess to what extent gesture interaction with objects for video editing allows children to explore visual perspectives in storytelling. A new genre of Gesture Object Interfaces, as exemplified by Picture This!, relies on the analysis of gestures coupled with objects to represent bits.
From Touch Sensitive to Aerial Jewelry
Cati Vaucelle. From Touch Sensitive to Aerial Jewelry (Book Chapter). In Fashionable Technology: The Intersection of Design, Fashion, Science, and Technology. Editor Seymour, S., Springer-Verlag Wien New York, 2008.
Now that we constantly travel by plane and use GIS, Google Maps, and satellite imagery, our vision is expanded. Our everyday objects have a language that adapts itself to our influences. On the other hand, just as the car influenced painting and the representation of space and movement, we wanted to show how the use of new technologies can change the way we design personal objects, as exemplified by Aerial Jewelry.
Handsaw: Tangible Exploration of Volumetric Data by Direct Cut-Plane Projection
Bonanni, L., Alonso, J., Chao, N., Vargas, G., and Ishii, H. 2008. Handsaw: tangible exploration of volumetric data by direct cut-plane projection. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 251-254.
DOI: http://doi.acm.org/10.1145/1357054.1357098
Tangible User Interfaces are well-suited to handling three-dimensional data sets by direct manipulation of real objects in space, but current interfaces can make it difficult to look inside dense volumes of information. This paper presents the SoftSaw, a system that detects a virtual cut-plane projected by an outstretched hand or laser-line directly on an object or space and reveals sectional data on an adjacent display. By leaving the hands free and using a remote display, these techniques can be shared between multiple users and integrated into everyday practice. The SoftSaw has been prototyped for scientific visualizations in medicine, engineering and urban design. User evaluations suggest that using a hand is more intuitive while projected light is more precise than keyboard and mouse control, and the SoftSaw system has the potential to be used more effectively by novices and in groups.
Renaissance Panel: The Roles of Creative Synthesis in Innovation
Hockenberry, M. and Bonanni, L. 2008. Renaissance panel: the roles of creative synthesis in innovation. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2237-2240.
DOI: http://doi.acm.org/10.1145/1358628.1358658
The Renaissance ideal can be expressed as a creative synthesis between cultural disciplines, standing in stark contrast to our traditional focus on scientific specialization. This panel presents a number of experts who approach the synthesis of art and science as the modus operandi for their work, using it as a tool for creativity, research, and practice. Understanding these approaches allows us to identify the roles of synthesis in successful innovation and improve the implementation of interdisciplinary synthesis in research and practice.
Future Craft: How Digital Media is Transforming Product Design
Bonanni, L., Parkes, A., and Ishii, H. 2008. Future craft: how digital media is transforming product design. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2553-2564.
DOI: http://doi.acm.org/10.1145/1358628.1358712
The open and collective traditions of the interaction community have created new opportunities for product designers to engage in the social issues around industrial production. This paper introduces Future Craft, a design methodology which applies emerging digital tools and processes to product design toward new objects that are socially and environmentally sustainable. We present the results of teaching the Future Craft curriculum at the MIT Media Lab including principal themes of public, local and personal design, resources, assignments and student work. Novel ethnographic methods are discussed with relevance to informing the design of physical products. We aim to create a dialogue around these themes for the product design and HCI communities.
Slurp: Tangibility, Spatiality, and an Eyedropper
Zigelbaum, J., Kumpf, A., Vazquez, A., and Ishii, H. 2008. Slurp: tangibility spatiality and an eyedropper. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2565-2574.
DOI: http://doi.acm.org/10.1145/1358628.1358713
The value of tangibility for ubiquitous computing is in its simplicity: when faced with the question of how to grasp a digital object, why not just pick it up? But this is problematic; digital media is powerful due to its extreme mutability and is therefore resistant to the constraints of static physical form. We present Slurp, a tangible interface for locative media interactions in a ubiquitous computing environment. Based on the affordances of an eyedropper, Slurp provides haptic and visual feedback while extracting and injecting pointers to digital media between physical objects and displays.
Reality-Based Interaction: A Framework for Post-WIMP Interfaces
Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 201-210.
DOI: http://doi.acm.org/10.1145/1357054.1357089
Abstract
We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research.
AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling (in Japanese)
Anabuki, M., Ishii, H. 2008. AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling. Transactions of the Virtual Reality Society of Japan, Special Issue on Mixed Reality 4 (Japanese Edition), Vol.13, No.2, 2008
Abstract
We introduce AR-Jig, a new handheld tangible user interface for 3D digital modeling in augmented reality space. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. This project leaves the majority of the data in the digital domain but gives physicality to any portion of the larger digital dataset via a handheld tool. Through informal evaluations, we demonstrate that AR-Jig would be useful for a design domain where manual modeling skills are critical.
Keywords: actuated interface, augmented reality, handheld tool, pin array display
Tangible Bits: Beyond Pixels
Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ‘08. ACM, New York, NY, xv-xxv.
DOI: http://doi.acm.org/10.1145/1347390.1347392
Abstract
Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http://tangible.media.mit.edu/
Topobo in the Wild: Longitudinal Evaluations of Educators Appropriating a Tangible Interface
Parkes, A., Raffle, H., and Ishii, H. 2008. Topobo in the wild: longitudinal evaluations of educators appropriating a tangible interface. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 1129-1138.
DOI: http://doi.acm.org/10.1145/1357054.1357232
Abstract
What issues arise when designing and deploying tangibles for learning in long term evaluations? This paper reports on a series of studies in which the Topobo system, a 3D tangible construction kit with the ability to record and playback motion, was provided to educators and designers to use over extended periods of time in the context of their day-to-day work. Tangibles for learning – like all educational materials – must be evaluated in relation both to the student and the teacher, but most studies of tangibles for learning focus on the student as user. Here, we focus on the conception of the educator, and their use of the tangible interface in the absence of an inventor or HCI researcher. The results of this study identify design and pedagogical issues that arise in response to distribution of a tangible for learning in different educational environments.
The Everyday Collector
Vaucelle, C. The Everyday Collector. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY.
Abstract
This paper presents the conceptualization of the Everyday Collector as a bridge between the traditional physical collection and the growing digital one. This work supports a reflection on the collection impulse and the impact that digital technologies have on the physical act of collection.
Electromagnetic Field Detector Bracelet
Vaucelle, C., Ishii, H. and Paradiso, J. Electromagnetic Field Detector Bracelet. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY.
Abstract
We present the design of a cost-effective wearable sensor to detect and indicate the strength and other characteristics of the electric field emanating from a laptop display. Our bracelet can provide an immediate awareness of electric fields radiated from an object used frequently. Our technology thus supports awareness of ambient background emanation beyond human perception. We discuss how detection of such radiation might help to “fingerprint” devices and aid in applications that require determination of indoor location.
2007
TILTle: exploring dynamic balance
Modlitba, P., Offenhuber, D., Ting, M., Tsigaridi, D., and Ishii, H. 2007. TILTle: exploring dynamic balance. In Proceedings of the 2007 Conference on Designing Pleasurable Products and interfaces (Helsinki, Finland, August 22 – 25, 2007). DPPI ‘07. ACM, New York, NY, 466-472.
DOI: http://doi.acm.org/10.1145/1314161.1314207
Abstract
In this paper we introduce a novel interface for exploring dynamic equilibria using the metaphor of a traditional balance scale. Rather than comparing and identifying physical weight, our scale can be used for contrasting digital data in different domains. We do this by assigning virtual weight to objects, which physically affects the scale. Our goal is to make complex comparison mechanisms more visible and graspable.
The Sound of Touch
David Merrill and Hayes Raffle. 2007. The sound of touch. In ACM SIGGRAPH 2007 posters (SIGGRAPH ‘07). ACM, New York, NY, USA, Article 138.
DOI: http://doi.acm.org/10.1145/1280720.1280871
Abstract
All people have experienced hearing sounds produced when they touch and manipulate different materials. We know what it will sound like to bang our fist against a wooden door, or to crumple a piece of newspaper. We can imagine what a coffee mug will sound like if it is dropped onto a concrete floor. But our wealth of experience handling physical materials does not typically produce much intuition for operating a new electronic instrument, given the inherently arbitrary mapping from gesture to sound.
SP3X: a six-degree of freedom device for natural model creation
Richard Whitney. SP3X: a six-degree of freedom device for natural model creation. Thesis (M.S.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.
DOI: http://hdl.handle.net/1721.1/38641
Abstract
This thesis presents a novel input device, called SP3X, for the creation of digital models in a semi-immersive environment. The goal of SP3X is to enable novice users to construct geometrically complex three-dimensional objects without extensive training or difficulty. SP3X extends the ideas of mixed reality and partial physical instantiation while building on the foundation of tangible interfaces. The design of the device reflects attention to human physiologic capabilities in manual precision, binocular vision, and reach. The design also considers cost and manufacturability. This thesis presents prior and contributing research from industry, biology, and interfaces in academia. A study investigates the usability of the device and finds that it is functional and easily learned, and identifies several areas for improvement. Finally, a Future Work section is provided to guide researchers pursuing this or similar interfaces. The SP3X project is a result of extensive collaboration with Mahoro Anabuki, a visiting scientist from Canon Development Americas, and could not have been completed without his software or his insight.
The Sound of Touch: Physical Manipulation of Digital Sound.
David Merrill, Hayes Raffle, and Roberto Aimi. 2008. The sound of touch: physical manipulation of digital sound. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (CHI ‘08). ACM, New York, NY, USA, 739-742.
DOI: http://dl.acm.org/citation.cfm?doid=1357054.1357171
Abstract
The Sound of Touch is a new tool for real-time capture and sensitive physical stimulation of sound samples using digital convolution. Our hand-held wand can be used to (1) record sound, then (2) play back the recording by brushing, scraping, striking or otherwise physically manipulating the wand against physical objects. During playback, the recorded sound is continuously filtered by the acoustic interaction of the wand and the material being touched. The Sound of Touch enables a physical and continuous sculpting of sound that is typical of acoustic musical instruments and interactions with natural objects and materials, but not available in GUI-based tools or most electronic music instruments. This paper reports the design of the system and observations of thousands of users interacting with it in an exhibition format. Preliminary user feedback suggests future applications to foley, professional sound design, and musical performance.
The Sound of Touch
David Merrill and Hayes Raffle. 2007. The sound of touch. In CHI ‘07 extended abstracts on Human factors in computing systems (CHI EA ‘07). ACM, New York, NY, USA, 2807-2812.
DOI: http://doi.acm.org/10.1145/1240866.1241044
Abstract
In this paper we describe the Sound of Touch, a new instrument for real-time capture and sensitive physical stimulation of sound samples using digital convolution. Our hand-held wand can be used to (1) record sound, then (2) play back the recording by brushing, scraping, striking or otherwise physically manipulating the wand against physical objects. During playback, the recorded sound is continuously filtered by the acoustic interaction of the wand and the material being touched. Our texture kit allows for convenient acoustic exploration of a range of materials. An acoustic instrument’s resonance is typically determined by the materials from which it is built. With the Sound of Touch, resonant materials can be chosen during the performance itself, allowing performers to shape the acoustics of digital sounds by leveraging their intuitions for the acoustics of physical objects. The Sound of Touch permits real-time exploitation of the sonic properties of a physical environment, to achieve a rich and expressive control of digital sound that is not typically possible in electronic sound synthesis and control systems.
Simplicity in Interaction Design
Chang, A., Gouldstone, J., Zigelbaum, J., and Ishii, H. 2007. Simplicity in interaction design. In Proceedings of the 1st international Conference on Tangible and Embedded interaction (Baton Rouge, Louisiana, February 15 – 17, 2007). TEI ‘07. ACM, New York, NY, 135-138.
DOI: http://doi.acm.org/10.1145/1226969.1226997
Abstract
Attaining simplicity is a key challenge in interaction design. Our approach relies on a minimalist design exercise to explore the communication capacity of interaction components. This approach results in expressive design solutions, useful perspectives on interaction design and new interaction techniques.
Zstretch: A Stretchy Fabric Music Controller
Chang, A. and Ishii, H. 2007. Zstretch: a stretchy fabric music controller. In Proceedings of the 7th international Conference on New interfaces For Musical Expression (New York, New York, June 06 – 10, 2007). NIME ‘07. ACM, New York, NY, 46-49.
DOI: http://doi.acm.org/10.1145/1279740.1279746
Abstract
We present Zstretch, a textile music controller that supports expressive haptic interactions. The musical controller takes advantage of the fabric’s topological constraints to enable proportional control of musical parameters. This novel interface explores ways in which one might treat music as a sheet of cloth. This paper proposes an approach to engage simple technologies for supporting ordinary hand interactions. We show that this combination of basic technology with general tactile movements can result in an expressive musical interface.
AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve
Anabuki, M.; Ishii, H., “AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve,” Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on, vol., no., pp. 55-66, 13-16 Nov. 2007
DOI: http://doi.ieeecomputersociety.org/10.1109/ISMAR.2007.4538826
Abstract
We introduce AR-Jig, a new handheld tangible user interface for 3D digital modeling in augmented reality (AR) space. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. Traditional tangible user interfaces physically embody all the data; in contrast, this project leaves the majority of the data in the digital domain but gives physicality to any portion of the larger digital dataset via a handheld tool. This tangible intersection enables the flexible manipulation of digital artifacts, both physically and virtually. Through an informal test by end-users and interviews with professionals, we confirmed the potential of the AR-Jig concept while identifying the improvements necessary to make AR-Jig a practical tool for 3D digital design.
Interfacing Video Capture, Editing and Publication in a Tangible Environment
Vaucelle, C. and Ishii, H. 2007. Interfacing Video Capture, Editing and Publication in a Tangible Environment. In Baranauskas et al. (Eds.): INTERACT 2007, LNCS 4663, Lecture Notes in Computer Science, Part II, pp. 1 – 14, 2007. Springer Berlin / Heidelberg.
DOI: http://dx.doi.org/10.1007/978-3-540-74800-7_1
Abstract
The paper presents a novel approach to collecting, editing and performing visual and sound clips in real time. The cumbersome process of capturing and editing becomes fluid in the improvisation of a story, and accessible as a way to create a final movie. It is shown how a graphical interface created for video production informs the design of a tangible environment that provides a spontaneous and collaborative approach to video creation, selection and sequencing. Iterative design process, participatory design sessions and workshop observations with 10-12 year old users from Sweden and Ireland are discussed. The limitations of interfacing video capture, editing and publication in a self-contained platform are addressed.
Keywords: Tangible User Interface – Video – Authorship – Mobile Technology – Digital Media – Video Jockey – Learning – Children – Collaboration
Tug n’ Talk: A Belt Buckle for Tangible Tugging Communication
Adcock, M., Harry, D., Boch, M., Poblano, R.-D. and Harden, V. Tug n’ Talk: A Belt Buckle for Tangible Tugging Communication. Presented at alt.chi 2007
Abstract
Tug and Talk is a prototype communication system with which you can send a “tug” to another person. The Tug and Talk device sits on your belt and connects to your shirt. Another Tug and Talk user can tug on the chain coming out of their matching belt, and their tugging pattern is replicated as a tug on your own shirt. Tugs can express lots of different ideas, depending on context. A tug could be brief and small to see if someone is interruptible, or large, fast, and long to get someone’s attention in an urgent situation. We think this sort of tangible social channel between people is a powerful idea, and we implemented two prototype devices to explore its potential.
Touch·Sensitive Apparel
Vaucelle, C. and Abbas, Y. 2007. Touch: sensitive apparel. In CHI ‘07 Extended Abstracts on Human Factors in Computing Systems (San Jose, CA, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 2723-2728.
DOI: http://doi.acm.org/10.1145/1240866.1241069
Abstract
Touch·Sensitive is a haptic apparel that allows massage therapy to be diffused, customized and controlled by people while on the move. It provides individuals with a sensory cocoon. Made of modular garments, Touch·Sensitive applies personalized stimuli. We present the design process and a series of low fidelity prototypes that lead us to the Touch·Sensitive Apparel.
Jabberstamp: Embedding sound and voice in traditional drawings
Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In Proceedings of the 6th international Conference on interaction Design and Children (Aalborg, Denmark, June 06 – 08, 2007). IDC ‘07. ACM, New York, NY, 137-144.
DOI: http://doi.acm.org/10.1145/1297277.1297306
Abstract
We introduce Jabberstamp, the first tool that allows children to synthesize their drawings and voices. To use Jabberstamp, children create drawings, collages or paintings on normal paper. They press a special rubber stamp onto the page to record sounds into their drawings. When children touch the marks of the stamp with a small trumpet, they can hear the sounds playback, retelling the stories they created.
We describe our design process and analyze the mechanism between the act of drawing and the one of telling, defining interdependencies between the two activities. In a series of studies, children ages 4—8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children’s narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recording in their drawing, which suggests future opportunities for hybrid voice-visual tools to support children’s emergent literacy.
Remix and Robo: Improvisational performance and competition with modular robotic building toys
Raffle, H., Yip, L., and Ishii, H. 2007. Remix and robo: sampling, sequencing and real-time control of a tangible robotic construction system. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 – 09, 2007). SIGGRAPH ‘07. ACM, New York, NY, 35.
DOI: http://doi.acm.org/10.1145/1282040.1282077
Abstract
We present Remix and Robo, new composition and performance based tools for robotics control. Remix is a tangible interface used to sample, organize and manipulate gesturally-recorded robotic motions. Robo is a modified game controller used to capture robotic motions, adjust global motion parameters and execute motion recordings in real-time. Children use Remix and Robo to engage in (1) character design and (2) competitive endeavors with Topobo, a constructive assembly system with kinetic memory.
Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users age 7-adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.
Mechanical Constraints as Computational Constraints in Tabletop Tangible Interfaces
Patten, J. and Ishii, H. 2007. Mechanical constraints as computational constraints in tabletop tangible interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 809-818.
DOI: http://doi.acm.org/10.1145/1240624.1240746
Abstract
This paper presents a new type of human-computer interface called Pico (Physical Intervention in Computational Optimization) based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is based on a tabletop interaction surface that can sense and move small objects on top of it. The positions of these physical objects represent and control parameters inside a software application, such as a system for optimizing the configuration of radio towers in a cellular telephone network. The computer autonomously attempts to optimize the network, moving the objects on the table as it changes their corresponding parameters in software. As these objects move, the user can constrain their motion with his or her hands, or many other kinds of physical objects. The interface provides ample opportunities for improvisation by allowing the user to employ a rich variety of everyday physical objects as mechanical constraints. This approach leverages the user’s mechanical intuition for how objects respond to physical forces. As well, it allows the user to balance the numerical optimization performed by the computer with other goals that are difficult to quantify. Subjects in an evaluation were more effective at solving a complex spatial layout problem using this system than with either of two alternative interfaces that did not feature actuation.
Senspectra: A Computationally Augmented Physical Modeling Toolkit for Sensing and Visualization of Structural Strain
LeClerc, V., Parkes, A., and Ishii, H. 2007. Senspectra: a computationally augmented physical modeling toolkit for sensing and visualization of structural strain. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 801-804.
DOI: http://doi.acm.org/10.1145/1240624.1240744
Abstract
We present Senspectra, a computationally augmented physical modeling toolkit designed for sensing and visualization of structural strain. Senspectra seeks to explore a new direction in computational materiality, incorporating the material quality of malleable elements of an interface into its digital control structure. The system functions as a decentralized sensor network consisting of nodes, embedded with computational capabilities and a full spectrum LED, and flexible joints. Each joint functions as an omnidirectional bend sensing mechanism to sense and communicate mechanical strain between neighboring nodes.
Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain. While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in architecture and design disciplines with systems for structural engineering analysis. This offers direct manipulation augmented with visual feedback for an intuitive approach to physical real-time finite element analysis, particularly for organic forms.
Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology
Zigelbaum, J., and Csikszentmihályi, C. Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology. CHI 2007 Workshop on Tangible User Interfaces in Context and Theory (2007).
Abstract
As tangible interface design continues to gain currency within the mainstream HCI community and further manifests within the space of consumer electronics, how will its impact be realized, and how, as designers of new technologies, can we shape that impact? In this paper we examine the question of choice in technology design from the perspective of the social sciences and then reflect on ways that TUI designers could use these insights within their own practices. Of particular interest to this work is the repurposing and transplantation of current technologies into the domestic environment. The home has been a focus for much of the new work in HCI, and in the near future we will see a continuation and increase in the development of domestic technologies. Much of the current work developing connected homes and ubiquitous systems for domestic use is compelling, though it seems to run directly counter to insights gained from the social sciences and philosophy of technology. In particular, computer scientists, designers, anthropologists, and historians all offer very different points of departure concerning commercialization of domestic space and privacy versus data sharing. These differences may indicate a fertile area for research. We’ve identified three issues for domestic technology design: 1) context and the differentiation of constraints, 2) the privatization of space, and 3) the perception of control. These issues are not original to this work, nor are they exhaustive. Our work here is to discuss them within the context of tangible interface and domestic technology design as a means for critical reflection.
Jabberstamp: embedding sound and voice in traditional drawings
Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 – 09, 2007). SIGGRAPH ‘07. ACM, New York, NY, 32.
DOI: http://doi.acm.org/10.1145/1282040.1282074
Abstract
Children in our culture are accustomed to creating people and things and places – with implied context – in their drawings. Since the first days they draw, parents will ask “who is that? Where are they? What are they doing?” From early on, children have learned through drawing to provide the information necessary for an audience to understand the story that is going on in their drawing. Conversely, learning how to contextualize an oral or written story in the absence of images is a much slower learning process for children, and children’s ability to use language to communicate when and where their story takes place is considered a milestone in literacy development.
Jabberstamp is the first tool that allows children to synthesize their drawings and voices. To use Jabberstamp, children create drawings, collages or paintings on normal paper. They press a special rubber stamp onto the page to record sounds into their drawings. When children touch the marks of the stamp with a small trumpet, they can hear the sounds playback, retelling the stories they created. In a series of studies, children ages 4-8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children’s narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recording in their drawing, which suggests future opportunities for hybrid voice-visual tools to support children’s emergent literacy.
Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system
Hayes Raffle, Hiroshi Ishii, and Laura Yip. 2007. Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system. In Proceedings of the 6th international conference on Interaction design and children (IDC ‘07). ACM, New York, NY, USA, 89-96.
DOI: http://dx.doi.org/10.1145/1297277.1297295
Abstract
We present Remix and Robo, new composition and performance based tools for robotics control. Remix is a tangible interface used to sample, organize and manipulate gesturally-recorded robotic motions. Robo is a modified game controller used to capture robotic motions, adjust global motion parameters and execute motion recordings in real-time. Children use Remix and Robo to engage in (1) character design and (2) competitive endeavors with Topobo, a constructive assembly system with kinetic memory.
Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users age 7-adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.
2006
Interaction Techniques for Musical Performance with Tabletop Tangible Interfaces
Patten, J., Recht, B., and Ishii, H. 2006. Interaction techniques for musical performance with tabletop tangible interfaces. In Proceedings of the 2006 ACM SIGCHI international Conference on Advances in Computer Entertainment Technology (Hollywood, California, June 14 – 16, 2006). ACE ‘06, vol. 266. ACM, New York, NY, 27.
DOI: http://doi.acm.org/10.1145/1178823.1178856
Abstract
We present a set of interaction techniques for electronic musical performance using a tabletop tangible interface. Our system, the Audiopad, tracks the positions of objects on a tabletop surface and translates their motions into commands for a musical synthesizer. We developed and refined these interaction techniques through an iterative design process, in which new interaction techniques were periodically evaluated through performances and gallery installations. Based on our experience refining the design of this system, we conclude that tabletop interfaces intended for collaborative use should use interaction techniques designed to be legible to onlookers. We also conclude that these interfaces should allow users to spatially reconfigure the objects in the interface in ways that are personally meaningful.
Glume: Exploring Materiality in a Soft Augmented Modular Modeling System
Parkes, A., LeClerc, V., and Ishii, H. 2006. Glume: exploring materiality in a soft augmented modular modeling system. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 1211-1216.
DOI: http://doi.acm.org/10.1145/1125451.1125678
Abstract
This paper presents Glume, a system of modular primitives – six silicone bulbs, embedded with sculptable gel and a full spectrum LED – attached to a central processing “nucleus.” The nodes communicate capacitively with their neighbors to determine a network topology, taking advantage of the novel conductive characteristics of hair gel. As a modular, scalable platform, Glume provides a system with discrete internal structure coupled with a soft organic form, much as the skeleton defines the structure of a body, to provide a means for expression and investigation of structures and processes not possible with existing systems.
BodyBeats: Whole-Body, Musical Interfaces for Children
Zigelbaum, J., Millner, A., Desai, B., and Ishii, H. 2006. BodyBeats: whole-body, musical interfaces for children. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 1595-1600.
DOI: http://doi.acm.org/10.1145/1125451.1125742
Abstract
This work in progress presents the BodyBeats Suite— three prototypes built to explore the interaction between children and computational musical instruments by using sound and music patterns. Our goals in developing the BodyBeats prototypes are (1) to help children engage their whole bodies while interacting with computers, (2) foster collaboration and pattern learning, and (3) provide a playful interaction for creating sound and music. We posit that electronic instruments for children that incorporate whole-body movement can provide active ways for children to play and learn with technology (while challenging a growing rate of childhood obesity). We describe how we implemented our current BodyBeats prototypes and discuss how users interact with them. We then highlight our plans for future work in the fields of whole-body interaction design, education, and music.
3D and Sequential Representations of Spatial Relationships among Photos
Anabuki, M. and Ishii, H. 2006. 3D and sequential representations of spatial relationships among photos. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 472-477.
DOI: http://doi.acm.org/10.1145/1125451.1125555
Abstract
This paper proposes automatic representations of spatial relationships among photos for structure analysis and review of a photographic subject. Based on camera tracking, photos are shown in a 3D virtual reality space to represent global spatial relationships. At the same time, the spatial relationships between two of the photos are represented in slide show sequences. This proposal allows people to organize photos quickly in spatial representations with qualitative meaning.
Mechanical Constraints as Common Ground between People and Computers
Patten, J. M. 2006. Mechanical Constraints as Common Ground between People and Computers. Doctoral Thesis. UMI Order Number: AAI0808956. Massachusetts Institute of Technology.
Abstract
This thesis presents a new type of human-computer interface based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is based on a tabletop interaction surface that can sense and move small objects on top of it. Computation is merged with dynamic physical processes on the tabletop that are exposed to and modified by the user in order to accomplish his or her task. The system places mechanical constraints and mathematical constraints on the same level, allowing users to guide simulations and optimization processes by constraining the motion of physical objects on the interaction surface. The interface provides ample opportunities for improvisation by allowing the user to employ a rich variety of everyday physical objects as interface elements. Subjects in an evaluation were more effective at solving a complex spatial layout problem using this system than with either of two alternative interfaces that did not feature actuation.
SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface
Vincent Leclerc. SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Abstract
Senspectra is a computationally augmented physical modeling toolkit designed for sensing and visualization of structural strain. The system functions as a distributed sensor network consisting of nodes, embedded with computational capabilities and a full spectrum LED, which communicate to neighbor nodes to determine a network topology through a system of flexible joints. Each joint, while serving as a data and power bus between nodes, also integrates an omnidirectional bend sensing mechanism, which uses a simple optical occlusion technique to sense and communicate mechanical strain between neighboring nodes. Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain.
While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in the architecture and industrial design disciplines with systems for structural engineering analysis, offering an intuitive approach for physical real-time finite element analysis. Utilizing direct manipulation augmented with visual feedback, the system gives users valuable insights on the global behavior of a constructed system defined as a network of discrete elements.
The Texture of Light
Vaucelle, C. 2006. The texture of light. In ACM SIGGRAPH 2006 Research Posters (Boston, Massachusetts, July 30 – August 03, 2006). SIGGRAPH ‘06. ACM, New York, NY, 27.
DOI: http://doi.acm.org/10.1145/1179622.1179651
Abstract
The Texture of Light is research on lighting principles and the exploration of live-feed video metamorphosis in the public space using reflection of light on transparent materials. The Texture of Light is an attempt to fight the boredom of everyday life. This project employs the simple use of chemistry, Plexiglas, and plastic patterns to form a reconstruction of reality, giving it a texture and an expressive form. The transformation of live-feed video comes from physical, plastic circles that act as different masks of reality. These masks can be moved around and swapped by the public, enabling collective expression. This metamorphosis of the public space is presented in real time as a moving painting and is projected on city walls. The public can record video clips of their ‘moving painting’ and project them back onto different city locations.
Affective TouchCasting
Bonanni, L. and Vaucelle, C. 2006. Affective TouchCasting. In ACM SIGGRAPH 2006 Sketches (Boston, Massachusetts, July 30 – August 03, 2006). SIGGRAPH ‘06. ACM, New York, NY, 35.
DOI: http://doi.acm.org/10.1145/1179849.1179893
Abstract
The sense of touch is not only informative: certain kinds of touch are directly related to emotions. Haptics can enrich the experience of broadcast media through tactile stimulus that is mapped to emotional response and distributed over the body. This sketch applies affective touch research to haptic broadcast in a wearable device that can record, distribute and play back touch information. TouchCasting augments broadcast media with affective haptics that can be experienced in public as a new form of art.
Collaborative Simulation Interface for Planning Disaster Measures
Kobayashi, K., Narita, A., Hirano, M., Kase, I., Tsuchida, S., Omi, T., Kakizaki, T., and Hosokawa, T. 2006. Collaborative simulation interface for planning disaster measures. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 977-982.
PlayPals: Tangible Interfaces for Remote Communication and Play
Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. PlayPals: tangible interfaces for remote communication and play. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 574-579.
DOI: http://doi.acm.org/10.1145/1125451.1125572
Abstract
PlayPals are a set of wireless figurines with their electronic accessories that provide children with a playful way to communicate between remote locations. PlayPals is designed for children aged 5-8 to share multimedia experiences and virtual co-presence. We learned from our pilot study that embedding digital communication into existing play pattern enhances both remote play and communication.
TapTap: A Haptic Wearable for Asynchronous Distributed Touch Therapy
Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. TapTap: a haptic wearable for asynchronous distributed touch therapy. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 580-585.
DOI: http://doi.acm.org/10.1145/1125451.1125573
Abstract
TapTap is a wearable haptic system that allows nurturing human touch to be recorded, broadcast and played back for emotional therapy. Haptic input/output modules in a convenient modular scarf provide affectionate touch that can be personalized. We present a working prototype informed by a pilot study.
Beyond Record and Play: Backpacks: Tangible Modulators for Kinetic Behavior
Raffle, H., Parkes, A., Ishii, H., and Lifton, J. 2006. Beyond record and play: backpacks: tangible modulators for kinetic behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ‘06. ACM, New York, NY, 681-690.
DOI: http://doi.acm.org/10.1145/1124772.1124874
Abstract
Digital Manipulatives embed computation in familiar children’s toys and provide means for children to design behavior. Some systems use “record and play” as a form of programming by demonstration that is intuitive and easy to learn. With others, children write symbolic programs with a GUI and download them into a toy, an approach that is conceptually extensible, but is inconsistent with the physicality of educational manipulatives. The challenge we address is to create a tangible interface that can retain the immediacy and emotional engagement of “record and play” and incorporate a mechanism for real time and direct modulation of behavior during program execution. We introduce the Backpacks, modular physical components that children can incorporate into robotic creations to modulate frequency, amplitude, phase and orientation of motion recordings. Using Backpacks, children can investigate basic kinematic principles that underlie why their specific creations exhibit the specific behaviors they observe. We demonstrate that Backpacks make tangible some of the benefits of symbolic abstraction, and introduce sensors, feedback and behavior modulation to the record and play paradigm. Through our review of user studies with children ages 6-15, we argue that Backpacks extend the conceptual limits of record and play with an interface that is consistent with both the physicality of educational manipulatives and the local-global systems dynamics that are characteristic of complex robots.
2005
Designing the “World as your Palette”
Ryokai, K., Marti, S., and Ishii, H. 2005. Designing the world as your palette. In CHI ‘05 Extended Abstracts on Human Factors in Computing Systems (Portland, OR, USA, April 02 – 07, 2005). CHI ‘05. ACM, New York, NY, 1037-1049.
DOI: http://doi.acm.org/10.1145/1056808.1056816
Abstract
“The World as your Palette” is our ongoing effort to design and develop tools to allow artists to create visual art projects with elements (specifically, the color, texture, and moving patterns) extracted directly from their personal objects and their immediate environment. Our tool called “I/O Brush” looks like a regular physical paintbrush, but contains a video camera, lights, and touch sensors. Outside of the drawing canvas, the brush can pick up colors, textures, and movements of a brushed surface. On the canvas, artists can draw with the special “ink” they just picked up from their immediate environment. We describe the evolution and development of our system, from kindergarten classrooms to an art museum, as well as the reactions of our users to the growing expressive capabilities of our brush, as an iterative design process.
The world as a palette : painting with attributes of the environment
Kimiko Ryokai. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
2004
Bottles: A Transparent Interface as a Tribute to Mark Weiser
Hiroshi Ishii, “Bottles: A Transparent Interface as a Tribute to Mark Weiser.” IEICE TRANSACTIONS on Information and Systems Vol.E87-D No.6 pp.1299-1311
Abstract
This paper first discusses the misinterpretation of the concept of “ubiquitous computing” that Mark Weiser originally proposed in 1991. Weiser’s main message was not the ubiquity of computers, but the transparency of interface that determines users’ perception of digital technologies embedded seamlessly in our physical environment. To explore Weiser’s philosophy of transparency in interfaces, this paper presents the design of an interface that uses glass bottles as “containers” and “controls” for digital information. The metaphor is a perfume bottle: Instead of scent, the bottles have been filled with music — classical, jazz, and techno music. Opening each bottle releases the sound of a specific instrument accompanied by dynamic colored light. Physical manipulation of the bottles — opening and closing — is the primary mode of interaction for controlling their musical contents. The bottles illustrate Mark Weiser’s vision of the transparent (or invisible) interface that weaves itself into the fabric of everyday life. The bottles also exploit the emotional aspects of glass bottles that are tangible and visual, and evoke the smell of perfume and the taste of exotic beverages. This paper describes the design goals of the bottle interface, the arrangement of musical content, the implementation of the wireless electromagnetic tag technology, and the feedback from users who have played with the system.
Topobo: A 3-D Constructive Assembly System with Kinetic Memory
Hayes Solos Raffle. Topobo: A 3-D Constructive Assembly System with Kinetic Memory. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.
Tangible User Interfaces (TUIs): A Novel Paradigm for GIS
Ratti, C., Wang, Y., Ishii, H., Piper, B., Frenchman, D., “Tangible User Interfaces (TUIs): A Novel Paradigm for GIS,” Trans. GIS, vol. 8, no. 4, 2004, pp. 407–421.
DOI: http://dx.doi.org/10.1111/j.1467-9671.2004.00193.x
Abstract
In recent years, an increasing amount of effort has gone into the design of GIS user interfaces. On the one hand, Graphical User Interfaces (GUIs) with a high degree of sophistication have replaced line-driven commands of first-generation GIS. On the other hand, a number of alternative approaches have been suggested, most notably those based on Virtual Environments (VEs). In this paper we discuss a novel interface for GIS, which springs from recent work carried out in the field of Tangible User Interfaces (TUIs). The philosophy behind TUIs is to allow people to interact with computers via familiar tangible objects, therefore taking advantage of the richness of the tactile world combined with the power of numerical simulations. Two experimental systems, named Illuminating Clay and SandScape, are described here and their applications to GIS are examined. Conclusions suggest that these interfaces might streamline the landscape design process and result in a more effective use of GIS, especially when distributed decision-making and discussion with non-experts are involved.
egaku: Enhancing the Sketching Process
Yoon, J., Ryokai, K., Dyner, C., Alonso, J., and Ishii, H. 2004. egaku: enhancing the sketching process. In ACM SIGGRAPH 2004 Posters (Los Angeles, California, August 08 – 12, 2004). R. Barzel, Ed. SIGGRAPH ‘04. ACM, New York, NY, 42.
DOI: http://doi.acm.org/10.1145/1186415.1186464
Abstract
egaku is a tabletop user interface designed to enhance the ideation process with seamless image management tools. Designers sketch ideas as the system captures high-resolution images of the sketches and organizes them in a transparent image management structure. The system’s ability to determine and recognize layer associations allows users to quickly and intuitively visualize, retrieve, navigate through, and switch between layers of information without the hassle of traversing through multiple sheets of paper.
With its strong emphasis on maintaining and enhancing the natural affordances of physical tracing paper, egaku allows users to overlay multiple digital translucent images to compose and compare different designs.
Phoxel-Space: an Interface for Exploring Volumetric Data with Physical Voxels
Ratti, C., Wang, Y., Piper, B., Ishii, H., and Biderman, A. 2004. PHOXEL-SPACE: an interface for exploring volumetric data with physical voxels. In Proceedings of the 5th Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Cambridge, MA, USA, August 01 – 04, 2004). DIS ‘04. ACM, New York, NY, 289-296.
DOI: http://doi.acm.org/10.1145/1013115.1013156
Abstract
Phoxel-Space is an interface to enable the exploration of voxel data through the use of physical models and materials. Our goal is to improve the means to intuitively navigate and understand complex 3-dimensional datasets. The system works by allowing the user to define a free form geometry that can be utilized as a cutting surface with which to intersect a voxel dataset. The intersected voxel values are projected back onto the surface of the physical material. The paper describes how the interface approach builds on previous graphical, virtual and tangible interface approaches and how Phoxel-Space can be used as a representational aid in the example application domains of biomedicine, geophysics and fluid dynamics simulation.
I/O Brush: Drawing with Everyday Objects as Ink
Ryokai, K., Marti, S., and Ishii, H. 2004. I/O brush: drawing with everyday objects as ink. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ‘04. ACM, New York, NY, 303-310.
DOI: http://doi.acm.org/10.1145/985692.985731
Abstract
We introduce I/O Brush, a new drawing tool aimed at young children, ages four and up, to explore colors, textures, and movements found in everyday materials by “picking up” and drawing with them. I/O Brush looks like a regular physical paintbrush but has a small video camera with lights and touch sensors embedded inside. Outside of the drawing canvas, the brush can pick up color, texture, and movement of a brushed surface. On the canvas, children can draw with the special “ink” they just picked up from their immediate environment. In our study with kindergarteners, we found that children not only produced complex works of art using I/O Brush, but they also engaged in explicit talk about patterns and features available in their environment. I/O Brush invites children to explore the transformation from concrete and familiar raw material into abstract concepts about patterns of colors, textures and movements.
Topobo: A Constructive Assembly System with Kinetic Memory
Raffle, H. S., Parkes, A. J., and Ishii, H. 2004. Topobo: a constructive assembly system with kinetic memory. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ‘04. ACM, New York, NY, 647-654.
DOI: http://doi.acm.org/10.1145/985692.985774
Abstract
We introduce Topobo, a 3D constructive assembly system embedded with kinetic memory, the ability to record and playback physical motion. Unique among modeling systems is Topobo’s coincident physical input and output behaviors. By snapping together a combination of Passive (static) and Active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons with Topobo, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements and walk repeatedly.
Bringing clay and sand into digital design — continuous tangible user interfaces
Ishii, H., Ratti, C., Piper, B., Wang, Y., Biderman, A., and Ben-Joseph, E. 2004. Bringing Clay and Sand into Digital Design — Continuous Tangible user Interfaces. BT Technology Journal 22, 4 (Oct. 2004), 287-299.
DOI: http://dx.doi.org/10.1023/B:BTTJ.0000047607.16164.16
Abstract
Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and decision-making by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper presents a new generation of TUIs that enable dynamic sculpting and computational analysis using digitally augmented continuous physical materials. These new types of TUI, which we have termed ‘Continuous TUIs’, offer rapid form giving in combination with computational feedback. Two experimental systems and their applications in the domain of landscape architecture are discussed here, namely ‘Illuminating Clay’ and ‘SandScape’. Our results suggest that by exploiting the physical properties of continuous soft materials such as clay and sand, it is possible to bridge the division between physical and digital forms and potentially to revolutionise the current design process.
Super Cilia Skin: A Textural Interface
Raffle, H., Tichenor, J., Ishii, H. 2004. Super Cilia Skin: A Textural Interface. Textile, Volume 2, Issue 3, pp. 1–19.
Abstract
Super Cilia Skin is a literal membrane separating a computer from its environment. Like our skin, it is a haptic I/O membrane that can sense and simulate movement and wind flow. Our intention is to have it be universally applied to sheath any surface. As a display, it can mimic another person’s gesture over a distance via a form of tangible telepresence. A hand-sized interface covered with Super Cilia Skin would produce subtle changes in surface texture that feel much like a telepresent “butterfly kiss.”
Topobo: A Gestural Design Tool with Kinetic Memory
Amanda Parkes. Topobo: A Gestural Design Tool with Kinetic Memory. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.
DOI: http://hdl.handle.net/1721.1/28768
Abstract
The modeling of kinetic systems, both in physical materials and virtual simulations, provides a methodology to better understand and explore the forces and dynamics of our physical environment. The need to experiment, prototype and model with programmable kinetic forms is becoming increasingly important as digital technology becomes more readily embedded in physical structures and provides real-time variable data the capacity to transform the structures themselves. This thesis introduces Topobo, a gestural design tool embedded with kinetic memory—the ability to record, playback, and transform physical motion in three dimensional space. As a set of kinetic building blocks, Topobo records and repeats the body’s gesture while the system’s peer-to-peer networking scheme provides the capability to pass and transform a gesture. This creates a means to represent and understand algorithmic simulations in a physical material, providing a physical demonstration of how a simple set of rules can lead to complex form and behavior. Topobo takes advantage of the editability of computer data combined with the physical immediacy of a tangible model to provide a means for expression and investigation of kinetic patterns and processes not possible with existing materials.
PINS: A Haptic Computer Interface System
Bradley Carter Kaanta. PINS: a haptic computer interface system. Thesis (M. Eng. and S.B.)—Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
2003
musicBottles Manual for Samsung Exhibition July 2003
Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries
Ullmer B, Ishii H, Jacob R.J.K (2003) Tangible query interfaces: physically constrained tokens for manipulating database queries. In: Proceedings of the 9th IFIP international conference on human-computer interaction (INTERACT 2003), Zurich, Switzerland, September 2003.
Abstract
We present a new approach for using physically constrained tokens to express, manipulate, and visualize parameterized database queries. This method extends tangible interfaces to enable interaction with large aggregates of information. We describe two interface prototypes that use physical tokens to represent database parameters. These tokens are manipulated upon physical constraints, which map compositions of tokens onto interpretations including database queries, views, and Boolean operations. We propose a framework for “token + constraint” interfaces, and compare one of our prototypes with a comparable graphical interface in a preliminary user study.
Super Cilia Skin: An Interactive Membrane
Raffle, H., Joachim, M. W., and Tichenor, J. 2003. Super cilia skin: an interactive membrane. In CHI ‘03 Extended Abstracts on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 – 10, 2003). CHI ‘03. ACM, New York, NY, 808-809.
DOI: http://doi.acm.org/10.1145/765891.766004
Abstract
Super Cilia Skin is a literal membrane separating a computer from its environment. Like our skin, it is a haptic I/O membrane that can sense and simulate movement and wind flow. Our intention is to have it be universally applied to sheath any surface. As a display, it can mimic another person’s gesture over a distance via a form of tangible telepresence. A hand-sized interface covered with Super Cilia Skin would produce subtle changes in surface texture that feel much like a telepresent “butterfly kiss.”
Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces
Daniel Maynes-Aminzade. Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.
The actuated workbench : 2D actuation in tabletop tangible interfaces
Gian Antonio Pangaro. The actuated workbench : 2D actuation in tabletop tangible interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.
DOI: http://hdl.handle.net/1721.1/17620
Abstract
The Actuated Workbench is a new actuation mechanism that uses magnetic forces to control the two-dimensional movement of physical objects on flat surfaces. This mechanism is intended for use with existing tabletop Tangible User Interfaces, providing computer-controlled movement of the physical objects on the table, and creating an additional feedback layer for Human Computer Interaction (HCI). Use of this actuation technique makes possible new kinds of physical interactions with tabletop interfaces, and allows the computer to maintain consistency between the physical and digital states of data objects in the interface. This thesis focuses on the design and implementation of the actuation mechanism as an enabling technology, introduces new techniques for motion control, and discusses practical and theoretical implications of computer-controlled movement of physical objects in tabletop tangible interfaces.
2002
The Actuated Workbench: Computer-Controlled Actuation in Tabletop Tangible Interfaces
Pangaro, G., Maynes-Aminzade, D., Ishii, H. 2003. The Actuated Workbench: Computer-Controlled Actuation in Tabletop Tangible Interfaces. ACM Trans. Graph. 22, 3 (Jul. 2003), 699.
DOI: http://doi.acm.org/10.1145/882262.882330
Abstract
The Actuated Workbench is a device that uses magnetic forces to move objects on a table in two dimensions. It is intended for use with existing tabletop tangible interfaces, providing an additional feedback loop for computer output, and helping to resolve inconsistencies that otherwise arise from the computer’s inability to move objects on the table.
ComTouch: A Vibrotactile Communication Device
Angela Chang. ComTouch: A Vibrotactile Mobile Communication Device. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
Tangible Interfaces for Manipulating Aggregates of Digital Information
Brygg Ullmer. Tangible Interfaces for Manipulating Aggregates of Digital Information. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis
Ben Piper. The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
Hover: Conveying Remote Presence
Maynes-Aminzade, D., Tan, B., Goulding, K., and Vaucelle, C. 2002. Hover: conveying remote presence. In ACM SIGGRAPH 2002 Conference Abstracts and Applications (San Antonio, Texas, July 21 – 26, 2002). SIGGRAPH ‘02. ACM, New York, NY, 194-194.
DOI: http://doi.acm.org/10.1145/1242073.1242207
Abstract
This sketch presents Hover, a device that enhances remote telecommunication by providing a sense of the activity and presence of remote users. The motion of a remote persona is manifested as the playful movements of a ball floating in midair. Hover is both a communication medium and an aesthetic object.
Audiopad: A Tag Based Interface for Musical Performance
Patten, J., Recht, B., and Ishii, H. 2002. Audiopad: a tag-based interface for musical performance. In Proceedings of the 2002 Conference on New Interfaces for Musical Expression (Dublin, Ireland, May 24 – 26, 2002). E. Brazil, Ed. New Interfaces for Musical Expression. National University of Singapore, Singapore, 1-6.
Abstract
We present Audiopad, an interface for musical performance that aims to combine the modularity of knob-based controllers with the expressive character of multidimensional tracking interfaces. The performer’s manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process.
Illuminating Clay: A Tangible Interface with potential GRASS applications
Piper B., Ratti C., Ishii H., 2002, Illuminating Clay: a tangible interface with potential GRASS applications. Proceedings of the open-source GIS – GRASS users conference, Trento, Italy, September 2002.
Abstract
This paper introduces Illuminating Clay, an alternative interface for manipulating and navigating landscape representations that has been designed according to the specific needs of the landscape analyst.
ComTouch: A Vibrotactile Communication Device
Chang, A., O’Modhrain, S., Jacob, R., Gunther, E., and Ishii, H. 2002. ComTouch: design of a vibrotactile communication device. In Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (London, England, June 25 – 28, 2002). DIS ‘02. ACM, New York, NY, 312-320.
DOI: http://doi.acm.org/10.1145/778712.778755
Abstract
We describe the design of ComTouch, a device that augments remote voice communication with touch, by converting hand pressure into vibrational intensity between users in real-time. The goal of this work is to enrich inter-personal communication by complementing voice with a tactile channel. We present preliminary user studies performed on 24 people to observe possible uses of the tactile channel when used in conjunction with audio. By recording and examining both audio and tactile data, we found strong relationships between the two communication channels. Our studies show that users developed an encoding system similar to that of Morse code, as well as three original uses: emphasis, mimicry, and turn-taking. We demonstrate the potential of the tactile channel to enhance the existing voice communication channel.
Bottles: Design of Transparent Interface for Accessing Digital Information
Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis
Piper, B., Ratti, C., and Ishii, H. 2002. Illuminating clay: a 3-D tangible interface for landscape analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 355-362.
DOI: http://doi.acm.org/10.1145/503376.503439
Abstract
This paper describes a novel system for the real-time computational analysis of landscape models. Users of the system – called Illuminating Clay – alter the topography of a clay landscape model while the changing geometry is captured in real-time by a ceiling-mounted laser scanner. A depth image of the model serves as an input to a library of landscape analysis functions. The results of this analysis are projected back into the workspace and registered with the surfaces of the model. We describe a scenario for which this kind of tool has been developed and we review past work that has taken a similar approach. We describe our system architecture and highlight specific technical issues in its implementation. We conclude with a discussion of the benefits of the system in combining the tangible immediacy of physical models with the dynamic capabilities of computational simulations.
A Tangible Interface for Organizing Information Using a Grid
Jacob, R. J., Ishii, H., Pangaro, G., and Patten, J. 2002. A tangible interface for organizing information using a grid. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 339-346.
DOI: http://doi.acm.org/10.1145/503376.503437
Abstract
The task of organizing information is typically performed either by physically manipulating note cards or sticky notes or by arranging icons on a computer with a graphical user interface. We present a new tangible interface platform for manipulating discrete pieces of abstract information, which attempts to combine the benefits of each of these two alternatives into a single system. We developed interaction techniques and an example application for organizing conference papers. We assessed the effectiveness of our system by experimentally comparing it to both graphical and paper interfaces. The results suggest that our tangible interface can provide a more effective means of organizing, grouping, and manipulating data than either physical operations or graphical computer interaction alone.
Dolltalk: a computational toy to enhance children’s creativity
Vaucelle, C. and Jehan, T. 2002. Dolltalk: a computational toy to enhance children’s creativity. In CHI ‘02 Extended Abstracts on Human Factors in Computing Systems (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 776-777.
DOI: http://doi.acm.org/10.1145/506443.506592
Abstract
This paper presents a novel approach and interface for encouraging children to tell and act out original stories. Dolltalk is a toy that simulates speech recognition by capturing the gestures and speech of a child. The toy then plays back a child’s pretend-play speech in altered voices representing the characters of the child’s story. Dolltalk’s tangible interface and ability to retell a child’s story may enhance a child’s creativity in narrative elaboration.
Dolltalk: A computational toy to enhance narrative perspective-taking
Cati Vaucelle. Dolltalk: A computational toy to enhance narrative perspective-taking. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation
Hiroshi Ishii, Eran Ben-Joseph, John Underkoffler, Luke Yeung, Dan Chak, Zahra Kanji, and Ben Piper. 2002. Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation. In Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR ‘02). IEEE Computer Society, Washington, DC, USA, 203-.
DOI: http://doi.ieeecomputersociety.org/10.1109/ISMAR.2002.1115090
Abstract
There is a problem in the spatial and temporal separation between the varying forms of representation used in urban design. Sketches, physical models, and more recently computational simulation, while each serving a useful purpose, tend to be incompatible forms of representation. The contemporary designer is required to assimilate these divergent media into a single mental construct and, in so doing, is distracted from the central process of design. We propose an augmented reality workbench called “Luminous Table” that attempts to address this issue by integrating multiple forms of physical and digital representations. 2D drawings, 3D physical models, and digital simulation are overlaid into a single information space in order to support the urban design process. We describe how the system was used in a graduate design course and discuss how the simultaneous use of physical and digital media allowed for a more holistic design approach. We also discuss the need for future technical improvements.
2001
Urban Simulation and the Luminous Planning Table
Ben-Joseph, E., Ishii, H., Underkoffler, J., Piper, B. & Yeung, L. 2001. Urban Simulation and the Luminous Planning Table: Bridging the Gap between the Digital and the Tangible, Journal of Planning Education and Research, 21, 195-202.
DOI: http://dx.doi.org/10.1177/0739456X0102100207
Abstract
Multi-layered manipulative platforms that integrate digital and physical representations will have a significant impact on urban design and planning processes in the future. The usefulness of these platforms will be in their ability to combine and update digital and tangible data in seamless ways to enhance the design process of the professional and the communication process with the public. The Luminous Planning Table is one of the first prototypes that use a tangible computerized interface. The use of this system is unique in the design and presentation process in which, at the moment, the activity of viewing physical models and the viewing of animation and computerized simulations are separate. This ability to engage and provide an integrated medium for information delivery and understanding is promising in its pedagogical, professional, and public engagement outcomes.
Pinwheels: Visualizing Information Flow in an Architectural Space
Hiroshi Ishii, Sandia Ren, and Phil Frei. 2001. Pinwheels: visualizing information flow in an architectural space. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘01). ACM, New York, NY, USA, 111-112.
DOI: http://doi.acm.org/10.1145/634067.634135
Abstract
We envision that the architectural spaces we inhabit will become an interface between humans and online digital information. We have been designing ambient information displays to explore the use of kinetic physical objects to present information at the periphery of human perception.
This paper reports the design of a large-scale Pinwheels installation made of 40 computer-controlled pinwheel units in a museum context. The Pinwheels spin in a “wind of bits” that blows from cyberspace. The array of spinning pinwheels presents information within an architectural space through subtle changes in movement and sound.
We describe the iterative design and implementation of the Pinwheels, and discuss design issues.
Bottles as a minimal interface to access digital information
Hiroshi Ishii, Ali Mazalek, and Jay Lee. 2001. Bottles as a minimal interface to access digital information. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘01). ACM, New York, NY, USA, 187-188.
DOI: http://dx.doi.org/10.1145/634067.634180
Abstract
We present the design of a minimal interface to access digital information using glass bottles as “containers” and “controls”. The project illustrates our attempt to explore the transparency of an interface that weaves itself into the fabric of everyday life, and exploits the emotional aspects of glass bottles that are both tangible and visual. This paper describes the design of the bottle interface, and the implementation of the musicBottles installation, in which the opening of each bottle releases the sound of a specific instrument.
Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces
James Patten. Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
Tangible Interfaces for Interactive Point-of-View Narratives
Alexandra Mazalek. Tangible Interfaces for Interactive Point-of-View Narratives. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
Telling Tales: A new way to encourage written literacy through oral language
Mike Ananny. Telling Tales: A new way to encourage written literacy through oral language. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
genieBottles: An Interactive Narrative in Bottles
A. Mazalek, A. Wood, and H. Ishii. Geniebottles: An interactive narrative in bottles. In Conference Abstracts and Applications SIGGRAPH 2001, page 189, Los Angeles, California USA, August 2001.
LumiTouch: An Emotional Communication Device
Chang, A., Resner, B., Koerner, B., Wang, X., and Ishii, H. 2001. LumiTouch: an emotional communication device. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 313-314.
DOI: http://doi.acm.org/10.1145/634067.634252
Abstract
We present the LumiTouch system, consisting of a pair of interactive picture frames. When one user touches her picture frame, the other picture frame lights up. This touch is translated to light over an Internet connection. We introduce a semi-ambient display that can transition seamlessly from periphery to foreground in addition to communicating emotional content. In addition to enhancing the communication between loved ones, people can use LumiTouch to develop a personal emotional language. Based upon prior work on telepresence and tangible interfaces, LumiTouch explores emotional communication in tangible form. This paper describes the components, interactions, implementation and design approach of the LumiTouch system.
The HomeBox: A Web Content Creation Tool for The Developing World
Piper, B. and Hwang, R. E. 2001. The HomeBox: a web content creation tool for the developing world. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 145-146.
DOI: http://doi.acm.org/10.1145/634067.634156
Abstract
This paper describes the implementation and testing of the HomeBox, a prototype that seeks to provide a cost-effective and scalable means for allowing users in the developing world to publish on the Web. It identifies the key requirements for such a design by drawing lessons from a variety of sources, including two studies of networked community projects in Africa and South America. It ends with a discussion of possible design developments and plans for field trials in the Dominican Republic.
Strata/ICC: Physical Models as Computational Interfaces
Ullmer, B., Kim, E., Kilian, A., Gray, S., and Ishii, H. 2001. Strata/ICC: physical models as computational interfaces. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 373-374.
DOI: http://doi.acm.org/10.1145/634067.634287
Abstract
We present Strata/ICC: a computationally-augmented physical model of a 54-story skyscraper that serves as an interactive display of electricity consumption, water consumption, network utilization, and other kinds of infrastructure. Our approach pushes information visualizations into the physical world, with a vision of transforming large-scale physical models into new kinds of interaction workspaces.
Designing Touch-based Communication Devices
Chang, A., Kanji, Z., Ishii, H. 2001. Designing Touch-based Communication Devices. CHI 2001 Workshop: Universal design: Towards universal access in the Information Society
DataTiles: A Modular Platform for Mixed Physical and Graphical Interactions
Rekimoto, J., Ullmer, B., and Oba, H. 2001. DataTiles: a modular platform for mixed physical and graphical interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI ‘01. ACM, New York, NY, 269-276.
DOI: http://doi.acm.org/10.1145/365024.365115
Abstract
The DataTiles system integrates the benefits of two major interaction paradigms: graphical and physical user interfaces. Tagged transparent tiles are used as modular construction units. These tiles are augmented by dynamic graphical information when they are placed on a sensor-enhanced flat panel display. They can be used independently or can be combined into more complex configurations, similar to the way language can express complex concepts through a sequence of simple words. In this paper, we discuss our design principles for mixing physical and graphical interface techniques, and describe the system architecture and example applications of the DataTiles system.
Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces
Patten, J., Ishii, H., Hines, J., and Pangaro, G. 2001. Sensetable: a wireless object tracking platform for tangible user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI ‘01. ACM, New York, NY, 253-260.
DOI: http://doi.acm.org/10.1145/365024.365112
Abstract
In this paper we present a system that electromagnetically tracks the positions and orientations of multiple wireless objects on a tabletop display surface. The system offers two types of improvements over existing tracking approaches such as computer vision. First, the system tracks objects quickly and accurately without susceptibility to occlusion or changes in lighting conditions. Second, the tracked objects have state that can be modified by attaching physical dials and modifiers. The system can detect these changes in real-time. We present several new interaction techniques developed in the context of this system. Finally, we present two applications of the system: chemistry and system dynamics simulation.
GeoSCAPE: designing a reconstructive tool for field archaeological excavation.
Jay Lee, Hiroshi Ishii, Blair Dunn, Victor Su, and Sandia Ren. 2001. GeoSCAPE: designing a reconstructive tool for field archaeological excavation. In CHI ‘01 extended abstracts on Human factors in computing systems (CHI EA ‘01). ACM, New York, NY, USA, 35-36.
DOI: http://dx.doi.org/10.1145/634067.634093
Abstract
We introduce GeoSCAPE, a “reconstructive” tool for capturing measurement data in field archaeology and facilitating a 3D visualization of an excavation rendered in computer graphics. The project extends a recently developed orientation-aware digital measuring tape, called HandSCAPE, which has been shown to bridge measuring and modeling efficiently for on-site application areas [2]. In this paper, we present the GeoSCAPE system, which pairs the same digital tape measure with archaeology-specific 3D visualizations. The goal is to provide visual reconstruction methods by acquiring accurate field measurements and visualizing the complex work of an archaeologist during the course of an on-site excavation.
2000
Emerging Frameworks for Tangible User Interfaces
Ullmer, B. and Ishii, H. 2000. Emerging frameworks for tangible user interfaces. IBM Syst. J. 39, 3-4 (Jul. 2000), 915-931.
Abstract
We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.
A Comparison of Spatial Organization Strategies in Graphical and Tangible User Interfaces
Patten, J. and Ishii, H. 2000. A comparison of spatial organization strategies in graphical and tangible user interfaces. In Proceedings of DARE 2000 on Designing Augmented Reality Environments (Elsinore, Denmark). DARE ‘00. ACM, New York, NY, 41-50.
DOI: http://doi.acm.org/10.1145/354666.354671
Abstract
We present a study comparing how people use space in a Tangible User Interface (TUI) and in a Graphical User Interface (GUI). We asked subjects to read ten summaries of recent news articles and to think about the relationships between them. In our TUI condition, we bound each of the summaries to one of ten visually identical wooden blocks. In our GUI condition, each summary was represented by an icon on the screen. We asked subjects to indicate the location of each summary by pointing to the corresponding icon or wooden block. Afterward, we interviewed them about the strategies they used to position the blocks or icons during the task.
We observed that TUI subjects performed better at the location recall task than GUI subjects. In addition, some TUI subjects used the spatial relationship between specific blocks and parts of the environment to help them remember the content of those blocks, while GUI subjects did not do this. Those TUI subjects who reported encoding information using this strategy tended to perform better at the recall task than those who did not.
curlybot: Designing a New Class of Computational Toys
Frei, P., Su, V., Mikhak, B., and Ishii, H. 2000. curlybot: designing a new class of computational toys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, April 01 – 06, 2000). CHI ‘00. ACM, New York, NY, 129-136.
DOI: http://doi.acm.org/10.1145/332040.332416
Abstract
We introduce an educational toy, called curlybot, as the basis for a new class of toys aimed at children in their early stages of development – ages four and up. curlybot is an autonomous two-wheeled vehicle with embedded electronics that can record how it has been moved on any flat surface and then play back that motion accurately and repeatedly. Children can use curlybot to develop intuitions for advanced mathematical and computational concepts, like differential geometry, through play away from a traditional computer.
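The record-and-playback behavior described above amounts to differential-drive odometry: the robot's path follows from the history of its two wheel speeds. As a hypothetical sketch (the sampling scheme, wheelbase value, and function names are illustrative assumptions, not details from the paper), recorded wheel speeds can be integrated into a sequence of poses:

```python
import math

def integrate_path(samples, wheelbase=0.05):
    """Differential-drive odometry sketch: turn recorded
    (dt, v_left, v_right) wheel-speed samples into (x, y, heading) poses.
    Units and wheelbase are illustrative assumptions."""
    x = y = theta = 0.0
    poses = [(x, y, theta)]
    for dt, vl, vr in samples:
        v = (vl + vr) / 2.0           # forward speed of the body center
        w = (vr - vl) / wheelbase     # angular speed from wheel difference
        theta += w * dt
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        poses.append((x, y, theta))
    return poses

# Equal wheel speeds drive straight; opposite speeds spin in place.
straight = integrate_path([(1.0, 0.1, 0.1)])
spin = integrate_path([(1.0, -0.05, 0.05)])
```

Replaying the same sample list through the robot's motors would reproduce the recorded gesture, which is the intuition the toy makes tangible.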
HandSCAPE: a vectorizing tape measure for on-site measuring applications
Jay Lee, Victor Su, Sandia Ren, and Hiroshi Ishii. 2000. HandSCAPE: a vectorizing tape measure for on-site measuring applications. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI ‘00). ACM, New York, NY, USA, 137-144.
DOI: http://dx.doi.org/10.1145/332040.332417
Abstract
We introduce HandSCAPE, an orientation-aware digital tape measure, as an input device for digitizing field measurements and visualizing the volume of the resulting vectors with computer graphics. Using embedded orientation-sensing hardware, HandSCAPE captures the direction of each linear measurement and transmits this data wirelessly to a remote computer in real-time. To guide the design, we closely studied the intended users, their tasks, and their physical workplaces to extract needs from the real world. In this paper, we first describe the potential utility of HandSCAPE for three on-site application areas: archaeological surveys, interior design, and storage space allocation. We then describe the overall system, which includes orientation sensing, vector calculation, and primitive modeling. With exploratory usage results, we conclude with interface design issues and future developments.
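The vector-calculation step the abstract mentions can be illustrated with basic trigonometry. As a rough sketch (the angle conventions and function names here are assumptions, not the paper's actual implementation), a tape-length reading combined with the device's pitch and yaw yields a 3D displacement vector:

```python
import math

def tape_vector(length, pitch_deg, yaw_deg):
    """Convert one tape reading plus device orientation into a 3D
    displacement (x, y, z). Pitch is elevation above the horizontal
    plane, yaw is heading in that plane -- illustrative conventions."""
    pitch = math.radians(pitch_deg)
    yaw = math.radians(yaw_deg)
    horiz = length * math.cos(pitch)   # projection onto the ground plane
    return (horiz * math.cos(yaw),
            horiz * math.sin(yaw),
            length * math.sin(pitch))  # vertical component

# A level measurement along the yaw axis; a vertical measurement.
level = tape_vector(2.0, 0.0, 0.0)
vertical = tape_vector(1.0, 90.0, 0.0)
```

Chaining such vectors end to end is what would let successive measurements build up a wireframe model of a room or trench.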
1999
musicBottles
Abstract
musicBottles introduces a tangible interface that deploys bottles as containers and controls for digital information. The system consists of a specially designed table and three corked bottles that “contain” the sounds of the violin, the cello, and the piano in Édouard Lalo’s Piano Trio in C Minor, Op. 7. Custom-designed electromagnetic tags embedded in the bottles enable each one to be wirelessly identified. When a bottle is placed onto the stage area of the table and the cork is removed, the corresponding instrument becomes audible. A pattern of colored light is rear-projected onto the table’s translucent surface to reflect changes in pitch and volume. The interface allows users to structure the experience of the musical composition by physically manipulating the different sound tracks.
The Design of Personal Ambient Displays
Craig Wisneski. The Design of Personal Ambient Displays. Thesis (S.M.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999.
The Design and Implementation of inTouch: A Distributed, Haptic Communication System
Victor Su. The Design and Implementation of inTouch: A Distributed, Haptic Communication System. Thesis (M. Eng. and S.B.)—Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
The I/O Bulb and the Luminous Room
John Underkoffler, The I/O Bulb and the Luminous Room, Thesis (Ph.D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts & Sciences, 1999.
Emancipated Pixels: Real-World Graphics in the Luminous Room
Underkoffler, J., Ullmer, B., and Ishii, H. 1999. Emancipated pixels: real-world graphics in the luminous room. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ‘99). ACM Press/Addison-Wesley Publishing Co., New York, NY, 385-392.
DOI: http://doi.acm.org/10.1145/311535.311593
Abstract
We describe a conceptual infrastructure, the Luminous Room, for providing graphical display and interaction at each of an interior architectural space’s various surfaces, arguing that pervasive environmental output and input is one natural heir to today’s rather more limited notion of spatially-confined, output-only display (the CRT). We discuss the requirements of such real-world graphics, including computational & networking demands; schemes for spatially omnipresent capture and display; and issues of design and interaction that emerge under these new circumstances. These discussions are both illustrated and motivated by five particular applications that have been built for a real, experimental Luminous Room space, and by details of the current technical approach to its construction (involving a two-way optical transducer called an I/O Bulb that projects and captures pixels).
Urp: A Luminous-Tangible Workbench for Urban Planning and Design
Underkoffler, J. and Ishii, H. 1999. Urp: a luminous-tangible workbench for urban planning and design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: the CHI Is the Limit (Pittsburgh, Pennsylvania, United States, May 15 – 20, 1999). CHI ‘99. ACM, New York, NY, 386-393.
DOI: http://doi.acm.org/10.1145/302979.303114
Abstract
We introduce a system for urban planning – called Urp – that integrates functions addressing a broad range of the field’s concerns into a single, physically based workbench setting. The I/O Bulb infrastructure on which the application is based allows physical architectural models placed on an ordinary table surface to cast shadows accurate for arbitrary times of day; to throw reflections off glass facade surfaces; to affect a real-time and visually coincident simulation of pedestrian-level windflow; and so on.
We then use comparisons among Urp and several earlier I/O Bulb applications as the basis for an understanding of luminous-tangible interactions, which result whenever an interface distributes meaning and functionality between physical objects and visual information projectively coupled to those objects. Finally, we briefly discuss two issues common to all such systems, offering them as informal thought-tools for the design and analysis of luminous-tangible interfaces.
Towards the Distributed Visualization of Usage History
Paul Yarin. Towards the Distributed Visualization of Usage History. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999.
Curlybot
Phil Frei, Victor Su, and Hiroshi Ishii. 1999. Curlybot. In ACM SIGGRAPH 99 Conference Abstracts and Applications (SIGGRAPH ‘99). ACM, New York, NY, USA, 173-.
DOI: http://doi.acm.org/10.1145/311625.311972
PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play
Hiroshi Ishii, Craig Wisneski, Julian Orbanes, Ben Chun, and Joe Paradiso. 1999. PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play. In Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit (CHI ‘99). ACM, New York, NY, USA, 394-401.
DOI: http://doi.acm.org/10.1145/302979.303115
1998
The Last Farewell: Traces of Physical Presence
Ishii, H. 1998. Reflections: “The last farewell”: traces of physical presence. interactions 5, 4 (Jul. 1998), 56-ff.
DOI: http://doi.acm.org/10.1145/278465.278474
Abstract
In the Spring of 1995, I was finally able to realize a dream that I’d held for quite a number of years; I was able to visit Hanamaki village, the home of the famous author Miyazawa Kenji. Before leaving Japan, I had wanted to see Kenji’s “World of Efertobe” once with my own eyes.
Designing Kinetic Objects for Digital Information Display
Andy Dahley. Designing Kinetic Objects for Digital Information Display. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1998.
Abstract
We have access to more and more information from computer networks. However, the means of monitoring this changing information is limited by its access through the narrow window of a computer screen. The interactions between people and digital information are now almost entirely confined to the conventional GUI (Graphical User Interface) comprised of a keyboard, monitor, and mouse, largely ignoring the richness of the physical world.
As a critical step in moving beyond current interface limitations, this research attempts to use many parts of our environment to convey information in a variety of ways. Rather than adding more video terminals into an environment, this thesis examines how to move information off the screen into our physical environment, where it is manifested in a more physical and kinetic manner. The thesis explores how these kinetic objects can be used to display information on a more visceral cognitive level than afforded by the interfaces of generalized information appliances like the computer.
The approach in this thesis is through several exploratory design studies. A geography of the design space of kinetic objects as digital information displays was developed through this series of design studies so that it can be used in the development of future kinetic displays.
Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects
Matthew Gorbet. Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.
DOI: http://hdl.handle.net/1721.1/29138
Abstract
This work introduces the concept of physical-digital objects: physical objects which allow people to interact with digital information as though it were tangible. I treat the design of physical-digital objects as a new field, and establish a conceptual framework within which to approach this design task. My design strategy treats objects as having both a physical and a digital identity, related to one another by three design principles: coupling, transparency, and mapping. With these principles as a guide, designers can take advantage of emerging digital technologies to create entirely new physical-digital objects. This new design perspective encourages a conceptual shift away from discrete input and output devices as gateways to a digital world, and towards a more seamless interaction with information, enabled by our knowledge and understanding of the physical world. I illustrate this by introducing and discussing seven actual physical-digital object systems, including two which I developed: Bottles and Triangles.
Tangible Interfaces for Remote Communication and Collaboration
Scott Brave. Tangible Interfaces for Remote Communication and Collaboration. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.
Abstract
This thesis presents inTouch, a new device enabling long distance communication
through touch. inTouch is based on a concept called Synchronized Distributed Physical
Objects, which employs telemanipulation technology to create the illusion that distant
users are interacting with a shared physical object. I discuss the design and prototype
implementation of inTouch, along with control strategies for extending the physical link
over an arbitrary distance. User reactions to the prototype system suggest many
similarities to direct touch interactions while, at the same time, point to new possibilities
for object-mediated touch communication. I also present two initial experiments that
begin to explore more objective properties of the haptic communication channel provided
by inTouch and develop analysis techniques for future investigations.
mediaBlocks: Physical Containers, Transports, and Controls for Online Media
Ullmer, B., Ishii, H., and Glas, D. 1998. mediaBlocks: physical containers, transports, and controls for online media. In Proceedings of the 25th Annual Conference on Computer Graphics and interactive Techniques SIGGRAPH ‘98. ACM, New York, NY, 379-386.
DOI: http://doi.acm.org/10.1145/280814.280940
Abstract
We present a tangible user interface based upon mediaBlocks: small, electronically tagged wooden blocks that serve as physical icons (“phicons”) for the containment, transport, and manipulation of online media. MediaBlocks interface with media input and output devices such as video cameras and projectors, allowing digital media to be rapidly “copied” from a media source and “pasted” into a media display. MediaBlocks are also compatible with traditional GUIs, providing seamless gateways between tangible and graphical interfaces. Finally, mediaBlocks act as physical “controls” in tangible interfaces for tasks such as sequencing collections of media elements.
Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information
Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., Yarin, P. Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. CoBuild 1998.
Abstract
We envision that the physical architectural space we inhabit will be a new form of interface between humans and digital information. This paper and video present the design of the ambientROOM, an interface to information for processing in the background of awareness. This information is displayed through various subtle displays of light, sound, and movement. Physical objects are also employed as controls for these “ambient media”.
Tangible Interfaces for Remote Collaboration and Communication
Scott Brave, Hiroshi Ishii, and Andrew Dahley. 1998. Tangible interfaces for remote collaboration and communication. In Proceedings of the 1998 ACM conference on Computer supported cooperative work (CSCW ‘98). ACM, New York, NY, USA, 169-178.
DOI: http://dl.acm.org/citation.cfm?doid=289444.289491
Abstract
Current systems for real-time distributed CSCW are largely rooted in traditional GUI-based groupware and voice/video conferencing methodologies. In these approaches, interactions are limited to visual and auditory media, and shared environments are confined to the digital world. This paper presents a new approach to enhance remote collaboration and communication, based on the idea of Tangible Interfaces, which places a greater emphasis on touch and physicality. The approach is grounded in a concept called Synchronized Distributed Physical Objects, which employs telemanipulation technology to create the illusion that distant users are interacting with shared physical objects. We describe two applications of this approach: PSyBench, a physical shared workspace, and inTouch, a device for haptic interpersonal communication.
ambientROOM: Integrating Ambient Media with Architectural Space
Ishii, H., Wisneski, C., Brave, S., Dahley, A., Gorbet, M., Ullmer, B., and Yarin, P. 1998. ambientROOM: integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). CHI ‘98. ACM, New York, NY, 173-174.
DOI: http://doi.acm.org/10.1145/286498.286652
Abstract
We envision that the physical architectural space we inhabit will be a new form of interface between humans and digital information. This paper and video present the design of the ambientROOM, an interface to information for processing in the background of awareness. This information is displayed through various subtle displays of light, sound, and movement. Physical objects are also employed as controls for these “ambient media”.
Illuminating Light: An Optical Design Tool with a Luminous-Tangible Interface
Underkoffler, J. and Ishii, H. 1998. Illuminating light: an optical design tool with a luminous-tangible interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 542-549.
DOI: http://doi.acm.org/10.1145/274644.274717
Abstract
We describe a novel system for rapid prototyping of laser-based optical and holographic layouts. Users of this optical prototyping tool – called the Illuminating Light system – move physical representations of various optical elements about a workspace, while the system tracks these components and projects back onto the workspace surface the simulated propagation of laser light through the evolving layout. This application is built atop the Luminous Room infrastructure, an aggregate of interlinked, computer-controlled projector-camera units called I/O Bulbs. Philosophically, the work embodies the emerging ideas of the Luminous Room and builds on the notions of “graspable media”.
We briefly introduce the I/O Bulb and Luminous Room concepts and discuss their current implementations. After an overview of the optical domain that the Illuminating Light system is designed to address, we present the overall system design and implementation, including that of an intermediary toolkit called voodoo which provides a general facility for object identification and tracking.
Triangles: Tangible Interface for Manipulation and Exploration of Digital Information Topography
Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56.
DOI: http://doi.acm.org/10.1145/274644.274652
Abstract
This paper presents a system for interacting with digital information, called Triangles. The Triangles system is a physical/digital construction kit, which allows users to use two hands to grasp and manipulate complex digital information. The kit consists of a set of identical flat, plastic triangles, each with a microprocessor inside and magnetic edge connectors. The connectors enable the Triangles to be physically connected to each other and provide tactile feedback of these connections. The connectors also pass electricity, allowing the Triangles to communicate digital information to each other and to a desktop computer. When the pieces contact one another, specific connection information is sent back to a computer that keeps track of the configuration of the system.
1997
The metaDESK: Models and Prototypes for Tangible User Interfaces
Ullmer, B. and Ishii, H. 1997. The metaDESK: models and prototypes for tangible user interfaces. In Proceedings of the 10th Annual ACM Symposium on User interface Software and Technology (Banff, Alberta, Canada, October 14 – 17, 1997). UIST ‘97. ACM, New York, NY, 223-232.
DOI: http://doi.acm.org/10.1145/263407.263551
Abstract
The metaDESK is our first platform for exploring the design of tangible user interfaces. The metaDESK integrates multiple 2D and 3D graphic displays with an assortment of physical objects and instruments, sensed by an array of optical, mechanical, and electromagnetic field sensors. The metaDESK “brings to life” these physical objects and instruments as tangible interfaces to a range of graphically-intensive applications.
Using the metaDESK platform, we are studying issues such as a) the physical embodiment of GUI (graphical user interface) widgets such as icons, handles, and windows; b) the coupling of everyday physical objects with the digital information that pertains to them.
Models and Mechanisms for Tangible User Interfaces
Brygg Ullmer. Models and Mechanisms for Tangible User Interfaces. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997.
Abstract
Current human-computer interface design is dominated by the graphical user
interface approach, where users interact with graphical abstractions of virtual
interface devices through a few general-purpose input “peripherals.” The thesis
develops models and mechanisms for “tangible user interfaces” – user interfaces
which use physical objects, instruments, surfaces, and spaces as physical
interfaces to digital information. Prototype applications on three platforms – the
metaDESK, transBOARD, and ambientROOM – are introduced as examples of this
approach. These instances are used to generalize the “GUI widgetry,” “optical,”
and “containers and conduits” interface metaphors. The thesis also develops
engineering mechanisms called proxy-distributed or “proxdist” computation, which
provide a layered approach for integrating physical objects with diverse sensing,
display, communication, and computation capabilities into coherent interface
implementations. The combined research provides a vehicle for moving beyond
the keyboard, monitor, and pointer of current computer interfaces towards use of
the physical world itself as a kind of computationally-augmented interface.
Triangles: Design of a Physical/Digital Construction Kit
Gorbet, M. G. and Orth, M. 1997. Triangles: design of a physical/digital construction kit. In Proceedings of the 2nd Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Amsterdam, The Netherlands, August 18 – 20, 1997). S. Coles, Ed. DIS ‘97. ACM, New York, NY, 125-128.
DOI: http://doi.acm.org/10.1145/263552.263592
Abstract
This paper describes the design process and philosophy behind Triangles, a new physical computer interface in the form of a construction kit of identical, flat, plastic triangles. The triangles connect together both mechanically and electrically with magnetic, conducting connectors. When the pieces contact one another, information about the specific connection is passed through the conducting connectors to the computer. In this way, users can create both two- and three-dimensional objects whose exact configuration is known by the computer. The physical connection of any two Triangles can also trigger specific events in the computer, creating a simple but powerful means for physically interacting with digital information. This paper will describe the Triangles system, its advantages and applications. It will also highlight the importance of collaborative and multidisciplinary design teams in the creation of new digital objects that bridge electrical engineering, industrial design, and software design, such as the Triangles.
inTouch: A Medium for Haptic Interpersonal Communication
Brave, S. and Dahley, A. 1997. inTouch: a medium for haptic interpersonal communication. In CHI ‘97 Extended Abstracts on Human Factors in Computing Systems: Looking To the Future (Atlanta, Georgia, March 22 – 27, 1997). CHI ‘97. ACM, New York, NY, 363-364.
DOI: http://doi.acm.org/10.1145/1120212.1120435
Abstract
In this paper, we introduce a new approach for applying
haptic feedback technology to interpersonal
communication. We present the design of our prototype
inTouch system which provides a physical link between
users separated by distance.
Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms
Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, United States, March 22 – 27, 1997). S. Pemberton, Ed. CHI ‘97. ACM, New York, NY, 234-241.
DOI: http://doi.acm.org/10.1145/258549.258715
Abstract
This paper presents our vision of Human Computer
Interaction (HCI): “Tangible Bits.” Tangible Bits allows
users to “grasp & manipulate” bits in the center of users’
attention by coupling the bits with everyday physical
objects and architectural surfaces. Tangible Bits also
enables users to be aware of background bits at the
periphery of human perception using ambient display media
such as light, sound, airflow, and water movement in an
augmented space. The goal of Tangible Bits is to bridge
the gaps between both cyberspace and the physical
environment, as well as the foreground and background of
human activities.
This paper describes three key concepts of Tangible Bits:
interactive surfaces; the coupling of bits with graspable
physical objects; and ambient media for background
awareness. We illustrate these concepts with three
prototype systems – the metaDESK, transBOARD and
ambientROOM – to identify underlying research issues.
1995
Bricks: Laying the Foundations for Graspable User Interfaces
Fitzmaurice, G. W., Ishii, H., and Buxton, W. A. 1995. Bricks: laying the foundations for graspable user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Denver, Colorado, United States, May 07 – 11, 1995). I. R. Katz, R. Mack, L. Marks, M. B. Rosson, and J. Nielsen, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 442-449.
DOI: http://doi.acm.org/10.1145/223904.223964
Abstract
We introduce the concept of Graspable User Interfaces that
allow direct control of electronic or virtual objects through
physical handles for control. These physical artifacts, which
we call “bricks,” are essentially new input devices that can
be tightly coupled or “attached” to virtual objects for
manipulation or for expressing action (e.g., to set parameters
or for initiating processes). Our bricks operate on top of a
large horizontal display surface known as the “ActiveDesk.”
We present four stages in the development of Graspable UIs:
(1) a series of exploratory studies on hand gestures and
grasping; (2) interaction simulations using mock-ups and
rapid prototyping tools; (3) a working prototype and sample
application called GraspDraw; and (4) the initial integrating
of the Graspable UI concepts into a commercial application.
Finally, we conclude by presenting a design space for Bricks
which lays the foundation for further exploring and
developing Graspable User Interfaces.
1994
Iterative Design of Seamless Collaboration Media
Hiroshi Ishii, Minoru Kobayashi, and Kazuho Arita. 1994. Iterative design of seamless collaboration media. Commun. ACM 37, 8 (August 1994), 83-97.
DOI: http://doi.acm.org/10.1145/179606.179687
1993
Integration of interpersonal space and shared workspace: ClearBoard design and experiments
Ishii, H., Kobayashi, M., and Grudin, J. 1993. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Trans. Inf. Syst. 11, 4 (Oct. 1993), 349-375.
DOI: http://doi.acm.org/10.1145/159764.159762
1992
ClearBoard: a seamless medium for shared drawing and conversation with eye contact
Ishii, H. and Kobayashi, M. 1992. ClearBoard: a seamless medium for shared drawing and conversation with eye contact. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Monterey, California, United States, May 03 – 07, 1992). P. Bauersfeld, J. Bennett, and G. Lynch, Eds. CHI ‘92. ACM, New York, NY, 525-532.
DOI: http://doi.acm.org/10.1145/142750.142977
Abstract
This paper introduces a novel shared drawing medium called ClearBoard. It realizes (1) a seamless shared drawing space and (2) eye contact to support realtime and remote collaboration by two users. We devised the key metaphor: “talking through and drawing on a transparent glass window” to design ClearBoard. A prototype of ClearBoard is implemented based on the “Drafter-Mirror” architecture. This paper first reviews previous work on shared drawing support to clarify the design goals. We then examine three metaphors that fulfill these goals. The design requirements and the two possible system architectures of ClearBoard are described. Finally, some findings gained through the experimental use of the prototype, including the feature of “gaze awareness”, are discussed.
Integration of inter-personal space and shared workspace: ClearBoard design and experiments
Ishii, H., Kobayashi, M., and Grudin, J. 1992. Integration of inter-personal space and shared workspace: ClearBoard design and experiments. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, Canada, November 01 – 04, 1992). CSCW ‘92. ACM, New York, NY, 33-42.
DOI: http://doi.acm.org/10.1145/143457.143459
Abstract
This paper describes the evolution of a novel shared drawing medium that permits co-workers in two different locations to draw with color markers or with electronic pens and software tools while maintaining direct eye contact and the ability to employ natural gestures. We describe the evolution from ClearBoard-1 (based on a video drawing technique) to ClearBoard-2 (which incorporates TeamPaint, a multi-user paint editor). Initial observations based on use and experimentation are reported. Further experiments are conducted with ClearBoard-0 (a simple mockup), with ClearBoard-1, and with an actual desktop as a control. These experiments verify the increase of eye contact and awareness of collaborator’s gaze direction in ClearBoard environments where workspace and co-worker images compete for attention.
1991
Toward An Open Shared Workspace: Computer and Video Fusion Approach of TeamWorkStation
Hiroshi Ishii and Naomi Miyake. 1991. Toward an open shared workspace: computer and video fusion approach of TeamWorkStation. Commun. ACM 34, 12 (December 1991), 37-50.
DOI: http://doi.acm.org/10.1145/125319.125321
Abstract
Groupware is intended to create a shared workspace that supports dynamic collaboration in a work group over space and time constraints. To gain the collective benefits of groupware use, the groupware must be accepted by a majority of workgroup members as a common tool. Groupware must overcome the hurdle of critical mass.
1990
TeamWorkStation: towards a seamless shared workspace
H. Ishii. 1990. TeamWorkStation: towards a seamless shared workspace. In Proceedings of the 1990 ACM conference on Computer-supported cooperative work (CSCW ‘90). ACM, New York, NY, USA, 13-26.
DOI: http://doi.acm.org/10.1145/99332.99337