Papers

2015

TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week

Hiroshi Ishii, Daniel Leithinger, Sean Follmer, Amit Zoran, Philipp Schoessler, and Jared Counts, “TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

TRANSFORM fuses technology and design to celebrate the transformation from a piece of static furniture to a dynamic machine driven by streams of data and energy. TRANSFORM aims to inspire viewers with unexpected transformations, as well as the aesthetics of a complex machine in motion. This paper describes the concept, engine, product, and motion design of TRANSFORM, which was first exhibited at LEXUS DESIGN AMAZING 2014 MILAN in April 2014.

TRANSFORM

THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens

Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2015. THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘15). ACM, New York, NY, USA, 89-96.

The huge influx of mobile display devices is transforming computing into multi-device interaction, demanding a fluid mechanism for using multiple devices in synergy. In this paper, we present a novel interaction system that allows a collocated large display and a small handheld device to work together. The smartphone acts as a physical interface for near-surface interactions on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. As a result, the smartphone can directly interact with data displayed on the host computer, with precisely aligned visual feedback from both devices. The possible interactions are described and classified in a framework, which we exemplify on the basis of several implemented applications. Finally, we present a technical evaluation and describe how our system is unique compared to other existing near-surface interaction systems. The proposed technique can be implemented on existing devices without the need for additional hardware, promising immediate integration into existing systems.
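
As a rough illustration of the tracking idea described above (and not the authors’ implementation), the sketch below assumes the host screen displays a gradient in which the red channel encodes x and the green channel encodes y; the phone then inverts that mapping from the average color its back-facing camera sees. The encoding, names, and scaling are illustrative assumptions only.

```python
# Hypothetical sketch of color-gradient position decoding in the spirit of THAW.
# Assumes the host screen shows a pattern whose red channel encodes x and whose
# green channel encodes y; the encoding, names, and scaling are illustrative only.

def encode_pattern_pixel(x, y, width, height):
    """Color of the host-screen pixel at (x, y): red ramps with x, green with y."""
    r = int(255 * x / (width - 1))
    g = int(255 * y / (height - 1))
    return (r, g, 0)

def decode_position(mean_rgb, width, height):
    """Invert the mapping from the average color seen by the phone's back camera."""
    r, g, _ = mean_rgb
    return (r / 255.0 * (width - 1), g / 255.0 * (height - 1))

if __name__ == "__main__":
    w, h = 1920, 1080
    # Pretend the camera saw this average color in the patch under the phone.
    observed = encode_pattern_pixel(960, 540, w, h)
    print(decode_position(observed, w, h))  # close to (960, 540), up to 8-bit quantization
```

A real implementation would additionally need to handle camera exposure, color calibration, and device rotation, and to align the visual feedback across both screens as the paper describes.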

THAW

Cord UIs: Controlling Devices with Augmented Cables

Philipp Schoessler, Sang-won Leigh, Krithika Jagannath, Patrick van Hoof, and Hiroshi Ishii. 2015. Cord UIs: Controlling Devices with Augmented Cables. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’15). ACM, New York, NY, USA.

Cord UIs are sensorially augmented cords that allow for simple metaphor-rich interactions to interface with their connected devices. Cords offer a large underexplored space for interactions as well as unique properties and a diverse set of metaphors that make them potentially interesting tangible interfaces. We use cords as input devices and explore different interactions like tying knots, stretching, pinching and kinking to control the flow of data and/or electricity. We also look at ways to use objects in combination with augmented cords to manipulate data or certain properties of a device. For instance, placing a clamp on a cable can obstruct the audio signal to the headphones. To test and evaluate our ideas, we built five working prototypes that showcase some of the interactions described in this paper as well as special materials such as piezo copolymer cables and stretchable cords.

Cord UIs

Sticky Actuator: Free-Form Planar Actuators for Animated Objects

Niiyama, R., Sun, X., Yao, L., Ishii, H., Rus, D., and Kim, S. Sticky Actuator: Free-Form Planar Actuators for Animated Objects. International Conference on Tangible, Embedded, and Embodied Interaction (TEI), ACM Press (2015), 77–84.

We propose soft planar actuators enhanced by free-form fabrication that are suitable for making everyday objects move. The actuator consists of one or more inflatable pouches with an adhesive back. We have developed a machine for the fabrication of free-form pouches; squares, circles and ribbons are all possible. The deformation of the pouches can provide linear, rotational, and more complicated motion corresponding to the pouch’s geometry. We also provide both a manual and a programmable control system. In a user study, we organized a hands-on workshop of actuated origami for children. The results show that the combination of the actuator and classic materials can enhance rapid prototyping of animated objects.
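
To make the programmable-control idea concrete, here is a minimal keyframe-style pouch controller sketch. The set_pump call is a hypothetical placeholder for whatever pump or valve driver the hardware exposes; this is not the paper’s control system.

```python
# Minimal sketch of a programmable pouch-motor controller in the spirit of
# Sticky Actuator. The hardware interface (set_pump) is a placeholder; a real
# system would drive pumps/valves, which this sketch only simulates by printing.
import time

def set_pump(channel, inflate):
    """Placeholder hardware call: switch the pump/valve for one pouch channel."""
    print(f"pouch {channel}: {'inflate' if inflate else 'vent'}")

def play_keyframes(keyframes):
    """Each keyframe is (seconds_to_hold, {channel: inflate_bool})."""
    for hold, states in keyframes:
        for channel, inflate in states.items():
            set_pump(channel, inflate)
        time.sleep(hold)

if __name__ == "__main__":
    # Flap an actuated-origami crane's wings by alternating two pouches.
    wing_flap = [(0.5, {0: True, 1: False}), (0.5, {0: False, 1: True})]
    play_keyframes(wing_flap * 3)
```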

Sticky Actuator

Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging

Viirj Kan, Katsuya Fujii, Judith Amores, Chang Long Zhu Jin, Pattie Maes, and Hiroshi Ishii. 2015. Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘15). ACM, New York, NY, USA, 619-624.

Wearable commodities are able to extend beyond the temporal span of a particular community event, offering omnipresent vehicles for producing icebreaking interaction opportunities. We introduce a novel platform, which generates social affordances to facilitate community organizers in aggregating social interaction among unacquainted, collocated members beyond initial hosted gatherings. To support these efforts, we present functional work-in-progress prototypes for Social Textiles, wearable computing textiles which enable social messaging and peripheral social awareness on non-emissive digitally linked shirts. The shirts serve as catalysts for different social depths as they reveal common interests (mediated by community organizers), based on the physical proximity of users. We provide 3 key scenarios, which demonstrate the user experience envisioned with our system. We present a conceptual framework, which shows how different community organizers across domains such as universities, brand communities and digital self-organized communities can benefit from our technology.

Social Textiles

bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces

Lining Yao, Jifei Ou, Chin-Yi Cheng, Helene Steiner, Wen Wang, Guanyun Wang, and Hiroshi Ishii. 2015. bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ‘15). ACM, New York, NY, USA, 1-10. DOI=10.1145/2702123.2702611 http://doi.acm.org/10.1145/2702123.2702611

Through scientific research in collaboration with biologists, we found that natto cells can contract and expand with changes in relative humidity. In this paper, we first describe the scientific discovery of natto cells as a biological actuator. Next, we expand on the technological developments that enable the translation between the nanoscale actuators and the macroscale interface design: the composite biofilm, the responsive structures, the control setup for actuating biofilms, and a simulation and fabrication platform. Finally, we provide a variety of application designs, with and without computer control, to demonstrate the potential of our bioactuators. Through this paper, we intend to encourage the use of natto cells and our platform technologies for the design of shape changing interfaces, and more generally, the use and research of biological materials in HCI.

BioLogic

TRANSFORM as Adaptive and Dynamic Furniture

Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp Schoessler, Amit Zoran and Hiroshi Ishii, “TRANSFORM as Adaptive and Dynamic Furniture,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

TRANSFORM is an exploration of how shape display technology can be integrated into our everyday lives as interactive, shape changing furniture. These interfaces not only serve as traditional computing devices, but also support a variety of physical activities. By creating shapes on demand or by moving objects around, TRANSFORM changes the ergonomics, functionality and aesthetic dimensions of furniture. The video depicts a story with various scenarios of how TRANSFORM shape shifts to support a variety of use cases in the home and in the work environment: It holds and moves objects like fruits, game tokens, office supplies and tablets; creates dividers on demand; and generates interactive sculptures to convey messages and audio.

TRANSFORM as Adaptive and Dynamic Furniture

Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick

Ken Nakagaki, Chikara Inamura, Pasquale Totaro, Thariq Shihipar, Chantine Akiyama, Yin Shuang and Hiroshi Ishii, “Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick,” CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

We use sticks as tools for a variety of activities, everything from conducting music to playing sports or even engaging in combat. However, these experiences are inherently physical and are poorly conveyed through traditional digital mediums such as video. Linked-Stick is a shape-changing stick that can mirror the movements of another person’s stick-shaped tool. We explore how this can be used to experience and learn music, sports and fiction in a more authentic manner. Our work attempts to expand the ways in which we interact with and learn to use tools.

Linked-Stick

2014

Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

Daniel Leithinger, Sean Follmer, Alex Olwal, and Hiroshi Ishii. 2014. Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST ‘14). ACM, New York, NY, USA, 461-470. DOI=10.1145/2642918.2647377 http://doi.acm.org/10.1145/2642918.2647377

We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users’ body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
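
A hedged sketch of one direction of such a pipeline, under the assumptions of a Kinect-style depth frame and a roughly 24 x 24 pin display reachable over UDP (assumed figures, not the paper’s setup): capture a depth image, quantize it to pin heights, and transmit it to the remote shape display.

```python
# Illustrative sketch (not the authors' code) of one direction of a physical
# telepresence pipeline: quantize a captured depth frame to a coarse pin grid
# and ship it to a remote shape display. Grid size, scaling, and address are assumptions.
import socket
import numpy as np

PIN_ROWS, PIN_COLS = 24, 24          # assumed pin-display resolution
PIN_MAX = 255                        # assumed 8-bit pin height command

def depth_to_pin_heights(depth_mm, near=500.0, far=1500.0):
    """Downsample a depth image (mm) to pin heights; nearer surfaces become taller pins."""
    h, w = depth_mm.shape
    block = depth_mm[: h - h % PIN_ROWS, : w - w % PIN_COLS]
    block = block.reshape(PIN_ROWS, h // PIN_ROWS, PIN_COLS, w // PIN_COLS).mean(axis=(1, 3))
    heights = (far - np.clip(block, near, far)) / (far - near) * PIN_MAX
    return heights.astype(np.uint8)

def send_frame(heights, addr=("192.0.2.1", 9000)):   # placeholder address/port
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(heights.tobytes(), addr)

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 1500.0)
    fake_depth[200:280, 280:360] = 700.0             # a hand hovering over the table
    send_frame(depth_to_pin_heights(fake_depth))
```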

inFORM Physical Telepresence

AnnoScape: Remote Collaborative Review Using Live Video Overlay in Shared 3D Virtual Workspace

Austin Lee, Hiroshi Chigira, Sheng Kai Tang, Kojo Acquah, and Hiroshi Ishii. 2014. AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI ‘14). ACM, New York, NY, USA, 26-29. DOI=10.1145/2659766.2659776 http://doi.acm.org/10.1145/2659766.2659776

We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop image on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace individually or jointly by moving tangible handles, simultaneously snap into a shared viewpoint, and generate a live video overlay of freehand annotations from the desktop surface onto the system’s virtual viewports, which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape’s potential to facilitate effective communication during remote 3D data reviews.

AnnoScape

T(ether): Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation

David Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin, and Hiroshi Ishii. 2014. T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI ‘14). ACM, New York, NY, USA, 90-93. DOI=10.1145/2659766.2659785 http://doi.acm.org/10.1145/2659766.2659785

T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users’ heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand’s position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)’s potential for embodied viewport control and 3D modeling interactions.
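
The proprioceptive mode switching can be pictured with a small hypothetical classifier over the hand’s signed distance to the display plane; the zone thresholds, mode names, and Pose type below are assumptions for illustration, not T(ether)’s actual UI logic.

```python
# Hypothetical zoning sketch: pick a UI mode from the tracked hand's signed
# distance to the handheld display's plane. Thresholds, mode names, and the
# Pose type are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Pose:
    # Signed distance (metres) from the hand to the display plane:
    # positive = behind the display (inside the virtual scene), negative = user's side.
    dist_to_plane: float

ON_SURFACE_EPS = 0.02  # within 2 cm counts as touching the display surface

def ui_mode(hand: Pose) -> str:
    if abs(hand.dist_to_plane) <= ON_SURFACE_EPS:
        return "on surface: 2D touch manipulation"
    if hand.dist_to_plane > 0:
        return "behind: grab and edit virtual 3D geometry"
    return "above/in front: viewport and environment controls"

if __name__ == "__main__":
    for d in (0.0, 0.15, -0.10):
        print(ui_mode(Pose(d)))
```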

T(ether)

bioPrint: An Automatic Deposition System for Bacteria Spore Actuators

Jifei Ou, Lining Yao, Clark Della Silva, Wen Wang, and Hiroshi Ishii. 2014. bioPrint: an automatic deposition system for bacteria spore actuators. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 121-122.

We propose an automatic deposition method of bacteria spores, which deform thin soft materials under environmental humidity change. We describe the process of two-dimensional printing of the spore solution as well as a design application. This research intends to contribute to the understanding of controlling and pre-programming the transformation of future interfaces.

bioPrint

THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens

Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2014. THAW: tangible interaction with see-through augmentation for smartphones on computer screens. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 55-56.

In this paper, we present a novel interaction system that allows a collocated large display and small handheld devices to seamlessly work together. The smartphone acts both as a physical interface and as an additional graphics layer for near-surface interaction on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. The proposed technique can be implemented on existing devices without the need for additional hardware.

THAW

Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion

Xiao Xiao, Basheer Tome, and Hiroshi Ishii. 2014. Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion. In Proceedings of the 14th International Conference on New Interfaces for Musical Expression (NIME ‘14). Goldsmiths University of London. London, UK.

We present Andante, a representation of music as animated characters walking along the piano keyboard that appear to play the physical keys with each step. Based on a view of music pedagogy that emphasizes expressive, full-body communication early in the learning process, Andante promotes an understanding of music rooted in the body, taking advantage of walking as one of the most fundamental human rhythms. We describe three example visualizations on a preliminary prototype as well as applications extending our examples for practice feedback, improvisation and composition. Through our project, we reflect on some high level considerations for the NIME community.

Andante

jamSheets: Thin Interfaces with Tunable Stiffness Enabled by Layer Jamming

Jifei Ou, Lining Yao, Daniel Tauber, Jürgen Steimle, Ryuma Niiyama, and Hiroshi Ishii. 2014. jamSheets: thin interfaces with tunable stiffness enabled by layer jamming. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI ‘14). ACM, New York, NY, USA, 65-72. DOI=10.1145/2540930.2540971 http://doi.acm.org/10.1145/2540930.2540971

This work introduces layer jamming as an enabling technology for designing deformable, stiffness-tunable, thin sheet interfaces. Interfaces that exhibit tunable stiffness properties can yield dynamic haptic feedback and shape deformation capabilities. In comparison to particle jamming, layer jamming allows for constructing thin and lightweight form factors of an interface. We propose five layer structure designs and an approach which composites multiple materials to control the deformability of the interfaces. We also present methods to embed different types of sensing and pneumatic actuation layers on the layer-jamming unit. Through three application prototypes we demonstrate the benefits of using layer jamming in interface design. Finally, we provide a survey of materials that have proven successful for layer jamming.

jamSheets

Weight and Volume Changing Device with Liquid Metal Transfer

Ryuma Niiyama, Lining Yao, and Hiroshi Ishii. 2014. Weight and volume changing device with liquid metal transfer. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI ‘14). ACM, New York, NY, USA, 49-52. DOI=10.1145/2540930.2540953 http://doi.acm.org/10.1145/2540930.2540953

This paper presents a weight-changing device based on the transfer of mass. We chose liquid metal (Ga-In-Tin eutectic) and a bi-directional pump to control the mass that is injected into or removed from a target object. The liquid metal has a density of 6.44 g/cm³, about six times that of water, and is thus suitable for effective mass transfer. We also combine the device with a dynamic volume-changing function to achieve programmable mass and volume at the same time. We explore three potential applications enabled by weight-changing devices: density simulation of different materials, miniature representation of planets with scaled size and mass, and motion control by changing gravity force. This technique opens up a new design space in human-computer interactions.
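
The quoted density makes the mass budget easy to check; the short sketch below works through the arithmetic, with the pump flow rate as an assumed figure for illustration.

```python
# Back-of-the-envelope check of the mass-transfer idea: with the liquid metal at
# ~6.44 g/cm^3, how much volume (and pumping time) does a target mass change
# need? The pump flow rate below is an assumed figure, not a number from the paper.
DENSITY_G_PER_CM3 = 6.44      # density quoted in the abstract
PUMP_CM3_PER_S = 2.0          # assumed flow rate, for illustration only

def transfer(target_mass_g):
    volume_cm3 = target_mass_g / DENSITY_G_PER_CM3
    seconds = volume_cm3 / PUMP_CM3_PER_S
    return volume_cm3, seconds

vol, t = transfer(100.0)      # make an object 100 g heavier
print(f"{vol:.1f} cm^3 of liquid metal, ~{t:.1f} s of pumping")
# -> 15.5 cm^3, ~7.8 s (water would need ~100 cm^3 for the same change)
```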

Integrating Optical Waveguides for Display and Sensing on Pneumatic Soft Shape Changing Interfaces

Lining Yao, Jifei Ou, Daniel Tauber, and Hiroshi Ishii. 2014. Integrating optical waveguides for display and sensing on pneumatic soft shape changing interfaces. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST’14 Adjunct). ACM, New York, NY, USA, 117-118.

We introduce the design and fabrication process of integrating optical fiber into pneumatically driven soft composite shape changing interfaces. Embedded optical waveguides can provide both sensing and illumination, and add one more building block to the design of soft pneumatic shape changing interfaces.

2013


inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii. 2013. inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST ’13). ACM, New York, NY, USA.

Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
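
As a hedged illustration of a dynamic physical affordance (not the inFORM API), the sketch below renders a raised button as a height map on an assumed 32 x 32 pin grid and treats downward pin displacement as a press.

```python
# Hedged sketch of the "dynamic physical affordance" idea: render a raised
# button on a pin array and treat downward displacement of its pins as a press.
# Grid size, units, and where measured heights come from are all assumptions.
import numpy as np

GRID = 32                       # assumed pins per side
BUTTON_HEIGHT = 40              # commanded pin extension (arbitrary units)

def render_button(cx, cy, radius):
    """Target height map: a raised disc the user can physically press."""
    y, x = np.mgrid[0:GRID, 0:GRID]
    target = np.zeros((GRID, GRID))
    target[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2] = BUTTON_HEIGHT
    return target

def button_pressed(target, measured, cx, cy, radius, threshold=10):
    """A press = the button's pins pushed well below their commanded height."""
    y, x = np.mgrid[0:GRID, 0:GRID]
    mask = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    return np.mean(target[mask] - measured[mask]) > threshold

if __name__ == "__main__":
    target = render_button(16, 16, 4)
    measured = target.copy()
    measured[12:21, 12:21] -= 25          # simulate a finger pressing down
    print(button_pressed(target, measured, 16, 16, 4))   # True
```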

inFORM

FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing

Lining Yao, Anthony DeVincenzi, Anna Pereira, and Hiroshi Ishii. 2013. FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing. In Proceedings of the 1st symposium on Spatial user interaction (SUI ‘13). ACM, New York, NY, USA, 73-76. DOI=10.1145/2491367.2491377 http://doi.acm.org/10.1145/2491367.2491377

We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users’ focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
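
A minimal sketch of the synthetic-blur effect, assuming a depth image registered to the color frame and OpenCV; the depth threshold and kernel size are illustrative, and this is not the FocalSpace implementation.

```python
# Illustrative sketch (assumptions, not the FocalSpace implementation): use a
# registered depth image to keep near-camera speakers sharp and synthetically
# blur the background, which is the "diminishing" effect described above.
import cv2
import numpy as np

def diminish_background(frame_bgr, depth_mm, focus_max_mm=1200):
    """Pixels farther than focus_max_mm are replaced by a blurred copy."""
    blurred = cv2.GaussianBlur(frame_bgr, (31, 31), 0)
    mask = (depth_mm > focus_max_mm)[..., None]        # broadcast over color channels
    return np.where(mask, blurred, frame_bgr)

if __name__ == "__main__":
    frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
    depth = np.full((480, 640), 2000, np.uint16)
    depth[:, :320] = 800                                # left half stays in focus
    out = diminish_background(frame, depth)
    cv2.imwrite("focalspace_demo.png", out)
```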

FocalSpace

Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

Daniel Leithinger, Sean Follmer, Alex Olwal, Samuel Luescher, Akimitsu Hogge, Jinha Lee, and Hiroshi Ishii. 2013. Sublimate: state-changing virtual and physical rendering to augment interaction with shape displays. In Proceedings of the 2013 ACM annual conference on Human factors in computing systems (CHI ‘13). ACM, New York, NY, USA, 1441-1450.

Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition, as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.

Sublimate

Beyond Visualization – Designing Interfaces to Contextualize Geospatial Data

Samuel Luescher

The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context to support data analysis and knowledge building using part-digital, part-physical rendering.

The main contribution of this thesis is a concrete application scenario and initial prototype of a “Designed Environment” where we can explore the relationship between the surface of Japan’s islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the “Spatial Web”, bringing us one step closer to a tangible “dashboard of the Earth.”

synchroLight: Three-dimensional Pointing System for Remote Video Communication

Jifei Ou, Sheng Kai Tang, and Hiroshi Ishii. 2013. synchroLight: three-dimensional pointing system for remote video communication. In CHI ‘13 Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘13). ACM, New York, NY, USA, 169-174. DOI=10.1145/2468356.2468387 http://doi.acm.org/10.1145/2468356.2468387

Although the image quality and transmission speed of current remote video communication systems have vastly improved in recent years, their interactions still remain detached from the physical world. This causes frustration and lowers working efficiency, especially when both sides are referencing physical objects and space. In this paper, we propose a remote pointing system named synchroLight that allows users to point at remote physical objects with synthetic light. The system extends the interaction of existing remote pointing systems from two-dimensional surfaces to three-dimensional space. The goal of this project is to approach a seamless experience in video communication.

synchroLight

 

exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality

Shunichi Kasahara, Ryuma Niiyama, Valentin Heun, and Hiroshi Ishii. 2013. exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality. In Proceedings of the seventh international conference on Tangible, embedded, and embodied interaction (TEI ‘13). ACM, Barcelona, Spain.

As domestic robots and smart appliances become increasingly common, they require a simple, universal interface to control their motion. Such an interface must support simple selection of a connected device, highlight its capabilities and allow for intuitive manipulation. We propose “exTouch”, an embodied spatially-aware approach to touch and control devices through an augmented reality mediated mobile interface. The “exTouch” system extends the user’s touchscreen interactions into the real world by enabling spatial control over the actuated object. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object. We demonstrate that the system can be used for applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room.

exTouch

 

MirrorFugue III: Conjuring the Recorded Pianist

Xiao Xiao, Anna Pereira and Hiroshi Ishii. 2013. MirrorFugue III: Conjuring the Recorded Pianist. In Proceedings of 13th conference on New Interfaces for Musical Expression (NIME ‘13). KAIST. Daejeon, South Korea.

The body channels rich layers of information when playing music, from intricate manipulations of the instrument to vivid personifications of expression. But when music is captured and replayed across distance and time, the performer’s body is too often trapped behind a small screen or absent entirely.
This paper introduces MirrorFugue III, an interface to conjure the recorded performer by combining the moving keys of a player piano with life-sized projection of the pianist’s hands and upper body. Inspired by reflections on a lacquered grand piano, our interface evokes the sense that the virtual pianist is playing the physically moving keys.
Through MirrorFugue III, we explore the question of how to viscerally simulate a performer’s presence to create immersive experiences. We discuss design choices, outline a space of usage scenarios and report reactions from users.

MirrorFugue

 

PneUI: Pneumatically Actuated Soft Composite Materials for Shape Changing Interfaces

Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, and Hiroshi Ishii. 2013. PneUI: pneumatically actuated soft composite materials for shape changing interfaces. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST ‘13). ACM, New York, NY, USA, 13-22. DOI=10.1145/2501988.2502037 http://doi.acm.org/10.1145/2501988.2502037

This paper presents PneUI, an enabling technology to build shape-changing interfaces through pneumatically-actuated soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. This is enabled by the composites’ multi-layer structures with different mechanical or electrical properties. The shape changing states are computationally controllable through pneumatics and pre-defined structure. We explore the design space of PneUI through four applications: height changing tangible phicons, a shape changing mobile, a transformable tablet case and a shape shifting lamp.

PneUI

 

2012

Towards Radical Atoms – Form-giving to Transformable Materials

Dávid Lakatos and Hiroshi Ishii. 2012. Towards Radical Atoms — Form-giving to transformable materials. In Proceedings of the 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia.

Form, as the externalization of an idea, has been present in our civilization for several millennia. Humans have used their hands and tools to directly manipulate and alter/deform the shape of physical materials. Concurrently, we have been inventing tools in the digital domains that allow us to freely manipulate digital information. The next step in the evolution of form-giving is toward shape-changing materials, with tight coupling between their shape and an underlying digital model. In this paper we compare approaches for interaction design of these shape-shifting entities that we call Radical Atoms. We use three projects to elaborate on appropriate interaction techniques for both the physical and the virtual domains.

Second surface: multi-user spatial collaboration system based on augmented reality

Shunichi Kasahara, Valentin Heun, Austin S. Lee, and Hiroshi Ishii. 2012. Second surface: multi-user spatial collaboration system based on augmented reality. In SIGGRAPH Asia 2012 Emerging Technologies (SA ’12). ACM, New York, NY, USA, Article 20, 4 pages. DOI=10.1145/2407707.2407727 http://doi.acm.org/10.1145/2407707.2407727

An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interaction with user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place three-dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into the physical space. Our system can provide an alternate reality that generates a playful and natural interaction in an everyday setup.

Second Surface

 

Jamming User Interfaces: Programmable Particle Stiffness and Sensing for Malleable and Shape-Changing Devices

Sean Follmer, Daniel Leithinger, Alex Olwal, Nadia Cheng, and Hiroshi Ishii. 2012. Jamming user interfaces: programmable particle stiffness and sensing for malleable and shape-changing devices. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST ‘12). ACM, New York, NY, USA, 519-528.

Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming’s potential in HCI, including applications for tabletops, tablets and for portable shape-changing mobile devices.

Jamming User Interfaces

Point and share: from paper to whiteboard

Misha Sra, Austin Lee, Sheng-Ying Pao, Gonglue Jiang, and Hiroshi Ishii. 2012. Point and share: from paper to whiteboard. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 23-24. DOI=10.1145/2380296.2380309 http://doi.acm.org/10.1145/2380296.2380309

Traditional writing instruments have the potential to enable new forms of interactions and collaboration through digital enhancement. This work specifically enables the user to utilize pen and paper as input mechanisms for content to be displayed on a shared interactive whiteboard. We introduce a pen cap with an infrared LED, an actuator and a switch. Pointing the pen cap at the whiteboard allows users to select and position a “canvas” on the whiteboard to display handwritten text, while the actuator enables resizing the canvas and the text. It is conceivable that anything one can write on paper, anywhere, could be displayed on an interactive whiteboard.

rainBottles: gathering raindrops of data from the cloud

Jinha Lee, Greg Vargas, Mason Tang, and Hiroshi Ishii. 2012. rainbottles: gathering raindrops of data from the cloud. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA ‘12). ACM, New York, NY, USA, 1901-1906.

This paper introduces a design for a new way of managing the flow of information in the age of overflow. The device, rainBottles, collects virtual data and converts it into a virtual liquid that fills up specially designed glass bottles. The bottles then serve as an ambient interface displaying the quantity of information in a queue as well as a tangible controller for opening the applications associated with the data in the bottles. With customizable data relevance metrics, the bottles can also serve as filters by letting less relevant data overflow out of the bottle.

Point-and-Shoot Data

Stephanie Lin, Samuel Luescher, Travis Rich, Shaun Salzberg, and Hiroshi Ishii. 2012. Point-and-shoot data. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA ‘12). ACM, New York, NY, USA, 2027-2032.

We explore the use of visible light as a wireless communication medium for mobile devices. We discuss the advantages of a human perceptible communication medium in regards to user experience and create tools for direct manipulation of the communication channel.

KidCAD: digitally remixing toys through tangible tools

Sean Follmer and Hiroshi Ishii. 2012. KidCAD: digitally remixing toys through tangible tools. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ‘12). ACM, New York, NY, USA, 2401-2410.

Children have great facility in the physical world, and can skillfully model in clay and draw expressive illustrations. Traditional digital modeling tools have focused on mouse, keyboard and stylus input. These tools can be complicated, making it difficult for young users to easily and quickly create exciting designs. We seek to bring physical interaction to digital modeling, to allow users to use existing physical objects as tangible building blocks for new designs. We introduce KidCAD, a digital clay interface for children to remix toys. KidCAD allows children to imprint 2.5D shapes from physical objects into their digital models by deforming a malleable gel input device, deForm. Users can mash up existing objects, edit and sculpt or draw new designs on a 2.5D canvas using physical objects, hands and tools as well as 2D touch gestures. We report on a preliminary user study with 13 children, ages 7 to 10, which provides feedback for our design and helps guide future work in tangible modeling for children.

deFORM

 

People in books: using a FlashCam to become part of an interactive book for connected reading

Sean Follmer, Rafael (Tico) Ballagas, Hayes Raffle, Mirjana Spasojevic, and Hiroshi Ishii. 2012. People in books: using a FlashCam to become part of an interactive book for connected reading. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW ‘12). ACM, New York, NY, USA, 685-694.

We introduce People in Books with FlashCam technology, a system that supports children and long-distance family members to act as characters in children’s storybooks while they read stories together over a distance. By segmenting the video chat streams of the child and remote family member from their background surroundings, we create the illusion that the child and adult reader are immersed among the storybook illustrations. The illusion of inhabiting a shared story environment helps remote family members feel a sense of togetherness and encourages active reading behaviors for children ages three to five. People In Books is designed to fit into families’ traditional reading practices, such as reading ebooks on couches or in bed via netbook or tablet computers. To accommodate this goal we implemented FlashCam, a computationally cost effective and physically small background subtraction system for mobile devices that allows users to move locations and change lighting conditions while they engage in background-subtracted video communications. A lab evaluation compared People in Books with a conventional remote reading application. Results show that People in Books motivates parents and children to be more performative readers and encourages open-ended play beyond the story, while creating a strong sense of togetherness.

Video Play

 

Radical Atoms: Beyond Tangible Bits, Toward Transformable Materials

Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. interactions 19, 1 (January 2012), 38-51.

“Radical Atoms” is our vision of human interaction with future dynamic materials that are computationally reconfigurable.

“Radical Atoms” was created to overcome the fundamental limitations of its precursor, the “Tangible Bits” vision. Tangible Bits – the physical embodiment of digital information and computation – was constrained by the rigidity of “atoms” in comparison with the fluidity of bits. This makes it difficult to represent fluid digital information in traditionally rigid physical objects, and inhibits dynamic tangible interfaces from being able to control or represent computational inputs and outputs.

In order to augment the vocabulary of Tangible User Interfaces (TUIs), we use dynamic representations such as co-located projections or “digital shadows”. However, the physical objects on the tabletop stay static and rigid. To overcome these limitations, we began to experiment with a variety of actuated and kinetic tangibles, which can transform their physical positions or shapes as an additional output modality beyond the traditional manual input mode of TUIs.

Our vision of “Radical Atoms” is based on hypothetical, extremely malleable and reconfigurable materials that can be described by real-time digital models so that dynamic changes in digital information can be reflected by a dynamic change in physical state and vice-versa. Bidirectional synchronization is key to making Radical Atoms a tangible but dynamic representation & control of digital information, and enabling new forms of Human Computer Interaction.

In this article, we review the original vision and limitations of Tangible Bits and introduce an array of actuated/kinetic tangibles that emerged in the past 10 years of the Tangible Media Group’s research to overcome the issue of atoms’ rigidity. Then we illustrate our vision of interactions with Radical Atoms, which do not exist today but may be invented in the next 100 years by atom hackers (material scientists, self-organizing nano-robot engineers, etc.), and speculate on new interaction techniques and applications that would be enabled by Radical Atoms.

Radical Atoms

 

MirrorFugue2: Embodied Representation of Recorded Piano Performances

Xiao Xiao and Hiroshi Ishii. 2012. MirrorFugue2: Embodied Representation of Recorded Piano Performances. In Extended Abstracts of the 2012 international conference on Interactive Tabletops and Surfaces (ITS ‘12). ACM, New York, NY, USA.

We present MirrorFugue2, an interface for viewing recorded piano playing where video of the hands and upper body of a performer is projected on the surface of the instrument at full scale. Rooted in the idea that a performer’s body plays a key role in channeling musical expression, we introduce an upper body display, extending a previous prototype that demonstrated the benefits of a full-scale hands display for pedagogy.
We describe two prototypes of MirrorFugue2 and discuss how the interface can benefit pedagogy, watching performances and collaborative playing.

MirrorFugue II

2011

PingPong++: Community Customization in Games and Entertainment

Xiao Xiao, Michael S. Bernstein, Lining Yao, David Lakatos, Lauren Gust, Kojo Acquah, and Hiroshi Ishii. 2011. PingPong++: community customization in games and entertainment. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE ’11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 24, 6 pages.

In this paper, we introduce PingPong++, an augmented ping pong table that applies Do-It-Yourself (DIY) and community contribution principles to the world of physical sports and play. PingPong++ includes an API for creating new visualizations, easily recreatable hardware, an end-user interface for those without programming experience, and a crowd data API for replaying and remixing past games. We discuss a range of contribution domains for PingPong++ and share the design, usage, feedback, and lessons for each domain. We then reflect on our process and outline a design space for community-contributed sports.

PingPongPlusPlus

 

ZeroN: Mid-air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation

Jinha Lee, Rehmi Post, and Hiroshi Ishii. 2011. ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 327-336.

This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three-dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To accomplish this we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology, interaction scenarios and challenges, discuss initial observations, and outline future development.
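
One way to picture the levitation control is a simple PID loop from measured height to coil current. The sketch below uses placeholder sensor and driver functions and illustrative gains; it is not ZeroN’s actual control system.

```python
# Minimal, hypothetical control-loop sketch of the levitation idea: a PID
# controller adjusts electromagnet current from a measured height so the
# floating magnet holds a commanded z position. read_height() and set_current()
# are placeholders, not ZeroN's hardware interface; gains are illustrative.
import time

KP, KI, KD = 120.0, 15.0, 8.0        # illustrative gains

def read_height():
    """Placeholder for the optical tracker's z measurement (metres)."""
    return 0.05

def set_current(amps):
    """Placeholder for the electromagnet driver."""
    print(f"coil current: {amps:.3f} A")

def levitate(target_z, steps=5, dt=0.01):
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = target_z - read_height()
        integral += err * dt
        derivative = (err - prev_err) / dt
        set_current(KP * err + KI * integral + KD * derivative)
        prev_err = err
        time.sleep(dt)

if __name__ == "__main__":
    levitate(target_z=0.06)          # hold the magnet 6 cm above the coil assembly
```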

ZeroN: Levitated Interaction Element

 

Rope Revolution: Tangible and Gestural Rope Interface for Collaborative Play

Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Rope Revolution: tangible and gestural rope interface for collaborative play. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE ’11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 11, 8 pages. DOI=10.1145/2071423.2071437 http://doi.acm.org/10.1145/2071423.2071437

In this paper we describe Rope Revolution, a rope-based gaming system for collaborative play. After identifying popular rope games and activities around the world, we developed a generalized tangible rope interface that includes a compact motion-sensing and force-feedback module that can be used for a variety of rope-based games. Rope Revolution is designed to foster both co-located and remote collaborative experiences by using actual rope to connect players in physical activities across virtual spaces. Results from this study suggest that a tangible user interface with rich metaphors and physical feedback helps enhance the gaming experience in addition to helping remote players feel connected across distances. We use this design as an example to motivate discussion on how to take advantage of the various physical affordances of common objects to build a generalized tangible interface for remote play.

RopeRevolution

 

Sourcemap: eco-design, sustainable supply chains, and radical transparency

Leo Bonanni. 2011. Sourcemap: eco-design, sustainable supply chains, and radical transparency. XRDS 17, 4 (June 2011), 22-26.

Industry and consumers need tools to help make decisions that are good for communities and for the environment.

Duet for Solo Piano: MirrorFugue for Single User Playing with Recorded Performances

Xiao Xiao and Hiroshi Ishii. 2011. Duet for solo piano: MirrorFugue for single user playing with recorded performances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1285-1290.

MirrorFugue is an interface that supports symmetric, real-time collaboration on the piano using spatial metaphors to communicate the hand gesture of collaborators. In this paper, we present an extension of MirrorFugue to support single-user interactions with recorded material and outline usage scenarios focusing on practicing and self-reflection. Based on interviews with expert musicians, we discuss how single-user interactions on MirrorFugue relate to larger themes in music learning and suggest directions for future research.

MirrorFugue I

Multi-Jump: Jump Roping Over Distances

Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Multi-jump: jump roping over distances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1729-1734.

Jump roping, a game in which one or more people twirl a rope while others jump over the rope, promotes social interaction among children while developing their coordination skills and physical fitness. However, the traditional game requires that players be in the same physical location. Our ‘Multi-Jump’ jump-roping game platform builds on the traditional game by allowing players to participate remotely by employing an augmented rope system. The game involves full-body motion in a shared game space and is enhanced with live video feeds, player rewards and music. Our work aims to expand exertion interface gaming, or games that deliberately require intense physical effort, with genuine tangible interfaces connected to real-time shared social gaming environments.

RopeRevolution

Direct and Gestural Interaction with Relief: A 2.5D Shape Display

Daniel Leithinger, David Lakatos, Anthony DeVincenzi, Matthew Blackshaw, and Hiroshi Ishii. 2011. Direct and gestural interaction with relief: a 2.5D shape display. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 541-548.

Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of free hand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.

Relief

Recompose

Kinected Conference: Augmenting Video Imaging with Calibrated Depth and Audio

Anthony DeVincenzi, Lining Yao, Hiroshi Ishii, and Ramesh Raskar. 2011. Kinected conference: augmenting video imaging with calibrated depth and audio. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (CSCW ‘11). ACM, New York, NY, USA, 621-624.

The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to commonly engage in videoconferencing. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. In this paper we present a system for enhancing distance-based communication by augmenting the traditional video conferencing system with additional attributes beyond two-dimensional video. We explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location. We discuss specific scenarios that explore features such as synthetic refocusing, gesture activated privacy, and spatiotemporal graphic augmentation.

Kinected Conference

Shape-changing interfaces

Marcelo Coelho and Jamie Zigelbaum. 2011. Shape-changing interfaces. Personal Ubiquitous Comput. 15, 2 (February 2011), 161-173.

The design of physical interfaces has been constrained by the relative akinesis of the material world. Current advances in materials science promise to change this. In this paper, we present a foundation for the design of shape-changing surfaces in human-computer interaction. We provide a survey of shape-changing materials and their primary dynamic properties, define the concept of soft mechanics within an HCI context, and describe a soft mechanical alphabet that provides the kinetic foundation for the design of four design probes: Surflex, SpeakCup, Sprout I/O, and Shutters. These probes explore how individual soft mechanical elements can be combined to create large-scale transformable surfaces, which can alter their topology, texture, and permeability. We conclude by providing application themes for shape-changing materials in HCI and directions for future work.

SpeakCup

MirrorFugue: Communicating Hand Gesture in Remote Piano Collaboration

Xiao Xiao and Hiroshi Ishii. 2011. MirrorFugue: communicating hand gesture in remote piano collaboration. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (TEI ’11). ACM, New York, NY, USA, 13-20.

Playing a musical instrument involves a complex set of continuous gestures, both to play the notes and to convey expression. To learn an instrument, a student must learn not only the music itself but also how to perform these bodily gestures. We present MirrorFugue, a set of three interfaces on a piano keyboard designed to visualize the hand gestures of a remote collaborator. Based on their spatial configurations, we call our interfaces Shadow, Reflection, and Organ. We describe the configurations and detail studies of our designs on synchronous, remote collaboration, focusing specifically on remote lessons for beginners. Based on our evaluations, we conclude that displaying the to-scale hand gestures of a teacher at the locus of interaction can improve remote piano learning for novices.

MirrorFugue I

 

Recompose: Direct and Gestural Interaction with an Actuated Surface

Matthew Blackshaw, Anthony DeVincenzi, David Lakatos, Daniel Leithinger, and Hiroshi Ishii. 2011. Recompose: direct and gestural interaction with an actuated surface. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA ‘11). ACM, New York, NY, USA, 1237-1242.

In this paper we present Recompose, a new system for manipulation of an actuated surface. By collectively utilizing the body as a tool for direct manipulation alongside gestural input for functional manipulation, we show how a user is afforded unprecedented control over an actuated surface. We describe a number of interaction techniques exploring the shared space of direct and gestural input, demonstrating how their combined use can greatly enhance creation and manipulation beyond unaided human capability.

Recompose

deFORM: An Interactive Malleable Surface for Capturing 2.5D Arbitrary Objects, Tools and Touch

Sean Follmer, Micah Johnson, Edward Adelson, and Hiroshi Ishii. 2011. deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST ‘11). ACM, New York, NY, USA, 527-536.

We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications.
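
For readers less familiar with three-phase structured light, the sketch below shows the textbook three-step phase recovery that this class of scanner builds on; the synthetic test and all parameters are illustrative, and the exact decoding and calibration used in deForm may differ.

```python
# Textbook three-step phase-shifting recovery, shown only to illustrate the kind
# of computation behind structured-light depth capture; deForm's exact decoding
# and calibration may differ.
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three patterns shifted by -120, 0, +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

if __name__ == "__main__":
    # Synthesize the three camera images for a known phase ramp inside (-pi, pi)
    # and check that the recovery reproduces it.
    phi = np.linspace(-3.0, 3.0, 640)[None, :] * np.ones((480, 1))
    a, b = 0.5, 0.4                                     # ambient offset + modulation
    imgs = [a + b * np.cos(phi + s) for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
    print(np.allclose(wrapped_phase(*imgs), phi, atol=1e-9))  # True
```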

deFORM

2010

Virtual Guilds: Collective Intelligence and the Future of Craft

Leo Bonanni and Amanda Parkes. 2010. Virtual Guilds: Collective Intelligence and the Future of Craft. The Journal of Modern Craft 3, 2 (July 2010), 179-190.

In its most basic definition, craft refers to the skilled practice of making things, which is shaped as much by technological advancements as by cultural practices. Richard Sennett discusses the nature of craftsmanship as an enduring, basic human impulse, the desire to do a job well for its own sake [1]. This encompasses a much broader context than skilled labor and promotes an objective standard of excellence which incorporates shapers of culture, policy, and technology as craftsmen. The emerging nature of craft is transdisciplinary in its formation and must consider how emerging materials, processes and cultures influence the objects we make and how the processes of design and production can be used to reflect new social values and to change cultural practices. In order to re-think the kind of objects we make, it is necessary to rethink the way we craft our objects.

Digital technologies and media are defining a new sort of craft, seamlessly blending technology, design, and production into the post-industrial landscape. As an early pioneer in redefining craftsmanship to include digital processes, Malcolm McCullough explored the computer as a craft medium inviting interpretation and subtleties, with the combined skill sets of the machine and the human (both mind and hands) providing a structured system of transformations resulting in a crafted object.2 The nature of digital technologies also allows craft to evolve into a form which is decentralized and distributed, and can give rise to excellence through a collective desire and a combined multiplicity of knowledge through community.

Craft is inherently a social activity, shaped by communal resources and motivations. The collective approach of craft communities – or guilds – is characterized by the master-apprentice model, where practitioners devote significant time passing on their skills to the next generation. The open source software movement embodies the communal character and the highly skilled practices of craft guilds. Until recently skilled handicraft relied on hands-on teaching and access to local physical resources. Mass media and the internet make it possible to transmit skills and resources to isolated individuals, making possible entirely new kinds of distributed craft communities. These “Virtual Guilds” form at the margins of established domains, extending the reach of specialized knowledge and technology.

Virtual Guilds benefit from the free exchange of expert information to bring about innovation in sometimes neglected domains. The growth of open-source software projects provides the model by which dispersed, collective innovation becomes possible in other domains. Shared resources maintained by a socially motivated community form the backbone of these non-commercial efforts. Digital channels of communication can extend this free exchange of information to the domain of craft, so that specialized designs and processes can be shared among a wide audience. Online distribution provides access to rare materials and tools and provides a market for craft products.

Several Virtual Guilds exist today, and they are contributing important inventions and new domains to often neglected markets. These communities of skilled practitioners are characterized by their marginal nature, where the free and open exchange of ideas is carried forward for collective benefit. At the same time, the popularity of Virtual Guilds and the commercial success of their inventions endanger the free exchange of information on which they are built. The survival of collective craft communities is important to under-served groups and for technological innovation, so it is essential that more practitioners engage in collective action. The new generation of digital design and fabrication tools lays the groundwork for more skilled craftspeople to collectively expand on their practice.

Construction by replacement: a new approach to simulation modeling

James Hines, Thomas Malone, Paulo Gonçalves, George Herman, John Quimby, Mary Murphy-Hoye, James Rice, James Patten, and Hiroshi Ishii. Construction by replacement: a new approach to simulation modeling. Syst. Dyn. Rev., July 2010.

Simulation modeling can be valuable in many areas of management science, but it is often costly, time consuming, and difficult to do. To reduce these problems, system dynamics researchers have previously developed standard pieces of model structure, called molecules, that can be reused in different models. However, the models assembled from these molecules often lacked feedback loops and generated few, if any, insights. This paper describes a new and more promising approach to using molecules in system dynamics modeling. The heart of the approach is a systematically organized library (or taxonomy) of predefined model components, or molecules, and a set of software tools for replacing one molecule with another. Users start with a simple generic model and progressively replace parts of the model with more specialized molecules from a systematically organized library of predefined components. These substitutions either create a new running model automatically or request further manual changes from the user. The paper describes our exploration using this approach to construct system dynamics models of supply chain processes in a large manufacturing company. The experiment included developing an innovative “tangible user interface” and a comprehensive catalog of system dynamics molecules. The paper concludes with a discussion of the benefits and limitations of this approach. Copyright © 2010 John Wiley & Sons, Ltd.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1354665
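
The replacement mechanism described in the abstract can be pictured as swapping library components that expose the same interface. The sketch below is a hypothetical data-structure illustration (names such as `Molecule` and `LIBRARY` are assumptions), not the tool built in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Molecule:
    """A reusable piece of model structure with a declared interface."""
    name: str
    inputs: frozenset
    outputs: frozenset

# A toy taxonomy: a generic inventory molecule and a more specialized variant.
LIBRARY = {
    "inventory": Molecule("inventory", frozenset({"production", "shipments"}), frozenset({"inventory"})),
    "inventory_with_spoilage": Molecule("inventory_with_spoilage",
                                        frozenset({"production", "shipments"}), frozenset({"inventory"})),
}

def replace(model: dict, old: str, new: str) -> dict:
    """Swap one molecule for a more specialized one, keeping the model runnable
    by requiring the replacement to plug into the same inputs and outputs."""
    current, candidate = model[old], LIBRARY[new]
    if (candidate.inputs, candidate.outputs) != (current.inputs, current.outputs):
        raise ValueError(f"{new} does not fit the slot occupied by {old}")
    updated = {k: v for k, v in model.items() if k != old}
    updated[candidate.name] = candidate
    return updated

# Start from a simple generic model and progressively specialize it.
model = {"inventory": LIBRARY["inventory"]}
model = replace(model, "inventory", "inventory_with_spoilage")
```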

Supply Chain Visualization

Small Business Applications of Sourcemap: A Web Tool for Sustainable Design and Supply Chain Transparency

Bonanni, L., Hockenberry, M., Zwarg, D., Csikszentmihalyi, C., and Ishii, H. 2010. Small business applications of sourcemap: a web tool for sustainable design and supply chain transparency. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ‘10. ACM, New York, NY, 937-946.

This paper introduces sustainable design applications for small businesses through the Life Cycle Assessment and supply chain publishing platform Sourcemap.org. This web-based tool was developed through a year-long participatory design process with five small businesses in Scotland and in New England. Sourcemap was used as a diagnostic tool for carbon accounting, design and supply chain management. It offers a number of ways to market sustainable practices through embedded and printed visualizations. Our experiences confirm the potential of web sustainability tools and social media to expand the discourse and to negotiate the diverse goals inherent in social and environmental sustainability.

Sourcemap

Beyond: collapsible tools and gestures for computational design

Jinha Lee and Hiroshi Ishii. 2010. Beyond: collapsible tools and gestures for computational design. In Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems (CHI EA ‘10). ACM, New York, NY, USA, 3931-3936.

Since the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. In this paper, we present Beyond, an interface for 3-D design where users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto the screen, letting users feel as if they were inserting the tools into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward and more accessible to general users by extending physical affordances to the digital space beyond the computer screen.

Beyond - Collapsible Input Device for 3D Direct Manipulation

Tangible Interfaces for Art Restoration

Bonanni, L., Seracini, M., Xiao, X., Hockenberry, M., Costanzo, B.C., Shum, A., Teil, R., Speranza, A., and Ishii, H. 2010. Tangible Interfaces for Art Restoration. International Journal of Creative Interfaces and Computer Graphics 1, 54–66. DOI: 10.4018/jcicg.2010010105

Few people experience art the way a restorer does: as a tactile, multi-dimensional and ever-changing object. The authors investigate a set of tools for the distributed analysis of artworks in physical and digital realms. Their work is based on observation of professional art restoration practice and rich data available through multi-spectral imaging. The article presents a multidisciplinary approach to develop interfaces usable by restorers, students and amateurs. Several interaction techniques were built using physical metaphors to navigate the layers of information revealed by multi-spectral imaging, prototyped using single- and multi-touch displays. The authors built modular systems to accommodate the technical needs and resources of various institutions and individuals, with the aim of making high-quality art diagnostics possible on different hardware platforms, and of making rich diagnostic and historic information about art available for education and research through a cohesive set of web-based tools instantiated in physical interfaces and public installations.

Wetpaint

Relief: A Scalable Actuated Shape Display

Leithinger, D. and Ishii, H. 2010. Relief: a scalable actuated shape display. In Proceedings of the Fourth international Conference on Tangible, Embedded, and Embodied interaction (Cambridge, Massachusetts, USA, January 24 – 27, 2010). TEI ‘10. ACM, New York, NY, 221-222.

Relief is an actuated tabletop display, which is able to render and animate three-dimensional shapes with a malleable surface. It allows users to experience and form digital models like geographical terrain in an intuitive manner. The tabletop surface is actuated by an array of 120 motorized pins, which are controlled with a low-cost, scalable platform built upon open-source hardware and software tools. Each pin can be addressed individually and senses user input like pulling and pushing.
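
The control model implied above (each pin individually addressable, driven to a target height and sensing pushes and pulls) can be illustrated with a short sketch. The class names and layout below are hypothetical, not the Relief hardware interface.

```python
class Pin:
    """One motorized pin: a commanded target height and the last sensed height."""
    def __init__(self):
        self.target = 0.0   # normalized 0.0 (flush) to 1.0 (fully raised)
        self.sensed = 0.0   # height reported back by the pin's actuator

class PinArray:
    """An individually addressable array of pins, 120 for a Relief-sized table."""
    def __init__(self, count: int = 120):
        self.pins = [Pin() for _ in range(count)]

    def render(self, heights):
        """Drive every pin toward a height sampled from a shape such as terrain."""
        for pin, h in zip(self.pins, heights):
            pin.target = max(0.0, min(1.0, h))

    def user_input(self, threshold: float = 0.05):
        """Indices of pins the user has pushed or pulled away from their target."""
        return [i for i, p in enumerate(self.pins)
                if abs(p.sensed - p.target) > threshold]
```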

Relief

g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface.

Jamie Zigelbaum, Alan Browning, Daniel Leithinger, Olivier Bau, and Hiroshi Ishii. 2010. g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI ‘10). ACM, New York, NY, USA, 261-264.

In this paper we present g-stalt, a gestural interface for interacting with video. g-stalt is built upon the g-speak spatial operating environment (SOE) from Oblong Industries. The version of g-stalt presented here is realized as a three-dimensional graphical space filled with over 60 cartoons. These cartoons can be viewed and rearranged along with their metadata using a specialized gesture set. g-stalt is designed to be chirocentric, spatiotemporal, and telekinetic.

Gestural Interaction

g-stalt

Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections.

Cati Vaucelle. Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.

Tangible Media as an area has not explored how the tangible handle is more than a marker or place-holder for digital data. Tangible Media can do more. It has the power to materialize and redefine our conception of space and content during the creative process. It can vary from an abstract token that represents a movie to an anthropomorphic plush that reflects the behavior of a sibling during play. My work begins by extending tangible concepts of representation and token-based interactions into movie editing and play scenarios. Through several design iterations and research studies, I establish tangible technologies to drive visual and oral perspectives along with finalized creative works, all during a child’s play and exploration.

I define the framework, Gesture Object Interfaces, expanding on the fields of Tangible User Interaction and Gesture Recognition. Gesture is a mechanism that can reinforce or create the anthropomorphism of an object. It can give the object life. A Gesture Object is an object in hand while doing anthropomorphized gestures. Gesture Object Interfaces engender new visual and narrative perspectives as part of automatic film assembly during children’s play. I generated a suite of automatic film assembly tools accessible to diverse users. The tools that I designed allow for capture, editing and performing to be completely indistinguishable from one another. Gestures integrated with objects become a coherent interface on top of natural play. I built a distributed, modular camera environment and gesture interaction to control that environment. The goal of these new technologies is to motivate children to take new visual and narrative perspectives.

In this dissertation I present four tangible platforms that I created as alternatives to the usual fragmented and sequential capturing, editing and performing of narratives available to users of current storytelling tools. I developed Play it by Eye, Frame it by hand, a new generation of narrative tools that shift the frame of reference from the eye to the hand, from the viewpoint (where the eye is) to the standpoint (where the hand is). In Play it by Eye, Frame it by Hand environments, children discover atypical perspectives through the lens of everyday objects. When using Picture This!, children imagine how an object would appear relative to the viewpoint of the toy. They iterate between trying and correcting in a world of multiple perspectives. The results are entirely new genres of child-created films, where children finally capture the cherished visual idioms of action and drama. I report my design process over the course of four tangible research projects that I evaluate during qualitative observations with over one hundred 4- to 14-year-old users. Based on these research findings, I propose a class of moviemaking tools that transform the way users interpret the world visually, and through storytelling.

Picture This!

WoW Pod

AFK cookset

Dolltalk

WOW pod

Vaucelle, C., Shada, S., and Jahn, M. 2010. WOW pod. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ‘10. ACM, New York, NY, 4813-4816.

WOW Pod is an immersive architectural solution for the advanced massive online role-playing gamer that provides and anticipates all life needs. Inside, the player finds him/herself comfortably seated in front of the computer screen with easy-to-reach water, pre-packaged food, and a toilet conveniently placed underneath a built-in throne.

WoW Pod

OnObject: programming of physical objects for gestural interaction

Keywon Chung. OnObject: programming of physical objects for gestural interaction. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.

Tangible User Interfaces (TUIs) have fueled our imagination about the future of computational user experience by coupling physical objects and activities with digital information. Despite their conceptual popularity, TUIs are still difficult and time-consuming to construct, requiring custom hardware assembly and software programming by skilled individuals. This limitation makes it impossible for end users and designers to interactively build TUIs that suit their context or embody their creative expression. OnObject enables novice end users to turn everyday objects into gestural interfaces through the simple act of tagging. Wearing a sensing device, a user adds a behavior to a tagged object by grabbing the object, demonstrating a trigger gesture, and specifying a desired response. Following this simple Tag-Gesture-Response programming grammar, novice end users are able to transform mundane objects into gestural interfaces in 30 seconds or less. Instead of being exposed to low-level development tasks, users can focus on creating an enjoyable mapping between gestures and media responses. The design of OnObject introduces a novel class of Human-Computer Interaction (HCI): gestural programming of situated physical objects. This thesis first outlines the research challenge and the proposed solution. It then surveys related work to identify the inspirations and differentiations from existing HCI and design research. Next, it describes the sensing and programming hardware and gesture event server architecture. Finally, it introduces a set of applications created with OnObject and reports observations from sessions with user participation.
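
The Tag-Gesture-Response grammar described above amounts to a lookup from (tag, gesture) pairs to responses. The sketch below is a hypothetical illustration of that mapping; the function names and event format are assumptions, not the OnObject API.

```python
from typing import Callable, Dict, Tuple

# (tag_id, gesture) -> response to trigger
bindings: Dict[Tuple[str, str], Callable[[], None]] = {}

def program(tag_id: str, gesture: str, response: Callable[[], None]) -> None:
    """Tag-Gesture-Response: attach a behavior to a tagged everyday object."""
    bindings[(tag_id, gesture)] = response

def on_gesture(tag_id: str, gesture: str) -> None:
    """Invoked when the wearable sensor recognizes a gesture on a tagged object."""
    action = bindings.get((tag_id, gesture))
    if action is not None:
        action()

# Example: a tagged plush roars when shaken.
program("plush-01", "shake", lambda: print("play roar.wav"))
on_gesture("plush-01", "shake")
```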

OnObject

2009

Trackmate: Large-Scale Accessibility of Tangible User Interfaces

Adam Kumpf. Trackmate: Large-Scale Accessibility of Tangible User Interfaces. Thesis (M.S.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009.

There is a long history of Tangible User Interfaces (TUI) in the community of human-computer interaction, but surprisingly few of these interfaces have made it beyond lab and gallery spaces. This thesis explores how the research community may begin to remedy the disconnect between modern TUIs and the everyday computing experience via the creation and dissemination of Trackmate, an accessible (both ubiquitous and enabling) tabletop tangible user interface that scales to a large number of users with minimal hardware and configuration overhead. Trackmate is entirely open source and designed: to be community-centric; to leverage common objects and infrastructure; to provide a low floor, high ceiling, and wide walls for development; to allow user modifications and improvisation; to be shared easily via the web; and to work alongside a broad range of existing applications and new research interface prototypes.

Trackmate

Wetpaint: Scraping Through Multi-Layered Images

Bonanni, L., Xiao, X., Hockenberry, M., Subramani, P., Ishii, H., Seracini, M., and Schulze, J. 2009. Wetpaint: scraping through multi-layered images. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ‘09. ACM, New York, NY, 571-574.

A work of art rarely reveals the history of creation and interpretation that has given it meaning and value. Wetpaint is a gallery interface based on a large touch screen that allows curators and museumgoers to investigate the hidden layers of a painting, and in the process contribute to the pluralistic interpretation of the piece, both locally and online. Inspired by traditional restoration and curatorial methods, we have designed a touch-based user interface for exhibition spaces that allows “virtual restoration” by scraping through the multi-spectral scans of a painting, and “collaborative curation” by leaving voice annotations within the artwork. The system functions through an online social image network for flexibility and to support rich and collaborative commentary for local and remote visitors.

Wetpaint

Burn Your Memory Away: One-time Use Video Capture and Storage Device to Encourage Memory Appreciation

Chi, P., Xiao, X., Chung, K., and Chiu, C. 2009. Burn your memory away: one-time use video capture and storage device to encourage memory appreciation. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ‘09. ACM, New York, NY, 2397-2406.

Although modern ease of access to technology enables many of us to obsessively document our lives, much of the captured digital content is often disregarded and forgotten on storage devices, with no concerns of cost or decay. Can we design technology that helps people better appreciate captured memories? What would people do if they only had one more chance to relive past memories? In this paper, we present a prototype design, PY-ROM, a matchstick-like video recording and storage device that burns itself away after being used. This encourages designers to consider lifecycles and human-computer relationships by integrating physical properties into digitally augmented everyday objects.

PY-ROM

Stress OutSourced: A Haptic Social Network via Crowdsourcing

Chung, K., Chiu, C., Xiao, X., and Chi, P. 2009. Stress outsourced: a haptic social network via crowdsourcing. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ‘09. ACM, New York, NY, 2439-2448.

Stress OutSourced (SOS) is a peer-to-peer network that allows anonymous users to send each other therapeutic massages to relieve stress. By applying the emerging concept of crowdsourcing to haptic therapy, SOS brings physical and affective dimensions to our already networked lifestyle while preserving the privacy of its members. This paper first describes the system, its three unique design choices regarding privacy model, combining mobility and scalability, and affective communication for an impersonal crowd, and contrasts them with other efforts in their respective areas. Finally, this paper describes future work and opportunities in the area of haptic social networks.

Stress OutSourced

Some Challenges for Designers of Shape Changing Interfaces

Zigelbaum, J., Labrune, J.B. Some Challenges for Designers of Shape Changing Interfaces. CHI 2009 Workshop on Transitive Materials (2009).

In this paper we describe some challenges we find in the design of shape changing user interfaces through our own work and thoughts on the current state of the art in HCI. Due to the large set of possibilities for shape changing materials we are faced with a too-large constraint system. Without a good understanding and the beginning of a standardization or physical language for shape change it will be hard to design interactions that make sense beyond those in very limited, one-off applications. We are excited by the challenge that this poses to researchers and look forward to understanding how to use programmable and shape changing materials in the future.

SpeakCup

Fusing Computation into Mega-Affordance Objects

Chung, K., Ishii, H., Fusing computation into mega-affordance objects. CHI 2009 Workshop on Transitive Materials (2009).

In this paper, I present the concept of “Mega-Affordance Objects” (MAOs). An MAO is a common object with a primitive form factor that exhibits multiple affordances and can perform numerous improvised functions in addition to its original one. In order to broaden the reach of Tangible User Interfaces (TUIs) and create compelling everyday applications, I propose applying computational power to Mega-Affordance Objects that are highly adaptable and frequently used. This approach will leverage the capabilities of smart materials and contribute to the principles of Organic User Interface (OUI) design.

 


Spime Builder: A Tangible Interface for Designing Hyperlinked Objects

Bonanni, L., Vargas, G., Chao, N., Pueblo, S., and Ishii, H. 2009. Spime builder: a tangible interface for designing hyperlinked objects. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 263-266.

Ubiquitous computing is fostering an explosion of physical artifacts that are coupled to digital information – so-called Spimes. We introduce a tangible workbench that allows for the placement of hyperlinks within physical models to couple physical artifacts with located interactive digital media. A computer vision system allows users to model three-dimensional objects and environments in real-time using physical materials and to place hyperlinks in specific areas using laser pointer gestures. We present a working system for real-time physical/digital exhibit design, and propose the means for expanding the system to assist Design for the Environment strategies in product design.

Proverbial Wallet: Tangible Interface for Financial Awareness

Kestner, J., Leithinger, D., Jung, J., and Petersen, M. 2009. Proverbial wallet: tangible interface for financial awareness. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 55-56.

We propose a tangible interface concept for communicating personal financial information in an ambient and relevant manner. The concept is embodied in a set of wallets that provide the user with haptic feedback about personal financial metrics. We describe how such feedback can inform purchasing decisions and improve general financial awareness.

Stop-Motion Prototyping for Tangible Interfaces

Bonanni, L. and Ishii, H. 2009. Stop-motion prototyping for tangible interfaces. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ‘09. ACM, New York, NY, 315-316.

Stop-motion animation brings the constraints of the body, space and materials into video production. Building on the tradition of video prototyping for interaction design, stop motion is an effective technique for concept development in the design of Tangible User Interfaces. This paper presents a framework for stop-motion prototyping and the results of two workshops based on stop-motion techniques including pixillation, claymation and time-lapse photography. The process of stop-motion prototyping fosters collaboration, legibility and rapid iterative design in a physical context that can be useful to the early stages of tangible interaction design.

 

Design of Haptic Interfaces for Psychotherapy

Vaucelle, C., Bonanni, L., and Ishii, H. 2009. Design of haptic interfaces for therapy. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ‘09. ACM, New York, NY, 467-470.

Touch is fundamental to our emotional well-being. Medical science is starting to understand and develop touch-based therapies for autism spectrum, mood, anxiety and borderline disorders. Based on the most promising touch therapy protocols, we are presenting the first devices that simulate touch through haptic devices to bring relief and assist clinical therapy for mental health. We present several haptic systems that enable medical professionals to facilitate the collaboration between patients and doctors and potentially pave the way for a new form of non-invasive treatment that could be adapted from use in care-giving facilities to public use. We developed these prototypes working closely with a team of mental health professionals.

Psychohaptics

Play-it-by-eye! Collect Movies and Improvise Perspectives with Tangible Video Objects.

Vaucelle, C. and Ishii, H. 2009. Play-it-by-eye! Collect movies and improvise perspectives with tangible video objects. In Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), Special Issue: Tangible Interaction, 23, 305–316. Cambridge University Press.

We present an alternative video-making framework for children with tools that integrate video capture with movie production. We propose different forms of interaction with physical artifacts to capture storytelling. Play interactions as input to video editing systems assuage the interface complexities of film construction in commercial software. We aim to motivate young users in telling their stories, extracting meaning from their experiences by capturing supporting video to accompany their stories, and driving reflection on the outcomes of their movies. We report on our design process over the course of four research projects that span from a graphical user interface to a physical instantiation of video. We interface the digital and physical realms using tangible metaphors for digital data, providing a spontaneous and collaborative approach to video composition. We evaluate our systems during observations with 4- to 14-year-old users and analyze their different approaches to capturing, collecting, editing, and performing visual and sound clips.

Picture This!

 

Cost-effective Wearable Sensor to Detect EMF

Vaucelle, C., Ishii, H., and Paradiso, J. A. 2009. Cost-effective wearable sensor to detect EMF. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI’09. ACM, New York, NY, 4309-4314.

In this paper we present the design of a cost-effective wearable sensor to detect and indicate the strength and other characteristics of the electric field emanating from a laptop display. Our Electromagnetic Field Detector Bracelet can provide an immediate awareness of electric fields radiated from an object used frequently. Our technology thus supports awareness of ambient background emanation beyond human perception. We discuss how detection of such radiation might help to “fingerprint” devices and aid in applications that require determination of indoor location.

EMF Bracelet

2008

Sculpting Behavior: A Tangible Language for Hands-On Play and Learning

Hayes Raffle. Sculpting Behavior: A Tangible Language for Hands-On Play and Learning. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008.

For over a century, educators and constructivist theorists have argued that children learn by actively forming and testing – constructing – theories about how the world works. Recent efforts in the design of “tangible user interfaces” (TUIs) for learning have sought to bring together interaction models like direct manipulation and pedagogical frameworks like constructivism to make new, often complex, ideas salient for young children. Tangible interfaces attempt to eliminate the distance between the computational and physical world by making behavior directly manipulable with one’s hands. In the past, systems for children to model behavior have been either intuitive-but-simple (e.g. curlybot) or complex-but-abstract (e.g. LEGO Mindstorms). In order to develop a system that supports a user’s transition from intuitive-but-simple constructions to constructions that are complex-but-abstract, I draw upon constructivist educational theories, particularly Bruner’s theories of how learning progresses through enactive then iconic and then symbolic representations.

This thesis presents an example system and set of design guidelines to create a class of tools that helps people transition from simple-but-intuitive exploration to abstract-and-flexible exploration. The Topobo system is designed to facilitate mental transitions between different representations of ideas, and between different tools. A modular approach, with an inherent grammar, helps people make such transitions. With Topobo, children use enactive knowledge, e.g. knowing how to walk, as the intellectual basis to understand a scientific domain, e.g. engineering and robot locomotion. Queens, backpacks, Remix and Robo add various abstractions to the system, and extend the tangible interface. Children use Topobo to transition from hands-on knowledge to theories that can be tested and reformulated, employing a combination of enactive, iconic and symbolic representations of ideas.

SpeakCup: Simplicity, BABL, and Shape Change

Zigelbaum, J., Chang, A., Gouldstone, J., Monzen, J. J., and Ishii, H. 2008. SpeakCup: simplicity, BABL, and shape change. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ‘08. ACM, New York, NY, 145-146.

In this paper we present SpeakCup, a simple tangible interface that uses shape change to convey meaning in its interaction design. SpeakCup is a voice recorder in the form of a soft silicone disk with embedded sensors and actuators. Advances in sensor technology and material science have provided new ways for users to interact with computational devices. Rather than issuing commands to a system via abstract and multi-purpose buttons the door is open for more nuanced and application-specific approaches. Here we explore the coupling of shape and action in an interface designed for simplicity while discussing some questions that we have encountered along the way.

SpeakCup

Picture This! Film assembly using toy gestures

Vaucelle, C. and Ishii, H. 2008. Picture this!: film assembly using toy gestures. In Proceedings of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY, 350-359.

We present Picture This!, a new input device embedded in children’s toys for video composition. It consists of a new form of interaction for children’s capturing of storytelling with physical artifacts. It functions as a video and storytelling performance system in that children craft videos with and about character toys as the system analyzes their gestures and play patterns. Children’s favorite props alternate between characters and cameramen in a film. As they play with the toys to act out a story, they conduct film assembly. We position our work as ubiquitous computing that supports children’s tangible interaction with digital materials. During user testing, we observed children ages 4 to 10 playing with Picture This!. We assess to what extent gesture interaction with objects for video editing allows children to explore visual perspectives in storytelling. A new genre of Gesture Object Interfaces as exemplified by Picture This relies on the analysis of gestures coupled with objects to represent bits.

Picture This!

From Touch Sensitive to Aerial Jewelry

Cati Vaucelle. From Touch Sensitive to Aerial Jewelry (Book Chapter). In Fashionable Technology, The intersection of Design, Fashion, Science, and Technology. Editor Seymour, S., Springer-Verlag Wien New York, 2008

Now that we constantly travel by plane and use GIS, Google Maps, and satellite imagery, our vision has expanded. Our everyday objects have a language that adapts itself to our influences. On the other hand, just as the car influenced painting and the representation of space and movement, we want to show how the use of new technologies can change the way we design personal objects, as exemplified by Aerial Jewelry.

Handsaw: Tangible Exploration of Volumetric Data by Direct Cut-Plane Projection

Bonanni, L., Alonso, J., Chao, N., Vargas, G., and Ishii, H. 2008. Handsaw: tangible exploration of volumetric data by direct cut-plane projection. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 251-254.

Tangible User Interfaces are well-suited to handling three-dimensional data sets by direct manipulation of real objects in space, but current interfaces can make it difficult to look inside dense volumes of information. This paper presents the SoftSaw, a system that detects a virtual cut-plane projected by an outstretched hand or laser-line directly on an object or space and reveals sectional data on an adjacent display. By leaving the hands free and using a remote display, these techniques can be shared between multiple users and integrated into everyday practice. The SoftSaw has been prototyped for scientific visualizations in medicine, engineering and urban design. User evaluations suggest that using a hand is more intuitive while projected light is more precise than keyboard and mouse control, and the SoftSaw system has the potential to be used more effectively by novices and in groups.
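
Revealing sectional data at a detected cut-plane, as described above, reduces to sampling a slice out of a volume at the plane's position. A minimal sketch under that reading (axis-aligned planes only; the names are hypothetical) follows.

```python
import numpy as np

def cut_plane_slice(volume: np.ndarray, axis: int, position: float) -> np.ndarray:
    """Return the 2D section of a 3D scalar volume at a normalized position.

    volume:   3D array, e.g. a medical or engineering scan
    axis:     0, 1, or 2; the axis perpendicular to the detected cut-plane
    position: 0.0-1.0, where along that axis the hand or laser line was found
    """
    index = int(round(position * (volume.shape[axis] - 1)))
    return np.take(volume, index, axis=axis)

# Example: the middle section of a synthetic 64x64x64 volume.
section = cut_plane_slice(np.random.rand(64, 64, 64), axis=0, position=0.5)
```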

Handsaw

Renaissance Panel: The Roles of Creative Synthesis in Innovation

Hockenberry, M. and Bonanni, L. 2008. Renaissance panel: the roles of creative synthesis in innovation. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2237-2240.

The Renaissance ideal can be expressed as a creative synthesis between cultural disciplines, standing in stark contrast to our traditional focus on scientific specialization. This panel presents a number of experts who approach the synthesis of art and science as the modus operandi for their work, using it as a tool for creativity, research, and practice. Understanding these approaches allows us to identify the roles of synthesis in successful innovation and improve the implementation of interdisciplinary synthesis in research and practice.

Future Craft: How Digital Media is Transforming Product Design

Bonanni, L., Parkes, A., and Ishii, H. 2008. Future craft: how digital media is transforming product design. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2553-2564.

The open and collective traditions of the interaction community have created new opportunities for product designers to engage in the social issues around industrial production. This paper introduces Future Craft, a design methodology which applies emerging digital tools and processes to product design toward new objects that are socially and environmentally sustainable. We present the results of teaching the Future Craft curriculum at the MIT Media Lab including principal themes of public, local and personal design, resources, assignments and student work. Novel ethnographic methods are discussed with relevance to informing the design of physical products. We aim to create a dialogue around these themes for the product design and HCI communities.

Slurp: Tangibility, Spatiality, and an Eyedropper

Zigelbaum, J., Kumpf, A., Vazquez, A., and Ishii, H. 2008. Slurp: tangibility spatiality and an eyedropper. In CHI ‘08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 2565-2574.

The value of tangibility for ubiquitous computing is in its simplicity: when faced with the question of how to grasp a digital object, why not just pick it up? But this is problematic; digital media is powerful due to its extreme mutability and is therefore resistant to the constraints of static physical form. We present Slurp, a tangible interface for locative media interactions in a ubiquitous computing environment. Based on the affordances of an eyedropper, Slurp provides haptic and visual feedback while extracting and injecting pointers to digital media between physical objects and displays.

Slurp

Reality-Based Interaction: A Framework for Post-WIMP Interfaces

Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 201-210.

Abstract

Reality-Based Interaction

 

AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling (in Japanese)

Anabuki, M., Ishii, H. 2008. AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling. Transactions of the Virtual Reality Society of Japan, Special Issue on Mixed Reality 4 (Japanese Edition), Vol.13, No.2, 2008

Abstract

Tangible Bits: Beyond Pixels

Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ‘08. ACM, New York, NY, xv-xxv.

Abstract

Tangible Bits

 

Topobo in the Wild: Longitudinal Evaluations of Educators Appropriating a Tangible Interface

Parkes, A., Raffle, H., and Ishii, H. 2008. Topobo in the wild: longitudinal evaluations of educators appropriating a tangible interface. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ‘08. ACM, New York, NY, 1129-1138.

Abstract

Topobo

Topobo Backpacks

 

The Everyday Collector

Vaucelle, C. The Everyday Collector. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY.

Abstract

Electromagnetic Field Detector Bracelet

Vaucelle, C., Ishii, H., and Paradiso, J. Electromagnetic Field Detector Bracelet. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 – 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY.

Abstract

EMF Bracelet

 

2007

TILTle: exploring dynamic balance

Modlitba, P., Offenhuber, D., Ting, M., Tsigaridi, D., and Ishii, H. 2007. TILTle: exploring dynamic balance. In Proceedings of the 2007 Conference on Designing Pleasurable Products and interfaces (Helsinki, Finland, August 22 – 25, 2007). DPPI ‘07. ACM, New York, NY, 466-472. DOI= http://doi.acm.org/10.1145/1314161.1314207

Abstract

The Sound of Touch

David Merrill and Hayes Raffle. 2007. The sound of touch. In ACM SIGGRAPH 2007 posters (SIGGRAPH ‘07). ACM, New York, NY, USA, Article 138.

Abstract

The Sound of Touch

 

SP3X: a six-degree of freedom device for natural model creation

Richard Whitney. SP3X: a six-degree of freedom device for natural model creation. Thesis (M.S.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.

Abstract

SP3X

 

The Sound of Touch: Physical Manipulation of Digital Sound.

David Merrill, Hayes Raffle, and Roberto Aimi. 2008. The sound of touch: physical manipulation of digital sound. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (CHI ‘08). ACM, New York, NY, USA, 739-742.

Abstract

The Sound of Touch

 

The Sound of Touch

David Merrill and Hayes Raffle. 2007. The sound of touch. In CHI ‘07 extended abstracts on Human factors in computing systems (CHI EA ‘07). ACM, New York, NY, USA, 2807-2812.

Abstract

The Sound of Touch

 

Simplicity in Interaction Design

Chang, A., Gouldstone, J., Zigelbaum, J., and Ishii, H. 2007. Simplicity in interaction design. In Proceedings of the 1st international Conference on Tangible and Embedded interaction (Baton Rouge, Louisiana, February 15 – 17, 2007). TEI ‘07. ACM, New York, NY, 135-138.

Abstract

SpeakCup

 

Zstretch: A Stretchy Fabric Music Controller

Chang, A. and Ishii, H. 2007. Zstretch: a stretchy fabric music controller. In Proceedings of the 7th international Conference on New interfaces For Musical Expression (New York, New York, June 06 – 10, 2007). NIME ‘07. ACM, New York, NY, 46-49.

Abstract

AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve

Anabuki, M. and Ishii, H. “AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve.” In 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), pp. 55–66, 13–16 Nov. 2007.

Abstract

AR-Jig

 

Interfacing Video Capture, Editing and Publication in a Tangible Environment

Vaucelle, C. and Ishii, H. 2007. Interfacing Video Capture, Editing and Publication in a Tangible Environment. In Baranauskas et al. (Eds.): INTERACT 2007, LNCS 4663, Lecture Notes in Computer Science, Part II, pp. 1–14, 2007. Springer Berlin / Heidelberg publisher.

Abstract

Keywords: Tangible User Interface – Video – Authorship – Mobile Technology – Digital Media – Video Jockey – Learning – Children – Collaboration

Picture This!

 

Tug n Talk: A Belt Buckle for Tangible Tugging Communication

Adcock, M., Harry, D., Boch, M., Poblano, R.-D. and Harden, V. Tug n’ Talk: A Belt Buckle for Tangible Tugging Communication. Presented at alt.chi 2007

Abstract

Touch: Sensitive Apparel

Vaucelle, C. and Abbas, Y. 2007. Touch: sensitive apparel. In CHI ‘07 Extended Abstracts on Human Factors in Computing Systems (San Jose, CA, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 2723-2728.

Abstract

Touch·Sensitive

 

Jabberstamp: Embedding sound and voice in traditional drawings

Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In Proceedings of the 6th international Conference on interaction Design and Children (Aalborg, Denmark, June 06 – 08, 2007). IDC ‘07. ACM, New York, NY, 137-144.

Abstract

We describe our design process and analyze the mechanism between the act of drawing and the one of telling, defining interdependencies between the two activities. In a series of studies, children ages 4—8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children’s narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recording in their drawing, which suggests future opportunities for hybrid voice-visual tools to support children’s emergent literacy.

Jabberstamp

 

Remix and Robo: Improvisational performance and competition with modular robotic building toys

Raffle, H., Yip, L., and Ishii, H. 2007. Remix and robo: sampling, sequencing and real-time control of a tangible robotic construction system. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 – 09, 2007). SIGGRAPH ‘07. ACM, New York, NY, 35.

Abstract

Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users age 7-adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.

Topobo

Remix+Robo Topobo

 

Mechanical Constraints as Computational Constraints in Tabletop Tangible Interfaces

Patten, J. and Ishii, H. 2007. Mechanical constraints as computational constraints in tabletop tangible interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 809-818.

Abstract

PICO

 

Senspectra: A Computationally Augmented Physical Modeling Toolkit for Sensing and Visualization of Structural Strain

LeClerc, V., Parkes, A., and Ishii, H. 2007. Senspectra: a computationally augmented physical modeling toolkit for sensing and visualization of structural strain. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ‘07. ACM, New York, NY, 801-804.

Abstract

Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain. While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in architecture and design disciplines with systems for structural engineering analysis. This offers direct manipulation augmented with visual feedback for an intuitive approach to physical real-time finite element analysis, particularly for organic forms.

Senspectra

 

Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology

Zigelbaum, J., and Csikszentmihályi, C. Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology. CHI 2007 Workshop on Tangible User Interfaces in Context and Theory (2007).

Abstract

Jabberstamp: embedding sound and voice in traditional drawings

Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 – 09, 2007). SIGGRAPH ‘07. ACM, New York, NY, 32.

Abstract

Jabberstamp is the first tool that allows children to synthesize their drawings and voices. To use Jabberstamp, children create drawings, collages or paintings on normal paper. They press a special rubber stamp onto the page to record sounds into their drawings. When children touch the marks of the stamp with a small trumpet, they can hear the sounds playback, retelling the stories they created. In a series of studies, children ages 4-8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children’s narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recording in their drawing, which suggests future opportunities for hybrid voice-visual tools to support children’s emergent literacy.

Jabberstamp

 

Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system

Hayes Raffle, Hiroshi Ishii, and Laura Yip. 2007. Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system. In Proceedings of the 6th international conference on Interaction design and children (IDC ‘07). ACM, New York, NY, USA, 89-96.

Abstract

Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users age 7-adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.

Remix+Robo Topobo

Topobo

 

2006

Interaction Techniques for Musical Performance with Tabletop Tangible Interfaces

Patten, J., Recht, B., and Ishii, H. 2006. Interaction techniques for musical performance with tabletop tangible interfaces. In Proceedings of the 2006 ACM SIGCHI international Conference on Advances in Computer Entertainment Technology (Hollywood, California, June 14 – 16, 2006). ACE ‘06, vol. 266. ACM, New York, NY, 27.

Abstract

Audiopad

 

Glume: Exploring Materiality in a Soft Augmented Modular Modeling System

Parkes, A., LeClerc, V., and Ishii, H. 2006. Glume: exploring materiality in a soft augmented modular modeling system. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 1211-1216.

Abstract

Glume

 

BodyBeats: Whole-Body, Musical Interfaces for Children

Zigelbaum, J., Millner, A., Desai, B., and Ishii, H. 2006. BodyBeats: whole-body, musical interfaces for children. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 1595-1600.

Abstract

BodyBeats

 

3D and Sequential Representations of Spatial Relationships among Photos

Anabuki, M. and Ishii, H. 2006. 3D and sequential representations of spatial relationships among photos. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 472-477.

Abstract

Mechanical Constraints as Common Ground between People and Computers

Patten, J. M. 2006. Mechanical Constraints as Common Ground between People and Computers. Doctoral Thesis. UMI Order Number: AAI0808956. Massachusetts Institute of Technology.

Abstract

PICO

 

SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface

Vincent Leclerc. SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.

Abstract

While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in the architecture and industrial design disciplines with systems for structural engineering analysis, offering an intuitive approach for physical real-time finite element analysis. Utilizing direct manipulation augmented with visual feedback, the system gives users valuable insights on the global behavior of a constructed system defined as a network of discrete elements.

Senspectra

 

The Texture of Light

Vaucelle, C. 2006. The texture of light. In ACM SIGGRAPH 2006 Research Posters (Boston, Massachusetts, July 30 – August 03, 2006). SIGGRAPH ‘06. ACM, New York, NY, 27.

Abstract

Affective TouchCasting

Bonanni, L. and Vaucelle, C. 2006. Affective TouchCasting. In ACM SIGGRAPH 2006 Sketches (Boston, Massachusetts, July 30 – August 03, 2006). SIGGRAPH ‘06. ACM, New York, NY, 35.

Abstract

Taptap

 

Collaborative Simulation Interface for Planning Disaster Measures

Kobayashi, K., Narita, A., Hirano, M., Kase, I., Tsuchida, S., Omi, T., Kakizaki, T., and Hosokawa, T. 2006. Collaborative simulation interface for planning disaster measures. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 977-982.

Disaster Simulation

 

PlayPals: Tangible Interfaces for Remote Communication and Play

Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. PlayPals: tangible interfaces for remote communication and play. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 574-579.

Abstract

Playpals

 

TapTap: A Haptic Wearable for Asynchronous Distributed Touch Therapy

Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. TapTap: a haptic wearable for asynchronous distributed touch therapy. In CHI ‘06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ‘06. ACM, New York, NY, 580-585.

Abstract

Taptap

 

Beyond Record and Play: Backpacks: Tangible Modulators for Kinetic Behavior

Raffle, H., Parkes, A., Ishii, H., and Lifton, J. 2006. Beyond record and play: backpacks: tangible modulators for kinetic behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ‘06. ACM, New York, NY, 681-690.

Abstract

Topobo Backpacks

 

2005

Designing the “World as your Palette”

Ryokai, K., Marti, S., and Ishii, H. 2005. Designing the world as your palette. In CHI ‘05 Extended Abstracts on Human Factors in Computing Systems (Portland, OR, USA, April 02 – 07, 2005). CHI ‘05. ACM, New York, NY, 1037-1049.

Abstract

The world as a palette: painting with attributes of the environment

Kimiko Ryokai. The world as a palette: painting with attributes of the environment. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.

2004

Bottles: A Transparent Interface as a Tribute to Mark Weiser

Hiroshi Ishii, “Bottles: A Transparent Interface as a Tribute to Mark Weiser.” IEICE TRANSACTIONS on Information and Systems Vol.E87-D No.6 pp.1299-1311

Abstract

musicBottles

bottlogues

 

Topobo: A 3-D Constructive Assembly System with Kinetic Memory

Hayes Solos Raffle. Topobo: A 3-D Constructive Assembly System with Kinetic Memory. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.

Topobo

 

Tangible User Interfaces (TUIs): A Novel Paradigm for GIS

Ratti, C., Wang, Y., Ishii, H., Piper, B., and Frenchman, D., “Tangible User Interfaces (TUIs): A Novel Paradigm for GIS,” Trans. GIS, vol. 8, no. 4, 2004, pp. 407–421.

Abstract

Illuminating Clay

SandScape

 

egaku: Enhancing the Sketching Process

Yoon, J., Ryokai, K., Dyner, C., Alonso, J., and Ishii, H. 2004. egaku: enhancing the sketching process. In ACM SIGGRAPH 2004 Posters (Los Angeles, California, August 08 – 12, 2004). R. Barzel, Ed. SIGGRAPH ‘04. ACM, New York, NY, 42.

Abstract

egaku

 

Phoxel-Space: an Interface for Exploring Volumetric Data with Physical Voxels

Ratti, C., Wang, Y., Piper, B., Ishii, H., and Biderman, A. 2004. PHOXEL-SPACE: an interface for exploring volumetric data with physical voxels. In Proceedings of the 5th Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Cambridge, MA, USA, August 01 – 04, 2004). DIS ‘04. ACM, New York, NY, 289-296.

Abstract

Phoxel Space

 

I/O Brush: Drawing with Everyday Objects as Ink

Ryokai, K., Marti, S., and Ishii, H. 2004. I/O brush: drawing with everyday objects as ink. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ‘04. ACM, New York, NY, 303-310.

Abstract

I/O Brush

 

Topobo: A Constructive Assembly System with Kinetic Memory

Raffle, H. S., Parkes, A. J., and Ishii, H. 2004. Topobo: a constructive assembly system with kinetic memory. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ‘04. ACM, New York, NY, 647-654.

Abstract

Topobo

 

Bringing clay and sand into digital design — continuous tangible user interfaces

Ishii, H., Ratti, C., Piper, B., Wang, Y., Biderman, A., and Ben-Joseph, E. 2004. Bringing Clay and Sand into Digital Design — Continuous Tangible user Interfaces. BT Technology Journal 22, 4 (Oct. 2004), 287-299.

Abstract

Illuminating Clay

SandScape

 

Super Cilia Skin: A Textural Interface

Raffle, H., Tichenor, J., Ishii, H. 2004. Super Cilia Skin: A Textural Interface. Textile, Volume 2, Issue 3, pp. 1–19.

Abstract

Super Cilia Skin

 

Topobo: A Gestural Design Tool with Kinetic Memory

Amanda Parkes. Topobo: A Gestural Design Tool with Kinetic Memory. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.

Abstract

PINS: a haptic computer interface system

Bradley Carter Kaanta. PINS: a haptic computer interface system. Thesis (M. Eng. and S.B.)—Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.

2003

musicBottles Manual for Samsung Exhibition July 2003

musicBottles

 

Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries

Ullmer B, Ishii H, Jacob R.J.K (2003) Tangible query interfaces: physically constrained tokens for manipulating database queries. In: Proceedings of the 9th IFIP international conference on human-computer interaction (INTERACT 2003), Zurich, Switzerland, September 2003.

Abstract

Tangible query interfaces

 

IP Network Design Workbench

 

Super Cilia Skin: An Interactive Membrane

Raffle, H., Joachim, M. W., and Tichenor, J. 2003. Super cilia skin: an interactive membrane. In CHI ‘03 Extended Abstracts on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 – 10, 2003). CHI ‘03. ACM, New York, NY, 808-809.

Abstract

Super Cilia Skin

 

Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces

Daniel Maynes-Aminzade. Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.

The actuated workbench: 2D actuation in tabletop tangible interfaces

Gian Antonio Pangaro. The actuated workbench: 2D actuation in tabletop tangible interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.

Abstract

2002

The Actuated Workbench: Computer-Controlled Actuation in Tabletop Tangible Interfaces

Pangaro, G., Maynes-Aminzade, D., Ishii, H. 2003. The Actuated Workbench: Computer-Controlled Actuation in Tabletop Interfaces. ACM Trans. Graph. 22, 3 (Jul. 2003), 699-699.

Abstract

Actuated Workbench

 

ComTouch: A Vibrotactile Communication Device

Angela Chang. ComTouch: A Vibrotactile Mobile Communication Device. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

ComTouch

 

Tangible Interfaces for Manipulating Aggregates of Digital Information

Brygg Ullmer. Tangible Interfaces for Manipulating Aggregates of Digital Information. Thesis (Ph. D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

mediaBlocks

Tangible query interfaces

 

The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis

Ben Piper. The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

CADcast

Illuminating Clay

 

Hover: Conveying Remote Presence

Maynes-Aminzade, D., Tan, B., Goulding, K., and Vaucelle, C. 2002. Hover: conveying remote presence. In ACM SIGGRAPH 2002 Conference Abstracts and Applications (San Antonio, Texas, July 21 – 26, 2002). SIGGRAPH ‘02. ACM, New York, NY, 194-194.

Abstract

Hover

 

Audiopad: A Tag Based Interface for Musical Performance

Patten, J., Recht, B., and Ishii, H. 2002. Audiopad: a tag-based interface for musical performance. In Proceedings of the 2002 Conference on New interfaces For Musical Expression (Dublin, Ireland, May 24 – 26, 2002). E. Brazil, Ed. New Interfaces For Musical Expression. National University of Singapore, Singapore, 1-6.

Abstract

Audiopad

 

Illuminating Clay: A Tangible Interface with potential GRASS applications

Piper B., Ratti C., Ishii H., 2002, Illuminating Clay: a tangible interface with potential GRASS applications. Proceedings of the open-source GIS – GRASS users conference, Trento, Italy, September 2002.

Abstract

Illuminating Clay

SandScape

 

ComTouch: A Vibrotactile Communication Device

Chang, A., O’Modhrain, S., Jacob, R., Gunther, E., and Ishii, H. 2002. ComTouch: design of a vibrotactile communication device. In Proceedings of the 4th Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (London, England, June 25 – 28, 2002). DIS ‘02. ACM, New York, NY, 312-320.

Abstract

ComTouch

 

Bottles: Design of Transparent Interface for Accessing Digital Information

musicBottles

genieBottles

 

PegBlocks

 

Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis

Piper, B., Ratti, C., and Ishii, H. 2002. Illuminating clay: a 3-D tangible interface for landscape analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 355-362.

Abstract

CADcast

Illuminating Clay

SandScape

 

A Tangible Interface for Organizing Information Using a Grid

Jacob, R. J., Ishii, H., Pangaro, G., and Patten, J. 2002. A tangible interface for organizing information using a grid. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 339-346.

Abstract

Senseboard

 

Dolltalk: a computational toy to enhance children’s creativity

Vaucelle, C. and Jehan, T. 2002. Dolltalk: a computational toy to enhance children’s creativity. In CHI ‘02 Extended Abstracts on Human Factors in Computing Systems (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ‘02. ACM, New York, NY, 776-777.

Abstract

Dolltalk

 

Dolltalk: A computational toy to enhance narrative perspective-taking

Cati Vaucelle. Dolltalk: A computational toy to enhance narrative perspective-taking. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Dolltalk

 

Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation

Hiroshi Ishii, Eran Ben-Joseph, John Underkoffler, Luke Yeung, Dan Chak, Zahra Kanji, and Ben Piper. 2002. Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation. In Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR ‘02). IEEE Computer Society, Washington, DC, USA, 203-.

Abstract

Urban Simulation

 

2001

Urban Simulation and the Luminous Planning Table

Ben-Joseph, E., Ishii, H., Underkoffler, J., Piper, B., and Yeung, L. 2001. Urban Simulation and the Luminous Planning Table: Bridging the Gap between the Digital and the Tangible. Journal of Planning Education and Research, 21, 195-202.

Abstract

Urban Simulation

 

Pinwheels: Visualizing Information Flow in an Architectural Space

Hiroshi Ishii, Sandia Ren, and Phil Frei. 2001. Pinwheels: visualizing information flow in an architectural space. In CHI ‘01 extended abstracts on Human factors in computing systems (CHI EA ‘01). ACM, New York, NY, USA, 111-112. DOI=10.1145/634067.634135 http://doi.acm.org/10.1145/634067.634135

Abstract

pinwheels

 

Bottles as a minimal interface to access digital information

Hiroshi Ishii, Ali Mazalek, and Jay Lee. 2001. Bottles as a minimal interface to access digital information. In CHI ‘01 extended abstracts on Human factors in computing systems (CHI EA ‘01). ACM, New York, NY, USA, 187-188. DOI=10.1145/634067.634180 http://doi.acm.org/10.1145/634067.634180

Abstract

musicBottles

 

Sensetable: A Wireless Object tracking platform for tangible user interfaces.

James Patten. Sensetable: A Wireless Object tracking platform for tangible user interfaces. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Sensetable

 

Tangible Interfaces for Interactive Point-of-View Narratives

Alexandra Mazalek. Tangible Interfaces for Interactive Point-of-View Narratives. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

genieBottles

Tangible Viewpoints

 

Telling Tales: A new way to encourage written literacy through oral language

Mike Ananny. Telling Tales: A new way to encourage written literacy through oral language. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

TellTale

 

genieBottles: An Interactive Narrative in Bottles

A. Mazalek, A. Wood, and H. Ishii. Geniebottles: An interactive narrative in bottles. In Conference Abstracts and Applications SIGGRAPH 2001, page 189, Los Angeles, California USA, August 2001.

genieBottles

 

LumiTouch: An Emotional Communication Device

Chang, A., Resner, B., Koerner, B., Wang, X., and Ishii, H. 2001. LumiTouch: an emotional communication device. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 313-314.

Abstract

ComTouch

LumiTouch

 

The HomeBox: A Web Content Creation Tool for The Developing World

Piper, B. and Hwang, R. E. 2001. The HomeBox: a web content creation tool for the developing world. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 145-146.

Abstract

Strata/ICC: Physical Models as Computational Interfaces

Ullmer, B., Kim, E., Kilian, A., Gray, S., and Ishii, H. 2001. Strata/ICC: physical models as computational interfaces. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 – April 05, 2001). CHI ‘01. ACM, New York, NY, 373-374.

Abstract

Strata

 

Designing Touch-based Communication Devices

Chang, A., Kanji, Z., Ishii, H. 2001. Designing Touch-based Communication Devices. CHI 2001 Workshop: Universal design: Towards universal access in the Information Society

ComTouch

 

DataTiles: A Modular Platform for Mixed Physical and Graphical Interactions

Rekimoto, J., Ullmer, B., and Oba, H. 2001. DataTiles: a modular platform for mixed physical and graphical interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI ‘01. ACM, New York, NY, 269-276.

Abstract

Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces

Patten, J., Ishii, H., Hines, J., and Pangaro, G. 2001. Sensetable: a wireless object tracking platform for tangible user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI ‘01. ACM, New York, NY, 253-260.

Abstract

Sensetable

 

GeoSCAPE: designing a reconstructive tool for field archaeological excavation.

Jay Lee, Hiroshi Ishii, Blair Dunn, Victor Su, and Sandia Ren. 2001. GeoSCAPE: designing a reconstructive tool for field archaeological excavation. In CHI ‘01 extended abstracts on Human factors in computing systems (CHI EA ‘01). ACM, New York, NY, USA, 35-36.

Abstract

GeoSCAPE

 

2000

Emerging Frameworks for Tangible User Interfaces

Ullmer, B. and Ishii, H. 2000. Emerging frameworks for tangible user interfaces. IBM Syst. J. 39, 3-4 (Jul. 2000), 915-931.

Abstract

Urban Simulation

mediaBlocks

 

A Comparison of Spatial Organization Strategies in Graphical and Tangible User Interfaces

Patten, J. and Ishii, H. 2000. A comparison of spatial organization strategies in graphical and tangible user interfaces. In Proceedings of DARE 2000 on Designing Augmented Reality Environments (Elsinore, Denmark). DARE ‘00. ACM, New York, NY, 41-50.

Abstract

We observed that TUI subjects performed better at the location recall task than GUI subjects. In addition, some TUI subjects used the spatial relationship between specific blocks and parts of the environment to help them remember the content of those blocks, while GUI subjects did not do this. Those TUI subjects who reported encoding information using this strategy tended to perform better at the recall task than those who did not.

mediaBlocks

 

curlybot: Designing a New Class of Computational Toys

Frei, P., Su, V., Mikhak, B., and Ishii, H. 2000. curlybot: designing a new class of computational toys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, April 01 – 06, 2000). CHI ‘00. ACM, New York, NY, 129-136. DOI= http://doi.acm.org/10.1145/332040.332416

Abstract

curlybot

 

HandSCAPE: a vectorizing tape measure for on-site measuring applications

Jay Lee, Victor Su, Sandia Ren, and Hiroshi Ishii. 2000. HandSCAPE: a vectorizing tape measure for on-site measuring applications. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI ‘00). ACM, New York, NY, USA, 137-144.

Abstract

HandSCAPE

 

1999

musicBottles

Abstract

musicBottles

 

The Design of Personal Ambient Displays

Craig Wisneski. The Design of Personal Ambient Displays. Thesis (S.M.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999.

ambientROOM

Personal Ambient Display

 

The Design and Implementation of inTouch: A Distributed, Haptic Communication System

Victor Su. The Design and Implementation of inTouch: A Distributed, Haptic Communication System. Thesis (M. Eng. and S.B.)—Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.

inTouch

 

The I/O Bulb and the Luminous Room

John Underkoffler, The I/O Bulb and the Luminous Room, Thesis (Ph.D.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts & Sciences, 1999.

Urban Simulation

I/O Bulb and Luminous Room

 

Emancipated Pixels: Real-World Graphics in the Luminous Room

Underkoffler, J., Ullmer, B., and Ishii, H. 1999. Emancipated pixels: real-world graphics in the luminous room. In Proceedings of the 26th Annual Conference on Computer Graphics and interactive Techniques International Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 385-392.

Abstract

Urban Simulation

I/O Bulb and Luminous Room

 

Urp: A Luminous-Tangible Workbench for Urban Planning and Design

Underkoffler, J. and Ishii, H. 1999. Urp: a luminous-tangible workbench for urban planning and design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: the CHI Is the Limit (Pittsburgh, Pennsylvania, United States, May 15 – 20, 1999). CHI ‘99. ACM, New York, NY, 386-393.

Abstract

We then use comparisons among Urp and several earlier I/O Bulb applications as the basis for an understanding of luminous-tangible interactions, which result whenever an interface distributes meaning and functionality between physical objects and visual information projectively coupled to those objects. Finally, we briefly discuss two issues common to all such systems, offering them as informal thought-tools for the design and analysis of luminous-tangible interfaces.

I/O Bulb and Luminous Room

Urban Simulation

 

Towards the Distributed Visualization of Usage History

Paul Yarin. Towards the Distributed Visualization of Usage History. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999.

Curlybot

Phil Frei, Victor Su, and Hiroshi Ishii. 1999. Curlybot. In ACM SIGGRAPH 99 Conference abstracts and applications (SIGGRAPH ‘99). ACM, New York, NY, USA, 173-. DOI=10.1145/311625.311972 http://doi.acm.org/10.1145/311625.311972

curlybot

 

PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play

Hiroshi Ishii, Craig Wisneski, Julian Orbanes, Ben Chun, and Joe Paradiso. 1999. PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play. In Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit (CHI ‘99). ACM, New York, NY, USA, 394-401. DOI=10.1145/302979.303115

PingPongPlus

PingPongPlusPlus

 

1998

The Last Farewell: Traces of Physical Presence

Ishii, H. 1998. Reflections: “The last farewell”: traces of physical presence. interactions 5, 4 (Jul. 1998), 56-ff.

Abstract

Designing Kinetic Objects for Digital Information Display

Andy Dahley. Designing Kinetic Objects for Digital Information Display. Thesis (S.M.)—Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1998.

Abstract

inTouch

Expressive KineticObjects

 

Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects

Matthew Gorbet. Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.

Abstract

Triangles

 

Tangible Interfaces for Remote Communication and Collaboration

Scott Brave. Tangible Interfaces for Remote Communication and Collaboration. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.

Abstract

inTouch

 

mediaBlocks: Physical Containers,Transports, and Controls for Online Media

Ullmer, B., Ishii, H., and Glas, D. 1998. mediaBlocks: physical containers, transports, and controls for online media. In Proceedings of the 25th Annual Conference on Computer Graphics and interactive Techniques SIGGRAPH ‘98. ACM, New York, NY, 379-386.

Abstract

mediaBlocks

 

Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information

Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., Yarin, P. Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. CoBuild 1998.

Abstract

ambientROOM

Ambient Fixtures

pinwheels

 

Tangible Interfaces for Remote Collaboration and Communication

Scott Brave, Hiroshi Ishii, and Andrew Dahley. 1998. Tangible interfaces for remote collaboration and communication. In Proceedings of the 1998 ACM conference on Computer supported cooperative work (CSCW ‘98). ACM, New York, NY, USA, 169-178.

Abstract

inTouch

PSyBench

 

ambientROOM: Integrating Ambient Media with Architectural Space

Ishii, H., Wisneski, C., Brave, S., Dahley, A., Gorbet, M., Ullmer, B., and Yarin, P. 1998. ambientROOM: integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). CHI ‘98. ACM, New York, NY, 173-174.

Abstract

ambientROOM

 

Illuminating Light: An Optical Design Tool with a Luminous-Tangible Interface

Underkoffler, J. and Ishii, H. 1998. Illuminating light: an optical design tool with a luminous-tangible interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 542-549.

Abstract

We briefly introduce the I/O Bulb and Luminous Room concepts and discuss their current implementations. After an overview of the optical domain that the Illuminating Light system is designed to address, we present the overall system design and implementation, including that of an intermediary toolkit called voodoo, which provides a general facility for object identification and tracking.

I/O Bulb and Luminous Room

 

Triangles: Tangible Interface for Manipulation and Exploration of Digital Information Topography

Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56.

Abstract

Triangles

 

1997

The metaDESK: Models and Prototypes for Tangible User Interfaces

Ullmer, B. and Ishii, H. 1997. The metaDESK: models and prototypes for tangible user interfaces. In Proceedings of the 10th Annual ACM Symposium on User interface Software and Technology (Banff, Alberta, Canada, October 14 – 17, 1997). UIST ‘97. ACM, New York, NY, 223-232.

Abstract

Using the metaDESK platform, we are studying issues such as a) the physical embodiment of GUI (graphical user interface) widgets such as icons, handles, and windows; b) the coupling of everyday physical objects with the digital information that pertains to them.

metaDESK

Proxy-Distributed Computation

 

Models and Mechanisms for Tangible User Interfaces

Brygg Ullmer. Models and Mechanisms for Tangible User Interfaces. Thesis (M.S.)—Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997.

Abstract

metaDESK

ambientROOM

transBOARD

 

Triangles: Design of a Physical/Digital Construction Kit

Gorbet, M. G. and Orth, M. 1997. Triangles: design of a physical/digital construction kit. In Proceedings of the 2nd Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Amsterdam, The Netherlands, August 18 – 20, 1997). S. Coles, Ed. DIS ‘97. ACM, New York, NY, 125-128.

Abstract

Triangles

 

inTouch: A Medium for Haptic Interpersonal Communication

Brave, S. and Dahley, A. 1997. inTouch: a medium for haptic interpersonal communication. In CHI ‘97 Extended Abstracts on Human Factors in Computing Systems: Looking To the Future (Atlanta, Georgia, March 22 – 27, 1997). CHI ‘97. ACM, New York, NY, 363-364.

Abstract

inTouch

 

Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms

Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, United States, March 22 – 27, 1997). S. Pemberton, Ed. CHI ‘97. ACM, New York, NY, 234-241.

Abstract

This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems – the metaDESK, transBOARD and ambientROOM – to identify underlying research issues.

metaDESK

ambientROOM

transBOARD

Tangible Bits

 

1995

Bricks: Laying the Foundations for Graspable User Interfaces

Fitzmaurice, G. W., Ishii, H., and Buxton, W. A. 1995. Bricks: laying the foundations for graspable user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Denver, Colorado, United States, May 07 – 11, 1995). I. R. Katz, R. Mack, L. Marks, M. B. Rosson, and J. Nielsen, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 442-449

Abstract

Bricks

 

1994

Iterative Design of Seamless Collaboration Media

Hiroshi Ishii, Minoru Kobayashi, and Kazuho Arita. 1994. Iterative design of seamless collaboration media. Commun. ACM 37, 8 (August 1994), 83-97. DOI=10.1145/179606.179687 http://doi.acm.org/10.1145/179606.179687

TeamWorkStation

ClearBoard

 

1993

Integration of interpersonal space and shared workspace: ClearBoard design and experiments

Ishii, H., Kobayashi, M., and Grudin, J. 1993. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Trans. Inf. Syst. 11, 4 (Oct. 1993), 349-375. DOI= http://doi.acm.org/10.1145/159764.159762

ClearBoard

 

1992

ClearBoard: a seamless medium for shared drawing and conversation with eye contact

Ishii, H. and Kobayashi, M. 1992. ClearBoard: a seamless medium for shared drawing and conversation with eye contact. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Monterey, California, United States, May 03 – 07, 1992). P. Bauersfeld, J. Bennett, and G. Lynch, Eds. CHI ‘92. ACM, New York, NY, 525-532. DOI= http://doi.acm.org/10.1145/142750.142977

Abstract

ClearBoard

 

Integration of inter-personal space and shared workspace: ClearBoard design and experiments

Ishii, H., Kobayashi, M., and Grudin, J. 1992. Integration of inter-personal space and shared workspace: ClearBoard design and experiments. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, Canada, November 01 – 04, 1992). CSCW ‘92. ACM, New York, NY, 33-42. DOI= http://doi.acm.org/10.1145/143457.143459

Abstract

ClearBoard

 

1991

Toward An Open Shared Workspace: Computer and Video Fusion Approach of TeamWorkStation

Hiroshi Ishii and Naomi Miyake. 1991. Toward an open shared workspace: computer and video fusion approach of TeamWorkStation. Commun. ACM 34, 12 (December 1991), 37-50. DOI=10.1145/125319.125321 http://doi.acm.org/10.1145/125319.125321

Abstract

TeamWorkStation

 

1990

TeamWorkStation: towards a seamless shared workspace

H. Ishii. 1990. TeamWorkStation: towards a seamless shared workspace. In Proceedings of the 1990 ACM conference on Computer-supported cooperative work (CSCW ‘90). ACM, New York, NY, USA, 13-26. DOI=10.1145/99332.99337 http://doi.acm.org/10.1145/99332.99337

This paper introduces TeamWorkStation (TWS), a new desktop real-time shared workspace characterized by reduced cognitive seams. TWS integrates two existing kinds of individual workspaces, computers and desktops, to create a virtual shared workspace. The key ideas are the overlay of individual workspace images in a virtual shared workspace and the creation of a shared drawing surface. Because each co-worker can continue to use his/her favorite application programs or manual tools in the virtual shared workspace, the cognitive discontinuity (seam) between the individual and shared workspaces is greatly reduced, and users can shuttle smoothly between these two workspaces. This paper discusses where the seams exist in the current CSCW environment to clarify the issue of shared workspace design. The new technique of fusing individual workspaces is introduced. The application of TWS to the remote teaching of calligraphy is presented to show its potential. The prototype system is described and compared with other comparable approaches.

TeamWorkStation
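
The abstract above centers on one technique: overlaying translucent images of individual workspaces so that paper, pens, and application windows all remain visible in a single shared view. The original system achieved this with analog computer-and-video fusion hardware; the sketch below is only a rough digital illustration of that overlay idea, not the paper's implementation. The file names and the 50/50 blend factor are assumptions.

```python
# Illustrative sketch of the TeamWorkStation "overlay of individual
# workspace images" idea, approximated digitally with Pillow.
# NOT the original analog-video implementation; inputs are hypothetical.
from PIL import Image


def fuse_workspaces(desktop_path: str, screen_path: str,
                    alpha: float = 0.5) -> Image.Image:
    """Blend a camera image of a physical desktop with a screen capture
    into one translucent shared-workspace image."""
    desktop = Image.open(desktop_path).convert("RGBA")
    # Match sizes so the two layers can be composited pixel by pixel.
    screen = Image.open(screen_path).convert("RGBA").resize(desktop.size)
    # Translucent overlay: both co-workers' layers stay visible at once.
    return Image.blend(desktop, screen, alpha)


if __name__ == "__main__":
    fused = fuse_workspaces("desktop_camera.png", "screen_capture.png")
    fused.save("shared_workspace.png")
```

Because the fusion happens on the composite image rather than inside any one application, each co-worker can keep using their existing tools unchanged, which is the source of the reduced "cognitive seam" the abstract describes.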