I investigate a future in which we carry mobile fabrication machines that allow us to solve mechanical problems on the go, much the same way we already solve information problems using mobile computers. I initiated this vision and now work on the challenges that arise from it, such as modeling on the go, engineering the hardware to make it happen, and making models portable across fabrication machines.
Oliver Schneider, Jotaro Shigeyama, Robert Kovacs, Thijs Roumen, Sebastian Marwecki, Nico Boeckhoff, Daniel Amadeus Gloeckner, Jonas Bounama, Patrick Baudisch
In Proceedings of UIST '18 (full paper).
We present a new haptic device that enables blind users to continuously track the absolute position of moving objects in spatial virtual environments, as is the case in sports or shooter games. Users interact with DualPanto by operating the me handle with one hand and holding on to the it handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the me handle, spatial registration enables users to track moving objects by having the device guide the output hand.
Thijs Roumen, Willi Mueller and Patrick Baudisch
In Proceedings of CHI '18 (full paper).
We explore how to best support users in remixing a specific class of 3D printed objects, namely those that perform mechanical functions. In our survey, we found that makers remix such machines by manually extracting parts from one parent model and combining them with parts from a different parent model. This approach often puts axles made by one maker into bearings made by another maker, or combines a gear by one maker with a gear by a different maker. This approach is problematic, however, as parts from different makers tend to fit poorly, which results in long series of tweaks and test prints until all parts finally work together. We address this with our interactive system Grafter. Grafter does two things. First, it largely automates the process of extracting and recombining mechanical elements from 3D printed machines. Second, it enforces a more efficient approach to reuse: it prevents users from extracting individual parts and instead affords extracting groups of mechanical elements that already work together, such as axles and their bearings or pairs of gears. We call this mechanism-based remixing.
In Proceedings of UIST '16 (full paper).
We explore the future of fabrication, in particular the vision of mobile fabrication, which we define as “personal fabrication on the go”. We explore this vision with two surveys, two simple hardware prototypes, matching custom apps that provide users with access to a solution database, custom fabrication processes we designed specifically for these devices, and a user study conducted in situ on metro trains. Our findings suggest that mobile fabrication is a compelling next direction for personal fabrication. From our experience with the prototypes we derive the hardware requirements to make mobile fabrication technically feasible.
In Proceedings of CHI '16 (full paper).
For visually impaired users, making sense of spatial information is difficult, as they have to scan and memorize content before being able to analyze it. Even worse, any update to the displayed content invalidates their spatial memory, which can force them to manually rescan the entire display. Making display contents persist, we argue, is thus the highest priority in designing a sensemaking system for the visually impaired. We present a tactile display system designed with this goal in mind. The foundation of our system is a large tactile display (140x100cm, 23x larger than Hyperbraille), which we achieve by using a 3D printer to print raised lines of filament. The system’s software then uses the large space to minimize screen updates. Instead of panning and zooming, for example, our system creates additional views, leaving display contents intact and thus preserving the user’s spatial memory.
In Proceedings of UIST '15 (full paper).
TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck allows creating arbitrarily large virtual worlds in finite space and with a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by having a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called “human actuators” by displaying visual instructions that tell them when and where to place props and how to actuate them.
In Proceedings of CHI '15 (short paper).
We conducted an empirical investigation of wearable interactive rings, studying the noticeability of four instantaneous notification channels (light, vibration, sound, poke) and a channel with gradually increasing temperature (thermal) during five levels of physical activity (lying down, sitting, standing, walking, and running). Results showed that vibration was the most reliable and fastest channel to convey notifications, followed by poke and sound, which shared similar noticeability. The noticeability of these three channels was not affected by the level of physical activity. The other two channels, light and thermal, were less noticeable and were affected by the level of physical activity. Our post-experimental survey indicates that while noticeability has a significant influence on user preference, each channel has its own unique advantages that make it suitable for different notification scenarios.
In Proceedings of CHI '15 (full paper).
In this paper, we investigate how users perceive spatiotemporal vibrotactile patterns on the arm, palm, thigh, and waist. Results of the first two experiments indicate that precise recognition of either position or orientation is difficult across multiple body parts. Nonetheless, users were able to distinguish whether two vibration pulses were from the same location when played in quick succession. Based on this finding, we designed eight spatiotemporal vibrotactile patterns and evaluated them in two additional experiments.
2018-10-24 Dagstuhl Seminar on Computational Aspects of Fabrication
2019-01-17 Kolding Design School tech seminar
UIST'18 Local Arrangements Chair
CHI'16 Associate Chair for LBW
DesForm'19 PC member
Special Recognitions for Reviews: