Thijs Roumen

Mobile Fabrication

I investigate a future in which we carry mobile fabrication machines that let us solve our mechanical problems on the go, much the same way we already solve our information problems with mobile computers. I initiated this vision and am tackling the challenges that arise from such a future, such as modeling on the go, engineering the hardware to make it happen, and making models portable across fabrication machines.

Papers at CHI/UIST

[7] DualPanto: a haptic device that enables blind users to continuously interact with virtual worlds

Oliver Schneider, Jotaro Shigeyama, Robert Kovacs, Thijs Roumen, Sebastian Marwecki, Nico Boeckhoff, Daniel Amadeus Gloeckner, Jonas Bounama, Patrick Baudisch

In Proceedings of UIST '18.

We present a new haptic device that enables blind users to continuously track the absolute position of moving objects in spatial virtual environments, as is the case in sports or shooter games. Users interact with DualPanto by operating the me handle with one hand and by holding on to the it handle with the other hand. Each handle is connected to a pantograph haptic input/output device. The key feature is that the two handles are spatially registered with respect to each other. When guiding their avatar through a virtual world using the me handle, spatial registration enables users to track moving objects by having the device guide the output hand.

paper video ACM DL

[6] Grafter: remixing 3D printed machines

Thijs Roumen, Willi Mueller and Patrick Baudisch

In Proceedings of CHI '18.

We explore how to best support users in remixing a specific class of 3D printed objects, namely those that perform mechanical functions. In our survey, we found that makers remix such machines by manually extracting parts from one parent model and combining them with parts from a different parent model. This often puts axles made by one maker into bearings made by another maker, or combines a gear by one maker with a gear by a different maker. This approach is problematic, however, as parts from different makers tend to fit poorly, which results in long series of tweaks and test prints until all parts finally work together. We address this with our interactive system grafter. Grafter does two things. First, it largely automates the process of extracting and recombining mechanical elements from 3D printed machines. Second, it enforces a more efficient approach to reuse: it prevents users from extracting individual parts and instead affords extracting groups of mechanical elements that already work together, such as axles and their bearings or pairs of gears. We call this mechanism-based remixing.

paper video talk recording ACM DL

[5] Mobile Fabrication

Thijs Roumen, Bastian Kruck, Tobias Duerschmid, Tobias Nack and Patrick Baudisch

In Proceedings of UIST '16.

We explore the future of fabrication, in particular the vision of mobile fabrication, which we define as “personal fabrication on the go”. We explore this vision with two surveys, two simple hardware prototypes, matching custom apps that provide users with access to a solution database, custom fabrication processes we designed specifically for these devices, and a user study conducted in situ on metro trains. Our findings suggest that mobile fabrication is a compelling next direction for personal fabrication. From our experience with the prototypes we derive the hardware requirements to make mobile fabrication technically feasible.

paper video talk recording ACM DL

[4] Linespace: a sensemaking platform for the blind

Saiganesh Swaminathan, Thijs Roumen, Robert Kovacs, David Stangl, Stefanie Mueller and Patrick Baudisch

In Proceedings of CHI '16.

For visually impaired users, making sense of spatial information is difficult as they have to scan and memorize content before being able to analyze it. Even worse, any update to the displayed content invalidates their spatial memory, which can force them to manually rescan the entire display. Making display contents persist, we argue, is thus the highest priority in designing a sensemaking system for the visually impaired. We present a tactile display system designed with this goal in mind. The foundation of our system is a large tactile display (140 x 100 cm, 23x larger than Hyperbraille), which we achieve by using a 3D printer to print raised lines of filament. The system's software then uses the large space to minimize screen updates. Instead of panning and zooming, for example, our system creates additional views, leaving display contents intact and thus preserving users' spatial memory.

paper video talk recording ACM DL

[3] TurkDeck: Physical virtual reality based on people

Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper and Patrick Baudisch

In Proceedings of UIST '15.

TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what they feel. TurkDeck allows creating arbitrarily large virtual worlds in finite space using a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by having a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called "human actuators" by displaying visual instructions that tell them when and where to place props and how to actuate them.

paper video talk recording ACM DL

[2] NotiRing: A Comparative Study of Notification Channels for Wearable Interactive Rings

Thijs Roumen, Simon Perrault and Shengdong Zhao

In Proceedings of CHI '15.

We conducted an empirical investigation of wearable interactive rings, examining the noticeability of four instantaneous notification channels (light, vibration, sound, poke) and a channel with gradually increasing temperature (thermal) during five levels of physical activity (lying down, sitting, standing, walking, and running). Results showed that vibration was the most reliable and fastest channel for conveying notifications, followed by poke and sound, which had similar noticeability. The noticeability of these three channels was not affected by the level of physical activity. The other two channels, light and thermal, were less noticeable and were affected by the level of physical activity. Our post-experimental survey indicates that while noticeability has a significant influence on user preference, each channel has its own unique advantages that make it suitable for different notification scenarios.

paper video talk recording ACM DL

[1] OmniVib: Towards Cross-body Spatiotemporal Vibrotactile Notifications for Mobile Phones

Jessalyn Alvina, Simon Perrault, Thijs Roumen, Shengdong Zhao, Maryam Azh and Morten Fjeld

In Proceedings of CHI '15.

In this paper, we investigate how users perceive spatiotemporal vibrotactile patterns on the arm, palm, thigh, and waist. Results of the first two experiments indicate that precise recognition of either position or orientation is difficult across multiple body parts. Nonetheless, users were able to distinguish whether two vibration pulses were from the same location when played in quick succession. Based on this finding, we designed eight spatiotemporal vibrotactile patterns and evaluated them in two additional experiments.

paper video talk recording ACM DL

Invited talks

2018-10-24 Dagstuhl Seminar on Computational Aspects of Fabrication
2019-01-17 Kolding Design School tech seminar

Volunteering at CHI/UIST

UIST 2018 Local Arrangements Chair
CHI 2016 Associate Chair for LBW

Reviewer:

  • CHI 2019, 2018, 2017, 2016, 2015
  • UIST 2017, 2016, 2015
  • and other conferences

Special Recognitions for Reviews:

  • [9] CHI 2018
  • [8] CHI 2018
  • [7] CHI 2016
  • [6] CHI 2016
  • [5] CSCW 2015
  • [4] CHI Play 2015
  • [3] CHI 2015
  • [2] CHI 2015
  • [1] CHI Play 2014