<span>Minority Report-inspired interface allows us to explore time in new ways</span> <span><span>Michael Kwolek</span></span> <span><time datetime="2025-09-26T08:48:13-06:00" title="Friday, September 26, 2025 - 08:48">Fri, 09/26/2025 - 08:48</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/2025-09/Proteus%204%20small.jpeg?h=82f92a78&amp;itok=7N7ZG_V5" width="1200" height="800" alt="People standing in a dark theater with various time lapse videos projected around them"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/1097" hreflang="en">B2</a> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div><p dir="ltr"><span>Imagine a scene—a bird feeder on
a summer afternoon, the dark of night descending over the Flatirons, a fall day on a university campus. Now imagine moving backwards and forwards through time on a single aspect of that setting while everything else remains. One ATLAS engineer is building technology that lets us experience multiple time scales all at once.</span></p><div class="ucb-box ucb-box-title-center ucb-box-alignment-right ucb-box-style-fill ucb-box-theme-lightgray"><div class="ucb-box-inner"><div class="ucb-box-title">The Proteus Team</div><div class="ucb-box-content"><p><em><span>David Hunter developed Proteus with expertise from ACME Lab members </span></em><a href="/atlas/suibi-che-chuan-weng" data-entity-type="external" rel="nofollow"><em><span>Suibi Weng</span></em></a><em><span>, </span></em><a href="/atlas/rishi-vanukuru" data-entity-type="external" rel="nofollow"><em><span>Rishi Vanukuru</span></em></a><em><span>, </span></em><a href="/atlas/anika-mahajan" data-entity-type="external" rel="nofollow"><em><span>Annika Mahajan</span></em></a><em><span>, </span></em><a href="/atlas/yi-ada-zhao" data-entity-type="external" rel="nofollow"><em><span>Ada Zhao</span></em></a><em><span> and </span></em><a href="/atlas/shih-yu-leo-ma" data-entity-type="external" rel="nofollow"><em><span>Leo Ma</span></em></a><em><span>, and advising from professor and lab director </span></em><a href="/atlas/ellen-yi-luen-do" data-entity-type="external" rel="nofollow"><em><span>Ellen Do</span></em></a><em><span>.&nbsp;</span></em></p><p><a href="/atlas/brad-gallagher-0" data-entity-type="external" rel="nofollow"><em><span>Brad Gallagher</span></em></a><em><span> and </span></em><a href="/atlas/chris-petillo" data-entity-type="external" rel="nofollow"><em><span>Chris Petillo</span></em></a><em><span> in the B2 Center for Media, Arts and Performance provided critical technical support to make the project come alive in the B2 Black Box Studio.</span></em></p></div></div></div><p dir="ltr"><span>“Proteus: 
Spatiotemporal Manipulations” by </span><a href="/atlas/david-hunter" data-entity-type="external" rel="nofollow"><span>David Hunter</span></a><span>, ATLAS PhD student, in collaboration with his ACME Lab colleagues, allows people to simultaneously observe different moments in time through a full-scale interactive experience combining video projection, motion capture, audio and cooperative elements.&nbsp;</span></p><p dir="ltr"><span>The project was shown through the creative residency program in the&nbsp;</span><a href="/atlas/b2-center-media-arts-performance" rel="nofollow"><span>B2 Center for Media, Arts and Performance</span></a><span> at the Roser ATLAS Center. In the B2 Black Box Studio, the project team used 270-degree video projections, a spatial audio array and motion capture technology to create an often larger-than-life way to explore many different time scales at once. Simultaneous projections highlighted the life cycle of bacteria in a petri dish, a day in the life of a street corner on campus, and weather patterns at a global scale, among other vignettes.</span></p><p dir="ltr"><span>This tangible manipulation of time within a space can feel disorienting at first. It takes a while to adjust to what is happening, a unique sensation that has an expansive, almost psychedelic quality.
But it may also have practical applications.</span></p><p dir="ltr"><span>We spoke to Hunter about the inspiration behind Proteus, possible use cases and what comes next.&nbsp;</span></p><p dir="ltr"><em><span>This Q&amp;A has been lightly edited for length and clarity.</span></em></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/2025-09/Proteus%204%20small.jpeg?itok=qsCGqivd" width="750" height="500" alt="People standing in a dark theater with various time lapse videos projected around them"> </div> <span class="media-image-caption"> <p><em>Several people can explore different moments in time simultaneously.</em></p> </span> </div> <p dir="ltr"><span><strong>Tell us about the inspiration for Proteus.</strong></span></p><p dir="ltr"><span>There are scenes and settings where you want to see a place or situation at two distinct time frames, and for that comparison to not necessarily be hard-edged or side by side. You want it to be interpolated through time, so you can see patterns of change in space over that time.&nbsp;</span></p><p dir="ltr"><span><strong>Describe the early iterations of this project.</strong></span></p><p dir="ltr"><span>It was originally a tabletop setup with projection over small robots, and you could manipulate the robots to produce similar kinds of effects. There was a version running with a camera giving you a live video feed, but we switched it to curated videos as it was easier to understand what was happening with time manipulation. 
You're potentially making quite a confusing image for yourself, so the robots gave you something tangible to hold on to.&nbsp;</span></p><p dir="ltr"><span><strong>How did it evolve into the large-scale installation it is now?</strong></span></p><p dir="ltr"><span>We thought it would be interesting to see this at a really large scale in the B2, and the creative residencies made that possible. That's when we moved away from robots and it was like, “Well, how would you control it in this sort of space?”&nbsp;</span></p><p dir="ltr"><span>I took a projection mapping course last semester and worked on a large-scale projection, but then we changed it to hand interaction—gesture-based in the air, kind of like “Minority Report.”&nbsp;</span></p> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%201%20small.jpg?itok=FAO84sF0" width="1500" height="1001" alt="people in a dark theater with a huge projection of space around them"> </div> <span class="media-image-caption"> <p><em>As you move closer or farther from the screen, you change the time scale of part of the scene.</em></p> </span> <p dir="ltr"><span><strong>How do you describe what takes place when people interact with Proteus?</strong></span></p><p dir="ltr"><span>The key is interaction—people can actually control the time lapse. Usually, time lapses are linear, not two dimensional, and we don't have control over them. Here, you can focus on what interests you across different time periods, or hold two points in time side by side to see patterns and relationships as they change across space and time.
This also enables multiple visitors to find the things they are interested in; there isn't one controller of the scene; it is collaborative.</span></p><p dir="ltr"><span><strong>What are some of the use cases you’ve thought about for this technology?</strong></span></p><p dir="ltr"><span>Anything with a geospatial component—a complex scene where many things are happening at once. You might want to keep track of something happening in the past while still tracking something else happening at another time.&nbsp;</span></p><p dir="ltr"><span>You can use these portals to freeze multiple bits of action or set them up to visualize where things have gone at different points in time and space.&nbsp;</span></p><p dir="ltr"><span>It's always been about collaboration—situational awareness where lots of people are trying to interrogate one image and see what everyone else is doing at the same time.&nbsp;</span></p><p dir="ltr"><span>Then there’s film analysis: Can we put in a whole movie and perhaps find interesting relationships and compositions within it?&nbsp;This could be a fun way to spatially explore a narrative, too.</span></p><p dir="ltr"><span>We're also looking at how it could be used for mockup design. Let's say you're prototyping an app and you have 50 different variations. You could collapse those all into one space, interrogate them through “time,” then mix and match different portions of your designs to come up with new combinations. It also works with volumetric images like body scans, where we swap time for depth.</span></p><p dir="ltr"><span><strong>What are the creative influences that drove the visual style of the piece?&nbsp;</strong></span></p><p dir="ltr"><span>I've long been interested in time lapses, like skateboard photography where multiple snapshots are overlaid on the same space as a single image.</span></p><p dir="ltr"><span>There's all this work by Muybridge and Marey, who pioneered chronophotography.
That's how they worked out that horses leave the ground while they run.&nbsp;</span></p><p dir="ltr"><span>David Hockney did a ton of Polaroid work. There's a famous one of people playing Scrabble, shot from his perspective. All these different Polaroids are stuck down next to each other, not as a true representation of space but as a way of capturing time within that space—breaking the unity of the image.&nbsp;</span></p><p dir="ltr"><span>The Khronos Projector by Alvaro Cassinelli kicked off this research sprint and prompted me to look back at my interest in time and photography.</span></p><div class="row ucb-column-container"><div class="col ucb-column"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%202%20small.jpg?itok=ydo01cdB" width="1500" height="1001" alt="Wrap-around screen projecting various images"> </div> <span class="media-image-caption"> <p><em>Several time lapse videos are projected simultaneously on the Black Box Studio's 270-degree screen.</em></p> </span> </div><div class="col ucb-column"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%203%20small.jpg?itok=jnFoaYNr" width="1500" height="1001" alt="Proteus controllers"> </div> <span class="media-image-caption"> <p><em>Proteus is controlled with an app on a mobile phone modified to work with motion capture.</em></p> </span> </div></div><p dir="ltr"><span><strong>It does force your brain to work in a way that it is not used to, which is a really cool thing to happen in a creative sense but also in a technical sense.</strong></span></p><p dir="ltr"><span>We aren't used to seeing anything with non-uniform time. Whenever we're watching a video, we want to find a point in the video, and then we see the whole image rewind or fast forward.
Of course that makes sense in a lot of situations, but there could be interesting use cases for interactive non-uniform time.</span></p><p dir="ltr"><span><strong>What makes ATLAS an ideal place for this type of research?</strong></span></p><p dir="ltr"><span>There's all the people, whether it's research faculty who are interested in asking questions like, “How can we make a novel system or improve research that's going on in this area?” Or it's super strong technical expertise, which is like, “Let's have a go at making this work in a projection environment.”</span></p><p dir="ltr"><span><strong>What’s next for the Proteus project?</strong></span></p><p dir="ltr"><span>At the moment, I can only compare by changing the time on a region of space, or compare a region of space against another region at different times. But it might be interesting to be able to break the image and say, “Actually, I want to clone this region and see it from a different time period.” Can I reconstitute the image in some way?</span></p><div class="row ucb-column-container"><div class="col ucb-column"><div class="ucb-box ucb-box-title-left ucb-box-alignment-none ucb-box-style-fill ucb-box-theme-lightgray"><div class="ucb-box-inner"><div class="ucb-box-title">Experience Proteus during <a href="/researchinnovation/week" data-entity-type="external" rel="nofollow">Research &amp; Innovation Week</a></div><div class="ucb-box-content"><p><span>Proteus will be running in the B2 Black Box Studio as part of:&nbsp;</span><br><br><a href="/atlas/research-open-labs-2025" rel="nofollow"><span><strong>ATLAS Research Open Labs</strong></span></a><br><span>Roser ATLAS Center</span><br><span>October 10, 2025</span><br><span>3-5pm</span><br><span>FREE, no registration needed</span></p></div></div></div></div><div class="col ucb-column"><p>&nbsp;</p></div></div></div> </div> </div> </div> </div> <div>ATLAS PhD student David Hunter researches novel ways to interact with different moments in time across a
single video stream.</div> <span>New research explores tinkering as a key classroom learning method</span> <span><span>Michael Kwolek</span></span> <span><time datetime="2025-07-16T10:21:34-06:00" title="Wednesday, July 16, 2025 - 10:21">Wed, 07/16/2025 - 10:21</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/2025-07/Ranjan%20cartoonimator.jpg?h=71976bb4&amp;itok=lLOFZRwo" width="1200" height="800" alt="Krithik Ranjan presents Cartoonimator"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph
paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div><p dir="ltr"><span>When kids tinker in the classroom, they build many useful skills, from computing to collaboration to creativity and more.&nbsp;</span></p> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/2025-07/Ranjan%20cartoonimator.jpg?itok=6ghrbn_w" width="375" height="281" alt="Krithik Ranjan presents Cartoonimator"> </div> </div> <p dir="ltr"><a href="/atlas/krithik-ranjan" rel="nofollow"><span>Krithik Ranjan</span></a><span>, PhD student and member of the&nbsp;</span><a href="/atlas/acme-lab" rel="nofollow"><span>ACME Lab</span></a><span>, studies low-cost forms of human-computer interaction that enable more people to explore their creativity through technology. And tinkering plays a big part in that.</span></p><p dir="ltr"><span>Ranjan presented his work at the&nbsp;</span><a href="https://constructionism2025.inf.ethz.ch/" rel="nofollow"><span>Constructionism 2025</span></a><span> conference in Zurich, Switzerland, which explores how constructionist ideas can inspire advancements in learning technologies and methodologies. He filled us in on what he presented.</span></p><p dir="ltr"><span><strong>Tell us about your research focus.</strong></span></p><p dir="ltr"><span>I've been trying to build ways for people to create with technology in a more open-ended, tinkering-friendly way.
Tinkering is a way to learn where you can explore, you can experiment, you can playfully interact with things to learn a concept, whether it is computer science, physics, astronomy or anything like that.</span></p><p dir="ltr"><span><strong>Can you describe the intention behind your paper on “</strong></span><a href="https://constructionism.oapublishing.ch/index.php/con/article/view/26" rel="nofollow"><span><strong>The Design Space of Tangible Interfaces for Computational Tinkerability</strong></span></a><span><strong>”?</strong></span></p><p dir="ltr"><span>There are two elements to it. One is the tangible side where the idea is that you're interacting with computational elements in the physical space, either stuff like paper or robots or different components that you can put together. And the other aspect is computational tinkerability, that playful open-ended aspect about creating something computationally.</span></p><p dir="ltr"><span>I want to understand how people have previously developed design spaces, which is this concept of a framework to understand what people have done, what it means, and how we can analyze and categorize different types of projects in that space. I reviewed 33 different projects to figure out: What are kids tinkering with? What are children making? And how are they making it?</span></p><p dir="ltr"><span>This project was from the perspective of a designer to inform future designers who are going to create such interfaces and projects.</span></p><p dir="ltr"><span><strong>What did you discover in conducting this research?</strong></span></p><p dir="ltr"><span>We figured out there is a range of how tinkerable or how expressive an interface can be, so we try to categorize that based on a “spectrum of tinkerability.”&nbsp;</span></p><p dir="ltr"><span>The other important takeaway is the idea of expanding beyond code. There's so much work, both commercially and in research, around enabling students to code. 
But a lot of researchers also found that this line-by-line type of programming is a bit discouraging to students from underrepresented groups. So there's a lot of work in expanding the ways you can create with computers beyond just writing lines of code to program or make a game or make a 3D model, toward more diverse ways that suit different interests.&nbsp;</span></p> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-07/tinkerability%20ranjan%20spectrum.jpeg?itok=W_xwSrPw" width="1500" height="1469" alt="Spectrum of Tinkerability chart"> </div> <p dir="ltr"><span><strong>Who might be the audience for this research and what might they do with it?</strong></span></p><p dir="ltr"><span>The goal was to categorize this space so people can refer to it and design based on that. The audience is other researchers and designers of interfaces for learning with computers. Based on the implications, they can better design ways that students learn by making [tools] more expressive, more open-ended, learner-driven, and catering to different interests instead of just code or just one type of way to interact.&nbsp;</span></p><p dir="ltr"><span><strong>A lot of your work is focused on using simple materials that are more accessible to students all over the world. How might this research help educators expand the tools they have access to for students?</strong></span></p><p dir="ltr"><span>In practical situations like classrooms and formal learning centers, there are always constraints with resources, with the number of people, with the kind of things you can get access to. And quite often, projects, interfaces and tools in the market are one-off and cost a couple hundred dollars or more.
So I was trying to look at these projects in terms of the kind of materials they use and how they enable people to interact with the material.</span></p><p dir="ltr"><span>There are some projects like [ATLAS PhD] Ruhan Yang's&nbsp;</span><a href="/atlas/pabo-bot-paper-box-robots-everyone" rel="nofollow"><span>Paper Robots</span></a><span> and a couple other projects that we looked at where the focus was DIY-based interfaces that educators can fabricate themselves for their classrooms. These projects stressed publishing the plans and instructables and stuff like that online so that anybody can use them to build these interactive interfaces themselves.</span></p><p dir="ltr"><span>And part of this was also using platforms that are already available. Arduino, micro:bit and Raspberry Pi are commonly used electronic platforms in education for many different purposes. There's a way to make these interfaces more accessible if you use those existing platforms instead of making custom electronics.</span></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/smart_cartoonimator.jpg?itok=gz074D4z" width="750" height="534" alt="Cartoonimator key frame components and smartphone app"> </div> </div> <p dir="ltr"><span><strong>How does a student who uses your&nbsp;</strong></span><a href="/atlas/cartoonimator" rel="nofollow"><span><strong>Cartoonimator</strong></span></a><span><strong> tool, for example, learn in the process of figuring out how to use and then make animations?</strong></span></p><p dir="ltr"><span>This idea of tinkering and learning by making is based on the learning philosophy of constructionism.
The idea is that you're learning by building artifacts, building mental models yourself instead of being instructed by somebody else.</span></p><p dir="ltr"><span>A big part is the idea of being stuck and then trying to work through the problem, trying to figure out what's wrong, what needs to change and get something working. This way of learning is focused on the learner's motivation.</span></p><p dir="ltr"><span><strong>What's next for this research?</strong></span></p><p dir="ltr"><span>Cartoonimator is one example where I looked at these principles about expanding beyond code and working with more accessible materials to build an interface that's open-ended and tinkering-friendly for learning something like animation.</span></p><p dir="ltr"><span>I'm looking further at how we can engage students with physical computing using paper, because that is more accessible, easier to expand on and gives you this space of creative exploration that you may not usually have with devices.&nbsp;</span></p><p dir="ltr"><span>If you're new to technology, you might be afraid of breaking something apart, but that's really a core part of tinkering, so I'm looking at how paper-based interfaces can foster the idea.</span></p></div> </div> </div> </div> </div> <div>PhD student Krithik Ranjan analyzed 33 student learning tools and developed a “spectrum of tinkerability” that offers designers new ways to think about teaching computational skills.</div> <span>ATLAS community presents new research on interactive systems at DIS 2025</span> <span><span>Michael
Kwolek</span></span> <span><time datetime="2025-06-26T11:14:27-06:00" title="Thursday, June 26, 2025 - 11:14">Thu, 06/26/2025 - 11:14</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/2025-06/DIS%202025%20logo_0.png?h=252f27fa&amp;itok=iTkbKstP" width="1200" height="800" alt="DIS 2025 conference"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/729" hreflang="en">alistar</a> <a href="/atlas/taxonomy/term/342" hreflang="en">devendorf</a> <a href="/atlas/taxonomy/term/390" hreflang="en">do</a> <a href="/atlas/taxonomy/term/731" hreflang="en">living matter</a> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/376" hreflang="en">unstable</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div><a href="https://dis.acm.org/2025/" rel="nofollow"> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" 
src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/2025-06/DIS%202025%20logo.png?itok=mbKo8dOI" width="375" height="179" alt="ACM designing interactive systems '25 Madeira, Portugal"> </div> </div> </a><p dir="ltr"><span>The 2025&nbsp;</span><a href="https://dis.acm.org/2025/" rel="nofollow"><span>ACM Designing Interactive Systems Conference</span></a><span> (DIS) in Madeira, Portugal, features work from ten ATLAS community members representing three labs. This year’s event has five focus areas: Critical Computing and Design Theory, Design Methods and Processes, Artifacts and Systems, Research Through Design, and AI and Design with an overall theme around “design that transcends human-centered perspectives.”</span></p><p dir="ltr"><span>ATLAS researchers study a broad range of topics, from human-computer interaction to biomaterials to woven forms.&nbsp;</span></p><p dir="ltr"><span>Ellen Do, professor and ACME director, explains what connects the work our community is presenting at the conference: “I think all of the papers and presentations we have are on designing interactive systems. Some of the systems could be physical, some could be digital, some could be human-and-people, human-and-physical objects. So I think the theme about interactive systems and how you make systems interactive, what kind of user experience or human experience or immersive experience with the object or system or even the ecosystem, or the human communication system—I think that's all there.”</span></p><h3>ATLAS research at DIS 2025</h3><p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200707" rel="nofollow"><span><strong>"Chaotic, Exciting, Impactful": Stories of Material-led Designers in Interdisciplinary Collaboration</strong></span></a><br><span>Gabrielle Benabdallah,&nbsp;</span><a href="/atlas/eldy-lazaro" rel="nofollow"><span>Eldy S. 
Lazaro Vasquez</span></a><span> (ATLAS PhD student),&nbsp;</span><a href="/atlas/laura-devendorf" rel="nofollow"><span>Laura Devendorf</span></a><span> (ATLAS Unstable Design Lab director, associate professor),&nbsp;</span><a href="/atlas/mirela-alistar" rel="nofollow"><span>Mirela Alistar</span></a><span> (ATLAS Living Matter Lab director, assistant professor)</span></p><p dir="ltr"><span>This paper explores the dynamics of interdisciplinary collaboration between designers, scientists, and engineers through ten stories as told from the perspective of material-led designers. These stories focus on material-led designers working in contexts like biodesign and smart textiles, where novel materials, fabrication methods, and technology often intersect, requiring cross-disciplinary collaboration. By including perspectives from designers within and adjacent to HCI, the study broadens the understanding of interdisciplinary teamwork that combines scientific, technical, and craft-based expertise. Our analysis highlights how designers navigate challenges like differing terminologies, epistemic hierarchies, and conflicting priorities. We discuss strategies such as material prototypes, attitudes of inquiry and openness, switching lexicons, and the value of interdisciplinary contexts. This research underscores designers as “translators” who mediate epistemological tensions, use tangible artifacts to communicate, and articulate possible applications. This research contributes ten stories as narrative resources for understanding strategies and fostering interdisciplinary spaces within HCI.</span><br>&nbsp;</p><p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200861" rel="nofollow"><span><strong>Towards Yarnier Interactive Textiles: Mapping a Design Journey through Hand Spun Conductive Yarns</strong></span></a><br><a href="/atlas/etta-sandry" rel="nofollow"><span>Etta W. 
Sandry</span></a><span> (ATLAS PhD student),&nbsp;</span><a href="/atlas/lily-gabriel" rel="nofollow"><span>Lily M. Gabriel</span></a><span> (ATLAS undergraduate student),&nbsp;</span><a href="/atlas/eldy-lazaro" rel="nofollow"><span>Eldy S. Lazaro Vasquez</span></a><span> (ATLAS PhD student),&nbsp;</span><a href="/atlas/laura-devendorf" rel="nofollow"><span>Laura Devendorf</span></a><span> (ATLAS Unstable Design Lab Director, associate professor)</span></p><p dir="ltr"><span>The ability to create a wide and varied set of interactive textiles depends on the materials that one has available. Currently, the range of yarns that can be used to bring interactivity to textiles is greatly limited, especially considering the diversity available in non-conductive yarns. This pictorial traces a design journey into hand spinning that seeks to address this limitation and contributes samples of techniques and materials that could be used to create conductive yarns along with reflection on design methods that enabled us to explore a wider range of aesthetic expressions. We advocate for an approach that reconnects with the textiles in e-textiles, embraces divergence, and prioritizes the material as the driver of a design concept. We offer pathways for readers and researchers to continue this exploration within varied domains and practices.</span></p> <div class="align-center image_style-large_image_style"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2024-12/spinningConductiveYarnBanner.jpg?itok=7PkmpUu3" width="1500" height="1000" alt="A table with a variety of different yarns varying in texture and size spread out."> </div> </div> <p dir="ltr">&nbsp;</p><p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200738" rel="nofollow"><span><strong>Connect! 
A Circuit-Driven Card Game</strong></span></a><br><a href="/atlas/ruhan-yang" rel="nofollow"><span>Ruhan Yang</span></a><span> (ATLAS PhD alum),&nbsp;</span><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><span>Ellen Yi-Luen Do</span></a><span> (ATLAS ACME Lab director, professor)</span></p><p dir="ltr"><span>Hybrid physical-digital games often rely on screen-based interactions, which can detract from their tactile nature. We introduce Connect!, a card game that integrates paper circuits and real-time LED feedback, enabling players to construct functional circuits as part of gameplay. Unlike traditional hybrid games, Connect! embeds feedback directly into physical components while preserving material interaction. We conducted a user study comparing gameplay with and without electronic feedback. Our findings suggest that real-time feedback not only increased engagement but also altered players' behavior, encouraging rule exploration and emergent play. Our work contributes to tangible interaction and game-based learning, demonstrating the potential of low-cost electronics in enhancing interactive experiences.</span></p> <div class="align-center image_style-large_image_style"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-06/Connect%20Card%20Game.jpg?itok=IJZECkiT" width="1500" height="882" alt="Connect game cards"> </div> <span class="media-image-caption"> <p><em>Connect! game cards</em></p> </span> </div> <p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200557" rel="nofollow"><span><strong>From Data to Discussion: Interfaces for Collective Inquiry and Open-Ended Data Creation</strong></span></a><br><a href="/atlas/david-hunter" rel="nofollow"><span>David Hunter</span></a><span> (ATLAS PhD student)</span></p><p dir="ltr"><span>Data can enrich our understanding of the world and improve our society. 
However, the datafication of our society comes with challenges for empowering communities. In designing systems for recording and representing data, a theme has emerged: these interfaces become sites of conversation and sense-making, and their participatory nature is valuable beyond the data itself. This insight has led me to investigate tools and experiences that enable open-ended data creation and exploration as a grounding for discussion and prompting action. The goal is to design interfaces and systems for exploring places and futures through data, to empower communities and support civic participation, learning and making, situational awareness, and scenario planning. In this pictorial, I present five ongoing research projects investigating these ideas.</span></p> <div class="align-center image_style-large_image_style"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-06/How%20To%20Data%20Walk%20Hunter.jpg?itok=uoUZXzxJ" width="1500" height="1281" alt="Graphic depicting steps to data walking"> </div> <span class="media-image-caption"> <p><em>How to Data Walk</em></p> </span> </div> <p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200627" rel="nofollow"><span><strong>Knitting with unknown trees: assembling a more-than-human practice</strong></span></a><br><span>Doenja Oogjes, Ege Kökel,&nbsp;</span><a href="/atlas/netta-ofer" rel="nofollow"><span>Netta Ofer</span></a><span> (ATLAS PhD alum), Hsiang-Lin Kuo, Jasmijn Vugts, Troy Nachtigall,&nbsp;</span><a href="/atlas/torin-hopkins" rel="nofollow"><span>Torin Hopkins</span></a><span> (ATLAS PhD alum)</span></p><p dir="ltr"><span>In this pictorial, we explore alternative ways of knowing urban trees through a more-than-human lens.
Using a municipal tree dataset, we focus on “unknown” trees—entries unclassified due to error, decay, or absence—highlighting the limits of quantification and fixed knowledge systems. Urban trees, while critical for ecosystems, are often shaped by technological interventions (e.g., GIS, IoT sensors, AI diagnostics) that prioritize their utility over other expressions. We engage in knitting as a material inquiry to foreground nonhuman agencies and relational entanglements. Through reflective shifts and compromises, this project questions normative design practices, seeking to amplify nonhuman participation. We make two contributions. Firstly, we offer insights into fostering alternative, relational engagements with urban ecologies. Secondly, we reflect on our process of surfacing and working with agentic capacities, articulating guidance for other design researchers. Through this, we advocate for fragmented approaches that embrace complicity and complexity in more-than-human design.</span><br>&nbsp;</p><p dir="ltr"><a href="https://programs.sigchi.org/dis/2025/program/content/200577" rel="nofollow"><span><strong>Designing Interfaces that Support Temporal Work Across Meetings with Generative AI</strong></span></a><br><a href="/atlas/rishi-vanukuru" rel="nofollow"><span>Rishi Vanukuru</span></a><span> (ATLAS PhD student), Payod Panda, Xinyue Chen, Ava Elizabeth Scott, Lev Tankelevitch, Sean Rintel</span></p><p dir="ltr"><span>Temporal work is an essential part of the modern knowledge workplace, where multiple threads of meetings and projects are connected across time by the acts of looking back (retrospection) and ahead (prospection). As we develop Generative AI interfaces to support knowledge work, this lens of temporality can help ground design in real workplace needs. 
Building upon research in routine dynamics and cognitive science, and an exploratory analysis of real recurring meetings, we develop a framework and a tool for the synergistic exploration of temporal work and the capabilities of Generative AI. We then use these to design a series of interface concepts and prototypes to better support work that spans multiple scales of time. Through this approach, we demonstrate how the design of new Generative AI tools can be guided by our understanding of how work really happens across meetings and projects.</span></p></div> </div> </div> </div> </div> <div>Members of three ATLAS labs show how interactive technology can create possibilities for new means of productivity, data analysis, creativity and play.</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Thu, 26 Jun 2025 17:14:27 +0000 Michael Kwolek 5090 at /atlas Colorado-based Computer Graphics Professionals Make Their Mark at SIGGRAPH 2024 /atlas/2024/08/02/colorado-based-computer-graphics-professionals-make-their-mark-siggraph-2024 <span>Colorado-based Computer Graphics Professionals Make Their Mark at SIGGRAPH 2024</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2024-08-02T10:30:29-06:00" title="Friday, August 2, 2024 - 10:30">Fri, 08/02/2024 - 10:30</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/ruhan_yang_at_conference.jpeg?h=982fb0dd&amp;itok=dCtC-aIu" width="1200" height="800" alt="Ruhan Yang sits behind a table showing off paper circuits research at the conference"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> 
<i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/390" hreflang="en">do</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/374" hreflang="en">phdstudent</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> <a href="/atlas/taxonomy/term/883" hreflang="en">yang</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-row-subrow row"> <div class="ucb-article-text col-lg d-flex align-items-center" itemprop="articleBody"> </div> <div class="ucb-article-content-media ucb-article-content-media-right col-lg"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> </div> </div> </div> </div> </div> </div> </div> <div>ATLAS community members, including professor Ellen Do and PhD student Ruhan Yang, presented at this year's conference in Denver.</div> <script> window.location.href = `https://www.koaa.com/news/covering-colorado/colorado-based-computer-graphics-professionals-make-their-mark-at-siggraph-2024`; </script> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Fri, 02 Aug 2024 16:30:29 +0000 Anonymous 4738 at /atlas ATLAS PhD student deploys papercraft to make engineering tangible and fun 
/atlas/2024/07/30/atlas-phd-student-deploys-papercraft-make-engineering-tangible-and-fun <span>ATLAS PhD student deploys papercraft to make engineering tangible and fun</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2024-07-30T13:50:41-06:00" title="Tuesday, July 30, 2024 - 13:50">Tue, 07/30/2024 - 13:50</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/cardboard_circuits_ruhanyang_20240131_jmp_053_copy.png.jpeg?h=a31ffb6c&amp;itok=4al4yaRp" width="1200" height="800" alt="Ruhan stands in the ACME Lab holding examples of her paper robots"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/532" hreflang="en">featurenews</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/374" hreflang="en">phdstudent</a> <a href="/atlas/taxonomy/term/883" hreflang="en">yang</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-row-subrow row"> <div class="ucb-article-text col-lg d-flex align-items-center" itemprop="articleBody"> </div> 
<div class="ucb-article-content-media ucb-article-content-media-right col-lg"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> </div> </div> </div> </div> </div> </div> </div> <div>ATLAS PhD student Ruhan Yang blends papercraft and circuit design to make engineering more tangible, accessible and fun for tinkerers of all ages. </div> <script> window.location.href = `/engineering/2024/06/18/technical-and-beautiful`; </script> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Tue, 30 Jul 2024 19:50:41 +0000 Anonymous 4736 at /atlas Public-private partnership drives attention for ATLAS research in augmented and mixed reality /atlas/2024/07/18/public-private-partnership-drives-attention-atlas-research-augmented-and-mixed-reality <span>Public-private partnership drives attention for ATLAS research in augmented and mixed reality</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2024-07-18T10:41:59-06:00" title="Thursday, July 18, 2024 - 10:41">Thu, 07/18/2024 - 10:41</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/suibi_ppi_award.jpg?h=68f59cd4&amp;itok=aZvQv4Zm" width="1200" height="800" alt="Suibi Che-Chuan Weng receives his award certificate "> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid 
fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/390" hreflang="en">do</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/374" hreflang="en">phdstudent</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-text" itemprop="articleBody"> <div><p>Partnerships between universities and industry can yield important research and commercial breakthroughs. ATLAS professor Ellen Do has worked to cultivate relationships between CU Boulder and industry players, including as a&nbsp;member of the Pervasive Personalized Intelligence (PPI) Center, to support graduate students and enhance opportunities for commercialization of ATLAS research.</p><p>The <a href="https://www.ppicenter.org/" rel="nofollow">PPI Center</a>, which recently concluded its tenure, was founded “with a mission of bringing industry and university talent together to solve the intelligence challenges faced by software and computer engineers in Internet of Things systems." It operated under the supervision of the National Science Foundation and included members from NEC, Intel and Trimble.</p><blockquote><p><em>“It’s been such a good experience. We’ve learned a lot. Ellen Do and her team have helped to expand our thinking and encouraged us to explore new areas.”</em> - Dr. 
Haifeng Chen, Head of Data Science Department at NEC Laboratories, and his colleague Kai Ishikawa, Principal Researcher&nbsp;(PPI Center event recap)</p></blockquote><p>The PPI Center’s <a href="https://www.ppicenter.org/post/the-ppi-center-s-profound-impact-on-industry-faculty-students" rel="nofollow">Spring 2024 Industry Advisory Board Meeting</a> in Portland, OR, included a research poster session, and ATLAS students were honored with three of the four awards industry attendees voted on at the event.&nbsp;</p><ul><li><strong>Suibi Che-Chuan Weng</strong>, PhD student, won "Most Industry Ready" for <a href="/atlas/sites/default/files/attached-files/weng-editing_reality.pdf" rel="nofollow"><em>Editing Reality: Empowering Users to Manipulate Reality through Addition, Erasing, and Modification with Speech to Prompt in Mixed Reality</em></a>.</li><li><strong>Rishi Vanukuru</strong>, PhD student, won "Most Impactful" for <a href="/atlas/sites/default/files/attached-files/vanukuru-asynchronous_spatial_guidance.pdf" rel="nofollow"><em>Asynchronous spatial guidance using mobile devices and Augmented Reality</em></a>.</li><li><strong>Ada Zhao</strong>, MS student, won "Most Impactful" for <a href="/atlas/sites/default/files/attached-files/zhao-wizard_and_apprentice.pdf" rel="nofollow"><em>The WizARd and Apprentice: Augmented Reality Expert Capture for Training Novices</em></a>.</li></ul><p class="text-align-center"> </p><div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/ppi_suibi.jpg?itok=5E30jhrA" width="750" height="563" alt="Suibi Che-Chuan Weng receives his award certificate"> </div> .&nbsp;&nbsp; <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/ppi_rishi.jpg?itok=hz2hKwGz" width="750" height="563" alt="Rishi Vanukuru receives his 
award certificate"> </div> &nbsp; &nbsp; <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/ppi_zhao.jpg?itok=gD50IKdc" width="750" height="563" alt="Ada Zhao receives her award certificate"> </div> <p>Two more ATLAS PhD students participated: <strong>Krithik Ranjan</strong> presented <a href="/atlas/sites/default/files/attached-files/ranjan-puppet_guide.pdf" rel="nofollow"><em>PuppetGuide: Tangible Personalized Museum Tour Guides using LLMs</em></a> and <strong>David Hunter</strong> presented <a href="/atlas/sites/default/files/attached-files/hunter-tangible_interaction.pdf" rel="nofollow"><em>Tangible Interaction with Object Detection and Large Language Models</em></a>.</p><p>As for the experience of participating in the PPI Center, Do says, “It is good to know that the industry is interested in supporting research and considers our research relevant.” She sees ways ATLAS could form partnerships within several industry sectors on a range of themes due to the multidisciplinary nature of the research conducted here.</p><p>Since their involvement in PPI started, Do and her team have had a series of meetings with mentors from global technology firms, discussing collaborative research opportunities.</p><p>Vanukuru is currently doing an internship at Microsoft Research Cambridge focused on spatial computing in its VR/AR group. Weng and Zhao are working on research in the ACME Lab this summer, extending the Editing Reality, PuppetGuide, and WizARd and Apprentice projects with interns from the <a href="/engineering/students/research-opportunities/summer-program-undergraduate-research-cu-spur" rel="nofollow">CU SPUR program</a>. Zhao is also conducting a pilot study, interviewing expert laser cutter operators about how they would demonstrate operations and how they can annotate their demonstrations using the WizARd prototype for novice learners.
Hunter has embarked on an internship with Trimble this summer, while he and Ranjan are also working in the ACME Lab.</p></div> </div> </div> </div> </div> <div>ACME Lab members built relationships with industry players through the Pervasive Personalized Intelligence (PPI) Center by collaborating on solutions to challenges in building Internet of Things systems. Three ATLAS PhD students took home awards from the PPI Center's Spring 2024 Advisory Board Meeting.</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Thu, 18 Jul 2024 16:41:59 +0000 Anonymous 4698 at /atlas ATLAS in Ireland: 12 community members present at TEI’24 /atlas/atlas-ireland-12-community-members-present-tei24 <span>ATLAS in Ireland: 12 community members present at TEI’24</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2024-02-09T12:05:23-07:00" title="Friday, February 9, 2024 - 12:05">Fri, 02/09/2024 - 12:05</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/screenshot_2024-02-09_at_12.09.34_pm.png?h=8681559e&amp;itok=KvBy9zBf" width="1200" height="800" alt="Art and Demo Exhibition Venue building on the harbor in Cork, Ireland"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a 
href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/729" hreflang="en">alistar</a> <a href="/atlas/taxonomy/term/342" hreflang="en">devendorf</a> <a href="/atlas/taxonomy/term/390" hreflang="en">do</a> <a href="/atlas/taxonomy/term/168" hreflang="en">feature</a> <a href="/atlas/taxonomy/term/514" hreflang="en">gyory</a> <a href="/atlas/taxonomy/term/731" hreflang="en">living matter</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/376" hreflang="en">unstable</a> <a href="/atlas/taxonomy/term/883" hreflang="en">yang</a> <a href="/atlas/taxonomy/term/641" hreflang="en">zheng</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-text" itemprop="articleBody"> <div> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/93b9319e-7438-f5ee-2a56-bc5dd1fd765d.png?itok=R-va1_rw" width="375" height="375" alt="TEI 2024 logo"> </div> </div> <p>ATLAS is well-represented at #TEI2024 - the 18th ACM International Conference on Tangible, Embedded and Embodied Interaction. This year’s conference, in Cork, Ireland, celebrates “cutting-edge scientific research and art that is on the edge of disciplines and on the edge of new unique developments and possibilities.”</p><p>Research from 12 members of the ATLAS community including faculty, alumni and students is featured at the conference. The work spans a range of disciplines, including weaving, biomaterials, mixed reality and robotics. 
In addition, ACME Lab director Ellen Do acted as Co-Chair of the Graduate Student Consortium; PhD student Sandra Bae was an Associate Chair for Pictorials; and ATLAS PhD alum Fiona Bell was an Associate Chair for Papers.</p><p><strong>Research ATLAS PhD students presented at TEI’24</strong><br><br><a href="https://doi.org/10.1145/3623509.3633358" rel="nofollow"><strong>Loom Pedals: Retooling Jacquard Weaving for Improvisational Design Workflows</strong></a><br><a href="/atlas/shanel-wu" rel="nofollow"><strong>Shanel Wu</strong></a><strong>, </strong><a href="/atlas/xavier-corr" rel="nofollow"><strong>Xavier A Corr</strong></a><strong>, Xi Gao, </strong><a href="/atlas/sasha-de-koninck" rel="nofollow"><strong>Sasha De Koninck</strong></a><strong>, Robin Bowers, and</strong><a href="/atlas/laura-devendorf" rel="nofollow"><strong> Laura Devendorf</strong></a></p><p><strong>Abstract</strong>: We present the Loom Pedals, an open-source hardware/software interface for enhancing a weaver’s ability to create on-the-fly, improvised designs in Jacquard weaving. Learning from traditional handweaving and our own weaving experiences, we describe our process of designing, implementing, and using the prototype Loom Pedals system with a TC2 Digital Jacquard loom. The Loom Pedals include a set of modular, reconfigurable foot pedals which can be mapped to parametric Operations that generate and transform digital woven designs. Our novel interface integrates design and loom control, providing a customizable workflow for playful, improvisational Jacquard weaving. We conducted a formative evaluation of the prototype through autobiographical methods and collaboratively developed future Loom Pedals features.
We contribute our prototype, design process, and conceptual reflections on weaving as a human-machine dialog between a weaver, the loom, and many other agents.</p><p><a href="https://doi.org/10.1145/3623509.3633386" rel="nofollow"><strong>Bio-Digital Calendar: Attuning to Nonhuman Temporalities for Multispecies Understanding</strong></a><br><a href="/atlas/fiona-bell" rel="nofollow"><strong>Fiona Bell</strong></a><strong>, </strong><a href="/atlas/joshua-coffie" rel="nofollow"><strong>Joshua Coffie</strong></a><strong>, and </strong><a href="/atlas/mirela-alistar" rel="nofollow"><strong>Mirela Alistar</strong></a></p><p><strong>Abstract</strong>:&nbsp;We explore how actively engaging with the temporalities of a nonhuman organism can lead to multispecies understanding. To do so, we design a bio-digital calendar that brings attention to the growth and health of kombucha SCOBY, a symbiotic culture of bacteria and yeast that lives in a tea medium. The non-invasive bio-digital calendar surrounds the kombucha SCOBY to track (via sensors) and enhance (via sound) its growth. As we looked at and listened to our kombucha SCOBY calendar on a daily basis, we became attuned to the slowness of kombucha SCOBY. This multisensory noticing practice with the calendar, in turn, destabilized our preconceived human-centered positionality, leading to a more humble, decentered relationship between us and the organism. 
Through our experiences with the bio-digital calendar, we gained a better relational multispecies understanding of temporalities based on care, which, in the long term, might be a solution to a more sustainable future.</p><p><a href="https://doi.org/10.1145/3623509.3633395" rel="nofollow"><strong>Wizard of Props: Mixed Reality Prototyping with Physical Props to Design Responsive Environments</strong></a><br><strong>Yuzhen Zhang, Ruixiang Han, </strong><a href="/atlas/ran-zhou" rel="nofollow"><strong>Ran Zhou</strong></a><strong>, </strong><a href="/atlas/peter-gyory" rel="nofollow"><strong>Peter Gyory</strong></a><strong>, </strong><a href="/atlas/clement-zheng" rel="nofollow"><strong>Clement Zheng</strong></a><strong>, Patrick C. Shih, </strong><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><strong>Ellen Yi-Luen Do</strong></a><strong>, Malte F Jung, Wendy Ju, and </strong><a href="/atlas/daniel-leithinger" rel="nofollow"><strong>Daniel Leithinger</strong></a></p><p><strong>Abstract</strong>:&nbsp;Driven by the vision of future responsive environments, where everyday surroundings can perceive human behaviors and respond through intelligent robotic actuation, we propose Wizard of Props (WoP): a human-centered design workflow for creating expressive, implicit, and meaningful interactions. This collaborative experience prototyping approach integrates full-scale physical props with Mixed Reality (MR) to support ideation, prototyping, and rapid testing of responsive environments. We present two design explorations that showcase our investigations of diverse design solutions based on varying technology resources, contextual considerations, and target audiences. Design Exploration One focuses on mixed environment building, where we observe fluid prototyping methods. In Design Exploration Two, we explore how novice designers approach WoP, and illustrate their design ideas and behaviors. 
Our findings reveal that WoP complements conventional design methods, enabling intuitive body-storming, supporting flexible prototyping fidelity, and fostering expressive environment-human interactions through in-situ improvisational performance.</p><p><a href="https://doi.org/10.1145/3623509.3634740" rel="nofollow"><strong>Making Biomaterials for Sustainable Tangible Interfaces</strong></a><br><a href="/atlas/fiona-bell" rel="nofollow"><strong>Fiona Bell</strong></a><strong>, </strong><a href="/atlas/shanel-wu" rel="nofollow"><strong>Shanel Wu</strong></a><strong>, Nadia Campo Woytuk, </strong><a href="/atlas/eldy-lazaro" rel="nofollow"><strong>Eldy S. Lazaro Vasquez</strong></a><strong>, </strong><a href="/atlas/mirela-alistar" rel="nofollow"><strong>Mirela Alistar</strong></a><strong>, and Leah Buechley</strong></p><p><strong>Abstract</strong>:&nbsp;In this studio, we will explore sustainable tangible interfaces by making a range of biomaterials that are bio-based and readily biodegradable. Building off of previous TEI studios that were centered around one specific biomaterial (i.e., bioplastics at TEI’22 and microbial cellulose at TEI’23), this studio will provide participants the ability to experience a wide variety of biomaterials from algae-based bioplastics, to food-waste-based bioclays, to gelatin-based biofoams. We will teach participants how to identify types of biomaterials that are applicable to their own research and how to make them. Through hands-on activities, we will demonstrate how to implement biomaterials in the design of sustainable tangible interfaces and discuss topics sensitized by biological media such as more-than-human temporalities, bioethics, care, and unmaking. 
Ultimately, our goal is to facilitate a space in which HCI researchers and designers can collaborate, create, and discuss the opportunities and challenges of working with sustainable biomaterials.</p><p><a href="https://dl.acm.org/doi/10.1145/3623509.3634899" rel="nofollow"><strong>Paper Modular Robot: Circuit, Sensation Feedback, and 3D Geometry</strong></a><br><a href="/atlas/ruhan-yang" rel="nofollow"><strong>Ruhan Yang</strong></a></p><p><strong>Abstract</strong>: Modular robots have proven valuable for STEM education. However, modular robot kits are often expensive, which makes them limited in accessibility. My research focuses on using paper and approachable techniques to create modular robots. The kit’s design encompasses three core technologies: paper circuits, sensation feedback mechanisms, and 3D geometry. I have developed proof-of-concept demonstrations of technologies for each aspect. I will integrate these technologies to design and build a paper modular robot kit. This kit includes various types of modules for input, output, and other functions. My dissertation will discuss the development of these technologies and how they are integrated. 
This research will address the considerations and techniques for paper as an interactive material, providing a guideline for future research and development of paper-based interaction.</p><p>&nbsp;</p></div> </div> </div> </div> </div> <div>Research from 12 members of the ATLAS community including faculty, alumni and students is featured at the 18th ACM International Conference on Tangible, Embedded and Embodied Interaction.</div> <h2> <div class="paragraph paragraph--type--ucb-related-articles-block paragraph--view-mode--default"> <div>Off</div> </div> </h2> <div>Traditional</div> <div>0</div> <div>On</div> <div>White</div> Fri, 09 Feb 2024 19:05:23 +0000 Anonymous 4676 at /atlas ATLAS PhD Students Present at ISMAR 2023 /atlas/2023/10/25/atlas-phd-students-present-ismar-2023 <span>ATLAS PhD Students Present at ISMAR 2023</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2023-10-25T16:44:26-06:00" title="Wednesday, October 25, 2023 - 16:44">Wed, 10/25/2023 - 16:44</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/img_1777.jpg?h=fc7e893f&amp;itok=C1nOEpy0" width="1200" height="800" alt="Hopkins, Vanukuru and Weng standing beside Sydney Harbor"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/168" 
hreflang="en">feature</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-text" itemprop="articleBody"> <div><p>Billed as the premier conference for Augmented Reality (AR), Mixed Reality (MR) and Virtual Reality (VR), IEEE ISMAR was the perfect location for ATLAS community members to showcase their work this month.&nbsp;&nbsp;</p> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/logo_ismar.png?itok=txZBmy4U" width="375" height="138" alt="ISMAR logo linking to ISMAR website"> </div> </div> <p>ATLAS PhD students Rishi Vanukuru, Torin Hopkins and Suibi Che-Chuan Weng attended <a href="https://ismar23.org/" rel="nofollow">ISMAR 2023</a> in Sydney, Australia, from October 16-20, along with leading researchers in academia and industry.</p><p>Vanukuru presented his work on DualStream, a system for mobile phone-based spatial communication employing AR to give people more immersive tools to “share spaces and places.” He also participated in the “1st Joint Workshop on Cross Reality” with his research on using mobile devices to support collaboration.</p><p>Meanwhile, Hopkins and Weng displayed their respective research on improving ways for musicians to collaborate remotely.&nbsp;</p> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/img_1772.jpg?itok=VtOCJqqQ" width="375" height="281" alt="Vanukuru 
presenting his work on DualStream"> </div> <p>&nbsp;</p><p><strong>Research ATLAS PhD students presented at ISMAR 2023</strong></p><p><a href="https://arxiv.org/abs/2309.00842" rel="nofollow"><strong>DualStream: Spatially Sharing Selves and Surroundings using Mobile Devices and Augmented Reality</strong></a><br><a href="/atlas/rishi-vanukuru" rel="nofollow"><em>Rishi Vanukuru</em></a><em>, </em><a href="/atlas/suibi-che-chuan-weng" rel="nofollow"><em>Suibi Che-Chuan Weng</em></a><em>, </em><a href="/atlas/krithik-ranjan" rel="nofollow"><em>Krithik Ranjan</em></a><em>, </em><a href="/atlas/torin-hopkins" rel="nofollow"><em>Torin Hopkins</em></a><em>, </em><a href="/atlas/amy-banic" rel="nofollow"><em>Amy Banić</em></a><em>, </em><a href="/atlas/mark-d-gross" rel="nofollow"><em>Mark D. Gross</em></a><em>, </em><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><em>Ellen Yi-Luen Do</em></a></p><p><strong>Abstract</strong>: In-person human interaction relies on our spatial perception of each other and our surroundings. Current remote communication tools partially address each of these aspects. Video calls convey real user representations but without spatial interactions. Augmented and Virtual Reality (AR/VR) experiences are immersive and spatial but often use virtual environments and characters instead of real-life representations. Bridging these gaps, we introduce DualStream, a system for synchronous mobile AR remote communication that captures, streams, and displays spatial representations of users and their surroundings. DualStream supports transitions between user and environment representations with different levels of visuospatial fidelity, as well as the creation of persistent shared spaces using environment snapshots. We demonstrate how DualStream can enable spatial communication in real-world contexts, and support the creation of blended spaces for collaboration. 
A formative evaluation of DualStream revealed that users valued the ability to interact spatially and move between representations, and could see DualStream fitting into their own remote communication practices in the near future. Drawing from these findings, we discuss new opportunities for designing more widely accessible spatial communication tools, centered around the mobile phone.<br>&nbsp;</p><p><strong>Exploring the use of Mobile Devices as a Bridge for Cross-Reality Collaboration [</strong><a href="https://ieeexplore.ieee.org/document/10322212" rel="nofollow"><strong>Workshop Paper</strong></a><strong>]</strong><br><a href="/atlas/rishi-vanukuru" rel="nofollow"><em>Rishi Vanukuru</em></a><em>, </em><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><em>Ellen Yi-Luen Do</em></a><em>&nbsp;</em></p><p><strong>Abstract: </strong>Augmented and Virtual Reality technologies enable powerful forms of spatial interaction with a wide range of digital information. While AR and VR headsets are more affordable today than they have ever been, their interfaces are relatively unfamiliar, and a large majority of people around the world do not yet have access to such devices. Inspired by contemporary research towards cross-reality systems that support interactions between mobile and head-mounted devices, we have been exploring the potential of mobile devices to bridge the gap between spatial collaboration and wider availability. In this paper, we outline the development of a cross-reality collaborative experience centered around mobile phones. Nearly fifty users interacted with the experience over a series of research demo days in our lab. 
We use the initial insights gained from these demonstrations to discuss potential research directions for bringing spatial computing and cross-reality collaboration to wider audiences in the near future.<br>&nbsp;</p><p><strong>Investigating the Effects of Limited Field of View on Jamming Experience in Extended Reality [</strong><a href="https://ieeexplore.ieee.org/document/10322126" rel="nofollow"><strong>Poster Paper</strong></a><strong>]</strong><br><a href="/atlas/suibi-che-chuan-weng" rel="nofollow"><em>Suibi Che-Chuan Weng</em></a><em>, </em><a href="/atlas/torin-hopkins" rel="nofollow"><em>Torin Hopkins</em></a><em>, Shih-Yu Ma, </em><a href="/atlas/chad-tobin" rel="nofollow"><em>Chad Tobin</em></a><em>, </em><a href="/atlas/amy-banic" rel="nofollow"><em>Amy Banić</em></a><em>, </em><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><em>Ellen Yi-Luen Do</em></a></p><p><strong>Abstract</strong>: During musical collaboration, extra-musical visual cues are vital for communication between musicians. Extended Reality (XR) applications that support musical collaboration are often used with head-mounted displays such as Augmented Reality (AR) glasses, which limit the field of view (FOV) of the players. We conducted a three-part study to investigate the effects of limited FOV on co-presence, including a within-subjects user study (n=19) comparing an unrestricted-FOV holographic setup to Nreal AR glasses with a 52° limited FOV. In the AR setup, we tested two conditions: 1) a standard AR experience with a 52°-limited FOV, and 2) a modified AR experience, inspired by player feedback. 
Results showed that the holographic setup offered higher co-presence with avatars.<br>&nbsp;</p><p><strong>Networking AI-Driven Virtual Musicians in Extended Reality [Poster]</strong><br><a href="/atlas/torin-hopkins" rel="nofollow"><em>Torin Hopkins</em></a><em>, </em><a href="/atlas/rishi-vanukuru" rel="nofollow"><em>Rishi Vanukuru</em></a><em>, </em><a href="/atlas/suibi-che-chuan-weng" rel="nofollow"><em>Suibi Che-Chuan Weng</em></a><em>, </em><a href="/atlas/chad-tobin" rel="nofollow"><em>Chad Tobin</em></a><em>, </em><a href="/atlas/amy-banic" rel="nofollow"><em>Amy Banić</em></a><em>, </em><a href="/atlas/mark-d-gross" rel="nofollow"><em>Mark D. Gross</em></a><em>, </em><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><em>Ellen Yi-Luen Do</em></a></p><p><strong>Abstract</strong>: Music technology has embraced Artificial Intelligence as part of its evolution. This work investigates a new facet of this relationship, examining AI-driven virtual musicians in networked music experiences. Having surged in popularity during the COVID-19 pandemic, networked music enables musicians to meet virtually, unhindered by geographical restrictions. This work begins to extend existing research that has focused on networked human-human interaction by exploring AI-driven virtual musicians’ integration into online jam sessions. Preliminary feedback from a public demonstration of the system suggests that despite varied understanding levels and potential distractions, participants generally felt their partner’s presence, were task-oriented, and enjoyed the experience. 
This pilot aims to open opportunities for improving networked musical experiences with virtual AI-driven musicians and informs directions for future studies with the system.</p><div class="row ucb-column-container"><div class="col ucb-column"> <div class="align-center image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/img_1778_0.jpg?itok=eaK_nTaX" width="375" height="522" alt="Weng standing with his poster on extended reality research"> </div> </div> </div><div class="col ucb-column"> <div class="align-center image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/img_1780_0.jpg?itok=e0LqHcOy" width="375" height="522" alt="Hopkins standing with his poster on extended reality research"> </div> </div> </div></div></div> </div> </div> </div> </div> <div>ATLAS PhD students Rishi Vanukuru, Torin Hopkins and Suibi Che-Chuan Weng attended ISMAR 2023 in Sydney in October to present research on AR, VR and MR.</div> Wed, 25 Oct 2023 22:44:26 +0000 Anonymous 4648 at /atlas Ellen Yi-Luen Do Presents Keynote on Fun with Creative Technology & Design at TaiCHI 2023 /atlas/2023/09/13/ellen-yi-luen-do-presents-keynote-fun-creative-technology-design-taichi-2023 <span>Ellen Yi-Luen Do Presents Keynote on Fun with Creative Technology &amp; Design at TaiCHI 2023</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2023-09-13T12:51:39-06:00" title="Wednesday, 
September 13, 2023 - 12:51">Wed, 09/13/2023 - 12:51</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/ellen_speaking.jpeg?h=3fb1951d&amp;itok=5X8R_Kv_" width="1200" height="800" alt="Do speaking on stage at TaiCHI 2023"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/390" hreflang="en">do</a> <a href="/atlas/taxonomy/term/168" hreflang="en">feature</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-text" itemprop="articleBody"> <div> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/image.png?itok=baMRG0m0" width="375" height="93" alt="TaiCHI logo"> </div> </div> <p dir="ltr">ATLAS Professor Ellen Yi-Luen Do had the opportunity to be a keynote speaker at <a href="https://taichi2023.taiwanchi.org/" rel="nofollow">TaiCHI 2023</a>, a symposium hosted by 
the Taiwan Human-Computer Interaction Society at Taiwan University in Taipei. The event gathered researchers and practitioners across a range of backgrounds in technology, design and human factors to deepen community connections and explore new ideas.&nbsp;</p><p dir="ltr">Sessions included presentations on fabrication, perception, interactions and other timely topics, with a surprising range in mediums from humble materials like felt and puppets to advanced VR technologies and metaverse interactivity.</p><p dir="ltr">As director of the <a href="/atlas/acme-lab" rel="nofollow">ACME Lab</a> at ATLAS, Do and her team conduct research on using everyday items as interfaces, creating objects to think with, new ways of working, and methods and tools to help others make things. Do delivered her presentation, entitled “Fun with Creative Technology &amp; Design”, advocating for playful computing with easily accessible materials like paper and cardboard, while highlighting ways to make toolkits for others to create for themselves.&nbsp;</p> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/fun_with_creative_technology.jpg?itok=Yf7tNubw" width="750" height="379" alt="Title slide of Do's presentation on Fun with Creative Technology and Design"> </div> </div> <p dir="ltr">&nbsp;</p><p dir="ltr">The audience, which included experts in computer science, psychology, media, art, design and business responded enthusiastically, finding common ground in this relatable, inclusive approach to otherwise complex technologies. Do received a particularly warm reception from students in the field. 
She noted, “Several students came to thank me for my talk, stating that they learned so much from me, and that they never thought research could be this fun and interesting.”&nbsp;</p> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/ellen_speaking.jpeg?itok=WZ9fgX8N" width="750" height="500" alt="Ellen Do speaking on stage"> </div> </div> <p dir="ltr">&nbsp;</p><p dir="ltr">Do expressed excitement for a few standout presentations from the conference including <a href="https://www.edchi.net/" rel="nofollow">Ed Chi</a>, Distinguished Scientist at Google DeepMind, who delivered a keynote on the large language model revolution. She said, “I was happy to learn that Bard will be a tool-use application applying to many of the Google apps and services people already use, including Maps, Sheets, Gmail, Docs, and more.”&nbsp;</p><p dir="ltr">She also called out <a href="https://www.youtube.com/watch?v=IbRG8cLv4mo" rel="nofollow">FeltingReel: Density Varying Soft Fabrication with Reeling and Felting</a> by Ping-Yi Wang and Lung-Pan Cheng as particularly intriguing.</p><p dir="ltr">Back in 2015, Do wrote the article “<a href="https://dl.acm.org/doi/10.1145/2694475" rel="nofollow">A flourishing field: a guide to HCI in China, Taiwan, and Singapore</a>”, and saw the founding of Taiwan HCI. Looking back, she reflects, “I’m happy to see TaiCHI 2023 have 300 people registered with vibrant discussions, demos and posters. 
It's definitely growing!”</p> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/group_shot.jpeg?itok=eWyTxcEw" width="750" height="500" alt="Group shot of TaiCHI 2023 attendees"> </div> </div> </div> </div> </div> </div> </div> <div>ATLAS Professor Ellen Yi-Luen Do presented on Fun with Creative Technology &amp; Design as keynote speaker at TaiCHI 2023.</div> Wed, 13 Sep 2023 18:51:39 +0000 Anonymous 4634 at /atlas Sandra Bae, ATLAS PhD Student, Awarded at VIS 2023 /atlas/2023/08/30/sandra-bae-atlas-phd-student-awarded-vis-2023 <span>Sandra Bae, ATLAS PhD Student, Awarded at VIS 2023</span> <span><span>Anonymous (not verified)</span></span> <span><time datetime="2023-08-30T09:47:51-06:00" title="Wednesday, August 30, 2023 - 09:47">Wed, 08/30/2023 - 09:47</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/article-thumbnail/utility_touchsensing_img_9379.png?h=1d4eb506&amp;itok=6YacT45_" width="1200" height="800" alt="Touch sensing 3D printed node close up"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i 
class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/1227" hreflang="en">bae</a> <a href="/atlas/taxonomy/term/168" hreflang="en">feature</a> <a href="/atlas/taxonomy/term/34" hreflang="en">news</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> <a href="/atlas/taxonomy/term/1511" hreflang="en">rivera</a> <a href="/atlas/taxonomy/term/1510" hreflang="en">utility</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-text" itemprop="articleBody"> <div><p>Sandra Bae, PhD student and member of the <a href="/atlas/utility-research-lab" rel="nofollow">Utility Research Lab</a> and <a href="/atlas/acme-lab" rel="nofollow">ACME Lab</a> at ATLAS, has been honored with a Best Paper Honorable Mention at VIS 2023 for her research on network physicalizations.&nbsp;</p> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/article-image/vis2023_logo.jpg?itok=w93ORpkn" width="375" height="86" alt="VIS 2023 logo"> </div> </div> <p>Billed as “the premier forum for advances in theory, methods and applications of visualization and visual analytics”, <a href="https://ieeevis.org/year/2023/welcome" rel="nofollow">VIS 2023</a> will be held in Melbourne, Australia, from October 22-27, and is sponsored by IEEE. 
The Best Papers Committee bestows honorable mentions on the top 5% of publications submitted.&nbsp;</p><p>The paper introduces a computational design pipeline to 3D print physical representations of networks enabling touch interactivity via capacitive sensing and computational inference.</p><p class="text-align-center">[video:https://youtu.be/uv0Yu0WUeSQ]</p><p>&nbsp;</p><p><a href="https://arxiv.org/abs/2308.04714#:~:text=A%20Computational%20Design%20Pipeline%20to%20Fabricate%20Sensing%20Network%20Physicalizations,-S.&amp;text=Interaction%20is%20critical%20for%20data,visualization%2C%20fabrication%2C%20and%20electronics." rel="nofollow"><strong>A Computational Design Process to Fabricate Sensing Network Physicalizations</strong></a><strong>&nbsp;</strong><br><a href="/atlas/sandra-bae" rel="nofollow"><em>S. Sandra Bae</em></a><em>, Takanori Fujiwara, Anders Ynnerman, </em><a href="/atlas/ellen-yi-luen-do" rel="nofollow"><em>Ellen Yi-Luen Do</em></a><em>, </em><a href="/atlas/michael-rivera" rel="nofollow"><em>Michael L. Rivera</em></a><em>, </em><a href="/atlas/danielle-szafir" rel="nofollow"><em>Danielle Albers Szafir</em></a></p><p><strong>Abstract</strong><br><em>Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. 
Using our pipeline, designers can readily produce network physicalizations supporting selection (the most critical atomic operation for interaction) by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently considering the form and interactivity of a physicalization within one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption.</em></p> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/utility_touchsensing1_0.png?itok=qMgPfJAX" width="750" height="425" alt="Touch sensing network digital rendering"> </div> </div> <p>&nbsp;</p><p>Bae describes potential use cases for sensing network physicalizations:</p><ul><li><strong>Accessibility visualization</strong> - Accessible visualizations (e.g., tactile visualizations) focus on making data visualization more inclusive, particularly for those with low vision or blindness. However, most tactile visualizations are static and non-interactive, which reduces data expressiveness and inhibits data exploration. This technique can create more interactive tactile visualizations.</li><li><strong>AR/VR</strong> - Most AR/VR devices use computer vision (CV), but most devices using CV cannot reproduce the haptic benefits that we naturally leverage (holding, rotating, tracing) with our sense of touch. Past studies confirm the importance of tangible inputs when virtually exploring data. But creating tangible devices for AR/VR requires too much instrumentation to make them interactive. 
Our technique would enable developers to more easily produce fully functional, responsive controllers right from the printer within a single pass.</li></ul> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/utility_touchsensing_img_9319_0.png?itok=KVBKz2Oa" width="750" height="413" alt="Touch sensing 3D printed network"> </div> </div> <p>&nbsp;</p><p>The work continues as Bae plans to pursue more complex designs and richer interactivity including:</p><p><strong>Fabricating bigger networks</strong> -&nbsp;The biggest network Bae&nbsp;has 3D printed so far is 20 nodes and 40 links, but this is rather small for most network datasets. She will scale&nbsp;this technique to support bigger networks.</p><p><strong>Supporting output</strong> -&nbsp;Interactive objects receive input (e.g., from touch) and produce output (e.g., light, sound, color change) in a controlled manner. 
The sensing network currently addresses the first part of the interaction loop by responding to touch inputs, but she next wants to explore how to support output.</p><div class="row ucb-column-container"><div class="col ucb-column"> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/utility_touchsensing_img_9321.png?itok=je39UoIb" width="750" height="563" alt="Touch sensing 3D printed network node close up"> </div> </div> </div><div class="col ucb-column"> <div class="align-center image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/utility_touchsensing_img_9379.png?itok=ePvfrjyO" width="750" height="466" alt="Touch sensing 3D printed node"> </div> </div> </div></div><p>Bae showcased this research along with fellow ATLAS community members at the <a href="/atlas/2023/05/08/atlas-innovators-win-big-reprap-festival" rel="nofollow">Rocky Mountain RepRap Festival</a> earlier this year. We’re excited to see where her innovative research leads next.</p></div> </div> </div> </div> </div> Wed, 30 Aug 2023 15:47:51 +0000 Anonymous 4622 at /atlas