I am an Associate Professor in the Carnegie Mellon Robotics Institute. I'm interested in systems and interfaces that operate in real-time and build user intuition; lately, I've been applying these ideas to textiles fabrication and machine knitting as the leader of the Carnegie Mellon Textiles Lab.
This page is a monolithic presentation of the research software and projects I'm associated with. If you are interested in my personal life, games, or terse ramblings, please visit my web server at tchow.com.
I was a PhD student in the graphics lab advised by Nancy Pollard. Having mostly graduated in August 2010, I began a postdoc at Adobe's research lab in Boston. In October of 2012 I moved back to Pittsburgh to work full time on writing video games, creating Rktcr [for Windows, OS X, and Linux] and Rainbow [for iOS and Android]. In February of 2014, I took a job with Disney Research Pittsburgh, mainly because they offered me the opportunity to make real things with my research. In May of 2017, I joined the faculty of the Carnegie Mellon Robotics Institute.
My brother Mike McCann also earned a PhD at Carnegie Mellon University.
Industrial knitting machines are amazing pieces of hardware, but the software stack that drives them is heavily manufacturing-focused. My group is building the tools and infrastructure for rapid iteration and one-off manufacturing with machine knitting.
One approach we have taken is to build a "compiler" to perform low-level scheduling of knit primitives, leaving high-level design to the user. This makes it easy to knit complicated 3D shapes without worrying about stitch placement on the machine bed. A description of our compiler for knitting appeared at SIGGRAPH 2016. For more information see the project page.
We've also developed tools to automatically convert 3D meshes directly to machine knitting instructions. Project page.
More generally, we've tackled the problem of output-domain editing of knit structures while preserving knitability. Project page.
Many modern design tasks -- from particle effect generation to material design -- boil down to exploring and refining parameter selections in high-dimensional spaces. The Design Adjectives framework presents a modern take on how to integrate machine learning with design exploration and refinement.
This work connects to a larger theme of my work: building tools that help everyone become more creative and productive. The Design Adjectives system specifically fits with this goal by providing both exploration tools (good for everyone, but especially for non-experts trying to understand a domain) and low-level adjustment tools (required by experts).
This work will appear at UIST 2020. For more information, see the project page.
The flexibility of 3D printers means that structures can now be designed with complex internal geometry. This allows -- among other things -- informed trade-offs between strength and weight. We've explored optimization algorithms to deal with these trade-offs in a few interesting cases.
In one case, we have developed a lightweighting approach that works with unknown force locations by optimizing for the worst-case force. Project Page.
In a later project, we used a novel heat-based shell structure to ensure that the optimized object contains a single, connected, inner cavity for ease of cleaning or filling. Project Page.
During his internship at Disney Research, we set Andrew the task of tracking cover and motion events with RFID tags 10x faster than previous work, a feat he accomplished through the power of Bayes filters.
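The core of a Bayes filter is only a few lines. Here's a toy two-state version (is a tag covered, or not?) driven by a stream of read/no-read events; the probabilities and names are made up for illustration and are not the actual RapID sensor model:

```python
# Made-up sensor and dynamics probabilities -- not the RapID model.
P_READ_GIVEN_COVERED = 0.1   # a covered tag rarely responds
P_READ_GIVEN_CLEAR = 0.9     # an uncovered tag usually responds
P_FLIP = 0.05                # chance the covered state toggles between attempts

def bayes_filter(observations, p_covered=0.5):
    """Track P(covered) over a stream of read outcomes (True = tag was read)."""
    beliefs = []
    for got_read in observations:
        # Predict: the state may have toggled since the last read attempt.
        p_covered = p_covered * (1 - P_FLIP) + (1 - p_covered) * P_FLIP
        # Correct: weight each hypothesis by how well it explains the outcome.
        like_cov = P_READ_GIVEN_COVERED if got_read else 1 - P_READ_GIVEN_COVERED
        like_clear = P_READ_GIVEN_CLEAR if got_read else 1 - P_READ_GIVEN_CLEAR
        num = like_cov * p_covered
        p_covered = num / (num + like_clear * (1 - p_covered))
        beliefs.append(p_covered)
    return beliefs
```

After a run of no-reads the belief in "covered" climbs quickly, which is what lets the filter commit to events much faster than simple thresholding.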
As part of the RapID pipeline, we built a Unity-compatible runtime, which allowed us to create some fun and cool looking example applications.
This work appeared at CHI 2016. For more information see the project page.
I was also involved in a follow-on project about deploying RFID readers in a light-bulb form-factor, which appeared at UbiComp 2017. Project page.
My first Disney-affiliated paper describes a 3D printer that works with layered felt. Huaishu managed to take this project from back-of-a-napkin sketches (from Scott, Jen, and me) to a finished device in just 3 months. There are a lot of challenges inherent in handling, cutting, and bonding fabric, and this 3D printer manages to tackle them all in elegant ways.
This work appeared in the ACM Conference on Human Factors in Computing Systems (CHI) 2015. For more information, see the project page.
Another Adobe intern project. Tianqiang worked on optimizing the layout of furnishings in an attempt to automatically generate product images in the style of catalogs (e.g. IKEA). Perhaps unsurprisingly, this required an optimization with quite a few terms in its objective function. That said, the results it produces are actually quite good (in a statistically-meaningful way).
This work appeared in Computer Graphics Forum (Proceedings of Eurographics 2015). For more information, consult the project page.
This project (which began when Ben was an intern at Adobe) describes a hybrid of traditional sprite sheets and physics-driven animation. These "dynamic sprites" try to keep their large-scale deformations close to an artist-authored pose manifold while allowing physics to control both their fine-scale deformations and overall trajectory.
A paper describing this work appeared at the Motion in Games conference in November, 2013. For more information, see the project page.
In this intern project at Adobe, Sehoon designed a system for tuning physical (and pseudo-physical) simulations. Given a storyboard consisting of example behaviors and objective functions, this system is able to find parameters that optimize the objective, thus automatically tuning the game or simulation.
We demonstrated this system on three games ranging from the classically-physical (an Angry Birds clone), to arcade-physics (a top-view racing game), to pseudo-physics (an RTS).
This work appeared at Eurographics 2013. Project Page.
Graphics programs provide align and distribute buttons that do something (often based on object bounding boxes), but does this thing match what people do? I performed a series of Mechanical Turk experiments to answer this question. In each experiment, turkers were asked to align an object horizontally or vertically with two other objects, or distribute the space between objects evenly.
These experiments suggest that perpendicular distance (for distribute) and center of convex hull (for align) more closely match what turkers do than current bounding-box methods. (Though this does not imply that these are better choices for a graphics program.)
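For the curious, here's an illustrative sketch (my toy code, not the experiment harness) of the "center of convex hull" alignment anchor: compute the hull with Andrew's monotone chain, then take its area centroid. For asymmetric shapes this can differ noticeably from the bounding-box center:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_center(points):
    """Area centroid of the convex hull, via the standard polygon-centroid
    formula; a candidate alignment anchor per the experiments."""
    hull = convex_hull(points)
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1]):
        w = x0 * y1 - x1 * y0
        a += w
        cx += (x0 + x1) * w
        cy += (y0 + y1) * w
    return (cx / (3 * a), cy / (3 * a))
```

For a right triangle with vertices (0,0), (3,0), (0,3), the hull centroid lands at (1,1) while the bounding-box center sits at (1.5,1.5) -- exactly the sort of gap the experiments probed.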
This work has not been published. If you'd be interested in continuing the project, let me know.
While in Boston, I occasionally walked over to Disney Research and worked on this system to use a lenslet array for simultaneous, co-located input and output of 3D lightfields. My main contribution was a solid-surface renderer (used in our demo, not shown in video) that worked by rendering a grid of 20x20 views and then carefully multiplexing them for the lenslets in a second pass.
We showed a 4-projector system (larger than that shown to the right) in SIGGRAPH 2012's Emerging Technologies showcase. Here's our video.
We published a full system description at UIST 2015; see the project page.
Soft Stacking is the continuous follow-on to Local Layering. It is to Local Layering what continuous alpha is to a covered/not-covered bitmask. With the techniques outlined in Soft Stacking you can represent and edit multiple stacking orders inside the same pixel.
This work appeared, in part, in my thesis. A revised version appeared as a paper at Eurographics 2012. The paper and video are available on the Project Page.
In my dissertation, I provided examples of systems where -- by choosing carefully what to make local -- standard 2D graphics tools were revised and improved.
In the Spring semester of 2009 I mentored two undergraduates working on a new idea I had in fluid control: mid-level control -- where features of fluid neighborhoods are controlled, but global pose is free to evolve. They started with my sketchy vision and added the ideas and research required to create a working prototype.
This work appeared as a poster at SCA 2009 and 2010, and as a paper at GI 2011. See the project page for more details.
In graphics, we often stack up images like animation cels -- using one global order, even when layers have transparent holes and irregular shapes that would allow them to twine about each other if cut from, e.g., paper. In this project, I present a simple method for realizing such paper-like stackings, and introduce a set of navigation primitives which are both correct and sufficient.
This work appeared as a paper at SIGGRAPH 2009. The paper and video are available on the Project Page.
I've designed a system to allow for brush-based gradient-domain editing with real-time feedback. This gives you a real gradient-domain clone brush and -- more interestingly -- lets you do gradient-domain sketches (like the teaser image to the right).
This work appeared as a paper at SIGGRAPH 2008. The paper and video are available on the Project Page.
In my background research for Real-time Gradient Domain Painting, I came across a method (described by R.W. Hockney in 1965) for solving Poisson's equation by using FFTs along one dimension in conjunction with the tridiagonal matrix equation. As this method is well-suited to out-of-core use on large images, I compared it with two state-of-the-art methods from 2007 and 2008.
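The basic scheme is easy to sketch on a model problem (this is illustrative code, not the benchmarked implementation): an FFT along a periodic x direction decouples the 5-point discrete Poisson equation into one tridiagonal system per frequency, each solved with the Thomas algorithm. Function names here are mine:

```python
import numpy as np

def hockney_poisson(f, h):
    """Solve the 5-point discrete Poisson equation lap(u) = f on a grid
    that is periodic in x and has u = 0 on the two y boundaries."""
    nx, ny = f.shape
    fhat = np.fft.fft(f, axis=0)                    # decouple x-frequencies
    # Per-frequency diagonal: periodic second-difference eigenvalue in x
    # plus the -2 from the second difference in y.
    lam = 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx) - 4.0
    rhs = h * h * fhat[:, 1:-1]                     # interior y-rows only
    m = ny - 2
    cp = np.zeros((nx, m), dtype=complex)           # modified superdiagonal
    dp = np.zeros((nx, m), dtype=complex)           # modified right-hand side
    cp[:, 0] = 1.0 / lam
    dp[:, 0] = rhs[:, 0] / lam
    for j in range(1, m):                           # Thomas: forward sweep
        denom = lam - cp[:, j - 1]
        cp[:, j] = 1.0 / denom
        dp[:, j] = (rhs[:, j] - dp[:, j - 1]) / denom
    sol = np.zeros((nx, m), dtype=complex)
    sol[:, -1] = dp[:, -1]
    for j in range(m - 2, -1, -1):                  # Thomas: back substitution
        sol[:, j] = dp[:, j] - cp[:, j] * sol[:, j + 1]
    uhat = np.zeros_like(fhat)
    uhat[:, 1:-1] = sol                             # boundary rows stay zero
    return np.fft.ifft(uhat, axis=0).real
```

The appeal for out-of-core use is the access pattern: each tridiagonal solve only ever touches one row of data ahead of and behind the current position.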
This work appeared as a poster at SIGGRAPH 2008. The abstract which appeared in the poster proceedings is here. The actual poster is here (note: this is a big image).
In this project, we investigated what makes a pleasing transition between motion capture clips. We arrived at a dynamics-based approach built on effort minimization, which not only gives any potential stitching a score but also picks both a transition path and a duration.
The results of this work appeared as a Eurographics 2008 short paper Laziness is a virtue: Motion stitching with effort minimization, which I presented.
In this compilers class project, we investigated the potential for compiler-based level-of-detail in shaders. Our final compiler can load and parse ARB_fragment_program and ARB_vertex_program shaders into a graph-based intermediate representation, and then intelligently select per-fragment operations to hoist into the vertex shader using estimates of final visual error caused by result interpolation.
This approach should allow a single shader source to be used across many different graphics cards, as the shader can be dynamically degraded as performance decreases.
We think this is an interesting avenue of research, but do not intend to extend or publish these results at this time.
In this project, we investigated methods of visualizing large data sets (like the CMU mocap database and the millions of images other lab members have crawled from Flickr). We created software to automatically extract a real-time zoomable map of the data, useful for exploration and intuition-building.
We presented a preliminary poster at SCA 2007; however, these results have not yet been published further, due to other projects taking much of our time.
I investigated approaches to make motion graphs with more immediate control, as would be useful in a game environment. I settled on reinforcement learning (specifically value iteration) as the key method.
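Value iteration itself is compact; here's a toy version over abstract "motion fragments" (the states, actions, and rewards below are made up for illustration, not the paper's actual formulation):

```python
# Toy value iteration: states are motion fragments, an action picks the
# successor fragment, and the reward is whatever the controller cares about.

def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
    """actions(s) -> iterable of actions; transition(s, a) -> next state;
    reward(s, a) -> immediate reward. Returns values and a greedy policy."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        # Bellman backup over all states (old V is read, new V is written).
        V = {s: max(reward(s, a) + gamma * V[transition(s, a)]
                    for a in actions(s))
             for s in states}
    # Precompute the greedy policy: the fragment to play next in each state.
    policy = {s: max(actions(s),
                     key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
              for s in states}
    return V, policy
```

The point of doing this offline is responsiveness at runtime: picking the next fragment is a table lookup, not a graph search.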
This work appeared at SIGGRAPH 2007 as Responsive characters from motion fragments. The final pdf and video are available on the project page.
I investigated changing the timing of motion capture data using optimization of a physically-based objective function.
This work appeared as Physics-based motion retiming at SCA 2006. The final paper is available here. Some associated video material is available here.
Additionally, I've made the messy source code available. (It's messy because I used C++ tricks to calculate inverse dynamics parametrically -- never underestimate the power of operator overloading!)
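If you just want the flavor of the operator-overloading trick, here's a minimal dual-number sketch in Python rather than C++ (my reading of the approach: forward-mode automatic differentiation, where each quantity carries its derivative along through ordinary arithmetic):

```python
class Dual:
    """A value and its derivative, propagated through ordinary arithmetic
    (forward-mode automatic differentiation via operator overloading)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx in a single pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot
```

Any formula written against overloaded operators computes its own derivative for free -- which is what makes the trick so handy for parametric inverse dynamics.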
As part of my doings, I've had to work extensively with motion capture data in ASF/AMC format (from the mocap database, in fact).
This is a release of the source code I use to read the files, wrapped up in a demo program which allows viewing. The program uses SDL+OpenGL for output.
I use custom presentation software for my talks. The source is here to satisfy the curious. This source is provided freely. Don't sell it (not that anyone would buy it). This was never intended to be user-friendly; keep that in mind, but feel free to e-mail comments and suggestions anyway. Heck, I may even provide limited technical support if I'm not busy, so it doesn't hurt to try.
You can grab the source and some example presentations on github.
Once you've built the source (you'll need standard Perforce Jam -- FTJam will work, Boost Jam won't), simply type './hack README.hack' in the dist directory.
For equations to work, you'll need LaTeX or a different 'dist/equations/*.pl'.
If you have problems with missing OpenGL extensions, edit the Jamfile to get rid of modules you aren't using.
An experiment in teaching students to write process-based interactive arguments using html and javascript, premiering in Spring 2021.
I teach a course called Algorithmic Textiles Design about using code to make things with CNC sewing, embroidery, weaving, and knitting machines. So far, it has run in Spring 2019, Spring 2020, and Spring 2021.
I was the professor for Computer Game Programming in Fall 2017, Fall 2018, Fall 2019, Fall 2020, and Fall 2021. In the course, students create computer games with C++ and OpenGL.
I co-taught Computer Graphics in Fall 2019.
I was the professor for Game Engine Programming, in which my students worked together to create and extend game engines in JavaScript and C++.
I was the instructor for Computer Game Programming. It's been a crazy ride, but playing all the games has been interesting.
I was the TA for Computational Photography in Fall 2007. The students in this class did some great final projects, so -- as you've pretty much run out of my page at this point -- why not head over there and check them out?
I was the TA for AA&T in Spring 2006.
Notes on setting up the Linux computers for good Maya menus available here: on the course web page
Additional info about setting up the GNOME window manager here
A simple example shader (draws hexagons) here.
Notes on using Gentoo Linux instead of the SCS-supplied Fedora here. Includes notes on printing, backups, AFS, and graphics group {NIS,NFS}.