Publication
Unsupervised Image to Sequence Translation with Canvas-Drawer Networks
Abstract
Kevin Frans, Chin-Yi Cheng
Encoding images as a series of high-level constructs, such as brush strokes or discrete shapes, can often be key to both human and machine understanding. In many cases, however, data is only available in pixel form. We present a method for generating images directly in a high-level domain (e.g. brush strokes), without the need for real pairwise data. Specifically, we train a "canvas" network to imitate the mapping of high-level constructs to pixels, followed by a high-level "drawing" network which is optimized through this mapping towards solving a desired image recreation or translation task. We successfully discover sequential vector representations of symbols, large sketches, and 3D objects, utilizing only pixel data. We demonstrate applications of our method to image segmentation, and present several ablation studies comparing various configurations.
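The two-stage training described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; the network sizes, the toy `render_strokes` placeholder, and all names are assumptions made for illustration. It first fits a "canvas" network to imitate a (possibly non-differentiable) renderer that maps stroke parameters to pixels, then trains a "drawer" network by backpropagating a pixel reconstruction loss through the frozen canvas.

```python
# Minimal sketch of the canvas-drawer idea (illustrative only, not the paper's code).
import torch
import torch.nn as nn

STROKE_DIM, N_STROKES, IMG = 6, 8, 32  # hypothetical sizes

canvas = nn.Sequential(                # learns strokes -> pixels (differentiable surrogate)
    nn.Linear(N_STROKES * STROKE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG * IMG), nn.Sigmoid())

drawer = nn.Sequential(                # learns pixels -> strokes
    nn.Linear(IMG * IMG, 512), nn.ReLU(),
    nn.Linear(512, N_STROKES * STROKE_DIM), nn.Sigmoid())

def render_strokes(params):
    """Stand-in for a real, possibly non-differentiable renderer:
    plots one pixel per stroke at the stroke's (x, y) position."""
    b = params.shape[0]
    xy = params.view(b, N_STROKES, STROKE_DIM)[:, :, :2]        # positions in [0, 1]
    img = torch.zeros(b, IMG, IMG)
    idx = (xy * (IMG - 1)).long()
    for i in range(N_STROKES):
        img[torch.arange(b), idx[:, i, 1], idx[:, i, 0]] = 1.0
    return img.view(b, -1)

# Stage 1: fit the canvas network to imitate the renderer on random strokes.
opt_c = torch.optim.Adam(canvas.parameters(), lr=1e-3)
for _ in range(100):
    strokes = torch.rand(64, N_STROKES * STROKE_DIM)
    loss = ((canvas(strokes) - render_strokes(strokes)) ** 2).mean()
    opt_c.zero_grad(); loss.backward(); opt_c.step()

# Stage 2: train the drawer through the frozen canvas to reconstruct target images.
for p in canvas.parameters():
    p.requires_grad_(False)
opt_d = torch.optim.Adam(drawer.parameters(), lr=1e-3)
for _ in range(100):
    targets = torch.rand(64, IMG * IMG)            # stand-in for real pixel data
    recon = canvas(drawer(targets))                # gradients flow through the canvas
    loss = ((recon - targets) ** 2).mean()
    opt_d.zero_grad(); loss.backward(); opt_d.step()
```

The key design point, as the abstract states, is that no paired (image, stroke-sequence) data is needed: the canvas network supplies the differentiable path from high-level constructs to pixels, so the drawer can be trained from pixel data alone.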