Tag Archives: inking

Paper: DrawMyPhoto: Assisting Novices in Drawing from Photographs

We present DrawMyPhoto, an interactive system that can assist a drawing novice in producing a quality drawing by automatically parsing a photograph into a step-by-step drawing tutorial.

The system utilizes image processing to produce distinct line work and shading steps from the photograph, and offers novel real-time feedback on pressure and tilt, along with grip suggestions as the user completes the tutorial.

Our evaluation showed that the generated steps and real-time assistance allowed novices to produce significantly better drawings than with a more traditional grid-based approach, particularly with respect to accuracy, shading, and details.

This was confirmed by domain experts who blindly rated the drawings. The participants responded well to the real-time feedback, and believed it helped them learn proper shading techniques and the order in which a drawing should be approached. We saw promising potential in the tool to boost the confidence of novices and lower the barrier to artistic creation.


Blake Williford, Michel Pahud, Ken Hinckley, Abhay Doke, and Tracy Hammond. DrawMyPhoto: Assisting Novices in Drawing from Photographs. In Proceedings of the 12th conference on Creativity and Cognition (C&C ’19). ACM, New York, NY, USA, pp. 198-209. San Diego, California, United States, June 2019.
https://doi.org/10.1145/3325480.3325507

[PDF] [Video – mp4]

Paper: Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking

Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking.

With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, based on a qualitative analysis of 103 sketchnotes and situated in context with six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship.

We unpack strategies to address particular note taking challenges, for example dealing with constraints of live drawings, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.

[Watch Sketchnote Components video on YouTube]


Rebecca Zheng, Marina Fernández Camporro, Hugo Romat, Nathalie Henry Riche, Benjamin Bach, Fanny Chevalier, Ken Hinckley, and Nicolai Marquardt. Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). ACM, New York, NY, USA, Article 466, 15 pages. Yokohama, Japan (Online Virtual Conference), May 8-13, 2021. Honorable Mention Award (top 5% of papers).
https://doi.org/10.1145/3411764.3445508
[PDF] [Video – mp4] [Watch on YouTube]

See also our GitHub repository, which includes all sketchnotes analyzed in this research and their organization into our design space.

Book Chapter: Inking Outside the Box — How Context Sensing Affords More Natural Pen (and Touch) Computing

“Pen” and “Touch” are terms that tend to be taken for granted these days in the context of interaction with mobiles, tablets, and electronic whiteboards alike.

Yet, as I have discussed in many articles here, even in the simplest combination of these modalities — that of “Pen + Touch” — new opportunities for interaction design abound.

And from this perspective we can go much further still.

Take “touch,” for example.

What does this term really mean in the context of input to computers?

Is it just when the user intentionally moves a finger into contact with the screen?

What if the palm accidentally brushes the display instead — is that still “touch?”

Or how about the off-hand, which plays a critical but oft-unnoticed role in gripping and skillfully orienting the device for the action of the preferred hand? Isn’t that an important part of “touch” as well?

Well, there’s good reason to argue that from the human perspective, these are all “touch,” even though most existing devices only generate a touch-event at the moment when a finger comes into contact with the screen.

Clearly, this is a very limited view; with greater insight into the context surrounding a particular touch (or pen, or pen + touch) event, we could make working with computers considerably more natural.

This chapter, then, works through a series of examples and perspectives that demonstrate how much richness there is in such a re-conception of direct interaction with computers, and thereby suggests some directions for future innovations and richer, far more expressive interactions.


Hinckley, Ken, and Buxton, Bill. Inking Outside the Box: How Context Sensing Affords More Natural Pen (and Touch) Computing. Appears as Chapter 3 in Revolutionizing Education with Digital Ink: The Impact of Pen and Touch Technology on Education (Human-Computer Interaction Series), First Edition (2016). Ed. by Tracy Hammond, Stephanie Valentine, & Aaron Adler. Published by Springer, Cham. June 13, 2016. DOI: https://doi.org/10.1007/978-3-319-31193-7_3
[PDF – Author’s draft]

P.S.: I’ve linked to the draft of the chapter that I submitted to the publisher, rather than the final version, as the published copy-edit muddied the writing by a gross misapplication of the Chicago Manual of Style, and in so doing introduced many semantic errors as well. Despite my best efforts I was not able to convince the publisher to fully reverse these undesired and unfortunate “improvements.” As such, my draft may contain some typographical errors or other minor discrepancies from the published version, but it is the authoritative version as far as I am concerned.