Gesture- and speech-based 3D modeling offers designers a powerful and intuitive way to create 3D Computer-Aided Design (CAD) models. Rather than relying on arbitrary gestures and speech commands defined by researchers, which may not be intuitive for users, such a natural user interface should be based on a gesture and command set elicited from the users themselves. We describe our ongoing research on a speech- and gesture-based CAD modeling interface, GesCAD, implemented by combining Microsoft Kinect with Rhino, a leading CAD software package. GesCAD is based on gestures and speech commands elicited in a specially designed user experiment. We conducted a preliminary user study with 6 participants to evaluate the user experience of our prototype, including ease of use, physical comfort, and satisfaction with the models created. Results show that participants found the overall experience of using GesCAD fun and the speech and gesture commands easy to remember.


  • Sumbul Khan, Hasitha Rajapakse, Haimo Zhang, Suranga Nanayakkara, Bige Tuncer, and Lucienne Blessing. GesCAD: an intuitive interface for conceptual architectural design. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (OZCHI '17), pp. 402–406. ACM, 2017. [URL], [DOI]