Study Details
Informed Consent Information
Please read the following information carefully before deciding whether to participate.
Study Overview
You are being invited to participate in a research study titled "Beyond the SmartWatch: A Design Space Exploration of Near-Wrist Bioacoustic On-Skin Interactions". This study is being conducted by Sidney Grabosky, an M.S. Human-Computer Interaction student in the School of Information at Rochester Institute of Technology, under the supervision of Dr. Garreth Tigwell, Associate Professor.
This research aims to design a consensus on-skin bioacoustic gesture set for smartwatches. As a possible alternative to the awkward, constrained interactions afforded by small smartwatch touchscreens, bioacoustic sensing allows users to make gestures (such as taps) on the skin near the watch, including the palm, wrist, fingers, and forearm, granting a much larger interaction surface and new possibilities for interaction.
Session 1: Gesture Elicitation & Training
After a brief orientation, you will participate in two study phases. The first is a "gesture elicitation" study, in which you will use your creativity and understanding of technology to help design gestures and decide what they should do. You will be shown video clips of an action or event occurring in a smartwatch interface and asked to design an appropriate gesture to trigger it.
The second is a machine learning training data session. A laptop application will present visual aids and descriptions showing a gesture to perform. You will perform each gesture 3 times. This data, aggregated with that of other participants, will be used to train a machine learning model that can accurately recognize the bioacoustic gestures.
During both phases, you will wear an Apple Watch running prototype software that gathers accelerometer data (movements and vibrations), the data that enables bioacoustic sensing. Sessions will be audio recorded to produce a transcript; video, if recorded, will capture only your hands to aid gesture analysis. No video will be published.
This session takes approximately 1 hour and 30 minutes. Participants will be compensated $20 for their time.
Session 2: Gesture Classification Model Validation
In this session, you will follow instructions displayed by a laptop application. The application will present a visualization and a text description of a gesture to perform. You will perform each gesture 3 times, then continue to the next.
The system will compare the gesture you were asked to perform against the gesture the ML classifier recognized. This process is automatic and requires no action from you; the system is being evaluated for accuracy, not you. If you make a mistake or there is a glitch, that trial can be discarded and repeated.
You will wear an Apple Watch running prototype software throughout. Sessions may be audio recorded for transcription; video, if recorded, will capture only your hands. No video will be published.
This session takes between 30 minutes and 1 hour. Participants will be compensated $10 for their time.
Session 3: Prototype Evaluation
In this session, you will evaluate an Apple Watch prototype application that uses on-skin bioacoustic gesture interactions to trigger common functions such as scrolling through content, zooming in or out, and panning a map.
This is a usability study. You will be asked to engage in a "think-aloud protocol" and voice your thoughts, positive or negative, as you interact with and learn how to use the prototype. You will be asked questions about the comfort and difficulty of performing gestures, how intuitive they seem, your perception of the learning curve, the social acceptability of performing the gestures in different settings, and other opinions about the system.
You will wear an Apple Watch running prototype software throughout. Sessions may be audio recorded for transcription; video, if recorded, will capture only your hands. No video will be published.
This session takes approximately 1 hour. Participants will be compensated $10 for their time.
Data Collection & Sharing
No collected data will be personally identifiable. In all external reports and publications, participants will be referred to using anonymized identifiers such as P01, P02, etc. Your data will be used on the basis that it is necessary for the conduct of research, which is an activity in the public interest.
Non-identifying data may be used in future publications, open-access databases, or public code and machine learning model repositories such as GitHub, GitLab, or HuggingFace. In the event of publication, data may also be made openly available alongside the paper (e.g., stored in the ACM Digital Library). This promotes transparency, reproducibility, and open access, and allows anyone, including you, to build upon the research or be inspired by it.
Raw audio recordings and transcripts will not be disseminated, but anonymized excerpts and findings may be quoted in published reports. Video, if recorded, will feature only participants' hands to aid in gesture analysis and will never be published.
Risks & Benefits
Potential Risks
We do not anticipate any significant risks. Bioacoustic sensing has no active component; a sensor in the smartwatch simply measures the tiny vibrations that occur routinely throughout your day. Because the smartwatch prototype is shared between participants, it will be sanitized before each session to maintain good hygiene.
Potential Benefits
In our experience, people enjoy taking part in research because they are helping to develop new technology. In this case, the technology is novel and futuristic, which may be enjoyable to experience. If so inclined, you may use the publicly released classifier model or data to build your own prototypes. Your participation will help us explore the possibilities of bioacoustic interactions with smartwatches and find an intuitive gesture taxonomy for future interfaces.
Voluntary Participation
Your participation in this study is entirely voluntary. You may choose not to participate or withdraw at any time without penalty or loss of benefits to which you are otherwise entitled. If you withdraw from the study, you will still receive compensation for any phases you have already completed.
Your decision whether or not to participate will not affect your relationship with Rochester Institute of Technology in any way. You are free to leave or stop at any time.
Contact Information
If you have questions about this study, please contact:
- Principal Investigator: Sidney Grabosky —
- Faculty Advisor: Dr. Garreth Tigwell, Associate Professor —
If you have questions about your rights as a research participant, please contact:
Ready to Participate?
Start with the short screener survey to check your eligibility.
Takes less than 5 minutes