I aspire to belong to the Human-Computer Interaction (HCI) and Interaction Design (IxD) research communities. My current research focuses on three topics:
Embodied Interaction: Utilizing the human body extensively in HCI.
Social Drones: Well-designed interactions with autonomous flyers, for both users and bystanders.
Heirloom Computing: Digital artifacts designed to be cherished and valuable in the long run.
Our electronic devices are not as resilient and reliable as mechanical ones, and our digital imprints are not cherished like physical ones. The reasons are technical, social, and economic, and they can be overcome through design.
To introduce a wider audience of gesture and sign language researchers to motion capture and quantitative movement analysis, we reviewed research that has used motion capture to study the kinematics of sign and gesture production. We presented preliminary results from this review, along with comments on technical and methodological issues, at the DComm conference Language as a Form of Action.
Technological platforms like virtual reality and augmented reality offer many possibilities for gesture and sign language research, notably scalability (i.e., access to large and diverse participant pools). At the DComm conference Deictic Communication – Theory and Application, we presented an open-source prototype that lays a foundation for streaming content from a motion capture system to an augmented reality headset; a sketch of the streaming idea follows the links below.
DComm 2019 Conference Poster
mbaytas/QTM-MagicLeap on GitHub
DComm 2017 Conference Poster
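The prototype itself bridges Qualisys Track Manager (QTM) and the Magic Leap headset; the sketch below illustrates only the first half of that pipeline, subscribing to a live 3D marker stream with Qualisys's Python SDK (`qtm-rt`). The host address and the way packets are consumed here are assumptions for illustration, not the prototype's actual code.

```python
# Minimal sketch: subscribe to live 3D marker data from QTM using the
# qtm-rt Python SDK. In the actual prototype, frames like these would be
# forwarded to the AR headset for rendering.
import asyncio
import qtm_rt  # pip install qtm-rt

def on_packet(packet):
    # Each packet carries one frame; get_3d_markers() returns
    # (component header, list of markers with x/y/z coordinates in mm).
    header, markers = packet.get_3d_markers()
    print(f"frame {packet.framenumber}: {len(markers)} markers")

async def main():
    connection = await qtm_rt.connect("127.0.0.1")  # assumed host address
    if connection is None:
        raise RuntimeError("Could not connect to QTM")
    await connection.stream_frames(components=["3d"], on_packet=on_packet)
    await asyncio.sleep(5)  # stream for a few seconds, then stop
    await connection.stream_frames_stop()

asyncio.run(main())
```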
Siren is a hybrid system for algorithmic composition and live-coding performance that marries a tracker-based user interface with elements of live coding. It uses the functional programming language TidalCycles as its core pattern-creation system and augments the pattern-creation process through various UI features.
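To give a flavor of the tracker-to-pattern idea (an illustrative toy, not Siren's actual implementation): each cell in a tracker column can hold a TidalCycles pattern snippet, and a column can compile down to a single Tidal expression that cycles through its non-empty cells.

```python
# Toy sketch: compile a tracker column of TidalCycles snippets into one
# Tidal expression. Tidal's `cat` plays its argument patterns one per cycle.
def compile_column(cells):
    """Join non-empty tracker cells into a single Tidal 'cat' expression."""
    patterns = [cell for cell in cells if cell]
    return "cat [" + ", ".join(patterns) + "]"

column = ['sound "bd*2"', "", 'sound "bd*2 sn"', 'sound "hh*4"']
print(compile_column(column))
# -> cat [sound "bd*2", sound "bd*2 sn", sound "hh*4"]
```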
ViewFinder is a cross-platform mobile application made to support the installation and reconfiguration of marker-based motion capture systems with multiple cameras. It addresses a common issue when installing or reconfiguring such systems: components like the cameras and the host computer can be physically separate and/or difficult to reach, requiring personnel to maneuver between them frequently and laboriously. ViewFinder allows setup technicians or end users to visualize the output of each camera in the system in a variety of ways in real time, on a smartphone or tablet, while also providing a means to adjust system parameters such as exposure or marker thresholds on the fly. The app was designed and evaluated through a user-centered design process, and effectively reduces the amount of work involved in installing and reconfiguring motion capture systems.
ViewFinder is based on previous work by the development team at Qualisys AB, and an interaction design master's thesis project by Emmanuel Batis and Mathias Bylund at Chalmers University of Technology.
LabDesignAR is an augmented reality application to support the planning, setup, and reconfiguration of marker-based motion capture systems with multiple cameras. It runs on the Microsoft HoloLens and allows the user to place any number of virtual “holographic” motion capture cameras into the surrounding space, in situ. The holographic cameras can be positioned freely, and different lens configurations can be selected to visualize the resulting fields of view and their intersections. The features in LabDesignAR are mainly inspired by the Qualisys Lab Designer web application, adapted for augmented reality.
LabDesignAR also demonstrates a hybrid natural gesture interaction technique, implemented by fusing the vision-based hand tracking capabilities of an augmented reality headset with instrumented gesture recognition from an electromyography armband. The code for LabDesignAR and its supporting components is open source.
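The geometry behind the field-of-view visualization is simple pinhole-camera math; the sketch below shows the idea, with hypothetical sensor dimensions and focal lengths standing in for real camera and lens specifications.

```python
# Sketch of the pinhole-camera approximation behind the FOV visualization:
# the full view angle along one axis follows from sensor size and focal
# length. All numbers below are illustrative, not real camera specs.
import math

def view_angle_deg(sensor_mm, focal_mm):
    """Full field-of-view angle along one sensor axis, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Hypothetical lens options, loosely mimicking selectable configurations:
for focal in (6, 12, 24):  # mm
    h = view_angle_deg(11.3, focal)  # assumed sensor width, mm
    v = view_angle_deg(7.1, focal)   # assumed sensor height, mm
    print(f"{focal} mm lens -> {h:.1f} deg x {v:.1f} deg")
```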
This project was undertaken as a prelude to an interaction design project that aimed to develop user interfaces (hardware and software) for performing loop-based, live-sequenced electronic music (e.g., techno or house) with performer-device interactions that are emotive and legible to audiences. We were interested in how watching a live-sequenced electronic music performance, compared to merely hearing the music, contributes to spectators’ experience of tension. We explored this question through an experiment based on Vines, Krumhansl, Wanderley & Levitin’s 2006 work on cross-modal interactions in the perception of musical performance: 30 participants heard, saw, or both heard and saw a recording of a live-sequenced techno performance while producing continuous judgments of their experienced tension. Eye tracking data was also recorded from participants who saw the visuals, to reveal aspects of the performance that influenced their tension judgments. We analyzed the data to explore how the auditory and visual components and the performer’s movements contribute to spectators’ experience of tension.
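As a rough illustration of how such continuous judgments can be analyzed (a minimal sketch with random stand-in data, not our actual analysis pipeline): each participant's tension ratings form a time series, curves can be averaged within each condition, and conditions compared, for example, by correlating their mean curves.

```python
# Minimal sketch: compare mean continuous tension curves across conditions.
# The arrays below are random stand-ins; real data would come from
# participants' slider ratings resampled onto a common timeline.
import numpy as np

rng = np.random.default_rng(42)
n_participants, n_samples = 10, 600  # assumed: 10 per condition, 0.5 s steps

# Rows = participants, columns = time samples (placeholder random walks).
audio_only = rng.standard_normal((n_participants, n_samples)).cumsum(axis=1)
audio_visual = rng.standard_normal((n_participants, n_samples)).cumsum(axis=1)

mean_ao = audio_only.mean(axis=0)    # mean tension curve, audio-only
mean_av = audio_visual.mean(axis=0)  # mean tension curve, audio+visual

r = np.corrcoef(mean_ao, mean_av)[0, 1]
print(f"Pearson r between condition means: {r:.3f}")
```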
Hotspotizer is an open-source application that allows users without programming skills to graphically design and implement custom full-body gesture sets for the Microsoft Kinect. Gestures are mapped to system-wide keyboard commands, which can be used to control arbitrary applications. Hotspotizer was my interaction design master’s thesis project at the Koç University Design Lab. Both the software and the thesis (written in LaTeX) are open source.
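The core mechanism can be sketched roughly as follows (an illustrative toy in Python, not Hotspotizer's actual implementation): gestures are defined as ordered sequences of cells ("hotspots") in a grid around the user, and a gesture fires when a tracked joint visits its hotspots in order.

```python
# Toy sketch of hotspot-based gesture recognition. The grid resolution,
# cell indices, and joint positions below are made up for illustration.
from dataclasses import dataclass

CELL_SIZE = 0.15  # meters; assumed grid resolution around the user

def to_cell(x, y, z):
    """Quantize a joint position (meters) into a grid cell index."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE), int(z // CELL_SIZE))

@dataclass
class Gesture:
    name: str
    keystroke: str       # system-wide key command to emit on completion
    hotspots: list       # ordered list of cell indices to visit
    progress: int = 0    # how many hotspots have been matched so far

    def update(self, cell):
        """Advance the recognizer; return True when the gesture completes."""
        if cell == self.hotspots[self.progress]:
            self.progress += 1
            if self.progress == len(self.hotspots):
                self.progress = 0
                return True
        return False

# Usage: feed each new joint position from the Kinect skeleton stream.
swipe_right = Gesture("swipe-right", "RIGHT", [(0, 10, 5), (1, 10, 5), (2, 10, 5)])
for x, y, z in [(0.05, 1.55, 0.8), (0.20, 1.55, 0.8), (0.35, 1.55, 0.8)]:
    if swipe_right.update(to_cell(x, y, z)):
        print(f"Detected {swipe_right.name}; sending {swipe_right.keystroke}")
```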
Hotspotizer has been featured on Microsoft’s Channel 9 Coding4Fun Kinect Projects blog and used in educational contexts.
We proposed "re-reading" ancient artifacts to inform the design of future media on non-flat displays. As an example, we illustrated how different narrative typologies found in ancient Greek vases can inspire interactive content, resulting in design implications for graphic composition on spherical displays.
For my bachelor's capstone, I contributed to a MEMS biosensor project at the KU Optical Microsystems Laboratory (OML). The multi-analyte MEMS biosensor used an array of coated μ-cantilevers that shift their resonant frequencies upon analyte mass accretion. The cantilevers are magnetically actuated, and their resonant frequencies are observed via interferometric optical readout. I designed and implemented a custom GUI and mechanism for setting up characterization experiments by directly manipulating the position of the chip relative to the laser beam; the system then traverses the μ-cantilever array and collects data without supervision.
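The sensing principle reduces to simple harmonic oscillator math: modeling a cantilever as a mass-spring system with f = (1/2π)√(k/m), added mass shifts the resonance downward. The sketch below runs the numbers with illustrative values; the stiffness and frequencies are not the device's actual parameters.

```python
# Back-of-the-envelope sketch: infer accreted mass from a resonant frequency
# shift, treating the cantilever as a simple harmonic oscillator,
# f = (1/2*pi) * sqrt(k / m), so m = k / (2*pi*f)^2.
import math

def accreted_mass(k, f_before, f_after):
    """Added mass (kg) implied by a downward resonance shift."""
    mass = lambda f: k / (2 * math.pi * f) ** 2
    return mass(f_after) - mass(f_before)

k = 0.3        # N/m, assumed spring constant (illustrative)
f0 = 55_000.0  # Hz, resonance before analyte binding (illustrative)
f1 = 54_950.0  # Hz, resonance after binding (illustrative)

dm = accreted_mass(k, f0, f1)
print(f"Estimated accreted mass: {dm * 1e12:.2f} pg")
```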
As an undergraduate, I contributed to the development of a versatile experimental laser manufacturing workstation with marking, cutting, engraving, and powder sintering capabilities at the KU Manufacturing & Automation Research Center (MARC). We used a 10.6 μm CO2 laser coupled to a 3-axis CNC positioning system, as well as a galvo-driven 1064 nm Nd:YAG laser. The industrial lasers, AC servos, galvanometric scanner, and sintering bed mechanism were driven with precision using a combination of open-source and custom software and Arduino-based electronics, with integrated toolpath and G-code generation from STL models. The entire machine was designed, mechanically analyzed, fabricated, and hand-assembled by a team of three.
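To give a taste of the toolpath-to-G-code step (a minimal sketch, not the workstation's actual software): once a contour has been extracted from an STL slice, emitting machine code amounts to translating points into motion commands. The feed rate and the M3/M5 laser on/off codes below are common controller conventions, assumed here for illustration.

```python
# Minimal sketch: turn a 2D contour (list of (x, y) points in millimeters)
# into G-code. G21/G90 select mm and absolute coordinates; G0 is a rapid
# move, G1 a controlled feed move; M3/M5 switch the laser on and off.
def contour_to_gcode(points, feed_mm_min=600):
    lines = ["G21", "G90"]
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")  # rapid move to start, laser off
    lines.append("M3")                       # laser on
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_min}")
    lines.append("M5")                       # laser off
    return "\n".join(lines)

# Usage: cut a 20 mm square.
square = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]
print(contour_to_gcode(square))
```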