Exploring the Potential of the iPad and Xbox Kinect for Cognitive Science Research
Games for Health Journal
Short Title: Games Health J
Format: Journal Article
Publication Date: June 2015
Pages: 221 - 224
Abstract:
Many studies have validated consumer-facing hardware platforms as efficient, cost-effective, and accessible data collection instruments. However, few reports have assessed the reliability of these platforms as assessment tools compared with traditional data collection platforms. Here we evaluated performance on a spatial attention paradigm obtained with our standard in-lab data collection platform, the personal computer (PC), and compared it with performance on two widely adopted consumer technology devices: the Apple (Cupertino, CA) iPad® 2 and the Microsoft (Redmond, WA) Xbox® Kinect®. The task assessed spatial attention, a fundamental ability that we use to navigate the complex sensory input we face daily in order to engage effectively in goal-directed activities. Participants were presented with a central spatial cue indicating where on the screen a stimulus would appear. We manipulated spatial cueing such that, on a given trial, the cue provided one of four levels of information about the upcoming target location. Based on previous research, we hypothesized that as information about the cued spatial area decreased (i.e., a larger area of possible target locations), performance would decrease parametrically, as revealed by slower response times and lower accuracy. Identical paradigm parameters were used for each of the three platforms, and testing was performed in a single session with a counterbalanced design. We found that performance on the Kinect and iPad showed a stronger parametric effect across the cued-information levels than performance on the PC. Our results suggest not only that the Kinect and iPad can be reliably used as assessment tools to yield research-quality behavioral data, but also that these platforms exploit mechanics that could be useful in building more interactive, and therefore more effective, cognitive assessment and training designs. We include a discussion of the possible factors contributing to the differential effects between platforms, as well as potential confounds of the study.