> Workshop - 28 Sept 2021 (virtual event - link to be announced)
- Prof. Thecla Schiphorst, Simon Fraser University, Canada
- Prof. Beatrice de Gelder, Maastricht University, The Netherlands
- Dr. Marta Bienkiewicz, Université de Montpellier, France
- Dr. Mårten Björkman, KTH Royal Institute of Technology, Sweden
- Dr. Gualtiero Volpe, Università di Genova, Italy
- Dr. Erwin Schoonderwaldt, Qualisys AB, Sweden
- Ben Bland, Looper | Neoco | (and formerly also Sensum), Ireland
- Paper presentation of challenge participants and challenge winner announcement
The AffectMove challenge is based on 3 naturalistic datasets on body movement, which is a fundamental component of everyday living both in the execution
of the actions that make up physical functioning as well as in rich expression of affect, cognition, and intent [de Gelder 2009; Pezzulo et al. 2019].
The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving,
and interactive dance contexts respectively.
There are several qualities of these data that make them an interesting machine learning challenge:
- i) they are representative of unconstrained everyday settings where affective/cognitive experiences are spontaneous and their expression a modulation of movement execution during physical activity rather than simply being a distinct or isolated gesture;
- ii) the affective/cognitive experiences that they capture are application-specific states and not the so-called basic emotions that have traditionally been explored;
- iii) they are relatively limited in size compared with benchmarks in other machine learning problems such as image recognition or activity recognition and reflect real-world difficulties where affective data capture is not trivial; and
- iv) altogether, they come from multiple contexts and population groups and were captured using different sets of movement sensors.
> Challenge > Description
The AffectMove challenge consists of 3 tasks:
- Task 1: Protective Behaviour Detection based on Multimodal Body Movement Data
The aim of this task is to advance continuous detection of protective behaviours, i.e., bodily-expressed pain behaviours, in people with chronic musculoskeletal pain. Chronic pain is a major healthcare challenge [The Pain Consortium 2016], and technology able to assess pain behaviour could support the delivery of personalised therapies and the long-term, self-directed management of the condition, with the aim of improving engagement in valued everyday activities [Olugbade et al. 2019].
Those who participate in this task will be required to build a model for continuous classification of protective behaviour as present or absent throughout the exercise performance of a person with chronic pain, based on their full-body joint positions and back muscle activity. Ground truth for the exercise type will also be made available, but it must not be used as input data.
We will provide anonymised 3D full-body joint positions and concomitant back muscle activity data for 19 people with chronic low back pain from the EmoPain dataset [Aung et al. 2016]. The data will be given with corresponding protective behaviour labels obtained from clinician observers [Aung et al. 2016]. We will also include the exercise type.
The data will be given in training, validation, and test partitions which contain instances from 10, 4, and 5 people with chronic pain respectively. The test partition will not include the protective behaviour labels or the exercise type.
- Task 2: Detection of Reflective Thinking based on Body Movement Data
The aim of this task will be to pioneer continuous detection of reflective thinking in children during maths problem-solving activities. Understanding mathematical ideas such as angles and shapes is a key part of basic education, and digital learning technology that promotes the use of body movement and further recognizes critical learning moments (e.g., reflective thinking) could support the learning of abstract mathematical ideas that may otherwise be challenging to relate to.
Those who participate in this task will be required to build a model for continuous classification of reflective thinking as ‘observed’ or ‘not observed’ while a child solves maths problems, based on joint positions. Ground truth activity labels, which will be made available, can also be included in the modelling, but they must not be used as input data.
We will provide anonymised 3D full-body joint positions for 24 children from the weDraw-1 Movement dataset [Olugbade et al. 2020]. The data will be accompanied with corresponding reflective thinking labels based on expert observer annotation [Olugbade et al. 2020]. We will additionally include labels of the corresponding maths problem-solving activities.
The data will be given in training, validation, and test partitions which contain instances from 13, 5, and 6 children respectively. The test partition will not include the reflective thinking labels or the activity type.
- Task 3: Detection of Lightness and Fragility in Dance Movement based on Multimodal Data
The aim of this task will be to further develop the state of the art in the detection of lightness and fragility [Camurri et al 2016; Niewiadomski et al 2017] in dance movement. Automatic detection of such qualities is valuable for driving interactive sonification of these qualities to enrich the audience's experience of a dance performance, as well as for rehabilitation purposes, helping dancers improve their skills [Niewiadomski et al 2017].
Those who participate in this task will be required to build a model for classification of lightness and fragility in dance movement sequences, based on one or more of video, accelerometer, and audio-based respiration data.
We will provide accelerometer data captured from wrists, ankles, and waist, videos with faces blurred, and audio respiration data for 13 dancers from the Unige-Maastricht Dance dataset [Camurri et al 2016; Niewiadomski et al 2017; Vaessen et al 2018]. The data will include corresponding labels for the dance type (lightness or fragility). These labels are based both on observer annotations (an Excel file with annotations of the fragments by 5 experts is provided) and on the neuroscientific experiment described in [Vaessen et al 2018].
> Challenge > Participate
The challenge will follow this protocol:
To register, a potential participating team will need to contact the organisers by email
(temitayo.olugbade.13(at)ucl.ac.uk for Tasks 1 and 2,
and cp.infomus(at)gmail.com for Task 3;
please use ‘AffectMove@ACII2021’ in the email subject), and they will further have to complete an end-user license agreement
(EULA) to access the challenge data.
Upon satisfactory completion and return of the
EULA, the challenge data will then be provided to the participant in training, validation, and test partitions. Ground truth labels will be provided for the training and validation sets alone. The data will be accompanied by README documentation giving details about the data and referring participants to publications for fuller descriptions of the data and previous investigations on them. The baseline for each task will be chance-level performance.
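The chance-level baseline mentioned above could be produced as follows; this is an illustrative sketch, not the organisers' code, and the function name `chance_baseline` is made up for illustration.

```python
import random

def chance_baseline(n_frames, labels=(0, 1), seed=0):
    """Predict each frame's label uniformly at random (chance level)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.choice(labels) for _ in range(n_frames)]

# One chance-level prediction per frame of a hypothetical test sequence
baseline_preds = chance_baseline(1000)
```

Any submitted model is expected to beat the scores such random predictions achieve on the test partition.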
The participants will be required to complete at least one of the 3 challenge tasks. There will be a special recognition for participants who use the same
machine learning architecture for the largest number of tasks (more than one), with performance better than chance-level detection in each of those tasks.
By the close of the competition, each participating team will need to have submitted their predicted test labels. Each team can submit up to three different
sets of predictions for each task.
The organisers will evaluate these labels against the ground truth labels for the test set using F1 score per class, Matthews Correlation Coefficient, and accuracy,
and send each team the performance of their submitted predictions. For each task, the team with a set of complete predictions with the highest performance based on
these metrics will be the winner for the task.
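The three evaluation metrics named above could be computed as in the sketch below for binary labels (pure Python, illustrative only; this is not the organisers' official scoring script, and the toy labels are made up).

```python
from math import sqrt

def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def f1(y_true, y_pred, positive=1):
    """F1 score for the given class (challenge uses F1 per class)."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred, positive)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: 1 = target state observed, 0 = not observed
truth = [0, 0, 1, 1, 1, 0]
preds = [0, 1, 1, 1, 0, 0]
scores = {
    "f1_observed": f1(truth, preds, positive=1),
    "f1_not_observed": f1(truth, preds, positive=0),
    "mcc": mcc(truth, preds),
    "accuracy": accuracy(truth, preds),
}
```

Computing these scores on the validation partition is one way for a team to estimate how their submission will rank before sending predictions.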
Participants will further be required to submit a paper on their work to the Affective Movement Recognition workshop and additionally present it orally at the
workshop during ACII 2021. The challenge winners will be announced at the workshop.
> Important Dates
- 11 March 2021 - Call for participation announced and data available
- 6 June 2021 (extended from 31 May 2021) - Final submission of predicted test set labels for evaluation
- 13 June 2021 (extended from 7 June 2021) - Paper submission deadline
- 5 July 2021 - Review deadline
- 8 July 2021 - Review comments sent to authors
- 19 July 2021 - Camera ready version deadline
- 28 September 2021 - Workshop date
- Temitayo Olugbade, University College London
- Nadia Bianchi-Berthouze, University College London
- Amanda Williams, University College London
- Nicolas Gold, University College London
- Gualtiero Volpe, Università di Genova
- Antonio Camurri, Università di Genova
- Roberto Sagoleo, Università di Genova
- Simone Ghisio, Università di Genova
- Beatrice de Gelder, Maastricht University
Please check this page for further details and updates.