COMPUTER VISION FOR LIVE MEDIA ARTS
Our solution combines computer vision and information retrieval algorithms into an algorithmic chain capable of: (i) analysing shapes in sequential video frames, and (ii) extracting vision-based features used to detect performers' gestures and movements.
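As a rough illustration of such a chain, the sketch below isolates the moving shape in each frame and reduces it to a compact feature vector. It assumes a single fixed camera and a roughly static background, and uses OpenCV background subtraction with Hu moments as stand-in features; the names performance.mp4 and frame_features are placeholders, not part of our released tooling.

    import cv2
    import numpy as np

    # Sketch of the two-step chain: (i) isolate the moving shape in each
    # frame, (ii) reduce it to a compact feature vector. Background
    # subtraction and Hu moments are illustrative stand-ins.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

    def frame_features(frame):
        """Return a feature vector for the largest moving shape, or None."""
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        performer = max(contours, key=cv2.contourArea)        # step (i)
        hu = cv2.HuMoments(cv2.moments(performer)).flatten()  # step (ii)
        return np.sign(hu) * np.log1p(np.abs(hu))  # log scale for stability

    cap = cv2.VideoCapture("performance.mp4")  # placeholder input
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        features = frame_features(frame)
        # ... features feed the downstream gesture/movement detection ...
    cap.release()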
Our solution is a new creative tool for multi-media augmentation of live performance. Once the performer's shape has been described and analysed, the surroundings and the stage can be augmented with multi-media techniques (projections, interactive audio, responsive lighting, robotics), tightly synced with the performers and without restricting their freedom of movement.
Our solution is also a virtual personal tutor for performers: the system produces detailed descriptions of a performer's shape and movements and, coupled with machine learning techniques, can be trained on input from a number of expert performers. Colour and texture information from the stage or costumes can be added to enrich the description and improve accuracy.
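A minimal sketch of the tutoring idea, under strong assumptions: each expert demonstration is a per-frame feature sequence (as in the sketch above), summarised into one fixed-length vector and fed to an off-the-shelf classifier. The names expert_clips, summarise, and train_tutor are hypothetical, and scikit-learn's k-nearest-neighbours stands in for whatever model is actually used.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def summarise(sequence):
        """Collapse a (frames x features) sequence into mean/std statistics."""
        seq = np.asarray(sequence, dtype=float)
        return np.concatenate([seq.mean(axis=0), seq.std(axis=0)])

    def train_tutor(expert_clips):
        """expert_clips: list of (feature_sequence, gesture_label) pairs
        recorded from expert performers -- placeholder data structure."""
        X = np.stack([summarise(seq) for seq, _ in expert_clips])
        y = [label for _, label in expert_clips]
        return KNeighborsClassifier(n_neighbors=3).fit(X, y)

    # A student's clip is summarised the same way and scored against the
    # expert model, e.g. model.predict_proba(...) for graded feedback.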
what we do
We develop novel techniques that recognise different sequential gestures, to the level where they describe and compute articulated movements in real time. In the context of live media arts, these research outcomes would change the paradigm of creating, learning, performing, and designing for live media arts, by giving feedback on a performance after analysing, in real time, its streaming video.
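One way to make the sequential aspect concrete: slide a fixed-length window over the per-frame feature stream and classify each window. The sketch below is generic, assuming only per-frame feature vectors and some trained classify function (both hypothetical here); at 30 fps a window of 30 frames covers roughly one second of movement.

    import collections
    from typing import Callable, Iterable, Sequence

    def recognise_stream(features: Iterable[Sequence[float]],
                         classify: Callable[[list], str],
                         window: int = 30):
        """Slide a fixed-length window over per-frame feature vectors and
        yield one gesture label per step once the window is full."""
        buf = collections.deque(maxlen=window)
        for f in features:
            buf.append(f)
            if len(buf) == window:
                yield classify(list(buf))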
Our solution applies a novel representation of dynamic shapes undergoing articulated movement. The method adapts perception-based results, including codon features and medialness hot spots. In every frame of a video sequence (from a single camera) we reliably obtain a compact shape representation that captures the highest values of medialness (the "hot spots") and the most descriptive convexities and concavities along contours.
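A simplified sketch of both ingredients, with the caveat that it approximates rather than reproduces the perception-based definitions: medialness hot spots are taken here as the strongest local maxima of the distance transform of a binary silhouette, and codon-like segments are delimited by curvature sign changes along the contour. Function names and thresholds are illustrative.

    import cv2
    import numpy as np

    def medialness_hotspots(silhouette, top_k=5):
        """Approximate 'hot spots' as strong local maxima of the distance
        transform of a binary (0/255) silhouette image."""
        dist = cv2.distanceTransform(silhouette, cv2.DIST_L2, 5)
        dilated = cv2.dilate(dist, np.ones((7, 7), np.uint8))
        maxima = (dist == dilated) & (dist > 0.5 * dist.max())
        ys, xs = np.nonzero(maxima)
        order = np.argsort(dist[ys, xs])[::-1][:top_k]
        return [(int(xs[i]), int(ys[i]), float(dist[ys[i], xs[i]]))
                for i in order]

    def curvature_signs(contour, step=5):
        """Label sampled contour points convex (+1) or concave (-1) via the
        cross product of neighbouring edge vectors; sign changes delimit
        codon-like segments. The convention depends on contour orientation."""
        pts = contour.reshape(-1, 2).astype(float)
        n = len(pts)
        signs = []
        for i in range(0, n, step):
            a, b, c = pts[i - step], pts[i], pts[(i + step) % n]
            cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
            signs.append(1 if cross >= 0 else -1)
        return signs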
Prof. Frederic Fol Leymarie
Prof. Stefan Rüger
Dr. Prashant Aparajeya
Dr. Vesna Petresin
news and events
Open position: AI, vision, ML. Developer in Computer Vision and Machine Learning Research. Location: London. Salary: not specified. Hours: full time. Contract type: fixed-term. We are looking for an R&D developer for a 12-month Government-funded project investigating a new approach to movement computing (vision + machine learning). The ideal candidate is a gifted programmer who enjoys working with …
Best presentation at Search Solutions 2016. In late November, Frederic presented some of our ongoing work on shape-based information retrieval in large image and video databases at the British Computer Society's annual Search Solutions conference. http://irsg.bcs.org/SearchSolutions/2016/sse2016.php Frederic was the last speaker of the day … some people had left already (and voted!) … speakers included researchers from Google, Microsoft, …
Keynote at IEEE ICSC 2016. Stefan gave a keynote on visual mining at the IEEE International Conference on Semantic Computing 2016 in Laguna Hills, California. Like text mining, visual media mining tries to make sense of the world through algorithms, albeit by analysing pixels instead of words. The talk highlighted recent important technical advances in automated media understanding, which …
Prof. Frederic Fol Leymarie, Founder and Director of DynAIkon, is keynote speaker at MOCO 2016, the Movement and Computing Symposium and Workshop in Thessaloniki, Greece. The title of his talk: "Drawing, Gestures, Robots". moco16.movementcomputing.org
Movement Description and Gesture Recognition for Live Media Arts, presented by DynAIkon at the CVMP 2015 symposium, BFI London. Our research aims to develop novel techniques able to recognise different sequential gestures, to the level where they describe and compute articulated movements in real time. In the context of live media arts, these research outcomes would change the paradigm of creating, learning, performing, and designing for live media arts.