Computer scientists at the University of British Columbia have developed an algorithm that makes computer-generated characters move much more naturally, a breakthrough that could also find applications in robotics.
Using advanced machine learning, the program, called DeepLoco, gives graphical characters the ability to learn from their surroundings through trial and error, much as humans do, offering animators and robotics engineers a new tool for developing lifelike creations.
“We’re creating physically-simulated humans that learn to move with skill and agility through their surroundings,” says Michiel van de Panne, a UBC computer science professor who is presenting the new research at SIGGRAPH 2017, the annual computer graphics conference, held this year in Los Angeles, California.
In a video posted ahead of their SIGGRAPH presentation, the researchers from UBC and the National University of Singapore show how their new characters can perform complex manoeuvres such as running along a narrow trail, dribbling a soccer ball or getting on and off a moving sidewalk, all of which are notoriously difficult to program, says van de Panne.
“It’s like learning a new sport,” said van de Panne. “Until you try it, you don’t know what you need to pay attention to. If you’re learning to snowboard, you may not know that you need to distribute your weight in a particular way between your toes and heels. These are strategies that are best learned, as they are very difficult to code or design in any other way.”
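The trial-and-error learning van de Panne describes can be illustrated, in highly simplified form, as a reward-driven policy search. The sketch below is a toy, not the DeepLoco algorithm itself (which uses deep hierarchical reinforcement learning); the balance task, dynamics and hill-climbing search are all illustrative assumptions:

```python
import random

def run_episode(policy_weight, steps=50):
    """Simulate a trivial balance task: the state drifts, the action pushes back.
    Reward is higher the closer the state stays to zero."""
    state, total_reward = 1.0, 0.0
    for _ in range(steps):
        action = -policy_weight * state      # linear policy
        state = 0.9 * state + 0.5 * action   # toy dynamics
        total_reward += -abs(state)          # penalise drifting away
    return total_reward

def hill_climb(iterations=200, seed=0):
    """Trial-and-error search: perturb the policy, keep changes that help."""
    rng = random.Random(seed)
    weight = 0.0
    best = run_episode(weight)
    for _ in range(iterations):
        candidate = weight + rng.gauss(0, 0.1)
        score = run_episode(candidate)
        if score > best:                     # keep only improvements
            weight, best = candidate, score
    return weight, best

if __name__ == "__main__":
    w, r = hill_climb()
    print(f"learned weight={w:.2f}, reward={r:.2f}")
```

As in the snowboarding example, the search never encodes an explicit rule about weight distribution; it simply discovers, through repeated attempts, which policy earns the best reward.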
Funded in part by a Natural Sciences and Engineering Research Council Discovery grant, the researchers see DeepLoco as having applications in robotics, where robots could better adapt to their environments without the rules for every type of interaction having to be laid out in code.
Billed as the world’s largest graphics and interactive techniques conference, this year’s SIGGRAPH (short for Special Interest Group on Computer Graphics) encompasses five days of presentations, screenings and interactive exhibits, along with industry-focused demos of the newest innovations in graphics hardware and software.
Another SIGGRAPH-featured project involving UBC research will be the showcasing of a new water-based technique for 3D scanning, one which uses Archimedes’ principle of water displacement to enable the reconstruction of even the hidden parts of objects, ones that today’s 3D laser scanners can’t capture.
The new technique allows readings of an object’s volume to influence the modelling of its surface, something that isn’t possible with scanners that rely solely on cameras and lasers. Similar to the computed tomography used in hospital scanners, the process takes multiple readings of an object from different angles in order to create a complete reading. In this case, the result comes from dipping the object in water at different angles and reading the water displacement to reconstruct a full 3D model. The project is a collaboration among researchers at Tel Aviv University and Ben-Gurion University of the Negev in Israel, Shandong University in China and UBC.
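The core measurement can be illustrated with a toy computation (an assumption-laden sketch, not the researchers’ method): dipping an object into the water in small increments and differencing successive displacement readings recovers the volume of each submerged slice, which is the kind of per-slice information the technique aggregates across different dip angles to rebuild the full shape:

```python
def displacement_readings(slice_volumes):
    """Cumulative water displacement as the object is dipped slice by slice.
    Each reading equals the total submerged volume so far (Archimedes' principle)."""
    readings, total = [], 0.0
    for v in slice_volumes:
        total += v
        readings.append(total)
    return readings

def recover_slices(readings):
    """Difference successive readings to recover per-slice volumes."""
    slices, prev = [], 0.0
    for r in readings:
        slices.append(r - prev)
        prev = r
    return slices

# Toy object: a hypothetical profile of cross-sectional slice volumes (cm^3)
object_slices = [1.0, 3.0, 4.0, 3.0, 1.0]
readings = displacement_readings(object_slices)
recovered = recover_slices(readings)
print(recovered)
```

Because displacement measures volume directly, the readings also capture hidden cavities and occluded surfaces that a camera- or laser-based scanner would miss.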
The 2018 edition of SIGGRAPH will take place in Vancouver, with submissions opening this fall.
Video: UBC software teaches computer characters to walk, run, even play soccer