Naughty Dog Animator Breaks Down Mass Effect: Andromeda’s Animations, Offers Explanation for Low Quality

Unless you’ve been living under a space rock, you’ll have heard about Mass Effect: Andromeda and the controversy surrounding its subpar character modeling and animations. It was one of the many complaints we made in our review. Amid the chorus of armchair developers complaining and hurling abuse, Jonathan Cooper (an actual animator at Naughty Dog) took to Twitter to offer his thoughts. In a fantastic display of rebellion against the site’s 140-character limit, Jonathan gave an in-depth explanation of how game animation works and what exactly players are seeing in Andromeda.

We’ve compiled all of Jonathan’s tweets below for easy reading.

Folks have been asking so here are my thoughts on Mass Effect Andromeda’s animation. Hopefully people will better understand the process.

First though: going after individual team members is not only despicable, but the culprits’ choice of target reveals their true nature.

Just as we credit a team, not an individual, for a game’s success, we should never single out one person for a team’s failures.

That said, animating an RPG is a really, really big undertaking – completely different from a game like Uncharted so comparisons are unfair.

Every encounter in Uncharted is unique & highly controlled because we create highly-authored ‘wide’ linear stories with bespoke animations.

Conversely, RPGs offer an order of magnitude more content and, importantly, player/story choice. It’s simply a quantity vs. quality tradeoff.

In Mass Effect 1 we had over 8 hrs of facial performance. In Horizon Zero Dawn they had around 15. Player expectations have only grown.

As such, designers (not animators) sequence pre-created animations together – like DJs with samples and tracks.

Here is the Frostbite cinematic conversation tool circa Dragon Age Inquisition. (Source: http://www.frostbite.com/2014/08/creating-biowares-first-rpg-on-the-frostbite-engine/)

Here’s the cinematic conversation tool for the Witcher 3. Both tools make it fast to assemble from a pool of anims. https://www.youtube.com/watch?v=cF9zt-vEGc4
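(To make Cooper’s “DJs with samples” analogy concrete, here is a rough, purely illustrative sketch of the kind of data such a conversation tool works with: a pool of pre-authored clips that a designer attaches to each dialogue line. The class names, clip names, and timing logic below are our own invention, not Frostbite’s or REDengine’s actual API.)

```python
# Illustrative sketch only -- not Frostbite's or REDengine's actual tooling.
# Models a designer assembling a dialogue scene from a pool of pre-authored
# animation clips, one set of clips per dialogue line.
from dataclasses import dataclass, field

@dataclass
class AnimClip:
    name: str          # e.g. "gesture_point", "idle_talk"
    duration: float    # seconds

@dataclass
class DialogueLine:
    speaker: str
    text: str
    clips: list[AnimClip] = field(default_factory=list)

@dataclass
class ConversationScene:
    lines: list[DialogueLine] = field(default_factory=list)

    def add_line(self, speaker: str, text: str, *clips: AnimClip) -> None:
        """Designer picks clips from the shared pool and attaches them to a line."""
        self.lines.append(DialogueLine(speaker, text, list(clips)))

    def total_duration(self) -> float:
        """Rough scene length: each line plays for as long as its longest clip."""
        return sum(max((c.duration for c in line.clips), default=0.0)
                   for line in self.lines)

# Usage: assemble a two-line exchange from a small clip pool.
pool = {
    "idle_talk": AnimClip("idle_talk", 2.5),
    "gesture_point": AnimClip("gesture_point", 1.8),
    "head_nod": AnimClip("head_nod", 1.2),
}
scene = ConversationScene()
scene.add_line("Ryder", "We need to get to the vault.",
               pool["gesture_point"], pool["idle_talk"])
scene.add_line("Peebee", "Right behind you.", pool["head_nod"])
print(f"Scene runs roughly {scene.total_duration():.1f}s")
```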

Because time constraints mean not every scene can be given equal attention, dialogues are separated into tiered quality levels based on importance/likelihood.

The lowest quality scenes may not even be touched by hand. To cover this, an algorithm is used to generate a baseline quality sequence.

Mass Effect 1-3 populated default body ‘talking’ movement, lip-sync and head movement based on the dialogue text.

The Witcher 3 added to this with randomly selected body gestures that could be regenerated to get better results. http://www.gameanim.com/2016/03/23/cinematic-dialogue-witcher-3/
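(Again purely for illustration: the “baseline algorithm” Cooper describes can be thought of as a routine that, given only the dialogue text and a random seed, produces lip-sync timing, a default head behaviour, and a handful of randomly chosen gestures that a designer can regenerate until they look acceptable. Everything below, from the gesture names to the words-per-second figure, is our own hypothetical sketch, not BioWare’s or CD Projekt RED’s code.)

```python
# Illustrative sketch only -- a toy version of the text-driven baseline pass
# Cooper describes, not BioWare's or CD Projekt RED's actual algorithm.
import random

GESTURE_POOL = ["idle_talk", "shrug", "point", "head_tilt", "arms_crossed"]
WORDS_PER_SECOND = 2.5  # crude speech-rate assumption used to time lip-sync

def generate_baseline(dialogue_text: str, seed: int) -> dict:
    """Produce a low-tier, 'untouched by hand' pass for one line of dialogue:
    estimated lip-sync duration from the text, plus randomly chosen gestures.
    Re-running with a different seed 'regenerates' the line for better results."""
    rng = random.Random(seed)                      # deterministic per seed
    words = dialogue_text.split()
    duration = len(words) / WORDS_PER_SECOND       # rough line length in seconds
    gesture_count = max(1, round(duration / 2.0))  # roughly one gesture every 2s
    gestures = [rng.choice(GESTURE_POOL) for _ in range(gesture_count)]
    return {
        "lip_sync_duration": round(duration, 2),
        "body_gestures": gestures,
        "head_movement": "auto_lookat",            # default head tracking toward listener
    }

# Usage: a designer dislikes the first result, so they regenerate with a new seed.
line = "My face is tired from dealing with everything."
print(generate_baseline(line, seed=1))
print(generate_baseline(line, seed=7))  # different gestures, same lip-sync timing
```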

Andromeda seems to have lowered the quality of its base algorithm, resulting in the ‘My face is tired’ meme featuring nothing but lip-sync.

This, presumably, was because they planned to hit every line by hand. But a 5-year dev cycle shows they underestimated this task.

(All this is exacerbated by us living in an era of share buttons and YouTube, which gets the lowest-quality content out to the widest audience.)

Were I to design a conversation system now, I’d push for a workflow based on fast and accessible face & body capture rather than algorithms.

While it hasn’t 100% proven this method, Horizon Zero Dawn’s better scenes succeed due to their use of facial mocap.

The one positive to come out of all this is that AAA story-heavy games can’t skimp on animation quality by leaning on a systemic approach alone.

The audience has grown more discerning, which makes our job more difficult but furthers animation quality (and animators) as a requirement.

What do you make of all this? Did Jonathan’s breakdown give you a better understanding of how game animation works?

If you want to hear more about what Andromeda does badly, check out our Everything Wrong With video. Or, for a more positive flick, be sure to watch the new official Sara Ryder trailer.

[Source: Twitter via NeoGAF]
