Core Dump: Recap
Mike Acton is Director, Core at Insomniac Games. He has been with the studio since Ratchet & Clank Future: Tools of Destruction, heading up the Insomniac Engine team. CoreDump took place over August 5-6, 2016, with nearly 90 attendees representing several development studios throughout the industry. CoreDump featured in-depth interviews and panels from the game-dev front lines, along with tangible lessons learned and practical solutions for some of today's toughest challenges.
***Note: Unfortunately we lost audio on the last 4 sessions from Day 1. We apologize for that. Below are all the other Day 1 videos. Day 2 had no such audio issue and those videos will be coming soon.***
I think #CoreDump was a success. There were lots of ways it could have gone wrong. Lots of ways it could have gone spectacularly wrong. But what we really discovered was the obvious: when you put a bunch of experts in a room and have them discuss things they are expert in, there's plenty to share and take away. In fact, I think our main problem was that in two days you can barely scratch the surface of the problem spaces we encounter in the engine and tools of making games.
Day 1 went essentially as planned. We interviewed people on the Core team and had panels discussing topics we felt were interesting. Keeping things running on time was important to us, and it was immediately obvious that a traditional open Q&A from the audience wasn't going to work. We quickly settled on a Twitter format where people could tweet questions and we'd have someone watching the stream and quickly filtering them (and potentially rewording them) so that we could keep things moving as smoothly as possible. Perhaps surprisingly, there were people in the audience who didn't use Twitter and were not interested in creating an account. But leaning over and asking the person next to them to send a question seemed to work for most.
During the day, we asked around during the breaks about what people thought was working and what wasn't. Probably the most common feedback was that people wished other studios were participating. That multiple perspectives on a problem were really valuable. That we should do that "next time." On one hand, I think people may underestimate the logistics of getting multiple studios to agree beforehand on anything like an untested conference, so I'm not convinced that would have been possible to arrange well in advance. On the other hand, we had a room full of people from other studios already in place. There was no good reason to wait for "next time," and so the format of the conference changed for Day 2.
> No one was trying to present a polished version of their "solution" to some problem implying it would "just work" for others. We were discussing the real-world issues and implications and causes and struggles and that's always where the real complexity lies.
On Day 2, we invited anyone to join the interviews. I was pleasantly surprised that we could add someone from another studio to every single interview and panel for the day. I thought there were quite a few standout discussions that evolved from that. For instance, in contrasting the asset build solutions among Insomniac, Riot, and Ready at Dawn, it was very interesting to discover how different things were even though we largely share the same problem space. It was also abundantly clear that no one solution was "right" or even "complete" and that we all had things we could learn from each other. And that, for me, reinforced the value of this type of conversation. No one was trying to present a polished version of their "solution" to some problem implying it would "just work" for others. We were discussing the real-world issues and implications and causes and struggles and that's always where the real complexity lies. The devil is, as always, in the details. Your specific answer to questions like "How do you handle errors?" would radically impact what decisions you make regarding your build systems. The difference between systems that enforce hard constraints and those that try to make things work as well as possible when things are broken is enormous.
The huge volume of unanswered or unanswerable questions that we all shared was also on display. How do you train? How do you impart specific, technical trade-offs? How do you really profile? How do you trace data through a system when things go wrong? How do you divide up your team? Who is ultimately responsible for making sure everything runs on the platform? How do you test? These are the kinds of discussions that most needed to be had (and will continue to be valuable topics) and that simply don’t work in a traditional lecture format when you don’t have a novel answer.
I think we'll do another CoreDump. I know it was valuable for me. And if we can judge by the comments we've gotten so far, I'm pretty sure it was time well spent for everyone who attended. If nothing else, in quite a few cases we just reinforced that hard problems are hard and that there just isn't a magic solution. So no, you're not doing it wrong. But we could all be doing it better. The ad hoc nature of the discussions was absolutely the heart of the event, and that's one thing I would not want to change next time. But I think providing a bit more context, such as screenshots and interesting tidbits collected over time to point to and discuss, would be a valuable addition. And it's clear the thing to do is to get other studios involved right from the start, if at all possible.