In this episode of Game Industry Career Guide Podcast, I answer a question from Jesse, who asks “My biggest question is how does the game industry adapt to new technologies?”
In this episode, you’ll learn:
- Why the game industry has a love-hate relationship with new technology
- How a game studio gets its hands on prototype hardware for pre-release consoles
- Why it takes years for game developers to get the hang of “new” technology, and how they do it
If you have a question you'd like to get answered on the podcast, leave a comment below or ask me anything here.
Hello, and welcome to the Game Industry Career Guide Podcast. This is Episode #48. I’m Jason W. Bay from GameIndustryCareerGuide.com, and this is the podcast where I answer your questions about getting a job and growing your career making video games.
Today’s question is from Jesse, who sent me a short and sweet email asking simply this, “My biggest question is how does the game industry adapt to new technologies?”
New technology: blessing and curse
Now, this question is a little bit different from the ones that I usually answer on this podcast, because it’s not directly related to getting a job in games. But the video game industry actually has an interesting love-hate relationship with technology. So I think it’ll be helpful for aspiring game developers to get some insight into how the industry thinks about new technologies, and how it changes and adapts as new technologies come and go.
It’s well-known that the game industry is often a key driver of the bleeding edge of many technologies, such as 3D graphics rendering, networking, artificial intelligence, and virtual reality. How does a typical game studio keep up with all of those new developments and still release games that are high-tech and fun to play? Well, while it’s true that video games are often responsible for pushing the outer limits of consumer computer technology, the fact is that most game studios struggle to keep up, at least at first.
Keeping up with new tech
To understand why, let’s talk about how a typical game studio goes about adapting to big shifts in technology, such as the release of a brand new generation of hardware, like a new Xbox or PlayStation, or hardware with a totally new kind of input device, like the original Wii or the original iPhone. First off, it’s helpful to understand that when a game console maker, like Nintendo or Microsoft, is developing the next generation of its game console, it’s critical that a handful of new games be ready to release at the same time.
But how can an independent game studio possibly make a new game for a console that hasn’t even been released yet? The way it works is that the console makers allow game studios to borrow pre-release prototypes of the new hardware, so the studios can start making games months or even years before the console is actually ready for the public. Now, that sounds pretty awesome. I mean, who wouldn’t want a chance to get their hands on an early prototype of the next Xbox or the next PlayStation? Well, it’s not quite as awesome as it sounds, for two reasons.
Prototypes are prototypes
First, prototype hardware can be very finicky. Sometimes it’s literally just a bunch of circuit boards held together loosely by wires, ribbon cables, and tape. I’m not even kidding. So when you’re trying to develop a game for that hardware and it doesn’t work quite right, it’s hard to know whether you’re programming it wrong, whether the component you’re relying on just isn’t working correctly yet in this version of the prototype, or whether something came loose because somebody accidentally bumped your desk.
Second, prototypes change over time. The reason it’s called a “prototype” and not a “finished console” is that the console company is still working on it. They’re still building it. So it’s common to get part of your game working on one version of the prototype, only to have it stop working on the next version, because something in the hardware was added, removed, or changed by the console maker. For example, I remember when my game studio was building a launch title for the unreleased next version of the Nintendo DS.
At that time, all we knew was that this hunk of circuit boards was going to be the next Nintendo DS, with better graphics capabilities and a few other bells and whistles. It wasn’t until very late in the development of our game that Nintendo finally let us in on their little secret: this new device was actually going to be called the Nintendo 3DS, and the hardware prototype sitting there on our desks contained a fancy new 3D screen. Now, that was exciting news, except it also meant that we had to fundamentally redo a bunch of our design, art, and code to make our game work in 3D. It was a serious scramble to make that happen, and it was much more work than we had planned for before we knew the screen was going to be 3D.
Casualties of novelty
Okay. Besides the challenges that come with using prototype hardware, it can also be a challenge for game developers whenever new hardware has drastically different player input methods, such as back when the Nintendo Wii or the PlayStation Move first came out. Before those consoles were on the market, most developers had never even played a game that used motion controllers, let alone made one. So nobody really knew how to design a game for them. It took a lot of creativity, experimentation, and trial and error before developers figured out how to make motion controller games that were fun. That sort of iteration means the early games took a lot more time and money to make, with no guarantee that they would turn out to be fun. You might remember that a lot of them did not.
Even after a game studio figures out how to deal with flaky prototypes and novel hardware or input devices, it still takes time and practice before all of the programmers, artists, designers, audio engineers, and everybody else become truly skilled at making games for the new hardware. It can actually take several full game development cycles before everybody really figures it out.
Have you ever looked at the games that came out when a console first launched, and compared them to the games that came out years later, toward the end of the console’s cycle? The later games are bigger, they look better, and they play better, and that’s because the dev teams finally started to get the hang of developing for the new hardware. It just takes time.
Back to the drawing board
So after dealing with the challenges of shifting hardware prototypes, novel hardware and input devices, and years of practice getting good at developing for the new platform, then what happens? You guessed it. Something new comes out and the cycle starts all over again. The bottom line is this: keeping up with the bleeding edge of technology is hard work.
Not only does it take a lot of time and effort on the part of game developers, but keeping up doesn’t always pay off. Getting good at developing for the latest hardware certainly improves your chances of making a hit game, but it’s definitely not a guarantee. A high-tech game on the latest high-tech hardware will still be a commercial flop if it’s no fun to play.
I hope that gives you some interesting insight into how the game industry keeps up with the latest hardware technologies. Thanks to Jesse for that question, and thank you for showing up and learning a little more about how the game industry works on the inside. If you enjoyed the podcast, then please help me spread the word by sharing it with your friends on social media, or go to iTunes and leave me a review. For more information and inspiration on getting a job and growing your career making video games, visit me at GameIndustryCareerGuide.com. I’m Jason W. Bay. I will see you again next week, right here on the Game Industry Career Guide Podcast.