My brain is chewing over several things, and I think I’m going to end up coming to some sort of realization about the game industry, like I did in my Innovate post. But I don’t know what it is yet. So I’m going to write down all the various bits that are coming together…just to get my thoughts straight. Perhaps as I write, the realization will come to me.

First, I recently watched Scratch, an excellent documentary about the birth of hip-hop. I’ve also been reading Jeff Minter’s History of Llamasoft series of articles over at Way of the Rodent, and I can’t help but notice parallels. In both cases, a small group of young men are presented with a new artistic medium and start using this medium to do “cool stuff” for their own enjoyment and to impress their friends (often to the chagrin of their parents), with no thought whatsoever that what they are doing might actually be profitable…and accidentally create a billion-dollar industry.

Second, I’ve been listening to the absolute hysterics surrounding the unveiling of the new consoles, as developers cry, “My god, these boxes are so powerful that we’re going to have to invent TIME TRAVEL in order to make games for them!” Please. Not every game has to look like that (obviously pre-rendered) Killzone 2 movie in order to succeed. I mean, the GTA games are still using RenderWare, for crying out loud, and they’ve been outstanding successes.

Third, I’ve been thinking about the Golden Age. Ask just about any game developer and they’ll tell you that the golden age of PC gaming was about ten to fifteen years ago. But why then? Why not before then, or after then?

There were two converging factors that made 1990-1995 the “Golden Age”: barriers to entry and player expectations.

In the 1980s, there were really only two ways to get into the game industry: learn assembly language for a popular computer and write the game yourself, or get hired by Atari, Mattel, Coleco, or one of the other first-generation console companies. Needless to say, doing either of these was damn hard, which kept the number of game developers low. But towards the end of the 80s, the PC revolution was driving the price of hardware down, and Borland was coming out with excellent, inexpensive compilers, which meant that games could easily be written in C on the cheap. Thus, the number of game developers rose.

Meanwhile, player expectations stayed low. Virtually all games were written for a VGA screen – 320×200, 256 colors. Real, polygonal 3D was a novelty used by flight sims, played only by hardcore sim fanatics willing to put up with 5-10 frames a second. Making content for such a setup wasn’t that hard and didn’t take that much time, so player expectations could still be fulfilled by a small team in a few months (or heck, even by one talented programmer/artist). We went from a high cost of entry and low player expectations to a low cost of entry and still reasonably low expectations. Thus, the Golden Age.
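
To give you an idea of just how low that bar was, here’s a sketch of the kind of program those cheap compilers enabled. Fair warning: I’m assuming a 16-bit DOS compiler like Borland’s Turbo C here, so it won’t build on anything modern. One BIOS call switched you into mode 13h, and from then on the whole screen was just 64,000 bytes you wrote into directly:

```c
/* A sketch of circa-1990 PC game graphics, assuming a 16-bit DOS
   compiler such as Borland Turbo C (far pointers, <dos.h>).
   Mode 13h: 320x200, 256 colors, one byte per pixel. */
#include <dos.h>
#include <conio.h>

int main(void)
{
    /* The VGA framebuffer sits at segment 0xA000 in real mode. */
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    union REGS r;
    int x, y;

    r.x.ax = 0x0013;              /* BIOS int 10h: enter mode 13h */
    int86(0x10, &r, &r);

    for (y = 0; y < 200; y++)     /* paint a diagonal gradient;   */
        for (x = 0; x < 320; x++) /* each byte is a palette index */
            vga[(unsigned)y * 320 + x] = (unsigned char)(x + y);

    getch();                      /* wait for a keypress...          */

    r.x.ax = 0x0003;              /* ...then restore 80x25 text mode */
    int86(0x10, &r, &r);
    return 0;
}
```

That was the entire rendering “pipeline” – no engine, no middleware, no licenses. It’s easy to see how one person could ship a game.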

And then Quake came out.

I think I am only now beginning to truly understand the impact Quake had on the game industry. Yes, it made first-person shooters even more popular and spawned a hojillion imitators. Yes, it made mods easy and fun to make, creating the mod scene. Yes, it made internet play easy and fun. All this I’ve covered before.

But what Quake really did was raise player expectations through the roof. We players were very forgiving of 2D games; we were aware of the limitations of the medium, so we didn’t complain when Link’s sword mysteriously changed hands as we moved him around. But suddenly we could move around a 3D space and interact with 3D entities – and since we live in a 3D space and continually interact with real 3D entities, we know exactly how that is supposed to look and feel. Ever since, hardware and software developers have spent billions of dollars trying to bring the look and feel of their 3D games closer to reality, so that player expectations can be fulfilled. And so we have the Killzone 2 movie.

(Oddly enough, even now almost all players have no problem falling back into “2D mode”, lowering their expectations when they play a 2D game – and they do it without even realizing it. The same thing happens when we watch an animated movie as opposed to a live-action one.)

And now we’re spending so much time making sure our in-game characters have smooth transition animations between sitting, standing, walking, running, leaning, fidgeting, idling, talking and dying that we can’t seem to spare any time to make sure they don’t run into walls – or enemy gunfire.

Is this bad?

I think it just “is”. There wasn’t any getting around it; somebody was going to do it. And yes, we are in for some growing pains as we find our way around this new hardware.

But there really isn’t anywhere to go from here. Graphics are quickly topping out (if they haven’t already). Both CPUs and GPUs are showing diminishing returns. Eventually all the really hard stuff we have to do right now will be handled by middleware.

What do we do then?

(Jeez. I just realized that all I’ve done is reiterate Jason Rubin’s main point from his GDC talk a few years ago. Of course, that doesn’t make me (or him) wrong.)

Game developers will have to turn back to the other, neglected fields of game development in order to set their games apart. Perhaps we will finally get an RPG that is better than Ultima VII in the world modelling department. Perhaps we will finally get a first-person shooter that has AI demonstrably better than Half-Life’s. Perhaps we will finally get an RTS that is truly better than Starcraft, Total Annihilation, or Age of Kings.

Instead of panicking and screaming about the “death of innovation”, I’m looking forward to a new Golden Age.