Category: Games

An Evening with Marc LeBlanc

Warren Spector is hosting a series of master classes in game design at the University of Texas here in Austin.

Despite very short notice and a near lack of funds, I managed to squeak in. The first session was Monday night and it was with Marc LeBlanc, who is most famous for his work on the classic Blue Sky/Looking Glass games (Ultima Underworld 1 and 2, System Shock and Thief 1 and 2) and his more recent game, Oasis.

The session took place in a studio in the CMB building on the UT campus and was professionally recorded. Doubtless all the sessions will be available in some fashion after the series is over, but, having never had the opportunity to go to the GDC or any other game conference, I am very grateful for the chance to see them live.

When I got there I was surprised – for one thing, the studio wasn’t full to bursting, and for another, most of the people there were fresh-faced college students rather than the slew of industry grognards I was expecting. I found myself wondering if these kids even knew who Marc was…

The format was one I hadn’t seen before. Warren interviewed Marc for about an hour on Marc’s work history, then after a brief break Marc presented a lecture on his core design philosophies. Then Warren interviewed him again, this time asking Marc about specific games he had worked on or contributed to. The whole thing lasted about three hours and I was fascinated the whole time.

Now, I have to give Warren his props. I’d seen videos of him presenting at the GDC and he was very good there, but he also turns out to be an excellent interviewer.

But listening to Marc was a mind-expanding experience. This guy knows his stuff. You can get the gist of it by going to his blog and reading about the Eight Kinds of Fun and Mechanics, Dynamics and Aesthetics, but the real meat of his talk was how he actually applied those precepts to the design of Oasis. You can get the slides for that talk at his site as well, but it was much better live (and the ability to interact was key).

And now I’m just going to throw out random things that I remember from the talk in no particular order.

Blue Sky/Looking Glass actually started as a group of MIT students, one of whom had an uncle, Paul Neurath, who was working at Origin and wanted to start his own company.

One of the really odd parallels between Blue Sky and Id Software is that at both studios all the developers started off living and working together in the same house – the Blue Sky house eventually had ten employees living in it. This both facilitated the work and kept initial production costs way down.

Warren said that when he first came to the Blue Sky house (to produce Ultima Underworld) the guys there wouldn’t talk to him until he got his laptop on the network and named it. Apparently, having a machine that you could name yourself was a big status symbol at MIT, and the idea that you weren’t “somebody” until your computer had a name carried over to Blue Sky. Warren said he named his computer “Elmer PHD” and that he uses that as his online tag now.

Warren said that Marc has the ability to play your game for a short time and tell you exactly what’s wrong with it and give you a whole bunch of ideas for improvement. How I wish I could have him play Planitia…

Marc finally left Blue Sky during the development of Terra Nova after he got into an argument with Dan Schmidt, the director, over a feature Marc didn’t want to implement.

Marc said that he liked the fact that his involvement with System Shock 2 was purely technical and didn’t have anything to do with the design because he could then actually play and enjoy the game!

Marc is very big on programmer/designers. He said that if you want to work at Mind Control Software, you can expect to get grilled on game design even if you’re interviewing for an art position. Warren chimed in and said that they do the same thing at Junction Point. Marc also mentioned that at Valve, there are no game designers – they have “gameplay programmers” instead. This neatly coincides with my two favorite game postmortems.

After it was all over I went over, shook his hand and thanked him for the Looking Glass stuff. He said, “Hey, I was just on the team.” I said, “Well, you’re the member of the team who is here, so I’m thanking you.” He didn’t seem to mind that.

Frankly I think the whole thing was good enough to put on TV, and I’m hoping that’s where it will end up. Looking forward to next Monday’s session, which will be with Mike Morhaime, one of the founders of Blizzard.


Spock Lizard

I recently tried the demo for Age of Empires III: Asian Dynasties. I really enjoyed…most of it. The setting is far more interesting in my opinion than that of the Americas and holy cow it’s the prettiest Age game ever by a mile.

But in the end, it shares the design flaw that I think prevented Age of Empires III from replicating the success of its predecessors.

In the beginning, there was Age of Empires. Age of Empires had three basic units: archers, infantry, and cavalry. These are arranged in classic “rock-paper-scissors” format. Archers beat the slow infantrymen (as long as they get to attack at range). Cavalry beat archers because they can close quickly. And infantrymen beat cavalry…for some reason. Classic design, easy to understand.

Age of Empires II added a whole bunch of new units but in the end didn’t mess with the basic formula too much. Most of the new units were simply better archers, infantry and cavalry and could be used in the same way.

With Age of Mythology, Ensemble decided it was time to start mixing up the design. They introduced three new classes of units – normal human units, heroes, and mythological units. These three classes are also arranged in the rock-paper-scissors wheel – humans beat heroes beat myth units beat humans. But each class also has archers, infantry and cavalry within them; thus human archers are really, really good at beating hero infantry because archers beat infantry and humans beat heroes. This wasn’t…too bad, but I did feel that the design was starting to get out of hand.

Age of Mythology also introduced the idea of counter-units. These are units that are only good against the same type of unit – that is, archers that are only good against other archers, infantry that are only good against other infantry, etc. Thus, you don’t have to remember what beats what if you’re using counter-units – you counter with the same unit you’re being attacked with. Not a terrible idea, but the only counter-units in the game were humans; it was still up to you to remember how the hero and myth wheels worked. So it probably just ended up confusing players even more.

And then in Age of Empires III they messed it up completely by expanding the wheel to five unit types – archers, infantry, hand cavalry, archer cavalry, and artillery. With three unit types there are exactly three interactions: archers beat infantry beat cavalry beat archers. With five there are now ten interactions: infantry beats hand cavalry beats artillery beats archers beats archer cavalry beats artillery beats infantry beats archer cavalry beats hand cavalry beats archers beats infantry.

Yeah, I think that last sentence sums up Age III’s design flaw perfectly. The interactions are now too big for most people to hold in their heads any more. Age III is a perfect example of designers on the latest iteration of a long-running series adding features just to make the current version different from its predecessors without thinking about how well those features work as a game.

Why do they do this? Well, I think it’s mostly the fault of reviewers. I may have mentioned this before, but I was appalled at the reviews Dungeon Keeper 2 got; over and over I heard reviewers say, “It’s just Dungeon Keeper with a fully 3D engine, some minor design tweaks to fix problems, and some new units and room types.” Uh, yeah. That’s why it was one of the best games of 1999 in my opinion – it was an already great game made even better by improving the base design and not betraying it with lots of unnecessary changes. But if reviewers don’t see enough new stuff…

When designers write a sequel to a game, their goal should be to supersede the original. Once the sequel comes out, players should have no desire to go back to the previous version.


Practical Direct3D Programming

Or, what I learned writing Planitia and didn’t learn from Frank Luna’s book.
This article will be of most use to programmers who have run through some Direct3D tutorials and know how to draw shapes on the screen but haven’t done any serious Direct3D coding yet. If you’ve read and done the exercises in Introduction to 3D Game Programming with DirectX 9.0 then you should be fine. I’m going to be using my game Planitia as my example, since it is by far the most complex Direct3D program I’ve ever written.

Overview

First, let’s talk about what was actually necessary for Planitia.

Welcome to Planitia.

Planitia is a 3D real-time-strategy game, played from a 3/4 perspective. The terrain of the game world is a heightfield and a second heightfield is used to represent water. Units are presented as billboarded sprites (simply because I had no animated models I could use). Other game objects like the meteor are true meshes. So the Planitia engine needed to be able to render all of these at a minimum.

Planitia’s design presented some interesting challenges because the terrain of the entire map is deformable. The player (as a god) can raise and lower terrain to make it more suitable for villagers to live on. Earthquakes and volcanoes can also deform the terrain at just about any moment of play. Thus, it was necessary for the game to constantly check to see if the game world had significantly changed and regenerate the Direct3D data if it had.
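That “check and regenerate” loop can be sketched with a simple revision counter. The names here are mine, not Planitia’s actual code:

```cpp
#include <cstdint>

// Hypothetical sketch: track a revision number on the terrain and only
// rebuild the Direct3D buffers when the heightfield has actually changed.
struct TerrainState {
    uint32_t revision = 0;       // bumped every time the heightfield is edited
    uint32_t builtRevision = 0;  // revision the vertex/index buffers were built from
};

// Called whenever the player (or a volcano/earthquake) deforms the land.
inline void MarkTerrainDirty(TerrainState& t) { ++t.revision; }

// Called once per frame; returns true if the buffers had to be regenerated.
inline bool RegenerateIfNeeded(TerrainState& t) {
    if (t.builtRevision == t.revision)
        return false;            // nothing changed, keep the old buffers
    // ... rebuild vertex and index buffers here ...
    t.builtRevision = t.revision;
    return true;
}
```

The nice property is that ten deformations in one frame still cost only one rebuild.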

Initializing Direct3D

Since this was my first Direct3D project, I deliberately limited the number of technologies that I was going to use. I decided that I would not use any vertex or pixel shaders since I didn’t want to start learning them until I felt I was familiar enough with fixed-function Direct3D. I also wanted to make the game friendly to older hardware and laptops.

To this end, I don’t do a lot of capability checks when I initialize Direct3D. But one check that I did find useful was the check for hardware vertex processing. If that capability check fails, it’s a pretty good indicator of older/laptop hardware and I actually make some changes about how the terrain is rendered based on it (that I will detail in a bit).

Vertex Structure and FVF

My vertex structure is as follows:

class Vertex
{
public:
	Vertex();
	Vertex(float x, float y, float z,
		DWORD color, float u, float v, float u2 = 0, float v2 = 0);

	float _x, _y, _z;
	DWORD _color;
	float _u, _v;
	float _u2, _v2;
};

And my FVF:

DWORD FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX2;

Notice that there are no normals. I’m using baked lighting for Planitia (as described in Frank Luna’s book – indeed, I used his code) and thus normals aren’t necessary. I am using two sets of UV coordinates because I “paint” various effects on top of the normal grass for the terrain (more on that in a minute).
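One habit worth picking up (this is my own sanity check, not something from the book): make sure the stride implied by the FVF matches your vertex structure, because if they disagree DrawPrimitive will happily read garbage.

```cpp
#include <cstdint>

// The FVF above implies a 32-byte vertex: three position floats
// (D3DFVF_XYZ), a 32-bit diffuse color (D3DFVF_DIFFUSE), and two
// (u, v) pairs (D3DFVF_TEX2).
inline unsigned FvfStride()
{
    return unsigned(3 * sizeof(float)       // _x, _y, _z
                  + sizeof(uint32_t)        // _color (DWORD is 32 bits)
                  + 2 * 2 * sizeof(float)); // _u, _v and _u2, _v2
}
```

If this ever disagrees with sizeof(Vertex), the structure has picked up padding or a mis-sized field.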

Division of Labor – Creating the Index and Vertex Buffers

Okay, so what exactly is a Planitia map?

A Planitia map consists of a 64×64 grid of terrain cells, so it must be drawn with 65×65 vertices. Each map has a heightfield of 65×65 values, as well as a 64×64 array of “terrain types” – identifiers I created that record what kind of terrain is in each cell. Values in the heightfield range from 0 to 8. If all four corners of a cell have a height of .2 or less, that cell is underwater and has terrain type TT_WATER. If at least one corner of the cell is at .2 or less but the others are higher, then the terrain type is TT_BEACH. Otherwise the terrain cell is TT_GRASS. Other terrain types like lava, flowers, ruined land and swamps are drawn over grass terrain and have their own terrain types.
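Here’s a sketch of that classification rule (the function and constant names are my own, not Planitia’s actual code):

```cpp
enum TerrainType { TT_WATER, TT_BEACH, TT_GRASS };

// Classify one cell from its four corner heights: water if every corner
// is at or below the waterline, beach if only some of them are, and
// grass otherwise.
inline TerrainType ClassifyCell(float h00, float h10, float h01, float h11)
{
    const float kWaterLine = 0.2f;
    int submerged = (h00 <= kWaterLine) + (h10 <= kWaterLine) +
                    (h01 <= kWaterLine) + (h11 <= kWaterLine);
    if (submerged == 4) return TT_WATER;
    if (submerged > 0)  return TT_BEACH;
    return TT_GRASS;
}
```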

And here’s my first fast/slow split. If I detect that hardware vertex processing is available, then each cell consists of five vertices – one each for the corners and one for the center. Drawing a terrain cell requires drawing four triangles.

Four triangles per cell!

If hardware vertex processing is not available, then I only use four vertices for each cell and only draw two triangles.

Two triangles per cell!

I set the UV coordinates across the entire terrain to the X/Y position of the vertex in question. Thus the UV coordinates of vertex (0, 0) are (0, 0), the UV coordinates of (0, 1) are (0, 1), etc. This allows textures to tile properly while also giving me access to a few tricks (which I will get to in a minute). You’ll notice that this means that I’m not specifying what texture to draw with the UV coordinates – I do not have all my terrain textures packed into one big atlas texture. That’s a good technique, but I couldn’t use it for Planitia.

The diffuse color of each vertex actually stores two different sets of information. The RGB values are combined with the grass texture based on the lighting for that particular cell (again, using the pre-baked lighting code from Frank Luna’s book, page 224). The alpha value isn’t used for lighting. It’s actually used to create the beach effect, where sand blends evenly into grass. There’s more information on how this works in the Rendering section.

I actually create eight vertex buffers – one for each terrain type. Each vertex buffer contains data about the geometry of the terrain mesh and the shading of the terrain, but doesn’t contain any data about what texture to draw or how the vertices form into triangles.

Once the vertex buffers are done, I create index buffers to sort the vertices into triangles. Again, there’s an index buffer for every terrain type. And again, if hardware vertex processing is supported I create four triangles per quad; otherwise I only create two…but I use a technique called triangle flipping.

Triangle Flipping

Here’s how it works: each cell is a quad with four corner heights, taken from the cell’s own upper-left corner and the upper-left corners of the cells up-left, left, and above. Compare the absolute height differences along the quad’s two diagonals – one running from the upper-left corner to the lower-right, the other from the upper-right corner to the lower-left.

If the difference along the first diagonal is greater than the difference along the second, we flip the cell – that is, we split the quad along the other diagonal by specifying a different set of vertices to draw than the standard.

If you didn’t completely understand that, that’s okay. Here’s the code.

// fabsf, not abs – plain abs() truncates floats to integers in C/C++
float diffA = fabsf(GetValue(x, y) - GetValue(x - 1, y - 1)); // first diagonal
float diffB = fabsf(GetValue(x, y - 1) - GetValue(x - 1, y)); // second diagonal
bool triFlip = diffA > diffB;

If triFlip is false, we create the triangles normally.

No triangle flip.

If the test is true, we create the triangles like this instead:

Triangle flipped.
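In code, the two index layouts might look like this (the exact vertex ordering is my reconstruction for illustration, not Planitia’s actual buffers):

```cpp
#include <array>

// Corners of one cell: i00 = upper-left, i10 = upper-right,
// i01 = lower-left, i11 = lower-right.
// The normal split cuts along the i00-i11 diagonal; the flipped split
// cuts along the i10-i01 diagonal instead.
inline std::array<int, 6> QuadIndices(int i00, int i10, int i01, int i11,
                                      bool flip)
{
    if (!flip)
        return { i00, i10, i11,   i00, i11, i01 }; // two triangles, normal split
    return     { i10, i11, i01,   i10, i01, i00 }; // two triangles, flipped split
}
```

Either way the quad costs six indices; only the shared diagonal changes.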

The results are pretty impressive. Here’s Planitia with two triangles per quad, without triangle flipping:

No triangle flip.

Notice all the jagged edges. When we use triangle flipping, they go away:

Triangle flip.

That’s much better – it gets rid of the spikes – but now we’ve got lots of straight lines and the coast looks a bit boring. Using a center point on our quads looks even better:

Center-point doesn't need triangle flip.

Now it looks smooth and interesting. Which is why I do that when the hardware supports it.

Drawing The Scene

All right, the vertex and index buffers are created and it’s time to actually draw the terrain. Here’s the procedure I use.

The first thing I do is to turn alpha blending off. Then I draw all eight of my vertex buffers. I set the texture to be drawn based on the terrain type I am drawing (this is why data about what texture to draw isn’t stored anywhere in the vertex or index buffers). If the terrain type is “water” or “beach”, I set the sand texture and draw it. If it’s anything else, I set the grass texture and draw it. The result:

Oooh.  Hope the next pass makes it look better...

Time to do some blending. I turn alpha blending back on and set the grass texture as the active texture, and then I redraw the vertex buffer for the beach. Since blending is on, the grass is drawn fading out over the sand, resulting in a sand-to-grass effect. Now it looks like this:

Oooh!  Yes it does!

This technique is called multipass multitexturing. Instead of…

Oh, good grief…

Must…resist…

Can’t…stop…

Leeloo Dallas multipass!

There. Got it out of my system.

Instead of using multiple sets of UV coordinates and setting multiple textures at once, you draw the same geometry twice with different textures. The upside of this is that it’s easy to do and very hardware-compatible. The downside is that you are drawing more polygons than you technically need to, but if you’ve got a good visibility test (which we’ll get to in a minute) it shouldn’t be a problem.

Alpha Masking

This is the one thing in Planitia that I’m proudest of (well, along with the water).

The other terrain types – lava, flowers, ruined land and swamp – are all drawn over grass and are masked so that the grass shows through. This is why I already drew these once with grass set as the active texture. But I’m using an additional trick here. These textures won’t get their alpha information from the vertices and they don’t have any alpha information of their own. They get their alpha information from another set of textures altogether.

You see, practically any grass terrain cell can be turned into any of the other four types at practically any time during the game. If I simply draw the cell with the new texture, I get big chunks of new terrain on top of the grass:

Oooh!  Blocky!

I can alter the textures so that they fade out at the edges, but that still gets me soft tiles of terrain lined up in neat columns and rows.

What I really needed was for tiles that were next to each other to sort of glom together…and be able to do so no matter how they were configured.

Hmmm…

And then I remembered that I’d seen this problem already solved in Ultima VI! The slimes in that game would divide if you hurt them without killing them, but instead of making smaller slimes they’d make one big mass of connected slime. So I grabbed the Ultima VI tiles to take a look at how the Origin guys had done it.

Slime!

Turns out that they had done it by disallowing diagonal connections, thus reducing the number of connection possibilities from 256 to 16, and then they had drawn custom tiles for each connection permutation. This would still look better than either of the previous two solutions.

So I fired up Photoshop and created an alpha mask texture based on the slime texture.

It's the Mask!

The thing was…I didn’t just want to burn this filter onto each of my terrain type textures, for a few reasons. First, it would make the terrain type textures very specialized. Second, I’d have to make them much bigger to handle the sixteen permutations. And third, it would mean I wouldn’t be able to make my lava move by altering its UV coordinates (more on that in a second).

So what I needed to do was to set two textures – the mask texture and whatever texture I was drawing with. I needed to tell Direct3D to take the alpha information from the mask texture and the color information from the other texture.

I’ve tried to keep this article code-light, but this was tricksy enough that I want to go ahead and post the complete code. So here it is!

First we set our lava texture to be texture 0 and our masking texture to be texture 1.

gp_Display->m_D3DDevice->SetTexture(0, m_LavaTexture->m_Bitmap);
gp_Display->m_D3DDevice->SetTexture(1, m_MaskTexture->m_Bitmap);

In the first texture stage state, we select both our alpha value and our color value to come from texture 0 (the lava texture). Note that I am modulating the color value with a texture factor – I’ll talk a bit more about that in a minute.

gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TFACTOR);

In the second stage we simply select the color value we already had (meaning the lava value) but we overwrite the previous alpha value with the alpha value from texture 1, which is the mask texture.

gp_Display->m_D3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
gp_Display->m_D3DDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
gp_Display->m_D3DDevice->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
gp_Display->m_D3DDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);

The end result was that I could use one mask plus four terrain textures to get four terrain types that stuck together no matter how they were positioned.

Oooh!  Not blocky!

UV Transformation

The ruin, flower and swamp terrain types are all drawn in the manner I just described, but I did a little extra work on the lava to make it look better.

First, I turn off the diffuse color when I draw the lava using the following render states:

gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
gp_Display->m_D3DDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);

This means that the lava is always drawn fullbright and isn’t affected by the baked-in lighting. This makes the lava seem to glow with its own light.

I enhanced this effect by using the texture factor. This is simply an arbitrary number that you can set and then multiply the texture color by. I alter it on a per-frame basis to make the lava brighten and darken, thus looking like it’s glowing. Again, this is simply a render state that you set.

gp_Display->m_D3DDevice->SetRenderState(D3DRS_TEXTUREFACTOR, D3DCOLOR_XRGB(redvalue, greenvalue, bluevalue));

And finally I use a UV transformation to offset the lava’s UV coordinates over time, causing the lava to look like it’s flowing. A UV transform is just what it sounds like – it’s a matrix that the UV coordinates are multiplied by before they are applied.

Now, warning warning danger Will Robinson. Whenever a Direct3D programmer starts using this feature for the first time they almost always get confused. They typically try (just like I did) to create a transformation matrix using D3DXMatrixTransformation() or D3DXMatrixTransformation2D() and they end up (just like I did) with a very strange problem – for some reason, scaling and rotation seem to work just fine but translation does not.

That’s because the UV transformation matrix is a two-dimensional transformation matrix, and two-dimensional transformation matrices are 3×3 matrices, not 4×4. The scaling and rotation values sit in the same place in both, but the translation values are in row 3 of the 3×3 matrix instead of row 4 as in the 4×4. This is why scaling and rotation work but translation does not. Put your translation values into the _31 and _32 members of your D3DXMATRIX structure and it’ll work fine.
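To make the layout concrete, here’s the math the fixed-function pipeline effectively performs with a 2D texture transform: the (u, v) pair is extended to (u, v, 1) and multiplied by a 3×3 matrix, which is why the offsets belong in row 3. (This is an illustrative sketch, not D3DX code.)

```cpp
struct UV { float u, v; };

// Multiply (u, v, 1) by a row-major 3x3 matrix. Row 3 of the matrix
// (m[2][0] and m[2][1], i.e. _31 and _32 in a D3DXMATRIX) supplies the
// translation, because it gets multiplied by the implicit 1.
inline UV TransformUV(const float m[3][3], UV in)
{
    return UV{
        in.u * m[0][0] + in.v * m[1][0] + m[2][0],  // m[2][0] == _31: U offset
        in.u * m[0][1] + in.v * m[1][1] + m[2][1]   // m[2][1] == _32: V offset
    };
}
```

With an identity matrix plus offsets in row 3, every UV simply slides by that offset – exactly the scrolling effect the lava needs.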

(Now you may be asking, “Why doesn’t D3DXMatrixTransformation2D() produce a 3×3 matrix?” Good question. I have no idea why, but it doesn’t.)

Here’s the result:

All of these little tricks were suggested to me by Ryan Clark. Except the alpha masking, which is the one thing I came up with on my own which is why I’m ridiculously proud of it.

A Good Raypicker Is A 3D Programmer’s Best Friend

You can’t really write a 3D game without a raypicker, and this is where I’m going to ding Frank Luna a few points. While he does present the concept behind raypicking and some of the math behind it, in the end he cops out and does a line/sphere test once the ray has been transformed into world space. This is accurate enough for picking objects in a 3D world, but it’s not accurate enough to pick polygons within an object, and that’s exactly what I needed. I needed to be able to tell exactly in what triangle (or at least what cell) on the terrain the user clicked.

I made some manual improvements to the raypicker but it never seemed great. So I used a little google-fu and came up with…well, pretty much the perfect ray-triangle intersection test. C source is included, which I was able to drop into my code almost unaltered, and I was amazed at how much better it worked without any discernible performance hit. Get it, use it, love it.

(Not) Seeing the Unseen

“But why?” you may ask. “Sure, raypicking involves some 3D math, but it doesn’t involve 3D rendering, now does it?”

Actually, it does, because you can use a raypicker to find out which parts of your world are visible and which aren’t, and only draw the visible parts.

Which means that when I talked about how I fill out the index buffers above, I left out a step. Sorry, but it’s a big step and deserves a section of its own.

I think the most important thing I learned on this project was just how slow drawing triangles is. It’s slow. It’s dog-slow. It’s slow as Christmas. Slow as molasses flowing uphill in January.

When I first started programming I thought that a Planitia map would be small enough that I wouldn’t have to do any visibility testing. But it turns out that you can test your entire game world for visibility and compile a list of the visible triangles in less time than it takes to just draw the whole world. Even if your world is just a little 64×64 heightfield and some billboarded sprites.

That’s how slow drawing triangles is.

In case I haven’t made my point, drawing triangles is damn slow and you should only do it as a last resort. It’s so bad that actually having to draw a triangle should almost be seen as a failure case. Your code should not be gleefully throwing triangles at the hardware willy-nilly. Indeed, it should do so grudgingly, after forms have been filled out in triplicate. And duly notarized.

“Enough!” I hear you cry. “We get it! Drawing triangles is slow! Now would you please tell us how you did your visibility testing?”

Oh, right, the visibility testing. Well, there are actually two techniques I use.

The first is a simple distance test from the center of each cell to the camera’s look-at point. If the distance is larger than 25 (an arbitrary number I arrived at through experimentation) the cell cannot possibly be visible. This very quickly excludes most of the terrain on the first pass. There are 4096 terrain cells in a Planitia map; this first pass will let no more than about 1964 of them through (the area of the circle: 25 × 25 × π ≈ 1963.5).
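The first pass boils down to a squared-distance comparison per cell. The code below is my reconstruction; only the radius of 25 comes from the actual game:

```cpp
#include <vector>

struct Cell { float cx, cy; };  // cell center in world units

// Return the indices of every cell whose center lies within `radius`
// of the camera's look-at point.
inline std::vector<int> CircleCull(const std::vector<Cell>& cells,
                                   float lookX, float lookY,
                                   float radius = 25.0f)
{
    std::vector<int> passed;
    for (int i = 0; i < (int)cells.size(); ++i) {
        float dx = cells[i].cx - lookX;
        float dy = cells[i].cy - lookY;
        if (dx * dx + dy * dy <= radius * radius)  // compare squared distances,
            passed.push_back(i);                   // no sqrt needed
    }
    return passed;
}
```

On a 64×64 map with the look-at point in the middle, roughly 1950 of the 4096 cells survive – which matches the area-of-a-circle estimate above.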

In this video I have drawn the camera back so that you can see the circle of passing cells that moves as the camera does.

Now, that’s good, but it’s not good enough. Typically fewer than five hundred cells are actually visible and the circle test still has us drawing almost four times as many. So all the cells that passed the first test now go to the second test, which involves the raypicking code. Actually, it involves the inverse of the raypicking code. Instead of projecting a ray from screen space into world space, we project a point from world space into screen space.

For each cell, I take its four corner points and then project each one from world space into view space and then into projection space. This “flattens” that point into a 2D point that represents the pixel that point would be drawn as on the screen.

If any of these four points are inside the screen coordinates (which for Planitia is 0, 0 to 800, 600) then at least part of the cell is visible and the cell should be drawn. If all four of the points are outside the screen coordinates then the cell is not visible and should not be drawn.

The function I use for this is D3DXVec3Project(); it makes this procedure very easy.
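For illustration, here’s roughly what the last two steps of that projection look like – perspective divide, viewport mapping to 800×600, then the bounds test. This is a simplified sketch of the idea, not the actual D3DXVec3Project implementation:

```cpp
struct Screen { float x, y; };

// Given a point already multiplied through the view and projection
// matrices (clip-space x, y, w), divide by w to get normalized device
// coordinates in -1..1, then map to an 800x600 viewport.
inline Screen ToScreen(float clipX, float clipY, float clipW)
{
    float ndcX = clipX / clipW;
    float ndcY = clipY / clipW;
    return Screen{ (ndcX * 0.5f + 0.5f) * 800.0f,
                   (1.0f - (ndcY * 0.5f + 0.5f)) * 600.0f }; // screen Y grows downward
}

inline bool OnScreen(Screen s)
{
    return s.x >= 0.0f && s.x <= 800.0f && s.y >= 0.0f && s.y <= 600.0f;
}
```

A cell is drawn if OnScreen() is true for at least one of its four projected corners.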

Again, I’ve drawn the camera back in this video so that you can see how the visible area moves with the camera.

Only cells that pass both tests have their indices added to the index buffer, and thus it is the index buffer that limits how many triangles are drawn. The final result? We only draw what can be seen – and the game runs a whole lot faster.

Old Man River

And now for the last bit – the water.

Planitia’s water is its own heightfield. It uses the same vertex structure and FVF as the terrain, each cell is just four vertices and two triangles, and I don’t use triangle flipping on it. It’s pretty darn simple.

On the other hand, I do use an index buffer for it so I can do the same visibility tricks I do for the rest of the terrain.

The heightfield is updated fifteen times a second. During this update new heights are calculated based on a formula that changes over time, thus the heightfield seems to undulate. Yes, I could have used a vertex shader, but please recall what I said at the beginning about limiting the technologies I’m using.
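The article doesn’t give the actual formula, so here’s a hypothetical stand-in for the kind of thing that works: a couple of overlapping sine waves driven by grid position and game time.

```cpp
#include <cmath>

// Hypothetical undulation function (not Planitia's actual formula):
// each water vertex bobs on two sine waves with different frequencies,
// phased by its grid position so the surface ripples rather than
// bouncing as one flat sheet.
inline float WaterHeight(float base, int x, int y, float timeSec)
{
    return base
         + 0.05f * std::sin(timeSec * 2.0f + x * 0.7f)
         + 0.03f * std::sin(timeSec * 1.3f + y * 0.9f);
}
```

Evaluated fifteen times a second across the grid, something like this gives a gentle, continuous undulation with no shader required.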

While an undulating heightfield is nice, if the texture doesn’t animate the water can look more like blue slime. Populous: The Beginning has this problem.

So the second trick is to get the water texture to animate, and that is all done with the UV coordinates. I am not using a UV transformation matrix like I did for the lava, because that transformation matrix is applied to every UV coordinate identically and I needed to be able to customize them. So the UV coordinates are all individually calculated. And then hand-dipped in Bolivian chocolate before being delivered in a collectible tin.

The water texture.

The first thing we do is to simply add the current game time in seconds to all the UV coordinates. That gets the water moving.

The second thing we do is to add a very little bit of the camera’s movement to the UV coordinates. This is subtle but works really well, especially if your water texture incorporates reflected sky. Basically it makes it look like the reflected sky is moving at a different rate than the water, which it would be in reality. In the following movie, look at the edges to see the effect most clearly.

Now for the really clever bit. I add the same offset that I’m using to make the water undulate to the UV coordinate for that vertex. That is, if my undulation function says that the vertex is .015 above the normal height, I add .015 to the UV coordinates of that vertex. This has the effect of making the texture seem to squash and stretch as it moves. I think this does more to actually sell the idea that the water is flowing than anything else.

Now for one more thing. I actually add the height of each vertex in the terrain heightfield to the UV coordinates in the water heightfield. This has the effect of making the water “bunch up” around the land.
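Pulling the four UV tweaks together, the per-vertex calculation might look like this (the names and the small camera factor are my own guesses; the article doesn’t give exact constants):

```cpp
struct UV2 { float u, v; };

// Compose the water UV for one vertex from the four effects described
// above: time scroll, a little camera movement, the undulation offset,
// and the terrain height under that vertex.
inline UV2 WaterUV(int x, int y, float timeSec,
                   float camX, float camY,
                   float undulation, float terrainHeight)
{
    float u = (float)x
            + timeSec             // scroll with game time
            + camX * 0.1f         // tiny bit of camera movement (fake reflection)
            + undulation          // squash/stretch with the wave
            + terrainHeight;      // "bunch up" around land
    float v = (float)y
            + timeSec
            + camY * 0.1f
            + undulation
            + terrainHeight;
    return UV2{ u, v };
}
```

Each term is small on its own, but stacked together they sell flowing water far better than the geometry alone does.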

I could probably improve the water if I added another heightfield on top of the existing one, moving faster and in a different direction. If I did that, I would probably move the camera movement to the top heightfield, since it represents reflection movement. I may do this at some point, but I think Planitia’s water looks good enough for now.

And I think that’s about it. Planitia will be released with full source code so there won’t be any mysteries about how I did anything. If you’ve read this and you’re trying to replicate something I’ve done and are having trouble, please feel free to contact me at anthony.salter@gmail.com. And good luck with your own 3D programming endeavors!


Various Bits

Okay, it’s time for grab-bag post!

First, I’ve been playing a fair amount of Team Fortress 2 lately. My take: extremely polished and quite fun, even if you don’t have twitch skills any more. My only two caveats are that a) sudden death sucks – I much prefer maps where it isn’t possible and b) the spy appears to be overpowered. I know, I know, learn to play…except that the game is usually so freakin’ chaotic that trying to pick out which of your teammates might be a traitor at the same time gets really hard. Although it’s quite satisfying when you do. “Hey, why’s one of our scouts just hanging arouKILLKILLKILLKILL!” Overall, it’s the first online FPS I’ve played in years and I’m really glad I preordered it so I could get into the beta.

But I am pleased to report that TF2 has not prevented me from working on Planitia. I’m focusing on the AI right now. My computer players can now intelligently flatten the land around their villages so that their mana production increases…now they just need to be able to use god powers and build armies. I’m hoping to have another demo with rudimentary AI this weekend.

And finally, I usually have to pay for entertainment like this.


Finally!

Remember that horrible tease of a post I made a couple months back about how we here at Aspyr were working on cool things I couldn’t talk about?

Well, I can finally reveal one of them. Guitar Hero III will be released for the PC and Mac this fall. Who’s porting it? We are, baby!


The Fundamental Disconnect of Computer RPGs

I’ve mentioned earlier that I try to keep negativity off this blog. I also try not to read blogs that I consider overly negative, and yet one of the blogs I do read is Scorpia’s.

Scorpia is the grande dame of adventure/roleplaying. She got her start reviewing adventure and computer roleplaying games for Computer Gaming World decades ago. I always enjoyed reading her reviews, especially when she would gleefully excoriate some piece of crap she’d been forced to play. After CGW dropped her she got a web presence and kept going.

Now, Scorpia’s got two themes that she constantly returns to. The first is that CRPGs today suck compared to those of the past. The second is that CRPGs never seem to turn out as good as the paper-and-pencil RPGs she plays. And while she’s technically right on both counts, in the end complaining about them isn’t particularly useful.

It’s not useful to complain about the first because the first is all perception. In the end, CRPGs today are much better than their older counterparts. The problem is that back in The Day(tm), the genre was still being explored. Games could still surprise us with new methods of pulling us in. Older CRPGs used lots of tricks to suggest that the world didn’t strictly revolve around the player. I recall running across a random fight between a group of bandits and the town guards in Ultima VI and thinking, “Whoa, what is going on in this world that I don’t know about?” Answer: nothing, but the suggestion was there. Did I have the same experience when the same thing happened in Oblivion? Of course not.

Now those tricks can still work, but only on younger players who haven’t Seen It All like us grognards. Which is why, ultimately, complaining about this is futile.

It’s also not useful to complain about the second because of the dirty little secret of computer role-playing games. Which is that there’s no such thing as a computer role-playing game.

There are two aspects to paper-and-pencil role-playing. The first is the numerical aspect – the stats, the skills, the to-hit percentage and the amount of damage done per attack, as well as the improvements to all these numbers as the character progresses. Computers do this scintillatingly well, but in the end this isn’t roleplaying. It’s just character bookkeeping.

The other aspect of paper-and-pencil role-playing is collaborative storytelling between the players and the game master. Computers cannot do this at all and they’ll never be able to ever ever ever ever. Well, at least not until artificial intelligence is perfected and by then we’ll all be too busy running for our lives from the hunter-killer robots.

The best a computer “RPG” can possibly do is to marry a good pre-programmed story with a fun iteration of character bookkeeping. That’s it, and that’s all there will ever be. I guess this doesn’t bother me as much as it does her because I was never able to do as much paper-and-pencil roleplaying as I wanted. When I was growing up I got maybe one real roleplaying session a year, and the rest of the time I’d have to scratch my itch by playing solo RPG adventures like the Fighting Fantasy gamebooks. Which, goshwow, married good pre-programmed stories with fun iterations of character bookkeeping. So the transition to CRPGs wasn’t a painful one for me.

But this is the root of Scorpia’s dissatisfaction. I think she’d be happier if she either stopped playing them or stopped expecting them to be something they can never be.


Okay, I’m Back

Whew, glad that’s over. My wife went out of town to visit a friend and get a much-needed break from the chilluns…of course in order for her to do so, I had to watch them all myself. Thus, my hiatus.

But she’s back safe and sound and had a great time, and I’ve managed to get a full night’s sleep now, so everything is back to normal.

I actually managed to get a surprising amount of work done on Planitia while she was gone. I improved the “painted” terrains, got the lightning bolt finalized, got the earthquake working and did various other tweaks. There will be a demo release this weekend, although I don’t know if all the god powers will be done by then.

Actually…I think I’m just going to start doing releases on a weekly basis. That way you guys can just grab the latest and see what’s new instead of waiting on me. I have been loath to release something that feels “incomplete” and yet I should be getting as much feedback as possible on how the game plays so I can improve it. I need to just get over it and start releasing on a regular basis…so I will, with the first release being this Saturday.

I also discovered pSX over the weekend, which is an emulator for the original PlayStation. I’d played around with ePSXe (a different emulator) for a while but was ultimately unhappy with how poorly it ran most games. It seemed like ePSXe’s developers were more interested in “improving” the original PlayStation than truly emulating it.

Not so with pSX. The pSX devteam prides itself on the accuracy of its emulation, and I was astounded by how well all my old PlayStation games ran. Even Chrono Cross, which uses tons of graphical tricks and pushes the PlayStation to its limit, ran just fine. I also happen to have a PS2-to-USB converter so I could actually play the games with a real PlayStation controller. It was fantastic! If you’ve got old PlayStation games you want to play and you’re not in a position to be able to play them on the family TV any more (possibly because you have three kids), pSX is an excellent solution.


The Bioshock Demo

I wasn’t as taken with it as Tycho. The most obtrusive problem is the enemy design. In System Shock your enemies were programmed cyborgs which explained their simple enemy behavior. In System Shock 2 they were mindless mutants, which ditto. But the people in Bioshock are people, and I simply do not understand why every single person in the world is willing to fight to the death to kill me as soon as they see me. You can say, “Splicing drove them insane” except that there are several points during the demo when I hear splicers talking to each other rationally. Of course, as soon as they sense me, they turn into Quake 1 monsters and all I can do is shoot them.

Second flaw, in my opinion – very little backstory. Atlas starts barking orders at you as soon as you leave the bathysphere and tells you nothing about what is actually going on in the city. Yes, one of the charms of games like this is that you piece it together for yourself, but it’s just incongruous not to get ANY information from him…even if it’s misinformation. He doesn’t even tell you about plasmids; your character basically just walks up to a busted vending machine, picks up a syringe and plunges it into his forearm for no good reason (as far as he knows at the time).

The whole demo just feels kind of lazy, as if Ken Levine & Co are betting that you played previous Shock games and know the formula and thus they don’t have to spend time setting things up.

And one more niggling thing…the voice messages you get are vital both in terms of plot and to keep the gameplay flowing, and they are hard to understand because of all the “it’s a late 50’s recording device” scratchiness overlaid on them. I turned on subtitles, but that’s got its own problems…the subtitles actually run ahead of the audio you’re listening to, which is annoying, and the only things subtitled are recordings and transmissions – no in-game speech is subtitled.

The good? Goshwow, it’s pretty (though my computer can barely run it). The story does seem complex and interesting and there’s a suggestion on one of the voice recordings that Atlas is not being completely straight with us, so it may not just boil down to Ryan == Bad, Atlas == Good. Plasmids are fun. Shooting is fun (if the frame rate can stay high enough to make it possible). Holy crap the game is creepy in spots – excellent atmosphere.

I’ll almost certainly pick up the full game eventually…but unless the game improves immensely, I don’t think it’s going to beat System Shock 2 despite all the pretty.


Well, This Means No Sleep For Me…

The Bioshock PC demo will allegedly be released tonight at 7 PM EDT. I probably won’t manage to get a copy before midnight. Will I allow that to prevent me from playing it? Hell no.


The Arsecast

The Arsecast is a podcast by Graham Goring that covers news and reviews of retro and indie games. I found out about the Arsecast a few months ago just as he released episode eight…in which he announced that he was stopping. So I didn’t feel the need to link him then. Fortunately he is continuing the podcast in a new format with Bob Fearon, and thus I feel secure that if I link to the podcast now there will be new stuff in the future.

Now, I try to keep the swearing and negativity on this blog to a minimum. Graham, being British, does no such thing. He glories in eviscerating bad games, usually with the foulest language possible. He is also quite effusive with his praise when a game merits it…though the language typically isn’t any better.

Needless to say, he’s absolutely hysterical. I can’t wait for him to review Planitia.

So go listen. Just make sure you’re eighteen. Or maybe twenty-one. Heck, sometimes I feel I’m not mature enough to listen to the Arsecast…