An example of a first-generation 3D game engine texture.
Color and shading are encoded into the color map.
What is a Normal Map?
In the early days of 3D video games, textures on surfaces
consisted of color information only. The texture would be a picture of
whatever material the designer was trying to convey - brick, stone,
vegetation, wood, etc. - with the shading painted directly into it along
with the color, to create the impression of a complex, dimensional
surface. In the earliest games (Doom, Marathon, Duke Nukem), in-game
lighting of surfaces was limited to the brightness levels the level
designer assigned to them to simulate lighting and shadowing. The effect
was harsh, sharp-edged, and only abstractly "realistic".
In the second generation of 3D games (Quake, Unreal), a light map was
combined with the color map to create a more realistic sense of shading
and soft shadowing. The trouble with this technique was that the shading
on a particular texture was often at variance with the perceived sources
of light in the scene in which it was used. For example, a texture might
be drawn as if it were lit from the upper left, but in a particular
scene the actual light source might be to the right of, or below, the
texture.
The color map and light map combined on a surface in a game engine. Although
the surface is realistically shaded by the light map, highlights and shadows
on the texture ignore which direction the light is actually coming from.
A color map that has no intrinsic shading property.
A height map that encodes the distance of the geometry from the camera
as shades of gray.
A new technique was developed, generically called "bump mapping". In
its simplest form, a surface carries two textures - a color map and a
second map that contains the height information for the surface. The
height map is a gray-scale image that encodes height as shades from
black (the lowest) to white (the highest). The game engine can use this
height map to render highlights and shadows on the surface according to
where the lights in the scene are located, resulting in vastly improved
surface realism.
The color map and height map combined in a game engine. The height map
allows the engine to render the surface as though it has been
illuminated by the light source in the game, so highlights and shading
appear to come from the correct direction. Because of the limitations of
this technique, the resulting surface appears flattened and lacking in
richness.
There is a limitation to simple height mapping techniques
- the game engine can portray only transitions in elevation, which
flattens out the height information the designer is trying to convey.
Fine details get blurred out of the bump map render, because a single
pixel contains no real data of its own - it is only in relation to its
neighbors that it carries any meaningful information. A 100% white
height map renders exactly the same as a 100% black height map. A game
engine looking at a height map has to sample a certain radius around
each pixel to determine whether that pixel sits level with its neighbors
or slopes up, down, to the left, or to the right, before it can render
the shading on the surface. This sampling slightly degrades the detail
that can be portrayed with the height map.
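
To make that neighbor sampling concrete, here is a minimal sketch, in
Python with NumPy, of how an engine might turn a gray-scale height map
into per-pixel shading. The function name shade_from_height_map, the
strength parameter, and the central-difference sampling are my own
simplifications for illustration, not any particular engine's code:

import numpy as np

def shade_from_height_map(height, light_dir, strength=1.0):
    """Approximate bump shading from a gray-scale height map.

    height    : 2D array of values in [0, 1] (0 = black/lowest, 1 = white/highest)
    light_dir : (x, y, z) direction from the surface toward the light
    strength  : how strongly height differences tilt the surface
    """
    # The slope at each pixel comes only from differences with its
    # neighbors; a single pixel carries no height information by itself.
    dz_dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dz_dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5

    # Build a per-pixel surface direction from the two slopes and normalize it.
    normals = np.dstack([-dz_dx * strength, -dz_dy * strength,
                         np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Lambertian term: how directly each pixel faces the light.
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)

# A 100% white map and a 100% black map have zero slope everywhere,
# so both shade as the same featureless, flat surface.
flat_white = shade_from_height_map(np.ones((4, 4)), (0.5, 0.5, 1.0))
flat_black = shade_from_height_map(np.zeros((4, 4)), (0.5, 0.5, 1.0))
assert np.allclose(flat_white, flat_black)

Because the slopes come entirely from differences between neighboring
pixels, a uniformly white map and a uniformly black map produce
identical results, which is exactly the limitation described above.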
A normal map of the texture geometry. This encodes X,Y,Z vector information
in the channels of an RGB image.
The red channel encodes X (horizontal) vector information.
The green channel encodes Y (vertical) vector information.
The blue channel encodes Z (depth) vector information.
Enter the "normal map". While a height map contains only one channel
of information - "Z" (height) - a normal map contains three: the "X",
"Y", and "Z" components of a direction vector. Thus, each pixel in a
normal map encodes which direction that particular point on the surface
is facing - its "normal vector". Each pixel in a normal map carries
meaningful information on its own, so details can be rendered more
crisply than with bump maps alone. This allows modern game engines (Doom
III) to portray the lighting on a surface far more realistically. A
properly constructed normal map can fool the eye into perceiving much
more complex 3D geometry on a simple surface. In theory, a normal map
can make a cube appear spherical, at least in terms of its shading (the
outline remains unchanged).
A combination of color map and normal map on a surface in a game
engine. Note how the surface appears much deeper and more geometrically
complex. The shading is much more accurate now, appearing as rich as the
original flat-shaded texture, but with the addition of realistic
highlights and shadows that react to the lighting around them.
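
As a rough illustration of what an engine does with those per-pixel
normals, the sketch below (again Python with NumPy; the function name
light_with_normal_map and the plain Lambert model are my own
simplifications) modulates each texel of a color map by how directly its
normal faces the light:

import numpy as np

def light_with_normal_map(color_map, normals, light_dir):
    """Per-pixel Lambert shading driven by a normal map.

    color_map : (H, W, 3) array of RGB values in the 0-1 range
    normals   : (H, W, 3) array of unit normal vectors (X right, Y up, Z out)
    light_dir : (x, y, z) direction from the surface toward the light
    """
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)

    # N . L gives the brightness of each texel for this light direction:
    # normals facing the light keep their full color, normals facing
    # away from it go dark.
    n_dot_l = np.clip(normals @ light, 0.0, 1.0)
    return color_map * n_dot_l[..., None]

Because the light direction is supplied at render time, the same texture
shades correctly wherever the lights in the scene happen to be - exactly
what the pre-baked light maps of the Quake era could not do.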
Normal maps need three channels to encode their information, which maps
conveniently onto a standard RGB image.
The red channel is used to encode normal vectors in the X direction. 100%
red indicates a vector facing right - an X normal direction of +1. 0%
red indicates a vector facing left - an X normal direction of -1. A 50%
value in the red channel indicates an X normal component of 0. Similarly,
the green channel encodes normal vectors in the Y direction. 100% green
indicates a vector facing up - a Y normal direction of +1. 0% green indicates
a vector facing down - a Y normal direction of -1. 50% value in the green
channel indicates a Y normal component of 0. The blue channel encodes
normal vectors in the Z direction. 100% blue points straight out of the
surface. 0% blue points straight behind the surface. A value of 50% in
the blue channel indicates a Z normal component of 0. Normal maps don't
contain values below 50% in the blue channel since these would be pointing
behind the surface.
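
That channel mapping boils down to rescaling each component of a unit
normal from the -1 to +1 range into the 0 to 1 (or 0 to 255) range of an
image, and back. A minimal sketch, assuming 8-bit channels (the function
names encode_normal and decode_normal are illustrative, not part of any
standard):

import numpy as np

def encode_normal(n):
    """Map a unit normal (x, y, z) to an 8-bit RGB pixel.

    Each component maps -1 -> 0, 0 -> 128 (about 50%), +1 -> 255.
    """
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

def decode_normal(rgb):
    """Recover the unit normal vector from an RGB pixel."""
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# A normal pointing straight out of the surface, (0, 0, +1), becomes the
# familiar pale-blue color that dominates most normal maps.
print(encode_normal((0.0, 0.0, 1.0)))   # [128 128 255]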
Where Can I Work With These Normal Maps?
As of the date of this writing, DOOM III is a future
glimmer. But there is a game engine that can be used as a testing ground
for developing familiarity with these texturing methods in advance - Tenebrae.
Tenebrae is an open-source modification of the old Quake 1 engine,
adding modern techniques such as real-time lighting and normal mapping
of surfaces.
These two techniques combined can give a budding game designer a very
rich and realistic game world in which to play.
How Can I Use My Current 3D Program To Make These
Normal Maps?
Texture designers have traditionally used painting programs to create
textures for games. More and more, though, textures are being created
from 3D geometry in rendering programs such as Lightwave, 3D Studio Max,
or Maya. The big advantage of using a 3D program to create textures,
aside from the powerful texturing tools these programs offer, is that
you can use the geometry directly to create the normal map to go along
with the color map. Video card manufacturers have created plug-ins that
let developers produce normal maps from a select group of 3D
applications. But what if your application of choice has no plug-in
available for it? This tutorial will show you how to use your current 3D
application to create normal maps. There is a bit of work involved, but
the results are well worth the extra effort. Please read the next page
to find out the details.