Lab: Texture Mapping

Estimated Time: 1 hour - DO IT, or you'll look over at your neighbor's screen and think "Holy Cow!  That looks awesome!" and then you'll feel bad because your code still uses plain old materials.

Terms: textures, texture mapping, UV coordinates, pixels, texels, magnification, minification, shimmering.


If you're already comfortable with how texture mapping works, you may consider jumping down to the "Loading Images" section.

You've probably already picked up on the fact that a vertex can be a complex thing.  Not only does it have a position in 3D space (x, y, z), but it can also have a color and a normal. In this lab, you're going to learn about the final piece of vertex information that we'll talk about in this course.  It's called a texture coordinate, but it's often just called a UV coordinate - or "UV" for short.  Oddly, in GLSL the actual component names are s and t, not u and v.

The basic idea behind texture mapping is that we're going to wrap an image (a.k.a. a texture) around a 3D model.  Conversely, you could think of it as how the mesh maps back to a 2D texture.  This isn't terribly difficult to do. Just like normals and colors, we'll associate an extra coordinate (yes, the UV coordinate) with each vertex that describes where in an image it can find its color (called a "texture element" - or texel).  Once each vertex knows where it maps back into the 2D texture, the fragment shader receives an interpolated coordinate to find the appropriate texel (i.e. map back into the 2D image).  And just for clarification, a pixel (or picture element) is the smallest light emitting speck on your screen.

Realize that UV coordinates *should* be between 0.0 and 1.0, such that 0.0 for the u value means "leftmost texel of the image", 0.5 means the middle and 1.0 means "rightmost".  A value of 0.0 for the v value could mean either top or bottom, depending on how you read in the image.  What happens if the values exceed 1.0 or are less than 0.0?  We can either repeat the texture (by just taking the fractional part of the number - called GL_REPEAT) or we can clamp the values (GL_CLAMP) so they are bound between 0.0 and 1.0.

Understanding UV coordinates

To understand how all of this works, consider the image below.  On the left, we have a texture full of texels.  On the right, we have a polygon that we wish to apply the image to. Realize that this polygon would normally be comprised of two triangles, but for this example, it doesn't matter. As described before, each vertex can also have an additional UV coordinate that describes which texel it maps to.  The coordinates are shown below, but are not to be confused with the position of each vertex - which could be anywhere in the 3D space.

Texture mapping

Note that the "size" of the image doesn't necessarily match the "size" of the polygon - but that doesn't matter either because the image will be "stretched" to fit the polygon!  It's not really stretching, though. Because the lower-left vertex has the UV of (0,0), it will clearly map to the bottom-left texel of the image.  The upper-right vertex with the UV of (1, 1) will map to the upper-right texel. All of the "in between" pixels on the polygon are interpolated to be values between 0 and 1, depending on their relative position to the vertices. 

Texture mapping

Minification, Magnification and Mipmapping

What would happen during texture mapping if we were to increase the size of the polygon to something really huge?  Think about that for a second.  Or what if you had a really tiny image? Either way, you'd have one texel that maps to several pixels in the final rendering!  This actually happens all the time in 3D video games when you get really close to an object, such as a wall.  We call this situation magnification.

Conversely, imagine that a polygon is really far off in the distance (i.e. it's really small) and you have a really large texture. You'd have several texels map to one pixel!  We call this minification.

In both cases, we don't have an exact 1-to-1 mapping between texels and pixels. We can handle minification and magnification in one of a few ways:

  1. We can pick the nearest texel. This is a fast operation, but in practice, it can look really bad!  For magnification, it can lead to blockiness, and for minification it can create a weird visual artifact: as objects move, the nearest texel chosen for a pixel can change abruptly from frame to frame, which is called shimmering.
  2. We can linearly interpolate between texels. This "costs" more than the nearest texel approach, but helps to reduce the problems described above.  For modern hardware, you can assume this is the option you would use.

As an extreme example, the images below show a texture that is 8x8 texels big (yes, incredibly small).  This has been applied to a polygon that is hundreds of pixels wide (in screen coordinates), so we have magnification going on.  The image on the left shows the results of choosing GL_NEAREST, such that many pixels map to exactly one texel. On the right, you can see the results of using GL_LINEAR - where the color of a pixel is determined by the linear interpolation between the colors of several texels.  Note that you get the same "star pattern" that you would in Gouraud shading.


Wouldn't it be nice if we could eliminate extreme examples of minification - such as when a huge texture maps to a small polygon?  Well, we can by using mipmapping. Basically, OpenGL will create several copies of the texture for you - each exactly half the width and height of the previous - and then it chooses the most appropriate one! There's a great image on Wikipedia that shows what this looks like.

Clamping and Repeating

Remember how I said that you should have texture coordinates between 0.0 and 1.0?  What happens if you don't? You have a few options:

  1. You can clamp the values between 0.0 and 1.0 - such that any value greater than 1.0 or less than 0.0 gets clamped to 1.0 or 0.0.  You can use GL_CLAMP for this option, but in practice, you don't use it very often.
  2. You can repeat the texture by using the fractional part of the UV coordinate.  For example, if you have a texture coordinate of (9.4, 2.7), you reduce this to the value (0.4, 0.7).  This is a common choice because, chances are, you don't want parts of your model to remain untextured.
  3. You can repeat the texture, but mirror it.  This is my personal favorite because it helps to eliminate situations where the left side of your image doesn't match up visually with the right side of your image (and the same is true for the top and bottom).  This is a really common approach for texturing large terrains.  Think about it.  You can have 1 giant plane and set its texture coordinates to range between (0, 0) and (100, 100) - so you would have 10,000 mirrored grass textures on the plane!






From left to right, the images above show 1) a polygon that has UVs ranging from (0, 0) to (1, 1), 2) a polygon with UVs that range to (4, 4) with GL_REPEAT, 3) the same polygon with GL_CLAMP and 4) the same polygon with GL_MIRRORED_REPEAT

Loading Images

Realize that no matter what "file type" our textures are, we ultimately have to get them into an array in our code.  In most graphics classes, this is why you see the checkerboard pattern all the time!  They're easy to generate with a nested loop and provide a couple of different colors. If you're really fancy, you can figure out how to do the same thing to create wood, marble and even clouds!

Textures should also be of the proper dimensions - where the width and height of an image are a power of two (e.g. 256x256 or 512x512)

However, in the real world, you typically want to work with .jpg, .gif, .png and other file formats.  The problem with these formats is that they use compression - making it difficult to extract the color data from the file.  Instead, I would recommend working with:

  1. Bitmaps - this is a common Windows file format with a very simple header (metadata) followed by the color data.  While the format supports compression, you don't have to use it.
  2. RAW - this has no header information at all!  It is what it sounds like - raw color information.  There are a few image programs, like IrfanView, that support this file format.
  3. Targa/TIFF - I haven't messed around with these, but I hear they are easy to work with.

In this lab, we'll be working with Bitmap files.  This way, we can read the header information to determine how large of an array we need to allocate and then start reading the color data.

OpenGL Code

The code to do texture mapping isn't terribly difficult if you understand the concepts above.  In general, you perform the following steps:

  1. glEnable (GL_TEXTURE_2D) - this turns texturing on.
  2. Load the UVs into the buffer when you load other vertex information
  3. Find the "vTextureCoord" variable in the shader and then tell it where to find its data in the buffer from step 1.  Remember, you do this with glVertexAttribPointer()!
  4. Load the texture into an array
  5. Create a buffer on the GPU to hold the texture
  6. Copy the image onto that buffer
  7. Set the minification and magnification preferences  (nearest or linear interpolation)
  8. Set the repeating preferences (clamping, repeating, mirroring)
  9. Activate a "texture unit" (usually texture unit 0) and bind the shader variable "texture" to that unit.

Here's some example code that does all of this.

GLuint tex_buffer_ID = -1;     //Note: if you're blindly copying and pasting this code, this line will clearly mess you up...
glGenTextures (1, &tex_buffer_ID);
glBindTexture (GL_TEXTURE_2D, tex_buffer_ID); // Make this the current texture buffer
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, bitmap_data);       // Copy the image onto the GPU
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);   // Set the repeating preferences
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);   // (in both directions)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);        // Set the minification and magnification preferences
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);        // (without this, the texture may render black!)
GLint texID = glGetAttribLocation(progID, "vTextureCoord");        // Find the vTextureCoord variable in the shader (returns -1 if not found)
glEnableVertexAttribArray(texID);                            // Turn that variable "on"
glVertexAttribPointer(texID, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(36*sizeof(GLfloat))); // and tell it where in the buffer to find the UVs     
glActiveTexture(GL_TEXTURE0);  // Activate texture unit 0
glUniform1i(glGetUniformLocation(progID, "texture"), 0); // and then tell texture unit 0 that the currently bound texture is the one it should use.

Shader Code

The code for the vertex shader is almost identical to what we've been using.  However, the code includes a "vTextureCoord" variable that we refer to in our OpenGL program.  It also includes a "texCoord" that goes out to the fragment shader:

in vec4 vPosition;     // From our OpenGL program!!
in vec2 vTextureCoord; // In from OpenGL
out vec2 texCoord;     // Going out to the fragment shader

// Remember, uniforms are the same for all vertices.
uniform mat4 p;         // This is the perspective matrix
uniform mat4 mv;        // This is the model-view matrix
uniform vec4 vColor;    // Coming in from your OpenGL program

void main () {
    gl_Position = p*mv*vPosition;
    texCoord = vTextureCoord;
}

The fragment shader is quite different!  Remember, it's now pulling its color information from the texture. It receives the interpolated "texCoord" from the vertex shader and also has a "texture" variable (that is bound in the last statement of our OpenGL code above).  This is the actual image.  So what we do is "sample" the texture variable using the texture coordinates. Here's a simplified version of the fragment shader:

out vec4 fColor;               // Final output
in vec2 texCoord;              // From the vertex shader (interpolated UV coordinate)
uniform sampler2D texture;      // This is texture!!  
void main () {
    fColor = texture2D (texture, texCoord);  // Get the texel from the texture using the texture coordinate
}

What you need to do...

Warning: if you blindly copy the code above, you're in trouble...  While it's technically correct, it's just for an example and doesn't match our project.

  1. First, download the starting point for the lab. You might need to set the paths again.
  2. Look in main.cpp.    The only thing that changed from the last lab is the call teapotModel->loadBitmap("../ShaderLab/teapot.bmp");
  3. Look at Model.h - specifically at the variables that are in Model. See the tex_buffer_ID, vTexCoord and texture?  Good. Understand what they are used for.
  4. Look at Model.cpp - specifically at the loadBitmap() method.  There are 5 things you'll need to code.  Understand what those lines do.
  5. Look at the render() method in Model.cpp - specifically the if (vTexCoord != -1) block.  The code inside works, so you shouldn't mess with it.  Those last two lines tell OpenGL that the currently bound texture should be placed into texture unit 0.