Saturday 8 February 2014

UOIT Game Dev - Development Blog 5 - Shadows

In terms of visuals, shadows play a very important role in games. They can make a simple, otherwise dull scene look beautiful. They add depth, letting the player judge how far away or how tall an object is, and they can give the player a sense of elevation through the shadow cast by a floating object.

(http://dragengine.rptd.ch/wiki/lib/exe/fetch.php/dragengine:modules:opengl:imprshalig.png)
(http://gameangst.com/wp-content/uploads/2010/01/psm0.jpg)

(http://www.geeks3d.com/public/jegx/200910/hd5770_shadow_mapping_near_light_8192x8192.jpg)

I could go on and on, but the main point is that shadows add a great deal of detail and realism to a game, which can make all the difference visually. There are a few common ways games render shadows in real time:

Screen Space Ambient Occlusion

(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiawKRcIOUzsZzdzS0OOXQszuMVE29ZStvvNj7hEttRa6ZFiuWqwNd5vww13Y1pBQGZpC3pz3l3N58YhOFC1DYXlqZQTTH7CUwS_N4PbYGB6p8l01Zw50rz-I4z9_HJr_gmnp-3WeANTfM/s1600/AO.jpg)

This technique basically takes the rendered depth of the scene, samples it, and darkens areas in which objects occlude one another. Essentially, where surfaces sit close to one another, light bouncing around that area loses intensity with each extra bounce, creating a slight darkness. So for each pixel of the rendered screen we sample a small kernel of neighboring pixels (the potential occluders), rotating and reflecting the sample kernel using a random-normal (noise) texture to reduce banding. This technique is quite effective at adding detail to a scene while sampling 16 or fewer times per pixel.
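To make the idea concrete, here's a toy CPU sketch of the occlusion test (real SSAO runs in a fragment shader with a hemisphere kernel and view-space positions; the function name and parameters here are mine, for illustration only):

```python
import math
import random

def ssao_factor(depth, x, y, radius=2, samples=8, bias=0.02):
    """Toy CPU sketch of SSAO on a 2D depth buffer.

    Counts how many sampled neighbors are closer to the camera than
    the pixel itself; the more occluders found, the darker the pixel.
    """
    h, w = len(depth), len(depth[0])
    occluded = 0
    for _ in range(samples):
        # Random offset inside a disc around the pixel (a real
        # implementation rotates a fixed kernel with a noise texture
        # instead of drawing fresh random numbers per pixel).
        ang = random.uniform(0, 2 * math.pi)
        r = random.uniform(1, radius)
        sx = min(max(int(x + r * math.cos(ang)), 0), w - 1)
        sy = min(max(int(y + r * math.sin(ang)), 0), h - 1)
        # Neighbor is in front of our pixel -> it occludes us.
        if depth[sy][sx] < depth[y][x] - bias:
            occluded += 1
    return 1.0 - occluded / samples  # 1 = fully lit, 0 = fully occluded
```

The `bias` term is the same trick shaders use to avoid a surface occluding itself due to depth precision.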

Radiosity

(http://upload.wikimedia.org/wikipedia/commons/5/55/Radiosity_Comparison.jpg)

While global illumination looks stunning, it simply can't be done in real time due to the number of rays needed to compute such a scene. Radiosity can be a solution, as it minimizes the amount of computation needed to produce a decently lit scene.

I think this link does a very good job of explaining the algorithm: (http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm)

Basically, this technique takes several passes. During each pass we go through every polygon in the scene and look at the scene from its perspective: the more light a polygon sees, the more lit it becomes, and after a few passes polygons also collect light bounced off other lit polygons.
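The multi-pass gathering idea can be sketched in a few lines. This is a simplification assuming precomputed form factors (how much each patch "sees" every other patch); the function and parameter names are mine:

```python
def radiosity_passes(emission, form_factors, reflectance, passes=3):
    """Gathering-style radiosity sketch.

    Each patch starts at its own emission; every pass adds light
    gathered from all other patches, scaled by the form factor and
    the receiving surface's reflectance. After a few passes,
    indirect (bounced) light accumulates.
    """
    n = len(emission)
    radiosity = list(emission)
    for _ in range(passes):
        gathered = []
        for i in range(n):
            incoming = sum(form_factors[i][j] * radiosity[j]
                           for j in range(n) if j != i)
            gathered.append(emission[i] + reflectance[i] * incoming)
        radiosity = gathered
    return radiosity
```

In the article linked above, the form factors come from rendering the scene from each patch's point of view, which is exactly the "look through their perspective" step.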

Shadow Mapping

This is probably the most widely used technique. It's most easily done with directional lights: we render the scene's depth from the light's point of view, creating a depth texture. This depth texture is then used to determine which pixels of the scene, from the camera's point of view, are in shadow. During the scene pass we convert each scene pixel into the light's coordinate space and check whether anything sits in front of it. This two-pass algorithm gives us a realistic shadow projected by the objects in the scene.
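The two passes can be shown with a deliberately tiny example: a top-down directional light over a 1D world, where the "depth texture" is just the closest-to-light occluder per column (names and the 1D setup are mine, not from a real engine):

```python
def build_shadow_map(occluders, width):
    """Pass 1: depth from the light's point of view.

    For a light looking straight down, each x column of the shadow
    map stores the highest (closest-to-light) occluder height.
    """
    shadow_map = [float('-inf')] * width
    for x, y in occluders:
        shadow_map[x] = max(shadow_map[x], y)
    return shadow_map

def in_shadow(shadow_map, x, y, bias=0.01):
    """Pass 2: a scene point is in shadow if the depth map records
    something between it and the light. The bias prevents surfaces
    from shadowing themselves ("shadow acne")."""
    return shadow_map[x] > y + bias
```

In a real renderer, pass 2 happens per fragment after transforming the point by the light's view-projection matrix, but the comparison is the same.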

Previous tests


I've played with some shadow mapping in the past, and I quickly realized the limitations of this algorithm. In the scene above you can see two things right away: the shadow is a bit pixelated, and there is a cut-off point near the top of the image. This is because shadow maps are stored in a texture with a limited resolution. While it worked fine for a small area of the scene, a single shadow map wouldn't work for a larger scene, as individual shadows would lose a lot of detail and become fuzzy.

In the prototype submitted for last semester, I essentially clipped the light's projection to the character's position, so there would always be a crisp shadow no matter where he goes.
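The clipping idea boils down to recentering the light's orthographic volume on the character every frame, so all of the shadow map's texels cover a small region. A minimal sketch of that bookkeeping (the function name, parameters, and dict layout are my own, not from the actual prototype):

```python
def light_ortho_bounds(character_pos, half_extent=5.0):
    """Recenter a directional light's orthographic projection on the
    character. A smaller half_extent means more shadow-map texels per
    world unit, hence a crisper shadow, at the cost of coverage."""
    cx, cy, cz = character_pos
    return {
        'left': cx - half_extent, 'right': cx + half_extent,
        'bottom': cy - half_extent, 'top': cy + half_extent,
    }
```

These bounds would then feed whatever orthographic-projection call the engine uses when rendering the light's depth pass.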


I don't know much about cascaded shadow mapping yet, but I assume it's something similar, where you have a shadow projection clipped to each region and then compute the final scene through multiple shadow maps.


