Saturday, 22 March 2014

Week 11 - Motion Blur

On Monday a lot of the studios got to show off their games to the rest of the class. Everybody's games are starting to come together and look really great, a huge step up from first semester.


Above is a screenshot of the new lighting system in our game. By next week I'll have screenshots of our entire first level instead of just one model.

Motion Blur:

Motion blur is an effect that is caused when the image being recorded or rendered changes during the recording of a single frame. This can be caused either by rapid movement or long exposure. 

There are many ways of implementing motion blur. There is an easy way that involves the use of the accumulation buffer. The downside to this method is that it requires rendering the scene multiple times per frame; if you want a lot of blur or have a lot of geometry, that can cause slowdowns.

- Draw the frame
- Load the accumulation buffer with a portion of the current frame
- Loop, drawing and accumulating the last n frames
- Display the final scene
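The steps above can be sketched in plain Python, with flat lists of pixel intensities standing in for rendered frames (all the data here is made up for illustration):

```python
# Accumulation-buffer motion blur: average the last n rendered frames.
# Frames are stood in for by flat lists of pixel intensities.

def accumulate_frames(frames, n):
    """Average the last n frames, mimicking the accumulation buffer."""
    recent = frames[-n:]
    accum = [0.0] * len(recent[0])
    # Accumulate a portion (1/n) of each frame into the buffer.
    for frame in recent:
        for i, pixel in enumerate(frame):
            accum[i] += pixel / len(recent)
    return accum

# A bright pixel moving left to right across three frames:
frames = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
blurred = accumulate_frames(frames, 3)
# Each pixel ends up at 1/3 intensity: the streak of motion blur.
```

Notice that producing one blurred output required three full frames, which is exactly the cost problem mentioned above.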

The more modern and effective way of doing motion blur involves the use of motion vectors. To do motion blur with motion vectors you calculate each pixel's screen-space velocity, then use that velocity to direct the blur. The calculation of this vector is done in the fragment shader on a per-pixel basis.
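Here's a quick sketch of the idea in Python rather than a real fragment shader. In a shader you'd reproject the pixel with this frame's and last frame's view-projection matrices; here those are collapsed to precomputed 2D screen positions, which is an assumption for illustration:

```python
# Per-pixel screen-space velocity: compare where the pixel is this frame
# with where it was last frame, then blur by sampling along that vector.

def screen_velocity(curr_pos, prev_pos):
    """Velocity vector a fragment shader would use to direct the blur."""
    return (curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])

def blur_along_velocity(sample, velocity, num_samples=4):
    """Average colour samples taken along the velocity vector."""
    total = 0.0
    for i in range(num_samples):
        t = i / (num_samples - 1)      # 0 .. 1 along the vector
        total += sample(velocity[0] * t, velocity[1] * t)
    return total / num_samples

vel = screen_velocity((0.5, 0.5), (0.3, 0.5))   # pixel moved +0.2 in x
# A toy "image": brightness falls off with x offset from the pixel.
blurred = blur_along_velocity(lambda x, y: 1.0 - x, vel)
```

The key difference from the accumulation-buffer approach: the scene is rendered once, and the blur happens entirely in screen space as a post-process.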



Monday, 17 March 2014

Week 10 - Depth of Field

This week we learned about depth of field and had an in-class competition on Friday.

Depth of field is an effect that causes objects that are out of focus to appear blurry.
Computer graphics uses the pinhole camera model, which only lets a single ray through and therefore produces perfectly sharp images. Real cameras use lenses with finite dimensions, which is what causes depth of field.
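The amount of blur for a point is usually described by its circle of confusion. A small sketch using the thin-lens model (the formula and all the numbers are my own illustration, not from lecture):

```python
# Thin-lens circle of confusion (CoC): how blurry a point at distance d
# looks when the lens is focused at distance s. A pinhole has aperture 0,
# so its CoC is always 0 -- perfectly sharp. Illustrative values only.

def circle_of_confusion(aperture, focal_length, focus_dist, obj_dist):
    """CoC diameter under the thin-lens model; 0 means in focus."""
    return (aperture * abs(obj_dist - focus_dist) / obj_dist
            * focal_length / (focus_dist - focal_length))

# Object exactly at the focus distance: no blur.
sharp = circle_of_confusion(0.025, 0.05, 2.0, 2.0)
# Object twice as far away: a visible blur circle.
blurry = circle_of_confusion(0.025, 0.05, 2.0, 4.0)
# A pinhole (aperture 0) keeps everything sharp regardless of distance.
pinhole = circle_of_confusion(0.0, 0.05, 2.0, 4.0)
```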



Depth of Field Implementation:
- Use destination alpha channel to store per-pixel depth and blurriness information.
- Use fragment shader for post-processing
- Downsample and pre-blur the image
- Use variable size filter kernel to approximate circle of confusion
- Blend between original and pre-blurred image for better image quality
- Take measures to prevent "leaking" sharp foreground into blurry background
- Pass the camera distance of three planes to the scene shaders:
     - Focal plane: points on this plane are in focus
     - Near plane: everything closer than this is fully blurred
     - Far plane: everything beyond this is fully blurred
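The last two steps of the list can be sketched as follows: map a pixel's depth to a blurriness factor using the three planes, then lerp between the original and pre-blurred image. The plane distances and colours are hypothetical values:

```python
# Map a pixel's camera depth to a blurriness factor in [0, 1] using the
# near, focal, and far planes, then blend sharp and pre-blurred colour.

def blurriness(depth, near, focal, far):
    """0 = in focus (on the focal plane), 1 = fully blurred."""
    if depth <= near or depth >= far:
        return 1.0
    if depth < focal:
        return (focal - depth) / (focal - near)   # in front of focus
    return (depth - focal) / (far - focal)        # behind focus

def composite(sharp, blurred, b):
    """Lerp between the original and the pre-blurred image."""
    return sharp * (1.0 - b) + blurred * b

near, focal, far = 1.0, 5.0, 20.0
b_focus = blurriness(5.0, near, focal, far)    # on the focal plane
b_far = blurriness(20.0, near, focal, far)     # at the far plane
pixel = composite(0.8, 0.2, b_focus)           # in-focus pixel stays sharp
```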

Monday, 10 March 2014

Week 9 - Lighting and Deferred lighting

This week we watched a conference video about God of War's lighting system and we also looked at deferred lighting.




 During the presentation they explained the process behind their shadows:

ZPrePass -> Cascade 2 -> WB Shadow Map -> Cascade 1 -> WB Shadow Map -> Cascade 0 -> WB Shadow Map -> Opaque -> Transparent + Effects + UI + Flip

I'm really interested in learning more about their use of the white buffer and just the uses of the white buffer in general.

Our second lecture this week was on deferred lighting.

Deferred lighting is a screen-space lighting technique where the lighting is postponed, or deferred, until a second pass, hence the name deferred lighting or shading.



One of the advantages of deferred lighting is that the lighting cost now depends on the number of lights instead of the amount of geometry, allowing you to have a lot more lights in your scene. Some downsides to deferred lighting are that it's difficult to do antialiasing, transparent objects still need to be handled separately, it's hard to use multiple materials, it consumes a lot of memory bandwidth, and shadows are still separate.
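The two passes can be shown in miniature. This is just a toy sketch in Python with made-up scene data, standing in for what a real G-buffer and lighting shader would do:

```python
# Deferred lighting in miniature: pass 1 writes geometry attributes to a
# G-buffer; pass 2 loops over lights and shades each pixel from the buffer.

def geometry_pass(fragments):
    """Store position, normal, and albedo per pixel -- no lighting yet."""
    return [{"pos": f[0], "normal": f[1], "albedo": f[2]} for f in fragments]

def lighting_pass(gbuffer, lights):
    """Shade every pixel once per light, independent of scene geometry."""
    out = []
    for px in gbuffer:
        colour = 0.0
        for light in lights:
            # Simple Lambert term from the stored normal.
            lx, ly, lz = light["dir"]
            nx, ny, nz = px["normal"]
            ndotl = max(0.0, nx * lx + ny * ly + nz * lz)
            colour += px["albedo"] * light["intensity"] * ndotl
        out.append(min(colour, 1.0))
    return out

frags = [((0, 0, 0), (0.0, 1.0, 0.0), 0.5),   # upward-facing pixel
         ((1, 0, 0), (0.0, -1.0, 0.0), 0.5)]  # downward-facing pixel
lights = [{"dir": (0.0, 1.0, 0.0), "intensity": 1.0}]
shaded = lighting_pass(geometry_pass(frags), lights)
# First pixel faces the light, second faces away.
```

Note how `lighting_pass` never touches the original geometry, only the per-pixel buffer; that's why the cost scales with lights times pixels rather than with scene complexity.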

Sunday, 2 March 2014

Week 8 - Midterm and Tidbits

On Monday we had our midterm for intermediate computer graphics. I think it was a pretty good midterm; it wasn't too difficult, but it also wasn't too easy.

During our lecture this week Dr. Hogue talked about little tidbits to add on to what we've already learned. First we talked about colours and the different ways of representing colour.  

We also talked about thresholding. Thresholding is an image processing effect that basically turns an image into a binary image (black and white). Thresholding is useful in technologies like QR code readers.
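Thresholding is simple enough to show in a couple of lines, with toy pixel values standing in for a grayscale image:

```python
# Thresholding: map each grayscale pixel to 0 or 1, producing the binary
# black-and-white image that, e.g., a QR reader works from.

def threshold(image, cutoff=0.5):
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

gray = [[0.1, 0.8],
        [0.6, 0.3]]
binary = threshold(gray)
```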
We also looked at several different convolution kernels like image sharpening, blurring and motion blur.
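A minimal convolution sketch to go with that, using the classic 3x3 sharpen kernel (centre 5, cross of -1s). Edge pixels are skipped for brevity, so the output is smaller than the input; the images are made-up examples:

```python
# Convolve a 3x3 kernel over an image. The sharpen kernel sums to 1,
# so flat regions are unchanged while edges and spikes are exaggerated.

SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

def convolve(image, kernel):
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):          # skip the border for brevity
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += image[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

flat = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]    # unchanged by sharpening
spike = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # exaggerated by sharpening
```

Swapping in a different kernel (a box blur, or a directional kernel for motion blur) is just a matter of changing the 3x3 weights.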