Wednesday, 3 December 2014

Game Engines Blog 5

This past week was very busy, with many deadlines quickly approaching. I spent most of the week working only on my water effects for homework, so this blog post will be about that. I'll start by showing off the final result:



I'm really happy with the way it turned out. Maybe next semester I'll implement it in our game, but I would have to change the style a bit; this water looks too realistic for our game's art style.

The entire process requires 3 rendering passes.

The first pass renders the reflection texture. To do that, a second camera is added: the reflection camera is placed at the regular camera's position with its Y component scaled by -1, and the point the camera looks at is scaled by -1 in Y as well. All objects below the water are clipped from the scene, and the scene from that camera's perspective is stored in a framebuffer object.

The second pass renders the refraction texture. It is similar to the reflection pass, except it is rendered from the main camera's perspective and all objects above the water are clipped from the scene. The result is stored in another framebuffer object.
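Both clipping setups can share a single vertex shader by passing the water plane in as a uniform and flipping its sign between the two passes. A minimal GLSL sketch of that idea (the uniform and attribute names are assumptions, not the homework's actual code):

#version 330 core

layout(location = 0) in vec3 a_position;

uniform mat4 u_modelViewProjection;
uniform mat4 u_model;
// (0, 1, 0, -waterHeight) keeps geometry above the water (reflection pass);
// (0, -1, 0, waterHeight) keeps geometry below it (refraction pass).
uniform vec4 u_clipPlane;

void main()
{
    vec4 worldPos = u_model * vec4(a_position, 1.0);

    // Anything with a negative distance to the plane gets clipped,
    // provided GL_CLIP_DISTANCE0 is enabled on the application side.
    gl_ClipDistance[0] = dot(worldPos, u_clipPlane);

    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
}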

The final pass is when the water is actually rendered. The reflection and refraction textures are passed into the water fragment shader, along with a number of other uniforms, including two more textures (a normal map and a dudv map) and the camera direction. The vertex shader does little beyond preparing variables for the fragment shader and displacing vertices based on the colours sampled from the normal map. The fragment shader is where everything happens: the reflection and refraction maps are sampled for their colour, and the UVs are distorted based on a constant distortion value set in the shader. The camera direction is used in the Fresnel term so that the water looks more transparent when you look straight at it than when you look at it from a grazing angle.
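As a rough sketch of what that final fragment shader can look like (the uniform names, the distortion constant, and the simple Fresnel approximation are all assumptions, not necessarily what the homework used):

#version 330 core

in vec2 v_texCoord;        // water-plane UVs from the vertex shader
in vec3 v_toCamera;        // world-space vector from the fragment to the camera
in vec4 v_clipSpace;       // clip-space position, for projective texturing

uniform sampler2D u_reflection;  // pass 1 result
uniform sampler2D u_refraction;  // pass 2 result
uniform sampler2D u_dudvMap;     // distortion offsets stored in RG
uniform float     u_time;        // drives the scrolling distortion

const float DISTORTION_STRENGTH = 0.02;

out vec4 fragColor;

void main()
{
    // Project the reflection/refraction textures onto the water quad.
    vec2 ndc = (v_clipSpace.xy / v_clipSpace.w) * 0.5 + 0.5;

    // Sample the dudv map and use it to wobble the UVs.
    vec2 distortion = texture(u_dudvMap, v_texCoord + vec2(u_time, 0.0)).rg;
    distortion = (distortion * 2.0 - 1.0) * DISTORTION_STRENGTH;

    vec2 reflectUV = clamp(vec2(ndc.x, 1.0 - ndc.y) + distortion, 0.001, 0.999);
    vec2 refractUV = clamp(ndc + distortion, 0.001, 0.999);

    vec4 reflectColor = texture(u_reflection, reflectUV);
    vec4 refractColor = texture(u_refraction, refractUV);

    // Fresnel: looking straight down gives mostly refraction (transparent),
    // a grazing angle gives mostly reflection.
    float fresnel = pow(1.0 - max(dot(normalize(v_toCamera), vec3(0.0, 1.0, 0.0)), 0.0), 3.0);

    fragColor = mix(refractColor, reflectColor, fresnel);
}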

A few mistakes that I made while trying to complete this question were:

The reflection camera position. At first I was just turning the camera upside down instead of scaling its position by -1, which put the objects on the wrong side. I also forgot to scale the look-at position, so reflections didn't show up where they should have.

Rendering order. Sometimes when I would run the program the objects in the reflection would end up behind the skybox or disappear when the skybox went over them. I fixed this by making sure the objects were the last things to be rendered during the render to texture passes.

Overall I am quite happy with the water effect I was able to create. Hopefully I can implement it into our game sometime next semester.

Friday, 28 November 2014

Game Engines Blog 4

In the past couple of weeks we managed to implement a lot of new things in our prototype. The main things we added were power-ups, AI, and the Bullet Physics engine.

The Bullet physics engine was fairly simple to integrate and use. One thing we need to change, though, is the way the environment rigid bodies are generated. When I first integrated the engine I just wanted to get it working, so since our first level is simple enough, I just created boxes manually and placed them around the map for collision. In the future we'll probably create the rigid bodies from meshes instead of by hand. Below is some of our Bullet code.



Our game, Kinetic Surge, is meant to be a multiplayer game, but we implemented a simple AI for demonstration purposes. The AI just moves in random directions, but that's all we really need to demonstrate the main mechanics of our game.

Lastly, we added a power-up system to our game. Currently we only have one type of power-up, which gives the player a 1.2x speed boost for 5 seconds. There are two locations on the map where power-ups can spawn, and they spawn every 30 seconds. In the future we plan on adding more power-ups, such as invisibility. Below is a screenshot of a power-up in our prototype.



I'm currently working on my water shaders for the homework questions. I'll probably post more about them in next week's blog post.

Friday, 7 November 2014

Game Engines Blog 3

For the past couple of weeks we have been learning about several topics related to graphics. Some of them I already knew about, such as deferred rendering and motion blur, but I did learn some new things. One of the new things I learned about was stencil buffers. Simply put, the stencil buffer is used to limit the area of rendering. It can be useful in many situations, such as improving shadows and reflections. I plan on looking into stencil buffers further, but probably not during this semester.

I upgraded my water shaders this past week. I added vertex displacement based on samples from a height map.
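A minimal GLSL sketch of that kind of height-map displacement in the vertex shader (the uniform and attribute names and the scroll speed are assumptions):

#version 330 core

layout(location = 0) in vec3 a_position;
layout(location = 1) in vec2 a_texCoord;

uniform sampler2D u_heightMap;
uniform float     u_displacementScale;   // how tall the waves get
uniform float     u_time;                // scrolls the height map so the waves move
uniform mat4      u_modelViewProjection;

out vec2 v_texCoord;

void main()
{
    v_texCoord = a_texCoord;

    // Sample the height map (any channel works for a greyscale map)
    // and push the vertex up along Y by that amount.
    float height = textureLod(u_heightMap, a_texCoord + vec2(u_time * 0.05, 0.0), 0.0).r;
    vec3 displaced = a_position + vec3(0.0, height * u_displacementScale, 0.0);

    gl_Position = u_modelViewProjection * vec4(displaced, 1.0);
}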

Now all that's missing is some cool lighting and some reflections. I plan on working on that next week to get the homework out of the way. I started looking into Fresnel reflections and I don't think it should be too difficult.

In terms of GDW, development has slowed down due to the approaching deadline of the homework questions. But after the MIGS trip next week, game development should continue 100%. I'm currently working on getting first-person camera controls working and after that I'll start working on some fluid character movement.

Friday, 17 October 2014

Game Engines Blog 2

Most of the lectures over the past couple of weeks have been reviews of topics we've already learned, to get them fresh in our minds. We went over things like vectors, matrices, and the math between them. We also went over the different spaces, such as screen space, tangent space, etc. I'm glad we went over these topics because it's been a while since I've reviewed them myself.


A couple weeks ago I began working on rendering water for a homework question. I finally got it working after several problems with updating uniform values in tloc; the problem was that I wasn't using shader operators in my code. I plan on continuing to work on the water questions and hopefully implement some really cool-looking water in our game. Currently my water is basically just a scrolling texture on a quad. Below is a screenshot of my simple water so far.
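A scrolling texture like this boils down to offsetting the UVs with time before sampling. A minimal GLSL sketch (the uniform names are assumptions):

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_waterTexture;
uniform float     u_time;          // seconds since start
uniform vec2      u_scrollSpeed;   // e.g. vec2(0.03, 0.01)

out vec4 fragColor;

void main()
{
    // Offset the UVs over time; GL_REPEAT wrapping makes it loop seamlessly.
    fragColor = texture(u_waterTexture, v_texCoord + u_scrollSpeed * u_time);
}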





In yesterday's tutorial we learned how to expand and create our own component systems. In the tutorial we made a simple health system. Right after the tutorial I started thinking about how what we did in class could translate to our game. I think a simple system, just like the health system we made, could be used for the stamina mechanic in our game.

Friday, 26 September 2014

Game Engines Blog 1

One of the most important topics we learned about during these first three weeks was entity component systems. We were shown two images to help us understand entity component systems a little better. Entities can be seen as a key, with the teeth of the key being its different components. Systems can be seen as locks which require an entity (key) to begin working.







We also looked at scene graphs, which are used for node parenting. Scene graphs can be extremely useful for understanding how objects in games should interact with each other. For example, when a character uses a bow and arrow, the arrow goes from being parented to the character, to the bow, and then to the environment.


I started playing around with TortoiseHg. Last year we actually made a repository for our game but we never ended up using it. After making a test repository and doing very simple commits, pushes, pulls and updates, I now understand how useful having a repository is. I actually wish we used a repository for our code last year because I feel like we had wasted way too much time just trying to resolve issues that would’ve been way easier to fix with simple version control. I’m actually kind of looking forward to keeping a repository of our game. It will be nice for keeping our project clean and neat and not having several folders of different versions of the game on our desktops.


Friday, 11 April 2014

Week 12 - Gamecon

There was no lecture on Friday due to Level Up. Our group wasn't chosen to showcase our game; all of the games this year were really impressive, and I hope that UOIT represented well at Level Up. Gamecon was on Monday and it was really tiring, but fun as well. I enjoyed looking at and playing all of the other games developed by UOIT game dev students.
As expected we spent the night before gamecon touching up our game. We added nice visual effects including bloom and better lighting. Here are a couple screenshots for comparison.

old

new

Unfortunately, implementing the better graphics presented us with other bugs to deal with. Our unit selection no longer worked correctly, and we had to implement a temporary fix that is really awkward to use. This selection issue should be fixed before the final submission.


Saturday, 22 March 2014

Week 11 - Motion Blur

On Monday a lot of the studios got to show off their games to the rest of the class. Everybody's games are starting to come together and look really great, a huge step up from first semester.


Above is a screenshot of the new lighting system in our game. By next week I'll have screenshots of our entire first level instead of just one model.

Motion Blur:

Motion blur is an effect that is caused when the image being recorded or rendered changes during the recording of a single frame. This can be caused either by rapid movement or long exposure. 

There are many ways of implementing motion blur. An easy way involves the accumulation buffer. The downside to this method is that it requires rendering each frame multiple times; if you want a lot of blur or have a lot of geometry, that can cause slowdowns.

- draw the frame
- load the accumulation buffer with a portion of the current frame
- loop and draw the last n frames and accumulate.
- display the final scene

The more modern and effective way of doing motion blur involves motion vectors. To do motion blur with motion vectors, you calculate each pixel's screen-space velocity and then use that velocity to do the blur. The calculation of this vector is done in the fragment shader on a per-pixel basis.
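As a rough GLSL sketch of the motion-vector approach for camera motion (not necessarily exactly what was covered in class), the velocity can be reconstructed from the depth buffer using the current and previous view-projection matrices; all of the names below are assumptions:

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_sceneColor;
uniform sampler2D u_sceneDepth;
uniform mat4      u_invViewProj;       // current frame, clip -> world
uniform mat4      u_prevViewProj;      // previous frame, world -> clip
uniform int       u_numSamples;        // e.g. 8

out vec4 fragColor;

void main()
{
    // Reconstruct the world position of this pixel from the depth buffer.
    float depth = texture(u_sceneDepth, v_texCoord).r;
    vec4 clipPos = vec4(v_texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 worldPos = u_invViewProj * clipPos;
    worldPos /= worldPos.w;

    // Where was that point on screen last frame?
    vec4 prevClip = u_prevViewProj * worldPos;
    vec2 prevUV = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

    // Screen-space velocity for this pixel, split across the samples.
    vec2 velocity = (v_texCoord - prevUV) / float(u_numSamples);

    // Blur by averaging samples along the velocity vector.
    vec3 color = vec3(0.0);
    vec2 uv = v_texCoord;
    for (int i = 0; i < u_numSamples; ++i)
    {
        color += texture(u_sceneColor, uv).rgb;
        uv -= velocity;
    }
    fragColor = vec4(color / float(u_numSamples), 1.0);
}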



Monday, 17 March 2014

Week 10 - Depth of Field

This week we learned about depth of field and had an in-class competition on Friday.

Depth of field is an effect that causes objects that are out of focus to appear blurry.
Computer graphics uses the pinhole camera model, which results in perfectly sharp images because it only lets a single ray through. Real cameras use lenses with finite dimensions, which is what causes depth of field.



Depth of Field Implementation (a rough shader sketch follows this list):
- Use destination alpha channel to store per-pixel depth and blurriness information.
- Use fragment shader for post-processing
- Downsample and pre-blur the image
- Use variable size filter kernel to approximate circle of confusion
- Blend between original and pre-blurred image for better image quality
- Take measures to prevent "leaking" sharp foreground into blurry background
- We pass the camera distance of three planes to scene shaders
     - Focal plane: points on this plane are in focus
     - Near plane: Everything closer than this is fully blurred
     - Far plane: Everything beyond the far plane is fully blurred
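A minimal GLSL sketch of how those three plane distances can drive the blur amount and the final blend, assuming a linear depth texture and a separately pre-blurred copy of the frame (the names and the linear falloff are assumptions):

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_sharpScene;     // full-resolution frame
uniform sampler2D u_blurredScene;   // downsampled, pre-blurred frame
uniform sampler2D u_depthTexture;   // linear view-space depth per pixel

uniform float u_focalPlane;   // fully in focus at this distance
uniform float u_nearPlane;    // fully blurred closer than this
uniform float u_farPlane;     // fully blurred beyond this

out vec4 fragColor;

void main()
{
    float depth = texture(u_depthTexture, v_texCoord).r;

    // 0 = sharp, 1 = fully blurred; linear falloff toward the near/far planes.
    float blur;
    if (depth < u_focalPlane)
        blur = (u_focalPlane - depth) / (u_focalPlane - u_nearPlane);
    else
        blur = (depth - u_focalPlane) / (u_farPlane - u_focalPlane);
    blur = clamp(blur, 0.0, 1.0);

    // Blend between the original image and the pre-blurred copy.
    vec3 sharp   = texture(u_sharpScene,   v_texCoord).rgb;
    vec3 blurred = texture(u_blurredScene, v_texCoord).rgb;
    fragColor = vec4(mix(sharp, blurred, blur), 1.0);
}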

Monday, 10 March 2014

Week 9 - Lighting and Deferred lighting

This week we watched a conference video about God of War's lighting system and we also looked at deferred lighting.




 During the presentation they explained the process behind their shadows:

ZPrePass -> Cascade 2 -> WB Shadow Map -> Cascade 1 -> WB Shadow Map -> Cascade 0 -> WB Shadow Map -> Opaque -> Transparent+Effects+UI+Flip

I'm really interested in learning more about their use of the white buffer and just the uses of the white buffer in general.

Our second lecture this week was on deferred lighting.

Deferred lighting is a screen-space lighting technique where the lighting is postponed, or deferred, until a second pass, hence the name deferred lighting or shading.



One of the advantages of deferred lighting is that the lighting cost depends on the number of lights rather than the amount of geometry, allowing you to have a lot more lights in your scene. Some downsides to deferred lighting are that it's difficult to do antialiasing, transparent objects still need to be rendered separately, it's hard to use multiple materials, it uses a lot of memory bandwidth, and shadows still have to be handled separately.
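As a rough GLSL sketch of the second (lighting) pass, assuming a G-buffer holding world-space positions, normals, and albedo, and a single point light (all of the names here are assumptions, not a specific implementation):

#version 330 core

in vec2 v_texCoord;

// G-buffer written during the geometry pass.
uniform sampler2D u_gPosition;   // world-space position
uniform sampler2D u_gNormal;     // world-space normal
uniform sampler2D u_gAlbedo;     // diffuse colour

uniform vec3 u_lightPos;
uniform vec3 u_lightColor;

out vec4 fragColor;

void main()
{
    vec3 position = texture(u_gPosition, v_texCoord).rgb;
    vec3 normal   = normalize(texture(u_gNormal, v_texCoord).rgb);
    vec3 albedo   = texture(u_gAlbedo, v_texCoord).rgb;

    // Basic Lambert diffuse plus a small ambient term; in a real renderer this
    // would loop over many lights, which is the whole point of deferring.
    vec3 toLight = normalize(u_lightPos - position);
    float diff = max(dot(normal, toLight), 0.0);

    vec3 color = albedo * (0.1 + diff * u_lightColor);
    fragColor = vec4(color, 1.0);
}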

Sunday, 2 March 2014

Week 8 - Midterm and Tidbits

On Monday we had our midterm for Intermediate Computer Graphics. I think it was a pretty good midterm; it wasn't too difficult, but it wasn't too easy either.

During our lecture this week Dr. Hogue talked about little tidbits to add on to what we've already learned. First we talked about colours and the different ways of representing colour.  

We also talked about thresholding. Thresholding is an image-processing effect that basically turns an image into a binary (black and white) image. Thresholding is useful in technologies like QR code readers.
We also looked at several different convolution kernels, such as image sharpening, blurring, and motion blur.
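A minimal GLSL sketch of thresholding as a fragment shader (the luminance weights are the standard Rec. 601 ones; the texture and uniform names are assumptions):

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_image;
uniform float     u_threshold;   // e.g. 0.5

out vec4 fragColor;

void main()
{
    vec3 color = texture(u_image, v_texCoord).rgb;

    // Convert to luminance, then output pure black or pure white.
    float luma = dot(color, vec3(0.299, 0.587, 0.114));
    float binary = step(u_threshold, luma);

    fragColor = vec4(vec3(binary), 1.0);
}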

Sunday, 16 February 2014

Week 6 - Brutal Legend and Midterm review

We didn't learn anything new this week but we did watch a video about the making of Brutal Legend to show how what we learned is applied in popular games. Our second lecture this week was just a review for our midterm after reading week. 


Overall the video was really interesting to watch and the developers made some really smart choices when developing Brutal Legend. One thing that really surprised me was how they chose to render the sky. They chose to make their sky one giant particle instead of using a traditional sky-box. They also talked a lot about how they did particle rendering and lighting. 


Saturday, 8 February 2014

Week 5 - Lighting and Shadow Maps

This week we talked about global illumination and shadow mapping. 


     Global illumination is a general name for a group of algorithms used to give more realistic lighting to a 3D scene. These algorithms allow objects to be lit not only by light rays coming directly from the light source but also by rays that have bounced off of other surfaces. As seen in the image above, the white surfaces are tinted with colour from the light rays that have bounced off of the green and red walls.

     Ray tracing is a technique that can be used to achieve more realistic lighting in a scene. It basically tries to simulate rays of light: rays are traced from a light source, checked for how they interact with the objects in the scene, and the scene is rendered accordingly.

 


Shadows:

     Shadows are caused by the absence of light in an area. In a 3D scene this is usually because there is an object in between the light source and another object, obstructing the light.



Shadow Maps:

The shadow-mapping algorithm we went over in class requires two rendering passes. During the first pass you render the scene from the light's viewpoint, drawing the blockers and storing the nearest z in the z-buffer; the resulting depth buffer is the shadow map. The second pass is done from the observer's viewpoint, and you compare the depth of each pixel against the shadow map to determine which pixels are in shadow.
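As a rough GLSL sketch of the second-pass comparison, assuming the first pass's depth buffer is bound as u_shadowMap and the vertex shader supplies the fragment's light-space position (the small bias is there to avoid shadow acne; all names are assumptions):

#version 330 core

in vec4 v_lightSpacePos;   // fragment position transformed by the light's view-projection
in vec3 v_normal;
in vec3 v_toLight;

uniform sampler2D u_shadowMap;   // depth buffer from the first (light) pass
uniform vec3      u_diffuseColor;

out vec4 fragColor;

void main()
{
    // Perspective divide and remap from [-1,1] to [0,1] to get shadow-map UVs + depth.
    vec3 projCoords = v_lightSpacePos.xyz / v_lightSpacePos.w;
    projCoords = projCoords * 0.5 + 0.5;

    // Depth of the nearest blocker seen by the light vs. this fragment's depth.
    float blockerDepth  = texture(u_shadowMap, projCoords.xy).r;
    float fragmentDepth = projCoords.z;

    float bias = 0.005;   // avoids self-shadowing ("shadow acne")
    float shadow = (fragmentDepth - bias > blockerDepth) ? 0.0 : 1.0;

    float diffuse = max(dot(normalize(v_normal), normalize(v_toLight)), 0.0);
    fragColor = vec4(u_diffuseColor * diffuse * shadow, 1.0);
}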




Sunday, 2 February 2014

Week 4 - Fullscreen Effects/ PostProcessing

This week we learned about full screen effects and post processing effects like blur and bloom. 

Blur: 

Blurring is done by applying a filter to the texture of your frame, where each pixel in the filter window is assigned a weighting.
In the image above each pixel is equally weighted (a box blur).

Gaussian blurring is done in a similar way, except the source pixels are not equally weighted; pixels closer to the centre are weighted higher. As a result, the bigger the window, the stronger the blur.
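A minimal GLSL sketch of one pass of a separable Gaussian blur over a 9-pixel window (the weights are a common 9-tap Gaussian; the texture and uniform names are assumptions). Running it once horizontally and once vertically gives the full 2D blur:

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_image;
uniform vec2      u_texelSize;    // 1.0 / texture resolution
uniform bool      u_horizontal;   // run once horizontally, once vertically

// Symmetric 9-tap Gaussian: centre weight plus four pairs.
const float weights[5] = float[](0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

out vec4 fragColor;

void main()
{
    vec2 dir = u_horizontal ? vec2(u_texelSize.x, 0.0) : vec2(0.0, u_texelSize.y);

    // Centre sample, then matching samples on either side, each scaled by its weight.
    vec3 result = texture(u_image, v_texCoord).rgb * weights[0];
    for (int i = 1; i < 5; ++i)
    {
        result += texture(u_image, v_texCoord + dir * float(i)).rgb * weights[i];
        result += texture(u_image, v_texCoord - dir * float(i)).rgb * weights[i];
    }

    fragColor = vec4(result, 1.0);
}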

HDR/Bloom:

HDR and bloom are post-processing effects that are actually quite simple to do; they are done with multiple passes before displaying the final image (a rough shader sketch of the bright-pass step follows the list below).

     1. Render 3D scene to offscreen framebuffer
     2. Highlight the bright areas (tone mapping)
     3. Apply a Gaussian blur to highlighted areas
     4. The final image is equal to the blurred frame + the initial 3D scene
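A minimal GLSL sketch of step 2, the bright-pass filter (the threshold value and names are assumptions); step 3 is a Gaussian blur like the one shown earlier, and step 4 simply adds the blurred result back onto the original scene:

#version 330 core

in vec2 v_texCoord;

uniform sampler2D u_scene;        // step 1: the scene rendered off-screen
uniform float     u_threshold;    // e.g. 0.8; only pixels brighter than this bloom

out vec4 fragColor;

void main()
{
    vec3 color = texture(u_scene, v_texCoord).rgb;
    float luma = dot(color, vec3(0.299, 0.587, 0.114));

    // Keep only the bright areas; everything else becomes black.
    // This result is then Gaussian-blurred (step 3) and added back on top
    // of the original scene (step 4) to produce the final bloomed image.
    fragColor = (luma > u_threshold) ? vec4(color, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}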



Above is an example of HDR and bloom in a video game, in this case The Legend of Zelda: The Wind Waker HD remake.

GDW:

We are planning on adding some post-processing effects to our game, and we are currently working on our framework to make these effects easier to implement.

Saturday, 25 January 2014

Week 3 - Lighting

This week we learned about lighting and the effects it can have on video games. 


Currently in our game we only have simple OpenGL lighting implemented. We do plan on adding better lighting and shadows to our game, and what we learned this week will help us do that.

Dr. Hogue talked about the importance of light in video games. Lighting creates mood and emotion, and it can also make things look more realistic.



Another important thing to note is that lighting is additive and is made up of four different components: emissive light, diffuse light, specular light, and ambient light.

emissive: Self-illumination which equally radiates in all directions from a surface.

diffuse: Reflection of light from a surface

specular: Bright highlight on an object caused by direct reflection

ambient: Light that affects all objects in a scene equally.


Dr. Hogue also showed us some of the shader code required for these lighting techniques. I was surprised at how little code is actually required to compute lighting, which seems so complicated.
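As a rough sketch (not the code shown in class), combining those four terms for a single point light in GLSL can look like this; the uniform names and the shininess value are assumptions:

#version 330 core

in vec3 v_normal;       // surface normal, world space
in vec3 v_toLight;      // fragment -> light
in vec3 v_toCamera;     // fragment -> camera

uniform vec3  u_emissive;      // self-illumination
uniform vec3  u_ambient;       // constant light applied to everything
uniform vec3  u_diffuseColor;  // surface colour
uniform vec3  u_lightColor;
uniform float u_shininess;     // e.g. 32.0

out vec4 fragColor;

void main()
{
    vec3 N = normalize(v_normal);
    vec3 L = normalize(v_toLight);
    vec3 V = normalize(v_toCamera);

    // Diffuse: light reflected off the surface, strongest when facing the light.
    float diff = max(dot(N, L), 0.0);

    // Specular: bright highlight caused by direct reflection toward the camera.
    vec3 R = reflect(-L, N);
    float spec = pow(max(dot(R, V), 0.0), u_shininess);

    // Lighting is additive: sum the four components.
    vec3 color = u_emissive
               + u_ambient * u_diffuseColor
               + diff * u_diffuseColor * u_lightColor
               + spec * u_lightColor;

    fragColor = vec4(color, 1.0);
}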


Finally, we looked at toon shading. I really like toon shading, and I didn't realize it was as simple as Dr. Hogue showed us. Toon shading is done by taking max(dot(N, L), 0) and using that value to pick how much light, or which colour band, the pixel gets. The results can be really beautiful.
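A minimal GLSL sketch of that idea, with max(dot(N, L), 0) snapped into a few discrete bands (the band thresholds and values are assumptions):

#version 330 core

in vec3 v_normal;
in vec3 v_toLight;

uniform vec3 u_baseColor;

out vec4 fragColor;

void main()
{
    // The usual diffuse term, clamped to zero...
    float intensity = max(dot(normalize(v_normal), normalize(v_toLight)), 0.0);

    // ...but snapped into discrete bands instead of used directly.
    float shade;
    if      (intensity > 0.95) shade = 1.0;
    else if (intensity > 0.50) shade = 0.7;
    else if (intensity > 0.25) shade = 0.4;
    else                       shade = 0.2;

    fragColor = vec4(u_baseColor * shade, 1.0);
}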



In terms of the progress of our game, we are currently working on modifying our model loader to use VBOs and VAOs. After this is complete we can start working on creating some really cool lighting and shader effects to make our game look awesome.

- Mark Henry 

Tuesday, 14 January 2014

Week 1 - Prelude

Thoughts

     This week we got an introduction to shaders. Last semester we learned a bit about shaders, but we basically coded extremely simple shaders that, for example, change the colour of a square. There are two main types of shaders: vertex shaders and fragment (pixel) shaders. After seeing examples of what can be done with shaders, like bump mapping and normal mapping, I'm really excited to improve our game from last semester.
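For scale, the kind of "extremely simple" shader mentioned here really is only a few lines. A fragment shader that fills a square with a uniform colour might look like this (a sketch, not the actual coursework code):

#version 330 core

uniform vec3 u_color;   // the colour to tint the square

out vec4 fragColor;

void main()
{
    fragColor = vec4(u_color, 1.0);
}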


Future Plans

     One thing I think we should do to improve our game is lower the poly count of our models and make them look nice with shaders instead of insane amounts of geometry. Currently our models consist of a lot of geometry, especially the character models. It hasn't been a problem since our game was just a prototype and not much was going on. Since this semester we plan on doing a lot more with our game, I think our engine will really benefit from lower poly models. I look forward to actually coding and implementing shaders.

Development Progress

Currently we are working on implementing shaders into our existing framework.




- Mark Henry