
Heightfield shading using Blinn-Phong

2023-02-11

Keywords: Computer graphics, 2D, Python
Introduction
========================================================================
A good way of making an otherwise flat-looking picture more visually attractive is to add an illusion of depth through shading.
This is true both in traditional drawing and in computer graphics.

The most basic shading methods in computer graphics use the Phong or the Blinn-Phong lighting models.
They are commonly covered in books on the topic, as well as in tutorials and blogs on getting started with _e.g._
OpenGL, such as [opengl-tutorial](http://www.opengl-tutorial.org/beginners-tutorials/tutorial-8-basic-shading/) and [learnopengl](https://learnopengl.com/Lighting/Basic-Lighting).

In this blog post I will show how this shading method can be applied to gray level images where the pixel values represent the height of _e.g._ a landscape, as illustrated in Figure [header].
Such images are known as [heightfields](https://en.wikipedia.org/wiki/Heightmap), and by applying shading a 3D depth effect can be added when viewing them.

![Figure [header]: **a)** A gray heightfield image of some half-spheres where upward bumps have been colored orange and downward bumps colored green. **b)** The heightfield can be seen as a 3D surface but can also be used as a 2D image which can still be shaded **c)**.](files/heightfield-shading/heightfield-shading-composed-3d-surface.png width=100%)

!!! Info
    _**From OpenGL and shaders to python and numpy**_

    I've been using this shading technique for almost ten years, but always with OpenGL and its shading language GLSL.

    Breaking down the method and implementing the shading in Python turned out to be a good learning experience that
    made me reflect on how this shading method actually works.

    It even made me find an error in the pseudo-code in the book I used as primary reference!

$\require{amsmath}$ $\require{color}$ $\newcommand{\Real}{\mathbb{R}}$ $\newcommand{\half}{\frac{1}{2}}$ $\newcommand{\xfat}{\textbf{x}}$ $\newcommand{\ambn}{s_{\text{ambn}}}$ $\newcommand{\diff}{s_{\text{diff}}}$ $\newcommand{\spec}{s_{\text{spec}}}$

Heightfields
========================================================================

Heightfields, bump maps and normal maps
--------------------------------------------------------------------------------

A heightfield is a two-dimensional scalar-valued function
\begin{equation}
u(x, y) = z
\label{eq:scalarfield}
\end{equation}
In the discrete setting this is simply a gray scale image where the local image intensity, $z$, represents the height at the spatial location $(x, y)$.

In computer graphics such images are sometimes referred to as _bump maps_ and are used to add shading detail to 3D models without adding any additional polygons. This is done by mapping them onto the model like textures and using the normal vectors from the _bump map_ as the local surface normals. If the surface normals of the _bump map_ are precomputed, the result is the corresponding _normal map_. Since I'm not mapping the scalar images onto any 3D model in these examples I refer to them as heightfields.

Computing the surface normal vectors
--------------------------------------------------------------------------------

In order to apply shading to a heightfield we first need to compute its **surface normals**. First we rewrite the heightfield equation as an implicit 3D surface
\begin{equation}
h(x,y,z) = u(x, y) - z = 0
\label{eq:heightfield}
\end{equation}
We then note that the 3D gradient of this function is parallel to the normal vectors of the surface represented by the heightfield.
\begin{equation}
\mathbf{n} \propto \nabla h(x,y,z) = (\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, -1)
\label{eq:diffeqdef}
\end{equation}
Finally we normalize this to get surface normal vectors of length 1.
\begin{equation}
\mathbf{n} = \frac{\nabla h(x,y,z)}{||\nabla h(x,y,z)||}
\label{eq:normalize}
\end{equation}
The shading process is illustrated in Figure [heightfield-normal], with the heightfield $u(x,y)$ shown in Figure [heightfield-normal] **a)** and its normals in Figure [heightfield-normal] **c)**.

![Figure [heightfield-normal]: A heightfield with flat colors can be shaded by calculating surface normals and applying a lighting model. This brings a 3D depth to the otherwise flat image.](files/heightfield-shading/heightfield-shading-composed.png width=100%)
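As a minimal sketch of these two steps, the normals can also be computed with `np.gradient`. This is an assumed stand-in for the explicit finite-difference kernels used in the full implementation later in the post, with slightly different boundary handling, and it assumes $x$ runs along the second image axis.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def heightfield_normals(u: np.ndarray) -> np.ndarray:
    """Sketch: n = (du/dx, du/dy, -1), normalized to unit length."""
    ## np.gradient uses central differences in the interior and
    ## one-sided differences at the image borders.
    du_dy, du_dx = np.gradient(u)  ## assuming x along the second axis
    n = np.stack([du_dx, du_dy, -np.ones_like(u)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~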
Shading
========================================================================

The Phong lighting model
-------------------------------------------------------------------------------

Shading in computer graphics is the process of applying lighting to a surface. This is typically done in one of two ways: either through a global illumination model using _e.g._ [ray tracing](https://raytracing.github.io/books/RayTracingInOneWeekend.html), or by using a simpler approximation where light does not bounce around the scene but is calculated only locally from the surface orientation relative to the direction of the light. Since the second family of methods is computationally much faster it has been popular in real-time computer graphics. Probably the most popular such method is the one by [Phong](https://en.wikipedia.org/wiki/Phong_reflection_model) [#phong1975], first proposed in the 1970s, with a modification suggested shortly after by [Blinn](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model) [#blinn1977].

In the Phong lighting model the lit surface colors are decomposed into an **ambient**, a **diffuse** and a **specular** component, all of which are illustrated in Figure [phong-components]. The Phong and Blinn-Phong lighting models differ only in how the specular component in Figure [phong-components] c) is calculated.

![Figure [phong-components]: The three Phong shading components in **a)**, **b)** and **c)** are added to each other to compose the final shading. The result of this shading method is the characteristic, plastic, look seen in **d)**.](files/heightfield-shading/heightfield-shading-phong-components.png width=100%)

We now consider how to apply the light locally to a single point on the surface. In this context we are shading a heightfield that is represented by an image, so we will perform this calculation for each pixel in the image.

!!! Warning
    _**Simplified light calculations**_

    I have implemented the lighting model with fewer features than what is typically seen even in minimal examples in the context of 3D rendering. First of all, the color of the light is always white. Second, I only use a single light source. Adding more is not very technically challenging, but balancing and composing the lights becomes more intricate.

    In a full 3D rendering scene this would be a bit limiting, but here the goal is only to add some sense of depth to an otherwise flat image.
The light calculation uses the following vectors, illustrated in Figure [phong]:

\begin{equation*}
\left\{
\begin{aligned}
\mathbf{n}: & ~\text{Surface normal.} \\
\mathbf{v}: & ~\text{Direction to viewer.} \\
\mathbf{l}: & ~\text{Direction to light source.} \\
\mathbf{r}: & ~\text{Direction of light reflection.} \\
\mathbf{h}: & ~\text{Half-way vector between viewer and light source.} \\
\end{aligned}
\right.
\end{equation*}

The half-way vector $\mathbf{h}$ is used in the modified shading model [#blinn1977]. It is computationally slightly faster than the original method by Phong because $\mathbf{h}$ is cheaper to calculate than $\mathbf{r}$.

We begin with the **ambient**, flat lighting. The ambient component, $\ambn$, in Figure [phong-components] a) is simply a scalar, typically set to be in the interval $[0, 1]$.

\begin{equation*}
\ambn \in [0, 1]
\end{equation*}

The ambient term softens the contrast with the unlit areas by removing the completely dark parts seen in Figure [phong-components] b).
![Figure [phong]: Vectors involved in the Phong and Blinn-Phong lighting calculations. Inspired by an illustration at [wikipedia](https://en.wikipedia.org/wiki/Phong_reflection_model).](files/illustrations/blinn-phong-illustration.svg width=100%)
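For a single surface point these vectors can be written down directly. A minimal sketch, with an arbitrarily chosen normal, light and viewer:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

n = unit(np.array([0.1, 0.2, 1.0]))   ## surface normal (assumed)
l = unit(np.array([1.0, 1.0, 1.0]))   ## direction to the light (assumed)
v = unit(np.array([0.0, 0.0, 1.0]))   ## direction to the viewer (assumed)
r = unit(2.0 * np.dot(n, l) * n - l)  ## reflection of l about n
h = unit(v + l)                       ## Blinn-Phong half-way vector
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~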
The second one is the **diffuse** component that models light which is scattered equally in all directions. The diffuse reflection phenomenon is shown in Figure [diffusive], which illustrates how the model is valid if the surface is very rough at a microscopic scale.

As light hits the surface at an angle, $\theta$, the intensity is reduced since the light spreads across a larger surface area. As the surface area grows the intensity decreases proportionally to $\cos (\theta)$, which is illustrated in Figure [lambertian]. This relationship is called Lambert's cosine law: a surface perpendicular to the light direction will exhibit maximum intensity, which decreases as the angle between them grows. We note that $\cos (\theta) = \mathbf{n} \cdot \mathbf{l}$.

Indeed, since the light is scattered equally in all directions, this component is independent of the direction to the viewer and only depends on the direction of the surface normal $\mathbf{n}$ and the direction to the light source $\mathbf{l}$. If the surface normal is angled in the opposite direction to the light we get a negative valued projection, which we remove by setting the value to 0.

\begin{equation*}
\diff = \max(\mathbf{n} \cdot \mathbf{l},~0) ~ \in [0, 1]
\end{equation*}
![Figure [diffusive]: A perfectly diffusive surface will reflect light equally in all directions regardless of the direction to the light source. Inspired by an illustration in [#angel2011].](files/illustrations/diffusive-surface-illustration.svg width=100%)

![Figure [lambertian]: The Lambertian cosine law describes how light hitting a surface at an angle is spread across a larger area, thus reducing the light intensity per area.](files/illustrations/lambertian-cosine-law-illustration.svg width=100%)
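As a minimal vectorized sketch of this term, assuming `n` is an NxMx3 array of unit normals and `l` a unit 3-vector pointing toward a directional light (both names are chosen for illustration):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def diffuse_term(n: np.ndarray, l: np.ndarray) -> np.ndarray:
    """s_diff = max(n . l, 0), computed for every pixel at once."""
    return np.maximum(np.einsum('ijk,k->ij', n, l), 0.0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~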
Lastly we have the **specular** component that adds highlights that make the surface look shiny in certain spots. This models perfectly, or nearly perfectly, reflected light, as shown in Figure [specular]. In the Phong model the specular component depends on the direction of the reflected light $\mathbf{r}$ and the direction to the viewer $\mathbf{v}$. If the reflected light is angled in the opposite direction from the viewer we get a negative valued projection, which we remove by setting the value to 0, like we did for the diffuse component. The term is raised to a power by a parameter that determines the amount of _shine_ and, finally, it is multiplied by a second parameter that determines the _strength_ of the specular highlights.

\begin{equation*}
\text{spec} = \max(\mathbf{r} \cdot \mathbf{v}, ~0) ~ \in [0, 1]
\end{equation*}

\begin{equation*}
\spec = \text{strength} \times \text{spec}^{\text{shine}}
\end{equation*}
![Figure [specular]: A perfectly specular surface will reflect light only at a certain angle that depends on the surface normal and the direction of the light. The amount of this reflected light that aligns with the direction to the viewer determines the strength of the specular highlight. Inspired by an illustration in [#angel2011].](files/illustrations/specular-surface-illustration.svg width=100%)
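A matching sketch of the Phong specular term, under the same illustrative assumptions, with `r` an NxMx3 array of unit reflection vectors and `v` the unit directions to the viewer:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def specular_term(r: np.ndarray, v: np.ndarray,
                  shine: float, strength: float) -> np.ndarray:
    """s_spec = strength * max(r . v, 0)^shine, per pixel."""
    spec = np.maximum(np.einsum('ijk,ijk->ij', r, v), 0.0)
    return strength * np.power(spec, shine)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~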
!!! Tip
    _**Blinn-Phong specular light**_

    The specular term can also be computed using the approximative Blinn-Phong model that instead uses the half-way vector $\mathbf{h}$ and the surface normal $\mathbf{n}$. The advantage of this is due to $\mathbf{h}$ being cheaper to calculate than $\mathbf{r}$. The Blinn-Phong specular component is computed as
    \begin{equation*}
    \spec = \text{strength} \times \max(\mathbf{n} \cdot \mathbf{h}, ~0) ^{\text{shine}}
    \end{equation*}

If the light comes from a nearby source, such as a light bulb, we can model an **attenuation** by reducing the intensity as the distance, $d$, from the surface to the light source increases. In vacuum this fall-off would be the inverse square of the distance, but we can also have other phenomena reducing the intensity, like a slight absorption in air. A suggested way of modelling this is to use a quadratic function with three parameters that can be tuned.

\begin{equation*}
\text{attenuation} = a + bd + cd^{2}
\end{equation*}

Note that the attenuation never affects the ambient term and is only valid for small nearby light sources. Directional lights, that model _e.g._ the sun, should not be modelled this way. In the examples I used very little attenuation since the goal here was to create a more striking image and not necessarily a realistically shaded one.

**Finally**, the three components are added together. The ambient and diffuse scalar values are added and multiplied with the material color. The specular scalar is multiplied with the color of the light, which in this implementation is always white, and added to produce the resulting color of that pixel in the heightfield.

\begin{equation*}
\text{color}_{\text{Phong}} = (\ambn + \frac{1}{\text{attenuation}} \times \diff) \times \text{color}_{\text{surface}} + \frac{1}{\text{attenuation}} \times \spec \times \text{color}_{\text{light}}
\end{equation*}

Directional and point lights
-------------------------------------------------------------------------------

I have implemented two types of light sources. A **directional light** represents a very strong light source at a very large distance, like the sun. The direction and intensity of the light is the same for every point of the heightfield. For the second type, the **point light**, the direction to the light has to be calculated for every pixel in the heightfield.

Gamma correction and normalization
-------------------------------------------------------------------------------

[Gamma correction](https://en.wikipedia.org/wiki/Gamma_correction) is a nonlinear transform that adjusts the brightness of the image and can be used to give a softer and less dark result. A [simple gamma correction](https://raytracing.github.io/books/RayTracingInOneWeekend.html#diffusematerials/usinggammacorrectionforaccuratecolorintensity) technique is to raise the pixel values of each channel to some power, $\gamma$, in the range 0 to 1.

\begin{equation*}
\text{color}_{\text{Gamma corrected}} = \text{color}_{\text{Phong}}^{\gamma}, \quad \gamma \in [0, 1]
\end{equation*}

Finally, I normalize the resulting image such that the values match the desired output range, which is often either 0 to 1 or 0 to 255.
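To tie the pieces together before the full implementation below, here is a minimal single-pixel sketch of the complete Blinn-Phong calculation. All vectors and parameter values are arbitrary choices for illustration, not values used in the examples, and clipping stands in for the per-image normalization used later.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

## Assumed inputs for one surface point
n = unit(np.array([0.1, 0.2, 1.0]))       ## surface normal
l = unit(np.array([1.0, 1.0, 1.0]))       ## direction to the light
v = unit(np.array([0.0, 0.0, 1.0]))       ## direction to the viewer
surface_color = np.array([1.0, 0.6, 0.2])
light_color = np.array([1.0, 1.0, 1.0])   ## always white in this post
ambn, shine, strength, gamma = 0.1, 32.0, 0.5, 0.7
a, b, c, d = 1.0, 0.0, 0.0, 1.0           ## attenuation parameters, distance

diff = max(np.dot(n, l), 0.0)             ## Lambertian diffuse term
h = unit(v + l)                           ## Blinn-Phong half-way vector
spec = strength * max(np.dot(n, h), 0.0) ** shine
attenuation = a + b * d + c * d * d

color = (ambn + diff / attenuation) * surface_color \
      + (spec / attenuation) * light_color
color = np.clip(color, 0.0, 1.0) ** gamma ## gamma correction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~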
Python implementation
========================================================================

**Numpy linear algebra**

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """
    Normalize all 3D vectors stored in a NxMx3 numpy array.

    Return a NxMx3 numpy array.
    """
    l = np.sqrt(np.einsum('...i,...i', v, v))
    v = np.copy(v)
    v[:,:,0] /= l
    v[:,:,1] /= l
    v[:,:,2] /= l
    return v

def dot(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """
    Compute the dot product between all 3D vectors stored in a NxMx3 numpy
    array and a single 3D vector or with other 3D vectors stored in another
    NxMx3 numpy array.

    Return a NxM numpy array.
    """
    c = np.zeros((a.shape[0], a.shape[1], 3), dtype=np.float64)
    if len(b.shape) == 3:
        c[:,:,0] = a[:,:,0] * b[:,:,0]
        c[:,:,1] = a[:,:,1] * b[:,:,1]
        c[:,:,2] = a[:,:,2] * b[:,:,2]
    elif len(b.shape) == 1:
        c[:,:,0] = a[:,:,0] * b[0]
        c[:,:,1] = a[:,:,1] * b[1]
        c[:,:,2] = a[:,:,2] * b[2]
    return c[:,:,0] + c[:,:,1] + c[:,:,2]

def reflect(n: np.ndarray, v: np.ndarray) -> np.ndarray:
    """
    Compute the reflection of all 3D vectors stored in a NxMx3 numpy array
    against a single 3D vector or with other 3D vectors stored in another
    NxMx3 numpy array.

    The reflection r of a vector v against a surface with normal n is
    calculated as:
        r = 2 * dot(n,v) * n - v

    Return a NxMx3 numpy array.
    """
    if len(v.shape) == 3:
        rn = 2.0 * dot(n, v)
        r = np.zeros(v.shape, dtype=v.dtype)
        r[:,:,0] = rn[:,:] * n[:,:,0] - v[:,:,0]
        r[:,:,1] = rn[:,:] * n[:,:,1] - v[:,:,1]
        r[:,:,2] = rn[:,:] * n[:,:,2] - v[:,:,2]
    elif len(v.shape) == 1:
        ## dot(n, v) is NxM; add a trailing axis so it broadcasts
        ## against the NxMx3 array of normals.
        r = 2.0 * dot(n, v)[:,:,None] * n - v
    return r
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Heightfield surface normal**

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np
from scipy import signal

def computeHeightmapNormal(u: np.ndarray) -> np.ndarray:
    """
    Compute the normal vectors of all pixels in the heightfield image.
        n = (du/dx, du/dy, -1)
    The derivatives are computed using correlation with a central finite
    difference mask, du[i,j]/dx = 1/2 * (u[i,j+1] - u[i,j-1]).
    The correlation operation will apply the kernel with the elements as
    ordered, as opposed to convolution which applies them in reversed order.
    """
    kx = (1.0/2.0) * np.array([[ 0.0, 0.0, 0.0],
                               [-1.0, 0.0, 1.0],
                               [ 0.0, 0.0, 0.0]])
    ky = (1.0/2.0) * np.array([[ 0.0,-1.0, 0.0],
                               [ 0.0, 0.0, 0.0],
                               [ 0.0, 1.0, 0.0]])
    dx = signal.correlate2d(u, kx, mode='same', boundary='fill', fillvalue=0)
    dy = signal.correlate2d(u, ky, mode='same', boundary='fill', fillvalue=0)
    n = np.zeros((u.shape[0], u.shape[1], 3), dtype=np.float64)
    n[:,:,0] = dx
    n[:,:,1] = dy
    n[:,:,2] = -1
    n = normalize(n)
    return n
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
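A quick sanity check of `computeHeightmapNormal` on a linear ramp: for $u = 0.5x$, with $x$ along the second image axis to match the kernel above, every interior normal should be proportional to $(0.5, 0, -1)$ before normalization. The ramp below is an arbitrary test input and assumes `normalize` from the first code block is in scope.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

## A 64x64 ramp increasing by 0.5 per pixel along the second axis.
u = 0.5 * np.tile(np.arange(64, dtype=np.float64), (64, 1))
n = computeHeightmapNormal(u)
## (0.5, 0, -1) normalized is roughly (0.447, 0.0, -0.894).
print(n[32, 32])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~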
**Directional and point light**

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from typing import Tuple

import numpy as np

def computeShadedDirectionalLight(
        u: np.ndarray,
        parameters: ShadingParameters,
    ) -> Tuple[np.ndarray, np.ndarray]:
    """
    Compute the direction and the distance to the light source for each
    pixel in the heightfield.

    For a directional light source the direction is the same everywhere.
    The distance will not be used and is set to one for all points.
    """
    N = u.shape[0]
    M = u.shape[1]
    l = np.zeros((N, M, 3), dtype=np.float64)
    light_direction = parameters.lightDirection()
    light_direction = -light_direction / np.linalg.norm(light_direction)
    l[:,:,0] = light_direction[0]
    l[:,:,1] = light_direction[1]
    l[:,:,2] = light_direction[2]
    light_distance = np.ones(u.shape)
    return l, light_distance

def computeShadedPointLight(
        u: np.ndarray,
        grid: ImageGrid,
        parameters: ShadingParameters,
    ) -> Tuple[np.ndarray, np.ndarray]:
    """
    Compute the direction and the distance to the light source for each
    pixel in the heightfield.

    The point light is represented by a 3D position and the direction from
    each pixel position of the heightfield is calculated. The distance is
    calculated at the same time.
    """
    N = u.shape[0]
    M = u.shape[1]
    heightfield_pos = np.zeros((N, M, 3), dtype=np.float64)
    l = np.zeros((N, M, 3), dtype=np.float64)
    light_pos = parameters.lightPosition()
    heightfield_pos[:,:,0] = grid.X
    heightfield_pos[:,:,1] = grid.Y
    heightfield_pos[:,:,2] = np.zeros(grid.X.shape)
    light_vector = heightfield_pos - light_pos
    light_distance = np.linalg.norm(light_vector, axis=2)
    l[:,:,0] = -light_vector[:,:,0] / light_distance
    l[:,:,1] = -light_vector[:,:,1] / light_distance
    l[:,:,2] = -light_vector[:,:,2] / light_distance
    l = normalize(l)
    return l, light_distance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Phong and Blinn-Phong heightfield shading**

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
import numpy as np

def computeShaded(
        u: np.ndarray,
        c: np.ndarray,
        grid: ImageGrid,
        parameters: ShadingParameters,
    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
    """
    Compute the Phong or Blinn-Phong shaded heightfield, u, given a matching
    color image, an image grid that stores physical positions of all
    heightfield pixels in x and y, and a set of shading parameters including
    light properties.

    Return several result images corresponding to partial and final results
    as well as the heightfield surface normals, n.

    Returns u_diff_result, u_spec_result, u_ambn_only, u_diff_only,
    u_spec_only, n
    """
    ## Allocate "grids" of 3D vectors
    N = u.shape[0]
    M = u.shape[1]
    v = np.zeros((N, M, 3), dtype=np.float64) ## View
    h = np.zeros((N, M, 3), dtype=np.float64) ## Half-way
    l = np.zeros((N, M, 3), dtype=np.float64) ## Light
    r = np.zeros((N, M, 3), dtype=np.float64) ## Reflection

    ## The camera is positioned "above" the surface so the normals are
    ## inverted in order to be oriented in the correct direction.
    camera_pos = np.array([grid.x[N//2], grid.y[M//2], parameters.camera_distance],
                          dtype=np.float64)
    n = computeHeightmapNormal(u)
    n *= -1

    ## Light vectors
    if parameters.use_positional_light:
        l, d = computeShadedPointLight(u, grid, parameters)
    else:
        l, d = computeShadedDirectionalLight(u, parameters)

    ## Direction from point to viewer
    v[:,:,0] = camera_pos[0] - grid.X
    v[:,:,1] = camera_pos[1] - grid.Y
    v[:,:,2] = camera_pos[2] - u
    v = normalize(v)

    ## Ambient strength and diffusive light
    ambn = parameters.ambn
    diff = dot(n, l)

    ## https://en.wikipedia.org/wiki/Specular_highlight
    if parameters.use_blinn_phong:
        ## Blinn-Phong
        h[:,:,0] = v[:,:,0] + l[:,:,0]
        h[:,:,1] = v[:,:,1] + l[:,:,1]
        h[:,:,2] = v[:,:,2] + l[:,:,2]
        h = normalize(h)
        spec = dot(n, h)
    else:
        ## Phong
        r = reflect(n, l)
        r = normalize(r)
        spec = dot(r, v)

    diff[diff<0.0] = 0.0
    spec[spec<0.0] = 0.0
    shiny = parameters.shiny
    strength = parameters.strength
    spec = strength * np.power(spec, shiny)

    ## Allocate
    u_diff_only = np.zeros((u.shape[0], u.shape[1], 3))
    u_spec_only = np.zeros((u.shape[0], u.shape[1], 3))
    u_diff_result = np.zeros((u.shape[0], u.shape[1], 3))
    u_spec_result = np.zeros((u.shape[0], u.shape[1], 3))

    ## Compute light attenuation (the default is no attenuation) and apply
    ## it to the diffuse and specular terms
    attenuation = parameters.light_attenuation_a \
                + parameters.light_attenuation_b * d \
                + parameters.light_attenuation_c * np.power(d, 2.0)
    diff *= 1.0 / attenuation
    spec *= 1.0 / attenuation

    ## Compute lighting results, all have three channels
    u_ambn_only = ambn * c
    u_diff_only[:,:,0] = diff * c[:,:,0]
    u_diff_only[:,:,1] = diff * c[:,:,1]
    u_diff_only[:,:,2] = diff * c[:,:,2]
    u_spec_only[:,:,0] = spec
    u_spec_only[:,:,1] = spec
    u_spec_only[:,:,2] = spec
    u_diff_result[:,:,0] = (ambn+diff) * c[:,:,0]
    u_diff_result[:,:,1] = (ambn+diff) * c[:,:,1]
    u_diff_result[:,:,2] = (ambn+diff) * c[:,:,2]
    u_spec_result[:,:,0] = u_diff_result[:,:,0] + spec
    u_spec_result[:,:,1] = u_diff_result[:,:,1] + spec
    u_spec_result[:,:,2] = u_diff_result[:,:,2] + spec

    ## Normalize results and apply gamma correction
    results_images = [u_diff_result, u_spec_result]
    for i in range(len(results_images)):
        image = results_images[i]
        image[image<0.0] = 0.0
        minv = np.amin(image)
        maxv = np.amax(image)
        image -= minv
        if minv != maxv:
            image /= (maxv - minv)
        ## Gamma correction
        image = np.power(image, parameters.gamma)
        results_images[i] = image
    u_diff_result = results_images[0]
    u_spec_result = results_images[1]

    return u_diff_result, u_spec_result, u_ambn_only, u_diff_only, u_spec_only, n
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

!!! Warning
    _**Performance**_

    The code above uses numpy for most computations, but even so the performance was a bit slower than I had hoped. Of course, this all runs on the CPU, compared to fully parallelized implementations that run in shaders on the GPU.
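The `ImageGrid` and `ShadingParameters` types used above are not listed in the post. Below is a minimal sketch of what they could look like, together with a small usage example on a synthetic half-sphere heightfield; all field names that do not appear in the code above, and all default values, are assumptions made for illustration.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Python
from dataclasses import dataclass

import numpy as np

@dataclass
class ImageGrid:
    x: np.ndarray ## 1D physical x coordinates, length N
    y: np.ndarray ## 1D physical y coordinates, length M
    X: np.ndarray ## 2D mesh of x, shape NxM (meshgrid with indexing='ij')
    Y: np.ndarray ## 2D mesh of y, shape NxM

@dataclass
class ShadingParameters:
    ambn: float = 0.1
    shiny: float = 32.0
    strength: float = 0.5
    gamma: float = 0.7
    camera_distance: float = 2.0
    use_blinn_phong: bool = True
    use_positional_light: bool = False
    light_attenuation_a: float = 1.0
    light_attenuation_b: float = 0.0
    light_attenuation_c: float = 0.0
    light_direction: tuple = (1.0, 1.0, -1.0) ## direction the light travels
    light_position: tuple = (0.0, 0.0, 2.0)

    def lightDirection(self) -> np.ndarray:
        return np.array(self.light_direction, dtype=np.float64)

    def lightPosition(self) -> np.ndarray:
        return np.array(self.light_position, dtype=np.float64)

## Usage: shade a synthetic half-sphere bump with a flat orange color.
N = M = 256
x = np.linspace(-1.0, 1.0, N)
y = np.linspace(-1.0, 1.0, M)
X, Y = np.meshgrid(x, y, indexing='ij')
u = np.sqrt(np.maximum(0.25 - X**2 - Y**2, 0.0)) ## half-sphere heightfield
c = np.zeros((N, M, 3), dtype=np.float64)
c[:] = (1.0, 0.6, 0.2)

grid = ImageGrid(x=x, y=y, X=X, Y=Y)
params = ShadingParameters()
u_diff, u_spec, *_ = computeShaded(u, c, grid, params)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~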
Example animations
========================================================================

a) Directional light.
b) Point light.
**Figure 7:** _Animations of a directional **a)** and a point light **b)** as they are used to shade a heightfield consisting of a few half-spheres sticking out of and into the plane._
a) Directional light.
b) Point light.
**Figure 8:** _Animations of a directional **a)** and a point light **b)** as they are used to shade a heightfield originally created with a landscape generating software and downloaded from [wikipedia](https://en.wikipedia.org/wiki/heightmap)._
a) Directional light.
b) Point light.
**Figure 9:** _Animations of a directional **a)** and a point light **b)** as they are used to shade a heightfield created by a [reaction diffusion model](../reaction_diffusion_models_and_turing_patterns/)._
**Figure 10:** _Animation of a point light used to shade a heightfield representation of the surface of the earth. Both the terrain elevation heightfield and the color image are from NASA's [visible earth](https://visibleearth.nasa.gov/images/73934/topography) project._

Conclusion
========================================================================

By applying some shading we can add a 3D depth to a flat 2D surface. This is commonly used in 3D graphics to add more apparent detail to objects in the form of bump and normal maps. The shading technique described here is one of the most basic ones, but nothing stops the implementation of more advanced techniques, such as shadows, ray tracing or some other global illumination technique, even in this context.

+++++

**Bibliography**:

[#wikiheight]: **_Heightmap/heightfield_**
https://en.wikipedia.org/wiki/heightmap

[#wikiphong]: **_Phong reflection model_**
https://en.wikipedia.org/wiki/Phong_reflection_model

[#wikiblinn]: **_Blinn-Phong reflection model_**
https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model

[#phong1975]: Bui Tuong Phong. 1975. **_Illumination for computer generated pictures_**
Communications of the ACM. 18 (6): 311–317
https://dl.acm.org/doi/pdf/10.1145/360825.360839
https://users.cs.northwestern.edu/~ago820/cs395/Papers/Phong_1975.pdf

[#blinn1977]: James F. Blinn. 1977. **_Models of light reflection for computer synthesized pictures_**
Proc. 4th Annual Conference on Computer Graphics and Interactive Techniques: 192–198.
https://dl.acm.org/doi/10.1145/965141.563893

[#angel2011]: Edward Angel and Dave Shreiner. 2011. **_Interactive computer graphics: A top-down approach with shader-based OpenGL_ (6th edition)**
Chapter 5: Lighting and shading
https://archive.org/details/interactive-computer-graphics-a-top-down-approach-6e/page/268/mode/2up

[#wikigamma]: **_Gamma correction_**
https://en.wikipedia.org/wiki/Gamma_correction

[#shirley2020]: Peter Shirley. 2020. **_Ray tracing in one weekend_**
Chapter 8.3: Using Gamma Correction for Accurate Color Intensity
https://raytracing.github.io/books/RayTracingInOneWeekend.html#diffusematerials/usinggammacorrectionforaccuratecolorintensity

[#learnopengl]: **_learnopengl.com Basic lighting_**
https://learnopengl.com/Lighting/Basic-Lighting

[#ogltutorial]: **_OpenGL tutorial Basic shading_**
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-8-basic-shading/

[#nasa]: **_Visible earth_**
https://visibleearth.nasa.gov/images/73934/topography


Code for all figures and examples:

https://bitbucket.org/nils_olovsson/heightfield_shading/src/master


