Introduction to shaders
When we look at an HD monitor we’re looking at over 2 million pixels that the GPU is sending to the screen. The GPU knows what to send because we give it that information. We don’t do it pixel by pixel; instead, we give it something like a 3D scene, and the GPU uses various tools to turn that into a 2D array of pixels.
Graphics APIs like OpenGL and DirectX are some of those tools. Our data goes through a series of steps, usually referred to as a graphics pipeline. Each graphics API defines its pipeline slightly differently, but a few steps are common to most of them:
1. Prepare vertex stream
The graphics pipeline needs a list of vertices to work on, so the first thing that happens is preparing the data for all the vertices that make up the objects in our scene. Their positions, normals, and texture coordinates are passed into the pipeline to be processed by the next steps.
2. Process vertex data with vertex shaders
A vertex shader is a function that takes the data for one vertex as input and, at a minimum, outputs the processed vertex position. It often outputs additional data that is needed further down the pipeline.
3. Process primitive data with geometry and tessellation shaders
Newer versions of graphics APIs offer the option to use geometry shaders to process primitive data (usually triangles) and even create new primitives. One modern use for these is improving the quality of shadows in 3D scenes with shadow volumes. Tessellation shaders are another modern addition to graphics APIs and they allow lower-detail meshes to be rendered at a higher-detail level, reducing memory footprint at the cost of GPU processing time.
We don’t need to use these features to get a convincing simulation of ultrasound images so we won’t be going deeper into them in this material.
4. Process fragment data with fragment shaders
A fragment shader (also called a pixel shader) receives interpolated data from the vertex shader and returns per-fragment data. Interpolated here means that each fragment’s input is calculated automatically as a blend of the output data of the surrounding vertices.
The fragment shader is where we can look up textures for our 3D objects, calculate the correct lighting, and return the color of a particular pixel on our screen.
5. Generate final pixel value from previous data and depth/stencil buffers
Finally, depth and stencil tests are performed on the calculated fragment data to decide if it will be drawn and if yes, how to combine (blend) it with other data that’s available.
There’s an important difference between how shader code runs on the GPU and how regular code runs on the CPU. A shader function is executed many times per frame for different input values (vertices, fragments, primitives) and many of these executions are parallel. This allows the GPU to perform optimizations that wouldn’t be possible in sequential execution but it also means that each execution of the shader function is independent of the rest.
Performance considerations
Vertex shaders are run once per vertex per frame, while fragment shaders are run for each covered pixel. If we have a simple scene with a rectangular mesh that covers the entire screen, the vertex shader will run 4 times per frame (once for each vertex), while the fragment shader can run more than 2 million times (for a 1920×1080 resolution). It’s therefore efficient to move as many calculations as we can into the vertex shader.
That said, modern hardware is very capable of performing large amounts of calculations and GPU optimizations help reduce the load.
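As a small illustration of this idea, here is a hedged sketch using the Unity Cg syntax introduced in the next section; the _TilingOffset property is hypothetical. The tiling and offset math runs a handful of times per frame in the vertex shader instead of millions of times in the fragment shader:

// _TilingOffset is a hypothetical material property: xy = tiling, zw = offset
float4 _TilingOffset;
sampler2D _MainTex;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
};

v2f vert(appdata_base v)
{
    v2f output;
    output.pos = UnityObjectToClipPos(v.vertex);
    // this multiply-add runs once per vertex (4 times per frame for a full-screen quad)...
    output.uv = v.texcoord.xy * _TilingOffset.xy + _TilingOffset.zw;
    return output;
}

half4 frag(v2f input) : SV_Target
{
    // ...and the interpolated uv arrives here ready to use, with no per-pixel math needed
    return tex2D(_MainTex, input.uv);
}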
Shaders in Unity
Unity allows us to program the shaders at each step. Shader files are written in a declarative language called ShaderLab, which wraps the actual shader code.
Shader "FancyShader" {
    Properties {
        // properties passed to the shader from Unity C# scripts
    }
    SubShader {
        Pass {
            // shader code
        }
    }
}
The shaders themselves are simply functions written in Cg/HLSL. Technically, Cg and HLSL are two languages that share the same syntax, since they were developed in collaboration between Microsoft and Nvidia. Microsoft’s HLSL documentation can be used as a reliable source of information for learning.
We tell the compiler which function corresponds to which shader stage using #pragma statements.
CGPROGRAM
// the vert function will be used as a vertex shader
#pragma vertex vert
// the frag function will be used as a fragment shader
#pragma fragment frag

#include "UnityCG.cginc"

float4 vert(float4 vertexPosition : POSITION) : SV_POSITION
{
    // the output position needs to be transformed from object space to clip space coordinates,
    // easily done with a built-in function found in UnityCG.cginc
    float4 clipSpacePosition = UnityObjectToClipPos(vertexPosition);
    return clipSpacePosition;
}

float4 frag(float4 fragmentPosition : SV_POSITION) : SV_Target
{
    float4 red = float4(1, 0, 0, 1);
    return red;
}

ENDCG
The above shader simply outputs a red screen. We hard-coded a red color (rgba) in the fragment shader, so regardless of input the output will always be red. The vertex position returned by the vertex shader is expected to be transformed from object space to clip space, so we use a function provided by Unity.
The keywords after some parameter and function declarations in the above code are called semantics. They specify what a particular input or output variable’s role is. For example, a vertex shader input parameter marked with the POSITION semantic will contain the vertex position in object space, and the processed vertex position in the output is marked with SV_POSITION. More info on semantics can be found in the Unity documentation.
Shader input and output data is read according to the semantics we use.
Shaders can have user-defined structs as input or output, as long as the struct members are marked with the right semantics. Structs that Unity defines in UnityCG.cginc are also available.
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"

// vertex to fragment struct
struct v2f
{
    float4 position : SV_POSITION;
};

// appdata_base is defined in UnityCG.cginc
v2f vert(appdata_base vertInput)
{
    v2f vertOutput;
    vertOutput.position = UnityObjectToClipPos(vertInput.vertex);
    return vertOutput;
}

float4 frag(v2f fragInput) : SV_Target
{
    float4 red = float4(1, 0, 0, 1);
    return red;
}

ENDCG
If we want to replace the fragment shader’s red output with a texture we’ll need a way to pass that texture into the shader. This is what the Properties block is for. We’ll use this chance to add texture coordinates to the data passed from the vertex shader to the fragment shader.
Shader "FancyShader" {
    Properties {
        _FancyTexture ("Fancy Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct v2f
            {
                float4 position : SV_POSITION;
                float2 textureCoordinates : TEXCOORD0;
            };

            // we need to declare the texture in the CGPROGRAM block
            sampler2D _FancyTexture;

            v2f vert(appdata_base vertInput)
            {
                v2f vertOutput;
                vertOutput.position = UnityObjectToClipPos(vertInput.vertex);
                vertOutput.textureCoordinates = vertInput.texcoord.xy;
                return vertOutput;
            }

            float4 frag(v2f fragInput) : SV_Target
            {
                // we sample the texture at the coordinates provided in the input struct
                return tex2D(_FancyTexture, fragInput.textureCoordinates);
            }
            ENDCG
        }
    }
}
It’s a simple shader but at least now we know where to start.
Other options
Unity offers a few alternatives to writing vertex and fragment shaders by hand.
One of them is surface shaders. Surface shaders let you write lit shaders with far less work than implementing lighting yourself in vertex and fragment shaders. They don’t directly correspond to a step in the graphics pipeline but are compiled into vertex and fragment shaders.
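As a rough illustration (this shader is a minimal sketch and not part of the ultrasound simulation), a surface shader that samples a texture and lets Unity handle Lambert lighting can be this short:

Shader "FancySurfaceShader" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Unity generates the vertex and fragment shaders and applies Lambert lighting;
        // we only fill in the surface properties
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf(Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}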
Unity 2018.1 also introduced Shader Graph, a visual node editor, useful for creating shaders without directly writing code.
Unity utilities
If we want to add post-processing effects to a Camera, the best place to do it is the MonoBehaviour.OnRenderImage method. If you implement it in a script attached to a GameObject with a Camera component, that Camera’s output will be passed to the method as a RenderTexture. This allows you to process it before it’s rendered to the screen.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // process "source" and write the result to "destination"
}
The recommended way to process it is to use Graphics.Blit. Passing the source RenderTexture as the first parameter and destination as the second will simply copy the data from one to the other. Setting the third parameter to a material will process the texture with the material’s shader. Unity will automatically assign the source texture to the shader’s _MainTex property.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, fancyMaterial);
}
You can do this repeatedly if you need to apply multiple effects and one shader pass isn’t enough. Using the same texture as both parameters of Blit overwrites it in place, but this can be inefficient on some hardware, especially mobile GPUs. The recommended approach is to allocate a temporary texture and release it when we’re done.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture temporaryTexture = RenderTexture.GetTemporary(source.width, source.height);

    // process it once and write it to a temporary texture
    Graphics.Blit(source, temporaryTexture, firstFancyMaterial);
    // process it again and write it to destination
    Graphics.Blit(temporaryTexture, destination, secondFancyMaterial);

    // release the temporary texture so it can be reused or cleaned up
    RenderTexture.ReleaseTemporary(temporaryTexture);
}
Ultrasound image
How medical ultrasound works
The creation of a medical ultrasound (aka ultrasonography) image can be broken up into three parts:
- Generating a sound wave and sending it into the tissue we want to scan. This is done with a device called an ultrasonic transducer, usually built into a hand-held probe.
- Receiving echoes from the tissue, also done with the transducer.
- Interpreting the echoes into a useful image.
The resulting image represents a slice of the scanned body, each pixel telling us how strong the echo from a particular spot in the tissue is. A stronger echo usually corresponds to a brighter pixel.
We can see the interior of different organs and tissues because they are not perfectly homogeneous. They have tiny irregularities that scatter sound waves in multiple directions, which is referred to as diffuse reflection. Some of the sound echoes back to the transducer and usually shows up as gray in our final image. Note that liquids and air appear black because they reflect little to no sound.
When sound passes from one tissue to another and their densities are sufficiently different, part of the sound is reflected. This shows up as a bright line between the tissues in the final image. This specular reflection is proportional to the difference in density.
Diffuse scattering produces multiple echoes from tiny structures such as cell walls. The resulting interference gives the image a grainy, noise-like quality, referred to as speckle.
Once sound reaches a structure that causes a reflection, its strength is reduced. This causes shadows to show up in the parts of the image behind such structures.
As sound passes through the body it gradually gets absorbed and reflected, and its strength is attenuated. This means the echo will be strongest for tissue nearest the ultrasound probe and weaker for tissue farther away.
Creating a simulation
To create a convincing simulation we need to handle the elements highlighted in the previous section: diffuse reflection, specular reflection, speckle, shadows and attenuation.
We’ll do most of our work in the MonoBehaviour.OnRenderImage method, which means that from a 3D perspective all we’re doing is rendering a rectangle the size of our screen. Our vertex shaders will run four times per frame, once for each corner, and the fragment shader data will be interpolated between those four corners.
Prepare a 3D texture
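How the volume texture is obtained depends on the source data, so the following is only a minimal sketch: assuming the voxel values are already available as a Color32 array (for example loaded from a stack of grayscale slices), a Texture3D can be built like this. The size, format, and data source here are placeholders, not requirements.

// a minimal sketch, assuming volumeData already holds size * size * size voxels;
// the size, format and data source are placeholders, not requirements
Texture3D CreateVolumeTexture(Color32[] volumeData, int size)
{
    Texture3D volumeTexture = new Texture3D(size, size, size, TextureFormat.RGBA32, false);
    volumeTexture.wrapMode = TextureWrapMode.Clamp;
    volumeTexture.SetPixels32(volumeData);
    volumeTexture.Apply();
    return volumeTexture;
}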
Get a slice of the texture
The first thing we need to do is get the slice of the texture that would be scanned by our virtual ultrasound probe. We’ll use a shader to do this.
Since we’ll be processing this texture several times before we reach something that looks like an ultrasound image we can downscale it to improve performance. We do this by simply using a smaller destination texture for Graphics.Blit.
We’re passing our 3D texture to Graphics.Blit as the first parameter, so Unity will set it as the shader’s _MainTex property. We’re also passing the origin (bottom left corner) of the scanning rectangle and its x and y axes. These are expressed in the texture’s normalized space, so (0,0,0) and (1,1,1) are opposite corners of the texture.
Texture3D volumeTexture;
Material volumeSliceMat;

void Start()
{
    volumeSliceMat = new Material(volumeSliceShader);
}

void Update()
{
    // ...
    volumeSliceMat.SetVector("_RectOrigin", rectOrigin);
    volumeSliceMat.SetVector("_RectXAxis", rectXAxis);
    volumeSliceMat.SetVector("_RectYAxis", rectYAxis);
}

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture mainTex = RenderTexture.GetTemporary(256, 256, 0);
    Graphics.Blit(volumeTexture, mainTex, volumeSliceMat);
    // ...
}
We use the UV coordinates (texcoord) passed into the vertex shader, together with the rectangle’s origin and axes, to calculate each vertex’s position within our 3D texture.
struct v2f
{
    float4 pos : SV_POSITION;
    float3 texcoord : TEXCOORD0;
};

v2f vert(appdata_base v)
{
    v2f output;
    // ...
    float3 x = _RectXAxis * v.texcoord.x;
    float3 y = _RectYAxis * v.texcoord.y;
    output.texcoord = _RectOrigin + x + y;
    return output;
}
All we need to do in the fragment shader is sample the 3D texture at the interpolated coordinates.
half4 frag(v2f input) : SV_Target
{
    return tex3D(_MainTex, input.texcoord);
}
Generate reflection texture
We’ll generate a separate reflection texture that we’ll combine with our main texture later in the process.
Since outlines of tissues are more clearly visible than other parts of the image, the simulation will look better if we use a full-size texture for reflections. The first argument passed to OnRenderImage will be the Camera output, so we’ll get the texture size from there.
Material differenceMat = new Material(differenceShader);
// ...

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // ...
    RenderTexture reflectionTex = RenderTexture.GetTemporary(source.width, source.height, 0);
    Graphics.Blit(mainTex, reflectionTex, differenceMat);
    // ...
}
To simulate specular reflections we can write a simple shader that outputs the absolute difference between the current row and the row above it.
We need to set the texture coordinates (uvThisRow and uvAbove) in the vertex shader so our fragment shader knows which coordinates to sample.
struct v2f
{
    float4 pos : SV_POSITION;
    float2 uvThisRow : TEXCOORD0;
    float2 uvAbove : TEXCOORD1;
};

v2f vert(appdata_base v)
{
    v2f output;
    // ...
    output.uvThisRow = v.texcoord.xy;
    // _MainTex_TexelSize.y is set automatically by Unity to a texel's height
    output.uvAbove = float2(v.texcoord.x, v.texcoord.y + _MainTex_TexelSize.y);
    return output;
}

half4 frag(v2f input) : SV_Target
{
    half4 thisRow = tex2D(_MainTex, input.uvThisRow);
    half4 above = tex2D(_MainTex, input.uvAbove);
    return abs(thisRow - above);
}
Some ultrasound probes/transducers produce curved images. To simulate this we can change the way we calculate the coordinates of the pixel “above” the current one in the vertex shader. We’ll use the top center (0.5,1) of the image as the source of the sound.
v2f vert(appdata_base v)
{
    // ...
    float2 sourcePosition = float2(0.5, 1);
    output.uvThisRow = v.texcoord.xy;
    output.uvAbove = v.texcoord.xy + (sourcePosition - v.texcoord.xy) * _MainTex_TexelSize.y;
    return output;
}
The output is quite dark, but all we need to do to enhance the outlines is multiply the fragment shader’s output by a constant.
half4 frag(v2f input) : SV_Target
{
    // ...
    return abs(thisRow - above) * 10;
}
Apply noise
We make sure that our 3D noise texture is assigned to our shader and then simply run our main texture through the shader.
Just like our volume slice shader our noise shader will also need the screen rectangle coordinates, since we’ll be using slices of a 3D noise texture.
Texture3D noiseTexture;
Material noiseSliceMat;

void Start()
{
    // ...
    noiseSliceMat = new Material(noiseSliceShader);
    noiseSliceMat.SetTexture("_NoiseTex", noiseTexture);
}

void Update()
{
    // ...
    noiseSliceMat.SetVector("_RectOrigin", rectOrigin);
    noiseSliceMat.SetVector("_RectXAxis", rectXAxis);
    noiseSliceMat.SetVector("_RectYAxis", rectYAxis);
}

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // ...
    Graphics.Blit(mainTex, mainTex, noiseSliceMat);
    // ...
}
To simulate speckle we multiply our image with noise. Since in real ultrasound imaging the speckle pattern moves with the probe we can obtain a convincing simulation by getting slices of a seamless 3D noise texture in the same way we slice our main texture.
At this point our working image is a 2D texture so we can easily multiply it with the noise slice.
half4 frag(v2f input) : SV_Target
{
    return tex2D(_MainTex, input.mainTexcoord) * tex3D(_NoiseTex, input.noiseTexcoord);
}
Generate intensity/shadow texture
As sound travels from the probe through tissues and tissue interfaces its intensity goes down, creating what looks like shadows in the final image.
We create a separate texture, just like we did for the reflections. We’ll do it in two steps: first we generate a difference texture, which we then use to generate the final intensity texture.
To avoid harsh lines we can blur everything by downscaling and, optionally, by applying a blur shader.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // ...
    RenderTexture differenceTex = RenderTexture.GetTemporary(128, 128, 0);
    Graphics.Blit(mainTex, differenceTex, differenceMat);

    RenderTexture intensityTex = RenderTexture.GetTemporary(128, 128, 0);
    Graphics.Blit(differenceTex, intensityTex, intensityMat);

    Graphics.Blit(differenceTex, differenceTex, blurMat);
    // ...
}
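The blur shader itself isn’t shown here; as a hedged sketch, blurMat could use something as simple as a 3×3 box blur fragment shader (assuming the usual _MainTex/_MainTex_TexelSize declarations and a v2f struct with a uv member, as in the difference shader above):

half4 frag(v2f input) : SV_Target
{
    half4 sum = 0;

    // sample the 3x3 neighborhood around the current pixel
    for (int x = -1; x <= 1; x++)
    {
        for (int y = -1; y <= 1; y++)
        {
            float2 offset = float2(x, y) * _MainTex_TexelSize.xy;
            sum += tex2D(_MainTex, input.uv + offset);
        }
    }

    // average the 9 samples; a separable Gaussian blur would give smoother results
    return sum / 9;
}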
We obtain a difference texture in the same way as before, but we don’t need to multiply it this time since we’ll use it to generate the intensity value. We just add together the values of the pixels along the vector from the current pixel to the sound source.
The value at each point P of the output texture will be the sum of all the values sampled from the input texture along the vector from P to the sound source S.
struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    float2 toSource : TEXCOORD1;
};

v2f vert(appdata_base v)
{
    v2f output;
    // ...
    float2 sourcePosition = float2(0.5, 1);
    output.uv = v.texcoord.xy;
    output.toSource = sourcePosition - v.texcoord.xy;
    return output;
}

half4 frag(v2f input) : SV_Target
{
    half4 output = half4(0, 0, 0, 1);
    float2 normalizedToSource = input.toSource / length(input.toSource);
    float2 texelToSource = normalizedToSource * _MainTex_TexelSize.y;

    // _MainTex_TexelSize.w is automatically assigned by Unity to the texture's height in pixels
    for (int i = 0; i < _MainTex_TexelSize.w; i++)
    {
        output += tex2D(_MainTex, input.uv + texelToSource * i);
    }

    return output;
}
This shader’s output will have brighter pixels in the areas with stronger shadows. Our plan is to multiply this with our main texture later, so to make it easy for ourselves we’ll invert it now.
half4 frag(v2f input) : SV_Target
{
    // ...
    return 1 - output;
}
Because we generate the intensity texture from the image after noise has been applied, intensity also drops the farther we get from the sound source, effectively simulating attenuation as well without the need for an additional shader pass.
Apply reflections and shadows
Now we combine our main texture that already shows diffuse scattering and speckle with our reflection texture. We can just add the two together.
half4 frag(v2f input) : SV_Target
{
    return tex2D(_MainTex, input.uv) + tex2D(_ReflectionTex, input.uv);
}
Next we add shadows. For this we multiply our main texture with the intensity texture we generated earlier, which tells us how strong the sound should be at each pixel.
half4 frag(v2f input) : SV_Target
{
    return tex2D(_MainTex, input.uv) * tex2D(_IntensityTex, input.uv);
}
Apply radial blur
In ultrasound images taken with curvilinear probes we can usually notice that areas farther away from the probe are blurred radially, as if the image is dragged and rotated around the probe.
Going into details on blur shaders is outside the scope of this material, but for each pixel we average its value with its neighbors in an arc around the sound source.
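As a hedged sketch of that idea (the fixed sample count and the _BlurAngle property are assumptions, not part of the original shader; the usual _MainTex declarations and a v2f struct with a uv member are also assumed), the fragment shader can rotate the current pixel around the sound source and average a few samples along the resulting arc:

// _BlurAngle is an assumed material property controlling the arc width (in radians)
float _BlurAngle;

half4 frag(v2f input) : SV_Target
{
    const int SAMPLES = 8;
    float2 sourcePosition = float2(0.5, 1);
    float2 fromSource = input.uv - sourcePosition;

    half4 sum = 0;
    for (int i = 0; i < SAMPLES; i++)
    {
        // spread the samples evenly over the arc [-_BlurAngle / 2, +_BlurAngle / 2]
        float angle = _BlurAngle * (i / (SAMPLES - 1.0) - 0.5);
        float s, c;
        sincos(angle, s, c);

        // rotate the current pixel around the sound source and sample there
        float2 rotated = float2(c * fromSource.x - s * fromSource.y,
                                s * fromSource.x + c * fromSource.y);
        sum += tex2D(_MainTex, sourcePosition + rotated);
    }

    // average the samples; a real implementation might scale the arc with the
    // distance from the source so that the blur increases with depth
    return sum / SAMPLES;
}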
Done
Finally, we blit our processed texture into the destination parameter of OnRenderImage, which sets it as the camera’s output.
If we allocated temporary RenderTextures, it’s a good idea to release them.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // ...

    // render
    Graphics.Blit(mainTex, destination);

    // clean up
    RenderTexture.ReleaseTemporary(mainTex);
    RenderTexture.ReleaseTemporary(reflectionTex);
    RenderTexture.ReleaseTemporary(intensityTex);
}
Performance considerations
Make sure your 3D textures are not too large. There are various limits on texture size depending on the platform, so your best bet is to build and test on your target platforms before deciding on a final size.
Fortunately medical ultrasound images aren’t very sharp so we can use smaller textures for processing and upscale them when we need to.
For simplicity’s sake the code in this post handles full color, but since the target image is likely grayscale we can use a single-channel texture format to further reduce the texture size. Make sure to test on your target platform since not all platforms support all texture formats.
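For example, a single-channel temporary texture could be requested like this (a sketch; R8 support varies by platform, which SystemInfo.SupportsRenderTextureFormat can verify):

// request a grayscale (single-channel) temporary texture with no depth buffer;
// fall back to the default format if R8 is not supported on this platform
RenderTextureFormat format = SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.R8)
    ? RenderTextureFormat.R8
    : RenderTextureFormat.Default;
RenderTexture mainTex = RenderTexture.GetTemporary(256, 256, 0, format);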
If you’re building for mobile, use RenderTextures with depthBuffer set to 0. Allocate temporary RenderTextures and release them once you don’t need them anymore.
Do not re-use RenderTextures on mobile (avoid blitting from a texture to itself); instead, allocate a new temporary RenderTexture and release the old one if you need to do any processing.
Sources
Medical ultrasound (Wikipedia)
Reichl, Tobias & Passenger, Josh & Acosta, Oscar & Salvado, Olivier. (2009). Ultrasound goes GPU: real-time simulation using CUDA. Progress in Biomedical Optics and Imaging – Proceedings of SPIE. 7261. 10.1117/12.812486. (ResearchGate)
Perlin Noise Function for Unity (GitHub)
Reflection, Refraction, Scattering and Attenuation (vaultrasound.com)
Physical principles of ultrasound (Radiopaedia)