Aubrey Clark
Byte Sized Tutorials

Outline Shaders in Unity’s URP

A common effect in games is the outline effect! However, it is often not trivial to implement, since it requires shaders and there are multiple ways to go about it.

Here I will demonstrate two techniques for adding outlines to objects in 3D games using URP. These do not generally work for 2D.

This article assumes you know how to create a material and assign it to a 3D object. It also does not explain the underlying shader concepts.

Method 1: Inverse Hull

Inverse hull is a very common method for outline shaders. It works by drawing the original object twice:

  • draw the original object as normal
  • draw the object slightly bigger as an outline
General technique for inverse hull outlines

Creating the Shader

To start, I will copy the base shader I want to use. In this case, I will copy Library/PackageCache/com.unity.render-pipelines.universal/Shaders/Lit.shader into a new shader, which I will call InverseHull.shader. You can also use Unlit or any ShaderLab shader you have already created.

Opening up the shader, we get a standard ShaderLab shader. First things first, let's change the shader name on the first line from Universal Render Pipeline/Lit to whatever you want! I'm choosing Ketexon/InverseHull. This will allow us to create a new material that uses the shader.
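After the rename, the first line of the file should look like this:

Shader "Ketexon/InverseHull"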

Now, we can go into any scene, add any object, and attach the material you just created to it.

It should look no different than the normal shader.

Outline Pass

Now we need to add a new pass to the shader to draw the outline.

To define a pass, we can include a new Pass block before the first Pass block of the shader.

If you’re using Lit, it should look like this:

SubShader {
	Tags
	{
		...
	}
	LOD 300
	
	Pass
	{
		Name "InverseHull"
		...
	}
	
	// ------------------------------------------------------------------
	//  Forward pass. Shades all light in a single pass. GI + emission + Fog
	Pass
	{
		...

Now, let’s add our shader code. The vertex shader only needs the vertex position, and the fragment shader outputs a solid color, so we need no other varyings.

Pass
{
    Name "InverseHull"

    HLSLPROGRAM
    #pragma vertex InverseHullVertex
    #pragma fragment InverseHullFragment

    // For core functions
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

    struct Attributes
    {
        float3 positionOS : POSITION;
    };

    struct Varyings
    {
        float4 positionCS : SV_POSITION;
    };

    void InverseHullVertex(Attributes input, out Varyings output) {
        VertexPositionInputs vertexInput = GetVertexPositionInputs(
            // scale object by 1.1
            input.positionOS.xyz * 1.1
        );
        output.positionCS = vertexInput.positionCS;
    }

    void InverseHullFragment(Varyings input, out float4 color : SV_Target)
    {
        color = float4(1, 0, 0, 1); // output a solid color
    }
    ENDHLSL
}

If we run this in Unity this is what we get:

This is obviously wrong! Our object is no longer showing :<

This is because we just drew a larger object on top of our original object, and, unless we want to disable depth testing for all objects, we cannot really draw the outline “before” the object (we could use stencil testing, but that is outside the scope of this article).

The solution in this case is to draw only the inside faces of the red sphere using front-face culling. Think of it like cutting the outline in half, like one of those layers-of-the-earth visualizations.

Visualization of inverse-hull culling

To add this to the code, add Cull Front right below Name "InverseHull".

Pass
{
    Name "InverseHull"
    Cull Front

    HLSLPROGRAM
    #pragma target 2.0
    ...

Wow, we get the right result :D

…or so you thought.

If we try this with a more complex model, we will see that this method produces some inconsistencies. The first is that, because we scale the object, its wider parts get larger outlines than its thinner parts.

The Blender monkey with this shader. Note that the areas around the mouth have much thinner outlines than those around the head.

In addition, the scaling is done in object space. Scaling by 1.1 offsets each vertex by 0.1 times its distance from the object-space origin, so vertices near the origin barely move at all, which is especially bad if the mesh is not centered.

A sphere with its origin at the left of the sphere, with the outline shader. Note that the outline at the origin has 0 width.

Using Normals

One way to fix both problems is, instead of scaling the object, to move each vertex “outward” along its normal by a fixed amount. Since we choose a specific length to move outwards, and “outwards” still makes sense at the origin, this fixes both issues.

To access the outwards direction, we need to access the normal in our Attributes:

struct Attributes
{
    float3 positionOS : POSITION;
    float3 normalOS : NORMAL;
};

And then move each vertex in this direction by a fixed amount. I will do this in object space, but you can also do it in world space using GetVertexNormalInputs; a sketch of that variant follows the code below.

void InverseHullVertex(Attributes input, out Varyings output) {
    VertexPositionInputs vertexInput = GetVertexPositionInputs(
        // move each vertex outward along its normal by a fixed 0.1
        input.positionOS.xyz + input.normalOS * 0.1
    );
    output.positionCS = vertexInput.positionCS;
}
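For reference, here is a sketch of the world-space variant mentioned above. GetVertexNormalInputs, TransformObjectToWorld, and TransformWorldToHClip all come from the Core include we already have:

void InverseHullVertex(Attributes input, out Varyings output) {
    // offset along the world-space normal, so the outline width
    // does not change with the object's scale
    VertexNormalInputs normalInput = GetVertexNormalInputs(input.normalOS);
    float3 positionWS = TransformObjectToWorld(input.positionOS.xyz)
        + normalInput.normalWS * 0.1;
    output.positionCS = TransformWorldToHClip(positionWS);
}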

Doing this, we see that the monkey and the sphere now have uniform thickness!

However, now there is a new problem. With non-smoothed shading, e.g. on a box, two vertices at the same corner but belonging to different faces have different normals. Vertices that originally shared a position therefore get pushed to different positions, so a once-continuous mesh splits apart.

The outline of a box is not connected.

This is an intrinsic flaw of this outline method, and it cannot be fixed.

The glaring issues…

Taking a look at our final result, you may be like: wow, this looks terrible!

And you’re right! Some faces in the middle of the mesh have an outline when you may not necessarily want them to.

One solution to this is manually adjusting the depth of the outline to be behind the object. Note that this offset depends on the distance to the camera, and it also affects how other objects interact with the outline.

void InverseHullVertex(Attributes input, out Varyings output) {
    VertexPositionInputs vertexInput = GetVertexPositionInputs(
        // move each vertex outward along its normal by a fixed 0.1
        input.positionOS.xyz + input.normalOS * 0.1
    );
    output.positionCS = vertexInput.positionCS;
    // nudge the outline's depth away from the camera
    // (Unity uses a reversed depth buffer on most platforms)
    output.positionCS.z -= 0.02;
}
Example artifact, where an object behind the monkey shows up in front of the outline.

Another very common solution is using stencil testing to “mask out” the outline where it is in front of the object. However, this would require a more complicated multipass shader and a scriptable render pass, which is outside the scope of this article.
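For a rough idea only (this is a sketch of the stencil states, not a full solution): the object’s main pass would mark its pixels in the stencil buffer, and the outline pass would draw only where the object is not. Making this ordering work across multiple objects is where the scriptable render pass comes in.

// in the object's main pass: mark every covered pixel
Stencil
{
    Ref 1
    Comp Always
    Pass Replace
}

// in the outline pass: draw only where the object was not drawn
Stencil
{
    Ref 1
    Comp NotEqual
}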

Conclusion

Inverse hull outlining is a very quick way to outline a single object, but it has very tricky artifacts that are hard to avoid. Use this method if those artifacts are acceptable.
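If you do use it, one quality-of-life improvement (my own suggestion; the property names here are arbitrary) is to expose the width and color as material properties, like we will do for the postprocessing shader below:

// in the Properties block:
_OutlineWidth ("Outline Width", Float) = 0.1
_OutlineColor ("Outline Color", Color) = (1, 0, 0, 1)

// in the HLSL program, to be used in place of the hardcoded
// 0.1 offset and float4(1, 0, 0, 1) color:
float _OutlineWidth;
float4 _OutlineColor;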

Method 2: Postprocessing

Using postprocessing to outline objects is another very common method. It often looks very good and is extremely customizable, but since it operates on the whole screen, every object in the game gets outlined.

This method requires an “edge detection” operator. There are many such operators (often defined as a small convolution kernel), but a popular one is the Sobel operator.
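Concretely, the Sobel operator convolves the image with two 3×3 kernels, one per axis, and combines the results into a single edge strength (the signs of the kernels don’t matter here, since the results get squared):

Gx = [ -1  0  1 ]      Gy = [ -1 -2 -1 ]
     [ -2  0  2 ]           [  0  0  0 ]
     [ -1  0  1 ]           [  1  2  1 ]

edge strength = sqrt(Gx^2 + Gy^2)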

Creating the Shader

First, let’s create a new postprocess shader. These shaders are very simple, so the best way to do this is to create a new Image Effect shader and paste this in:

Shader "Ketexon/PPOutline"
{
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            HLSLPROGRAM
            #pragma vertex Vert
            #pragma fragment OutlineFrag

            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareNormalsTexture.hlsl"
            #include "Packages/com.unity.render-pipelines.core/Runtime/Utilities/Blit.hlsl"

            void OutlineFrag(Varyings v, out half4 color : SV_Target)
            {
                color.a = 1;
                color.rgb = SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, v.texcoord).rgb;
            }
        }
    }
}

Now, create a new material and make it use this shader.

The easiest way to use a material as a postprocessing effect is to add a Full Screen Pass Renderer Feature to the URP Renderer Data asset. This asset is in the Settings folder by default, and the feature can be added via the Add Renderer Feature button at the bottom of its inspector. Depending on your URP version, you may also need to tick Depth and Normal under the feature’s Requirements so that the textures we sample below actually get generated.

The Edge Detection Operator

Now we need to add an edge detection function.

You can paste this below the includes to use the Sobel edge detector:

float Sobel(Texture2D tex, float2 uv, float2 texelSize)
{
    float4 tl = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2(-1, -1));
    float4 tc = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 0, -1));
    float4 tr = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 1, -1));
    float4 cl = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2(-1,  0));
    float4 cc = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 0,  0));
    float4 cr = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 1,  0));
    float4 bl = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2(-1,  1));
    float4 bc = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 0,  1));
    float4 br = SAMPLE_TEXTURE2D_X(tex, sampler_LinearClamp, uv + texelSize * float2( 1,  1));

    // horizontal and vertical gradients (the standard Sobel kernels,
    // up to sign, which doesn't matter since we square them below)
    float4 gx = tl + 2 * cl + bl - tr - 2 * cr - br;
    float4 gy = tl + 2 * tc + tr - bl - 2 * bc - br;

    return sqrt(dot(gx, gx) + dot(gy, gy));
}

To use this, all we have to do is pass in a texture, the current UV, and the texel size (the .xy of Unity’s _TexelSize vectors, i.e. the UV size of one pixel).

There are three reasonable textures we could use: the depth texture, the normals texture, and the color texture. The last is a poor choice, since it would detect any change in color as an edge, so the first two are most often used; they do, however, give slightly different results.

To use the depth texture to detect an edge, you can apply the Sobel operator to the depth texture and render a solid color if it is above a threshold.

void OutlineFrag(Varyings v, out half4 color : SV_Target)
{
    float depthEdge = Sobel(
        _CameraDepthTexture,
        v.texcoord,
        _CameraDepthTexture_TexelSize.xy
    );
    color.a = 1;
    color.rgb = depthEdge >= 0.01
        // red if edge
        ? half3(1, 0, 0)
        // normal scene color otherwise
        : SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, v.texcoord).rgb;
}

The result is very clean edges:

However, you’ll note that this hardly detects the sharp corners on the cube. This is because this method detects jumps in depth, which primarily happen between two separate objects. If you want to detect changes in surface direction, you can use the normals texture:

void OutlineFrag(Varyings v, out half4 color : SV_Target)
{
    float normalEdge = Sobel(
        _CameraNormalsTexture,
        v.texcoord,
        _CameraNormalsTexture_TexelSize.xy
    );
    color.a = 1;
    color.rgb = normalEdge >= 0.01
        ? half3(1, 0, 0)
        : SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, v.texcoord).rgb;
}

This detects sharp edges nicely, but also leaves some artifacts on smooth surfaces. To really tweak the effect, we can add some material properties that let us blend the two values together with a custom threshold.

Put this at the top:

Shader "Ketexon/PPOutline"
{
    Properties {
        _DepthBlend ("Depth Blend", Range(0, 1)) = 0.5
        _Threshold ("Threshold", Range(0, 1)) = 0.5
    }

And this below:

uniform float _DepthBlend;
uniform float _Threshold;

void OutlineFrag(Varyings v, out half4 color : SV_Target)
{
    float depthEdge = Sobel(
        _CameraDepthTexture,
        v.texcoord,
        _CameraDepthTexture_TexelSize.xy
    );
    float normalEdge = Sobel(
        _CameraNormalsTexture,
        v.texcoord,
        _CameraNormalsTexture_TexelSize.xy
    );

    float edge = depthEdge * _DepthBlend + normalEdge * (1 - _DepthBlend);

    color.a = 1;
    color.rgb = edge >= _Threshold
        ? half3(1, 0, 0)
        : SAMPLE_TEXTURE2D_X(_BlitTexture, sampler_LinearClamp, v.texcoord).rgb;
}

I tweaked the depth blend to 0.96 and the threshold to 0.078 to achieve this: