Hi, my name is Michael Oppitz and I’m a 3D Render Specialist at ShapeDiver. In this blog post, I want to talk about a typical problem that most applications using WebGL face at some point: transparency issues. I’ll present a specific challenge we had and the solution we came up with that works for us in most cases.
As you may or may not know, WebGL is the technology we use in the ShapeDiver 3D viewer to display the parametric objects our users upload to the platform. WebGL is a JavaScript API for rendering 3D graphics in the browser. It started as a combined effort by companies like Mozilla, Apple, Google and Opera, and is now maintained by the Khronos Group. If you wish to learn more about it, there's plenty of information here.
The Problem With WebGL
Transparency in WebGL is not trivial, as its rendering pipeline was not designed with transparent objects in mind. We will explain the structural design of WebGL that causes these transparency issues and show examples of what happens in different scenarios. Furthermore, we will present a solution to our specific problem and show in which situations it works and in which ones it doesn’t.
How WebGL Renders Objects
Let’s start with some explanations of how WebGL approaches the rendering of different objects to get a basic understanding of the issues at hand.
In general, WebGL always renders one object at a time. Then, it combines everything in a final rendered image. Just like in real life, the objects closer to the camera occlude the ones that are further away. WebGL simulates this behaviour using a depth buffer.
A fragment of an object is only rendered if its depth at that position is closer to the camera than the depth previously written to the depth buffer. Whenever a fragment passes this test, its own depth is written to the depth buffer in turn. This ensures that the objects we see on screen are visible according to their depth. In the image below, depth testing is disabled on the left side, which means that all objects simply appear in the order they were rendered.
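The depth test described above can be sketched as follows. This is an illustrative simplification (a one-dimensional buffer, and names like `DepthBuffer` and `tryWriteFragment` that we made up for this post), not actual WebGL calls:

```typescript
type Color = string;

class DepthBuffer {
  depth: number[];
  color: Color[];

  constructor(size: number) {
    // WebGL clears the depth buffer to the farthest value (1.0).
    this.depth = new Array(size).fill(1.0);
    this.color = new Array(size).fill("background");
  }

  // A fragment passes the depth test only if it is closer than what was
  // written before; on success, its own depth is written back to the buffer.
  tryWriteFragment(x: number, fragDepth: number, c: Color): boolean {
    if (fragDepth < this.depth[x]) {
      this.depth[x] = fragDepth;
      this.color[x] = c;
      return true;
    }
    return false;
  }
}

const fb = new DepthBuffer(4);
fb.tryWriteFragment(0, 0.8, "far sphere");    // passes: buffer was at 1.0
fb.tryWriteFragment(0, 0.3, "near sphere");   // passes: 0.3 < 0.8
fb.tryWriteFragment(0, 0.5, "middle sphere"); // fails: 0.5 > 0.3
console.log(fb.color[0]); // the nearest fragment wins
```

Note that the final color no longer depends on the order in which the objects were drawn; only the nearest fragment survives.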
How WebGL Renders Transparent Objects
While this is a very practical approach for opaque objects, it is not a good fit for transparent objects. Imagine two glass spheres, one behind the other. Even though the first one occludes the one behind it, it is transparent, so we still want to render the sphere that is farther away.
Usually, a solution would be to first render all opaque objects and let them write to the depth buffer. Afterwards, all transparent objects are rendered from back to front without writing to the depth buffer anymore, while still testing against it. This ensures that transparent objects are occluded by opaque objects but not by each other.
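This two-pass ordering can be sketched as follows. The interface and function names are illustrative (not an actual WebGL or ShapeDiver API), and we assume the distance to the camera has already been computed per frame:

```typescript
interface SceneObject {
  name: string;
  transparent: boolean;
  distanceToCamera: number; // assumed precomputed each frame
}

// Opaque objects are drawn first (they write to the depth buffer),
// then transparent objects are drawn from back to front.
function buildDrawOrder(objects: SceneObject[]): string[] {
  const opaque = objects
    .filter(o => !o.transparent)
    .sort((a, b) => a.distanceToCamera - b.distanceToCamera);
  const transparent = objects
    .filter(o => o.transparent)
    .sort((a, b) => b.distanceToCamera - a.distanceToCamera); // far to near
  return [...opaque, ...transparent].map(o => o.name);
}

const order = buildDrawOrder([
  { name: "glass sphere A", transparent: true, distanceToCamera: 2 },
  { name: "table", transparent: false, distanceToCamera: 5 },
  { name: "glass sphere B", transparent: true, distanceToCamera: 8 },
]);
console.log(order); // opaque first, then transparent far-to-near
```

In a real renderer, the transparent pass would additionally disable depth writes (e.g. `gl.depthMask(false)`) while keeping the depth test enabled.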
Although this solution works in most cases, some issues can still occur in complex situations, especially with intersecting transparent objects. If you are interested, we refer you to some approaches for further reading at the end of this blog post.
In our case, we found this solution to be sufficient for our needs. However, the case of self-transparency remained.
The Issue With Self-Transparency
The previous section discussed how to render multiple transparent objects. However, the most challenging case remained to be solved: artefacts within a single transparent object. These issues occurred when we wanted to render a transparent object with both its front and back faces visible.
We mentioned earlier how transparent objects are sorted in a specific order so that they can be rendered correctly. We face a similar problem here, at a smaller scale. When WebGL renders an object, the object is split into polygons that are rendered one after another, and their depth is again written to the depth buffer.
Blue sphere with visible artefacts.
For opaque objects this results in the desired effect: polygons in the front being rendered on top and polygons in the back being discarded. However, for transparent objects we face many problems that result in noticeable artefacts (see above).
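To see why the order of a transparent object's polygons matters, consider standard "over" alpha blending, which is inherently order-dependent. The sketch below is illustrative TypeScript, not WebGL API code, but it mirrors the blend equation WebGL typically uses for transparency:

```typescript
type RGBA = { r: number; g: number; b: number; a: number };

// One blend step: a source fragment blended over the current framebuffer color,
// i.e. result = src.a * src + (1 - src.a) * dst.
function blendOver(src: RGBA, dst: RGBA): RGBA {
  return {
    r: src.a * src.r + (1 - src.a) * dst.r,
    g: src.a * src.g + (1 - src.a) * dst.g,
    b: src.a * src.b + (1 - src.a) * dst.b,
    a: src.a + (1 - src.a) * dst.a,
  };
}

const background: RGBA = { r: 0, g: 0, b: 0, a: 1 };
const frontFace: RGBA = { r: 1, g: 0, b: 0, a: 0.5 }; // semi-transparent red
const backFace: RGBA = { r: 0, g: 0, b: 1, a: 0.5 };  // semi-transparent blue

// Correct order: back face first, then front face over it.
const correct = blendOver(frontFace, blendOver(backFace, background));
// Wrong order: front face drawn first.
const wrong = blendOver(backFace, blendOver(frontFace, background));
console.log(correct, wrong); // two different colors for the same pixel
```

Since the two orders produce different colors for the same pixel, any polygon that happens to be drawn out of depth order shows up as a visible artefact.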
We could try to treat each object as a group of small polygons and apply the method described above for rendering several transparent objects, with the polygons playing the role of the multiple objects to be sorted. The problem is that sorting all polygons by depth on every frame is such a demanding task that it would slow down the rendering process immensely.
Gemstone with visible artefacts.
Our Simple Solution to Self-Transparency
So here comes our easy solution to this problem: we split the transparent object into its front and back side. This means that we render the object once with just its front side visible and another time with just its back side visible, so we end up with two independent objects. Once that’s done, the sorting takes place in a different step, at the object level discussed before, instead of the polygon level.
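A minimal sketch of this split, assuming hypothetical names (`DrawPass`, `splitTransparent`) and gl-style cull modes; this is not ShapeDiver's actual API, just the shape of the idea:

```typescript
type CullMode = "cullFront" | "cullBack";

interface DrawPass {
  object: string;
  cull: CullMode;
  depthWrite: boolean;
}

// Each transparent object becomes two independent draw calls: one showing
// only the back faces (front faces culled) and one showing only the front
// faces (back faces culled). Both keep depth writes disabled, as in the
// usual transparent pass, and are then sorted at the object level.
function splitTransparent(object: string): DrawPass[] {
  return [
    { object, cull: "cullFront", depthWrite: false }, // back side
    { object, cull: "cullBack", depthWrite: false },  // front side
  ];
}

const passes = splitTransparent("gemstone");
console.log(passes.length); // two independent "objects" from one mesh
```

Since the back side of a convex object is always farther from the camera than its front side, the ordinary back-to-front object sort now draws the two halves in the right order.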
Gemstone without artefacts.
This operation takes place in the rendering loop of our application, which means it is completely invisible to the user. In the eyes of the user, using transparent objects does not differ in any way from using opaque objects. As you can see below, the results now match our expectations, in contrast to our previous results.
Although this solution works in most cases, some artefacts may still be visible with more complex geometry. As soon as the transparent object overlaps itself (concave geometry), the same problem occurs again.