<![CDATA[Zyanide's Lab]]>http://www.zyanidelab.com/http://www.zyanidelab.com/favicon.pngZyanide's Labhttp://www.zyanidelab.com/Ghost 3.36Mon, 10 Apr 2023 19:30:38 GMT60<![CDATA[World Space Path Resampling]]>http://www.zyanidelab.com/world-space-path-resampling/643076116a905651d596f840Mon, 10 Apr 2023 06:17:40 GMT

How hard is it to correctly estimate indirect lighting? Light tends to bounce around a lot, and in an offline path tracer, like the one you'd use to render a movie, you can usually compute several bounces until no significant contribution is brought in. That's fine as long as you are not doing real-time path tracing. Real-time path tracing, however, gets really tricky: we are constrained by the number of rays per pixel, we want high-quality samples that are not that noisy, and we also want high FPS, lots of frames. Basically we want it all and we want it fast. Could we improve the quality of our indirect bounces, especially from the 2nd bounce on? I think so, but for that let's go through some concepts real quick.

How are paths built? This is a very important and delicate process that of course I'm about to break beyond repair. [1] First of all we shoot a ray from our camera/eye/sensor into our scene, that ray will (hopefully) hit a surface and now all we need to do is compute the radiance that goes back to our camera/eye/sensor, the equation we use for that is:

[Figure: Rendering Equation]

This is known as the Rendering Equation [2], where ω is the hemisphere of directions around the surface normal, ρ is the BRDF, Li is the incoming radiance, N is the surface normal and ωi is the light direction vector. This is the kind of integral we can't solve using our calculus book, so we need a numerical method; in this particular case we will use Monte Carlo integration:

[Figure: Monte Carlo estimator]

This basically means we need a strategy to get a random sample wj within ω and divide by the PDF of said sample, usually a lobe in the material is selected at random and then that lobe is sampled. That sample tells us the direction of our next ray, we shoot a ray and find another surface, now we need to know the radiance coming from that vertex toward our original vertex, for that we use the following equation:

[Figure: Rendering Equation]

Yep, the Rendering Equation™ again, which means we Monte-Carlo it, get a new sample, shoot a ray, find another surface aaaaand ... rendering equation again and again and we keep doing this until we get bored or we find a light, in which case we propagate the light back through our path and have a nice beautiful color for our pixel. If no light is found we discard our path and start all over again. Keep generating samples and adding the result to your pixel and after a while you will have a result that converges toward the solution to the Rendering Equation™.

[Figure: Tracing a path]
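
To make that loop concrete, here is a minimal sketch of the path-building procedure in C++. Everything here (Ray, Hit, Scene, Rng, the sampleLobe call) is a hypothetical stand-in for whatever your renderer provides, not an API from any of the referenced papers; it only illustrates the shoot-sample-repeat structure described above.

    // Sketch: build one path and return the radiance it carries back to the camera.
    // All types and helpers below are assumed/hypothetical.
    Vec3 tracePath(Ray ray, const Scene& scene, Rng& rng, int maxBounces) {
        Vec3 radiance(0.0f);
        Vec3 throughput(1.0f);                        // accumulated BRDF * cos / pdf
        for (int bounce = 0; bounce < maxBounces; ++bounce) {
            Hit hit;
            if (!scene.trace(ray, hit))               // ray escaped the scene
                break;
            radiance += throughput * hit.emitted();   // found a light: propagate it back
            LobeSample s = hit.material.sampleLobe(hit, -ray.dir, rng); // pick a lobe, sample it
            if (s.pdf <= 0.0f)
                break;                                // invalid sample, give up on this path
            throughput *= s.brdf * dot(hit.normal, s.dir) / s.pdf;  // Monte Carlo weight
            ray = Ray{ hit.position, s.dir };         // rendering equation again and again
        }
        return radiance;                              // add to the pixel, keep averaging over frames
    }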

This sounds somewhat slow and wasteful, fortunately there are ways to improve the performance of this algorithm, like Next Event Estimation or better strategies to sample the different material lobes, however since we are dealing with real-time path-tracing, we need to do even better, remember, high FPS. For that we can use ReSTIR GI[3], or ReSTIR PT[4] if you feel adventurous.

Either algorithm is based on Resampled Importance Sampling (RIS). RIS allows us to importance sample a function p̂(y), which can be a simplified version of the Rendering Equation, using a cheaper function p(y) that is easier to sample.

[Figure: RIS Weight]

Using weights like these we can get a RIS estimator of our integral that looks like so[3]:

[Figure: RIS Estimator]

This is a much better estimator than if we just plugged p(y) into our Monte Carlo estimator.

So how would we go about generating a sample? First we shoot a primary ray from our sensor/camera/eye into the scene. Once at a surface we sample a material lobe p(y), which yields a direction ωj; using said direction we then build one of the aforementioned paths, calculate the radiance it brings in, and that can be used as our p̂(y) [3]. A very interesting observation is that we can represent a whole path with a RIS weight; this will come in handy later on.

Plug that into a reservoir and then resample using other reservoirs, both spatially and temporally [3][4]. Of course, there are some reconnection conditions that must be met for a neighboring sample to be used; more on reconnection later.
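
For reference, here is a minimal sketch of what "plugging a path into a reservoir" can look like. The weighted-reservoir update and the W formula follow the basic formulation in [7]; the PathSample fields and the float3 type are simplified stand-ins of mine.

    // A path sample plus the reservoir that streams such samples (sketch).
    struct PathSample {
        float3 firstDir;       // direction sampled at the primary hit
        float3 radiance;       // radiance the path brought back (used to build p_hat)
    };

    struct Reservoir {
        PathSample y;          // the sample currently kept
        float wSum = 0.0f;     // running sum of RIS weights
        float M    = 0.0f;     // number of candidates seen
        float W    = 0.0f;     // contribution weight used at shading/resampling time

        // Stream one candidate with RIS weight w = p_hat(x) / p(x); u is uniform in [0,1).
        void update(const PathSample& x, float w, float u) {
            wSum += w;
            M    += 1.0f;
            if (wSum > 0.0f && u < w / wSum)
                y = x;                     // keep the new candidate with probability w / wSum
        }

        // Once p_hat of the kept sample is known, compute the contribution weight.
        void finalize(float pHatY) {
            W = (pHatY > 0.0f) ? wSum / (M * pHatY) : 0.0f;
        }
    };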

Results can be really good.

[Figure: ReSTIR GI (1 SPP)]

Now let's see the following scene:

[Figure: Evil scene, may you never ever run into something like this]

In ReSTIR GI what will happen is that one of the pixels will randomly find the correct path, store that in a reservoir (represented by a bucket) and then share it with its neighbors, sharing is caring.

[Figure: ReSTIR GI Spatial resampling]

Pixels at the top are happy, but what happens with the pixels at the bottom? What are the odds of those finding the right path? It can take them a while, so can we do something about that?

Remember that each bounce's contribution can be computed using Monte Carlo integration of the Rendering Equation, which means we could use a RIS estimator: the same thing we did with the whole path, but for a path segment. ωj is the direction we cheaply sampled from our BRDF with probability p(y), and p̂(y) can be the luminance that the path brings back to that vertex.

[Figure: Each bounce could be stored in a reservoir]

How is this helpful? Well, if we can use RIS we can use reservoirs! Wait... what? Bear with me. One of the strengths of ReSTIR [7] is spatial resampling, sharing reservoirs among neighboring pixels. That happens in screen space, however, which might not be possible once we are talking about world space, with rays bouncing around the scene. For that we've got ReGIR [5], ReSTIR's world-space cousin: we store light samples in a world-space grid and use said grid to get high-quality samples that help with NEE.

[Figure: ReGIR Grid]
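
To give a feel for the world-space part, here is a small, self-contained sketch of mapping a world position to a cell in a hashed grid, roughly in the spirit of ReGIR's grid [5]. Each cell would own a handful of reservoirs; the cell size, table size and hash constants are arbitrary choices of mine.

    #include <cmath>
    #include <cstdint>

    // Map a world-space position to a slot in a fixed-size hash table of grid cells.
    uint32_t gridCell(float x, float y, float z,
                      float cellSize = 0.5f, uint32_t tableSize = 1u << 20) {
        int32_t ix = (int32_t)std::floor(x / cellSize);
        int32_t iy = (int32_t)std::floor(y / cellSize);
        int32_t iz = (int32_t)std::floor(z / cellSize);
        // Classic 3D integer hash (the constants are just well-known large primes).
        uint32_t h = ((uint32_t)ix * 73856093u) ^ ((uint32_t)iy * 19349663u)
                   ^ ((uint32_t)iz * 83492791u);
        return h % tableSize;
    }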

Extending ReGIR's idea, we can store reservoirs that represent paths in a grid; that way we can find a better path faster. Let's call that ReGIR GI from now on. So how would the whole thing work?

First we need to store paths in the grid as we generate them; just make sure they do bring in some radiance and that they are reconnectable, there is no point in sharing a sample that cannot be reused.

There are several opportunities to use our ReGIR GI samples. For example, when generating a new path we could resample, at a particular bounce, both what is in the grid and the newly generated path segment using a bounce reservoir. I've found this particularly useful for second bounces. Of course, mind reconnectability (is that even a word?). For how to combine reservoirs and the equations you need, check [3] and [4].

If for some reason the grid did not contain a useful sample, our newly generated one will be selected so it's all good.

So after all that the bounce contribution can be evaluated using our well known:

[Figure: ReSTIR Estimator]

Where Wbounce is the Weight of our bounce reservoir. This estimates the rendering equation at the bounce.

Recap:

1) Store reservoirs with paths in the grid as they are generated.

2) When generating a new path, at a bounce vertex, if the current lobe allows for reconnection, we can check the grid to find good known paths and stream the reservoirs in it through a bounce reservoir along with a new sample.

3) Reconnect to the selected path.

So it would look like this:

[Figure: Sampling the grid to find useful paths]

Once a good path is found it can be stored in a screen-space reservoir and shared normally as we would do with ReSTIR GI/ PT.

Let's talk about reconnection. We need to be able to connect one path to another. If we are at vertex xi and we are trying to connect to vertex yi, we need to make sure that the direction ωi (from xi to yi) is within the lobes we are reconnecting; this works well for diffuse materials and for rough ones (roughness above ~20%). (You can always uniform sample the specular lobe; for that kind of nonsense check [6].)

[Figure: Green rays are within the lobe, reconnection can happen.]
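
As a rough illustration, a reconnectability test might look like the sketch below. The 20% roughness cutoff mirrors the threshold mentioned above; deciding "inside the lobe" with a cone around the reflected view vector is just one possible heuristic, and a float3 type with the usual dot/normalize/reflect helpers is assumed.

    // Heuristic: can we reuse a neighbor's path by reconnecting from vertex x
    // toward the neighbor's vertex y? (float3 and its helpers are assumed.)
    bool canReconnect(const float3& x, const float3& y, const float3& n,
                      const float3& view, float roughness, bool diffuseLobe) {
        float3 wi = normalize(y - x);           // proposed reconnection direction
        if (dot(n, wi) <= 0.0f) return false;   // below the surface: never reconnect
        if (diffuseLobe) return true;           // the diffuse lobe covers the whole hemisphere
        if (roughness < 0.2f) return false;     // lobe too narrow: use random replay instead
        // Rough specular: accept directions inside a cone around the reflected view
        // vector; the cone widens with roughness (crude mapping, tune to taste).
        float3 r = reflect(-view, n);
        return dot(r, wi) >= 1.0f - roughness;
    }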

But what happens when we are dealing with highly specular materials where the lobe is very narrow? Can we do something about them?

ReSTIR PT brings in the idea of random replay and delayed reconnection: when we can't reconnect, we replay the random numbers, which generates a somewhat similar direction, and at the next vertex, if we can reconnect, we do so. For an in-depth discussion of reconnection and random replay, check [4].

Let me illustrate how that would work in our case using the following scene:

[Figure]

The floor is a highly specular (0% rough) dielectric material; trying to reconnect at the first bounce would yield a bunch of black pixels, as the specular lobe is really narrow. So when doing resampling, instead of trying to reconnect, we replay the random numbers of the selected reservoir, and then at the next vertex, if the material is rough enough, we reconnect.

[Figure]

In this particular case, when generating new paths, once the first bounce is reflected off the floor toward the ceiling, we use our grid to find a better path, which in this case sends us straight back down to the diffuse lobe of the floor that is illuminated by the lamp above it. Said path is shared when resampling and we get more stable samples.

[Figure: 2 Bounce ReSTIR (Indirect lighting only)]

My constant misuse of ReSTIR/ReGIR is getting out of hand, but fear not, I've got even more offensive uses for them.

Footnotes

[1] Ray Tracing Gems II. Chapter 14 "The Reference Path Tracer"
[2] The Rendering Equation, James Kajiya
[3] ReSTIR GI: Path Resampling for Real-Time Path Tracing
[4] Generalized Resampled Importance Sampling: Foundations of ReSTIR
[5] Ray Tracing Gems II, Chapter 23 "Rendering many lights with grid-based reservoirs"
[6] ReSTIR GI for Specular Bounces
[7] Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting

]]>
<![CDATA[Lighting participating media with thousands of lights]]>http://www.zyanidelab.com/lighting-participating-media-with-thousands-of-lights/6307be7890cc066ce8d5ce25Tue, 30 Aug 2022 00:09:49 GMT

Participating media, AKA fog/steam/smoke, is one of those things that can help make a scene look real, but it is also crazy hard to render, as it involves calculating how light bounces around within the medium.

id Software decided to add fog volumes to the idTech 3 engine, and of course both Jedi Outcast and Jedi Academy have them, so we must add them back. But since this whole project is about physically accurate rendering, we want the real deal: physically correct fog with all its light interactions, no old-school hacks, no alpha tricks. So let's talk about some new hacks and tricks.

Warning: the algorithm that I'm about to describe is cursed, if you use it Monte Carlo's lawyers will send you cease and desist letters, light transporters will demonstrate in front of your house, the photon mapping guild will disown you, you've been warned.

First of all, let's talk about the problem we are facing. In participating media, light interacts with particles and is either absorbed or scattered, and we need to know how much light is in-scattered in the camera's direction. Here's a typical diagram of how that looks.

[Figure: Participating media]

The thing here is that there is only one light in that diagram and we are facing a slightly more complex problem, here's an updated diagram:

[Figure: Participating media but worse]

Waaay too many lights in an area, and although there is a list of potentially visible lights, many of them are completely or partially occluded. To deal with visibility, tutorials/blog posts/videos usually suggest using shadow maps; well, guess what, we don't have those. We also have dynamic lights, you know, stormtroopers are usually shooting at the player with their blasters. I guess the only upside here is that we are dealing with homogeneous media, meaning the particles within our volume are evenly distributed. I'm happy they didn't add some crazy smoke plume or something similar back then.

Of course we want interactive frame rates, the player should be able to move around, so we can't really stay still to accumulate anything.

Our requirements are:

  • Physically accurate fog rendering
  • Thousands of lights, static and dynamic
  • No shadow maps
  • Interactive frame rate

First things first, since we want physically accurate fog, we will need math, I'm truly sorry.

The following equation deals with the light in-scattered along a ray with origin o and direction d that travels a distance S through the volume [1][2]:

[Figure: In-scattering along a ray]

Where T is the transmittance function, σs is the scattering coefficient, fρ is the phase function and Li is the incoming light. Since we are dealing with homogeneous media, scattering is a constant and our transmittance equation is [2]:

[Figure: Transmittance]

Where σ_ext is the extinction coefficient. Our equation finally looks like:

[Figure: In-scattering along a ray through a homogeneous medium]

Oh noes, 2 integrals, don't worry, we'll deal with both. The inner integral sums up all light interactions on a particle and how much of that light is in-scattered in the direction of our ray. Of course along our ray there will be a huge amount of particles and we need to sum all those contributions, and that's what the integral on the outside actually does.

Let's deal with the integral along the ray first, it's the boring one. There are millions of particles in the path of our ray and we can't just add them all up (remember, interactive frame rates), so we need to approximate its value. To do that we will use a Riemann sum, which basically means that we move along the ray in small steps; at each step we take a sample particle, evaluate its light contribution, multiply that by the length of our small step, and at the end add up the values of all our steps. Yes, I described ray marching, how original [2].

[Figure: Ray marching]
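
Here is a small, self-contained sketch of that Riemann sum for a homogeneous medium. The in-scattered radiance at each step is left as a callback, since estimating it is exactly what the rest of the post is about; the names and the fixed step count are mine.

    #include <cmath>
    #include <functional>

    // Ray march a homogeneous medium from t = 0 to t = S and accumulate in-scattering.
    // inScatter(t) should return the light scattered toward the camera at distance t
    // (the inner integral, estimated later with the reservoir machinery).
    float marchHomogeneous(float S, float sigmaS, float sigmaExt,
                           const std::function<float(float)>& inScatter,
                           int steps = 64) {
        float result = 0.0f;
        const float dt = S / steps;
        for (int i = 0; i < steps; ++i) {
            float t = (i + 0.5f) * dt;                      // midpoint of the step
            float Tr = std::exp(-sigmaExt * t);             // Beer-Lambert transmittance
            result += Tr * sigmaS * inScatter(t) * dt;      // one term of the Riemann sum
        }
        return result;
    }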

But don't leave just yet, the other integral is where the fun begins, it tells us that the in-scattered light in a certain direction is the result of adding up all the light interactions on a particle. Since we have possibly hundreds of lights and none of them have a shadow map, how do we do it? We will approximate the value using Monte Carlo integration, so our equation looks like:

[Figure: Monte Carlo integration of in-scattering]

Now we need a strategy to get some light samples and their PDF. We can't directly sample that function but we can use Resampled Importance Sampling (RIS) where we use a cheap p(x) function to sample a more complex p̂(x) one, we do that and draw a good sample, then for visibility we could shoot a shadow ray, but those are very expensive and the idea of interactive frame rates goes out the window, so we need something else.

What if we want to share good samples among rays? And if we have good samples, could we reuse them across frames? At this point you are certain that I'm about to bring up reservoirs[5][6], and yes, that's what we will use, but instead of using ReSTIR, we will use ReGIR[3], let me explain why.

In ReGIR we use a world-space grid, and in each voxel we store some reservoirs that represent light samples, the lights that are likely to contribute more to a certain area, or volume in our case. As with any form of reservoir we need a p̂(x) function; in this particular case it is the incoming radiance at the voxel, that's it. Although this might be a sub-optimal function, we can use it to resample a better p̂'(x) function. When calculating the incoming radiance, remember to take into account the transmittance of the medium. Testing each sample for visibility is paramount; it really helps alleviate the visibility problem, as the number of needed shadow rays is greatly reduced and we can consider the samples within the voxel to be reasonably visible.

[Figure: ReGIR]
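
As a sketch of how a voxel could be filled, the snippet below streams randomly picked candidate lights through a tiny reservoir, using p̂(x) = unshadowed radiance reaching the voxel center, attenuated by the medium's transmittance. The point-light approximation and the struct layout are simplifications of mine, loosely following [3][5]; a shadow ray from the voxel center would then validate the winner.

    #include <cmath>
    #include <random>
    #include <vector>

    struct Light    { float x, y, z, intensity; };              // point-light stand-in
    struct LightRes { int index = -1; float wSum = 0, M = 0, W = 0; };

    // Target function: radiance arriving at the voxel center from light l,
    // including the medium's transmittance along the way (no visibility yet).
    float pHatVoxel(const Light& l, float cx, float cy, float cz, float sigmaExt) {
        float dx = l.x - cx, dy = l.y - cy, dz = l.z - cz;
        float d2 = dx * dx + dy * dy + dz * dz;
        return l.intensity / d2 * std::exp(-sigmaExt * std::sqrt(d2));
    }

    LightRes fillVoxel(const std::vector<Light>& lights, float cx, float cy, float cz,
                       float sigmaExt, int candidates, std::mt19937& rng) {
        std::uniform_real_distribution<float> u01(0.0f, 1.0f);
        LightRes r;
        for (int k = 0; k < candidates; ++k) {
            int i   = (int)(u01(rng) * lights.size());          // cheap p(x): uniform pick
            float w = pHatVoxel(lights[i], cx, cy, cz, sigmaExt) * lights.size(); // p_hat / p
            r.wSum += w; r.M += 1;
            if (r.wSum > 0 && u01(rng) < w / r.wSum) r.index = i;
        }
        if (r.index >= 0)
            r.W = r.wSum / (r.M * pHatVoxel(lights[r.index], cx, cy, cz, sigmaExt));
        // Trace a shadow ray from the voxel center to lights[r.index] before trusting it.
        return r;
    }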

Another clear advantage of using ReGIR is that as we update those samples each frame, dynamic lights will make it into those voxels and will be discarded if they either disappear or are too far from the voxel. Also the samples within a voxel are independent of the camera movement, to a certain degree as we will discuss later on, and can be resampled each frame to bring in high quality and stable results. This is just an overview of why we want to use ReGIR, for more details on how it can be implemented check Chapter 23 of Ray Tracing Gems Vol 2 [3].

One thing that ReGIR doesn't specify and that we get to choose is the kind of grid that we will use, for that I found the adaptive hash-grid that Guillaume Boissé describes in his paper [4] quite convenient, voxels are small close to the camera and get larger as they are farther away, of course moving the camera changes the size of voxels.

[Figure: Adaptive hash-grid]

This means we get greater detail up close and coarser the farther we are from the camera, using our shadow rays where they matter most.

Also, if you think about our transmittance function, we get exponentially less light the farther we travel along the ray, so we can adjust the size of each step based on the size of the voxel we are in, taking larger steps and reducing memory accesses and computation: adaptive ray marching, if you will. It would look something like this:

[Figure: Adaptive ray marching]
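
A minimal sketch of that idea: derive a voxel size from the distance to the camera (doubling it every few units, similar in spirit to [4]) and let the march step track it. The base size and growth distance are arbitrary numbers of mine.

    #include <cmath>

    // Voxel size grows with distance from the camera: fine up close, coarse far away.
    float voxelSizeAt(float distToCamera, float baseSize = 0.25f, float doubleEvery = 8.0f) {
        int level = (int)std::floor(std::log2(1.0f + std::fmax(distToCamera, 0.0f) / doubleEvery));
        return baseSize * (float)(1 << level);
    }

    // Adaptive ray marching: the step length matches the local voxel size.
    //   for (float t = 0.0f; t < S; t += voxelSizeAt(distToCamera + t)) {
    //       // jitter t within the step, look up the voxel, accumulate in-scattering
    //   }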

I told you there would be hacks. Now let's go back to the function we are trying to sample:

[Figure: In-scattering contribution]

Where Li is the incoming light and fρ is a phase function, which could be isotropic scattering or something like the Henyey-Greenstein phase function. So, using our ReGIR reservoirs within the voxel we are at, we create a new reservoir and stream them through it to sample this better p̂'(x) function using the following weight [5]:

[Figure: Weight to use in a new reservoir for sample x]

Where r_regir is our ReGIR reservoir and p̂'(x) is the function we are resampling. Once we do that, we can calculate the contribution weight W and use it along with our sample [5] to estimate the integral we need.

[Figure]

Recap, we will use an adaptive hash-grid that we will fill with ReGIR light samples, we will ray march through said grid and use the light samples to estimate the in-scattering of a sample on each step, the steps will increase in length proportional to the size of a voxel. How does it look?

[Figure: Fog ... voxels?]

Well, we can see the voxels and it doesn't look very nice. What are we missing? Jittering! And not only the distance within each step; we should also try to pull in neighboring voxels.

Does it improve?

[Figure: Final result, 1 SPP]

Looks much better, quality can be improved depending on how you fill the grid, in this particular, as you can tell, I'm skipping several voxels. It gives the volume a look and feel of a heterogeneous volume, so I'm gonna leave it as is and call it artistic choice.

Will a phase function work correctly? Yes.

[Figure: Different values for the asymmetry factor (g)]

Also adjusting the extinction coefficient.

[Figure: Different values for the extinction coefficient]

Denoising? 3x3 Gaussian blur ¯\_(ツ)_/¯(Didn't use it for most of the screenshots though).

What are the downsides of this technique? Since visibility only tells us whether the light samples are visible to the voxel, and a voxel can be big, we don't get those sharp god rays/light shafts/etc., it all looks soft. To fix that, I guess you could substantially reduce the size of the steps when ray marching and actually shoot a shadow ray to check for visibility. Also, this only covers direct lighting; indirect lighting within the volume is not covered here. Aaand jittering might bring in light leaking.

Is it all worth it? Here are some screenshots of how the whole thing looks.

[Figures: Dynamic lights! Those lasers are all independent emissive lights.]

Footnotes

[1] Ray Tracing Gems, Chapter 28 "Ray Tracing Inhomogeneous Volumes"
[2] Monte Carlo methods for volumetric light transport simulation
[3] Ray Tracing Gems II, Chapter 23 "Rendering many lights with grid-based reservoirs"
[4] World-Space Spatiotemporal Reservoir Reuse For Ray-Traced Global Illumination
[5] Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
[6] How to add thousands of lights to your renderer and not die in the process

]]>
<![CDATA[ReSTIR for Virtual Reality]]>http://www.zyanidelab.com/restir-for-virtual-reality/62e7f1baf6035229a712af40Tue, 02 Aug 2022 17:03:06 GMT

Would VR benefit from ray tracing? Can you imagine immersing yourself in a virtual world that is indistinguishable from reality? How hard is it to render stuff in VR anyway? Well, VR headsets come with 2 displays, one for each eye, with pretty high resolutions that can go up to 3840x2160 (that is a PiMAX 8K), and if that's not enough, the refresh rate is 90 Hz. So huge frames at crazy high refresh rates, all ray traced, terrible situation. Do we even stand a chance?

I believe so. I'll present an idea I've had for a while that leverages ReSTIR's [2][3][4] powers and saves on our precious ray budget. It all started when someone pointed me to an existing VR mod for JKII [1] and asked if my port would include VR capabilities, so it's been in the back of my head for a while.

Disclaimer: I currently do not own a VR headset, and it's going to take me a while to get to the point where I could realistically start working on implementing VR, so what follows is highly theoretical. It works in my head, but I wanted to share this idea as it might help someone, or might open the discussion to get to an actually good solution. There is nothing really new here per se, just an application of already known stuff.

Of course, the first thing we need to keep in mind is that we need two cameras in our world, one for each eye, and that we need to render a frame for each. We could generate each one independently, but that would be pretty expensive. Now, check the following image:

[Figure: Left and right eyes (simulated)]

As you can see, both images are pretty close to each other, they are almost identical, eyes tend to be close to each other, can we leverage that? Definitely, we can reuse a lot of the work done for one eye on the other.

We'll use the right-eye frame as the source and we'll reuse its work on the left eye from now on; it can be done the other way around if you want.

First we can perform a normal ReSTIR pass on our right eye frame and store the resulting reservoirs in a buffer, at this point we are using one shadow ray per pixel. We could use ReSTIR El-Cheapo[7] to further save shadow rays. We also store the resulting radiance of the indirect illumination bounces, super easy for diffuse bounces, trickier for specular bounces, more on that later.

Then what we need is a way to map pixels from the right eye to the left eye, something that tells us that a pixel at some coordinate on the right eye can be found at some other coordinate on the left eye, or not at all if it is occluded. Since we could consider that we are moving the camera a little to the left, we could just calculate motion vectors from one eye to the other. There are some interesting technologies like MVR [5] that help speed up view position calculations; I don't know if they could be used in this particular scenario to compute said motion vectors, or if there is something similar that does. Anyway, once we calculate those motion vectors, for each pixel on the left we can find the reservoir that was calculated on the right and shade with it, no shadow rays or other sampling, just like that. We can also add the contribution of the indirect illumination bounces: for diffuse no tracing is needed, for specular bounces, again, things get trickier. It would look something like this:

[Figure: 1) Calculate direct illumination on the right eye using ReSTIR 2) Calculate motion vectors from the right eye to the left eye 3) Use the reservoir from the right eye on the left eye]
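
A sketch of the mapping step: reproject the world position seen by a right-eye pixel into the left eye's clip space to find the pixel whose reservoir we want to reuse, plus a depth comparison to detect disocclusion. This is just the standard reprojection recipe, nothing specific to [5]; the float2/float3/float4 types, the mul() helper and the leftDepth() lookup are assumed.

    #include <cmath>

    // Returns false if the point lands off-screen or is occluded in the left view;
    // those pixels fall back to the full ReSTIR path described below.
    bool mapRightToLeft(const float3& worldPos, const float4x4& leftViewProj,
                        int width, int height, float2& leftPixel) {
        float4 clip = mul(leftViewProj, float4(worldPos, 1.0f));
        if (clip.w <= 0.0f) return false;                    // behind the left camera
        float2 ndc = { clip.x / clip.w, clip.y / clip.w };   // [-1, 1] range
        leftPixel = { (ndc.x * 0.5f + 0.5f) * width,
                      (1.0f - (ndc.y * 0.5f + 0.5f)) * height };
        if (leftPixel.x < 0 || leftPixel.x >= width ||
            leftPixel.y < 0 || leftPixel.y >= height) return false;
        float reprojDepth = clip.z / clip.w;
        // Disoccluded if the left depth buffer holds something closer at that pixel.
        return std::fabs(reprojDepth - leftDepth(leftPixel)) < 0.01f;
    }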

Alright, it seems we are almost done, but we have some disoccluded pixels, what do we do about those? Full process is needed, we need to keep an extra ReSTIR temporal buffer for the left eye to ensure high quality. In the worst case scenario the left eye is looking straight at a wall while the right eye is looking down a hall, you know, the user peeking around a corner.

[Figure: Worst case scenario that would need a lot of ReSTIR on the left eye]

Reusing reservoirs has several advantages, for one we are reusing a lot of computation and saving our valuable rays, but another very important advantage has to do with user perception of the scene. I wasn't really aware of this issue until a question was posted at the NVIDIA DEV forums[6], basically if we were to render each eye independently we could end up with slightly different lights being selected and that can ruin perception, and it's even worse with specular stuff, different view vectors for each eye. So light reuse can really help here, but still, if we do all the sampling from the right eye's perspective, specular stuff will look great on the right eye, but not so much on the left eye, we need to do something.

Specular lobe, we meet again....

Let's (attempt to) tackle this specular issue for both direct and indirect lighting.

When it comes to direct lighting, ReSTIR [2] requires that we establish a target function, which can be suboptimal. It could be something like:

[Equation: p̂(y) = ρ · Li · (N·L)]

Where ρ is the BRDF, Li is the incoming radiance from our sample, N is the surface normal and L is the sample direction vector. Inside the BRDF we have a specular part and a diffuse part, and since specular depends on the view vector, which one should we use, right eye or left eye? What if we use both? We could calculate an average of both or take the max value of either lobe. That way we are representing both specular lobes in our p̂(y).

[Figure: Proxy specular functions]
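
In code, the "use both view vectors" idea could be as simple as the sketch below: evaluate the specular term once per eye inside p̂(y) and keep the max (or the average). The evalDiffuse/evalSpecular calls stand in for whatever BRDF model you use; float3, dot() and luminance() are assumed.

    #include <algorithm>

    // Target function p_hat(y) shared between both eyes: the diffuse term is
    // view-independent, the specular term takes the max over the two view vectors.
    float pHatBothEyes(const float3& Li, const float3& N, const float3& L,
                       const float3& viewLeft, const float3& viewRight) {
        float diffuse = evalDiffuse(N, L);                       // view-independent
        float spec    = std::max(evalSpecular(N, L, viewLeft),
                                 evalSpecular(N, L, viewRight)); // or average them
        float nDotL   = std::max(dot(N, L), 0.0f);
        return luminance(Li) * (diffuse + spec) * nDotL;
    }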

This should give us a better chance of having specular evenly distributed across both eyes. I guess. This needs some serious testing.

What about indirect lighting? Can we reuse a specular sample as we did with diffuse? It could be doable as long as the sample falls within the specular lobe of both eyes, if not, well, we need to generate a new sample. The higher the specularity of a surface the less likely a sample can be reused on the other eye, it depends on specularity and distance to the eyes. I wonder if sampling where both specular lobes intersect would be beneficial in this instance.

[Figure: Specular samples cannot easily be reprojected on the other eye, just check the reflections on the floor.]

All that we've talked about works fine on one card, but since VR usually means two cards in SLI/Crossfire, how do we share stuff? It would be wasteful to have one card render one eye and have the other wait for the resulting reservoirs. It could be done as Q2RTX does, and JK2/3 do, that is, render half of the frame in one card and the other half in the other, that way they can run in parallel, stay independent and not wait for resulting reservoirs or indirect light samples.

Remember that using upscaling tech might be needed, so don't be shy and use DLSS / FSR / XeSS or whatever you have at your disposal.

So here it is, I hope this is feasible and that I can implement this in the future so we can have some ray traced Jedi Outcast/Academy VR goodness.

Footnotes

[1] Jedi Knight II: Jedi Outcast VR
[2] Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
[3] Rearchitecting Spatiotemporal Resampling for Production
[4] How to add thousands of lights to your renderer and not die in the process
[5] Turing Multi-View Rendering in VRWorks
[6] Specular flickering with RTXDI in VR
[7] ReSTIR El-Cheapo

]]>
<![CDATA[ReSTIR El-Cheapo]]>http://www.zyanidelab.com/restir-el-cheapo/6271bf4583cd36d2bf63f7ddWed, 04 May 2022 05:47:36 GMT

Want top-quality direct illumination? Are you constrained by a very tight ray budget? Are you on a keto diet and carbs are calling? Don't worry about the first two; in this short blog post I describe a simple yet powerful idea that can help you achieve really good results while cutting both the needed ray budget and the reservoir storage in half. This is based on the paper entitled Rearchitecting Spatiotemporal Resampling for Production [1], which I will refer to as The Paper from now on. About the carbs, I can't help, they are delicious.

Quick recap, ReSTIR allows us to have high quality samples of lights, one for each pixel, first we sample lights using unshadowed path contributions, then we combine the new samples with the previous frame reservoir and surrounding pixel reservoirs. After all that sampling and combining, we will have a light which we check for visibility, basically we trace a shadow ray, and if it is visible, we use it to shade our pixel. If none of that made sense or you need more info, you can check my other post describing the whole process in more detail here [2].

The Paper makes an interesting observation about that shadow ray that we need to trace, specifically when we are combining reservoirs from neighboring pixels. If they are close to each other, we could forgo tracing the ray .... wait, what? why? Think about it, visibility has been determined by that other pixel, it did the tracing and since it is pretty close we can trust it and just shade the pixel. Of course, closeness is a must, otherwise light leaking would be noticeable. Yes, this method will introduce bias :(

So how can we use that observation to our advantage? We can forgo tracing half of the shadow rays if we use checkerboarding. Half of the pixels go through the whole ReSTIR process; they are responsible for introducing new light samples and tracing shadow rays, you know, breadwinners. The other half use a simplified version of ReSTIR: they take a neighboring pixel's reservoir as their own temporal reservoir and combine it with other neighbors' reservoirs, basically freeloaders. These freeloaders do not trace a shadow ray and of course do not sample new lights, and their reservoirs are discarded since they can't be trusted, we didn't check for visibility. And this is where storage is cut in half.

[Figure: ReSTIR El-Cheapo]
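
The split itself is trivial. Here is a self-contained sketch of the pixel classification, including flipping the pattern every frame so that each pixel gets to be a breadwinner every other frame; the flipping is my own assumption, not something The Paper prescribes.

    #include <cstdint>

    // Checkerboard classification: breadwinners run full ReSTIR (new light samples,
    // temporal + spatial reuse, one shadow ray); freeloaders only resample nearby
    // reservoirs and skip both the shadow ray and the new samples.
    bool isBreadwinner(uint32_t x, uint32_t y, uint32_t frame) {
        return ((x + y + frame) & 1u) == 0u;   // flip the pattern each frame (my choice)
    }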

As you might have guessed, those freeloaders must combine reservoirs from really close pixels to avoid light leaking, or at least to minimize it. I've called the resulting algorithm ReSTIR El-Cheapo.

There is one key advantage of this method over upscaling. Despite the fact that I'm calling half of the pixels freeloaders in reality they are resampling those reservoirs based on their individual characteristics, meaning they are using their particular geometry and material to choose a light and once chosen they are shaded individually, yielding high quality images.

[Figure: ReSTIR El-Cheapo, no denoiser (1 SPP)]

Could something similar be done in a quarter resolution fashion? Like have one quarter of breadwinners and 3 quarters of freeloaders? Let's call it ReSTIR El-Very-Cheapo, the pixel diagram would look something like this:

[Figure: ReSTIR El-Very-Cheapo]

1/4 of the shadow rays, 1/4 of the needed storage. Interesting, but of course the resulting image is noisier and the checkerboarding is more noticeable; there are also fewer new samples being introduced into our reservoirs.

[Figure: ReSTIR El-Very-Cheapo, no denoiser (1 SPP)]

Maybe the whole thing could be fine tuned to minimize all that noise. To be honest this whole El-Very-Cheapo idea occurred to me while I was drafting this blog post, so let's say it's very new, fresh out of the oven ... mmm, carbs.

The following image compares 3 ReSTIR implementations; I've added vanilla ReSTIR with all its shadow rays and whatnot. You'll notice that there's some loss of detail and that shadow edges look blockier the cheaper we go.

[Figure: Comparison of different ReSTIR implementations]

Wait a second, all the screenshots are from Q2RTX, why not Jedi Outcast? What's going on? ... 🙊

Footnotes

[1] Rearchitecting Spatiotemporal Resampling for Production
[2] How to add thousands of lights to your renderer and not die in the process

]]>
<![CDATA[ReSTIR GI for Specular Bounces]]>http://www.zyanidelab.com/restir-gi-for-specular-bounces/62450753cc879bc7690b388aTue, 12 Apr 2022 23:44:22 GMT

When it comes to multi-bounce global illumination (GI), how important is it to choose the best direction to shoot rays? Since we have a tight ray budget in real-time rendering, we must make the most out of those rays. In this post I'm aiming at describing an idea to improve the efficiency of said rays, specifically when sampling the specular lobe of rough materials. I'm basing my ideas on an algorithm called ReSTIR GI [1]; I'll summarize some concepts of said paper in the next paragraphs and build on top of that. BTW, I'll be using some other concepts like RIS and ReSTIR; if you are not familiar with them, you can check [3], an article where I introduce said concepts.

[Figure: Ground truth (10k SPP)]

This is the scene I'll be using first to explain ReSTIR GI. There are two bright lamps on top of that door and illumination to the building facade comes from the light reflected off the floor.

Path tracing dictates that when it comes to deciding where to direct a bounce ray what we usually do is  either select the specular or diffuse lobe and importance sample that BRDF and shoot a ray, right now we will focus only on the diffuse lobe. If we hit a surface we calculate the effect said bounce would have on light and then we shoot another ray from that surface following the same procedure, so on and so forth until we hit a light or we decide we had enough and terminate that path and start all over again. How effective is that?

[Figure: Path traced BRDF sampling (1 SPP)]

Doesn't look very promising. To improve things, we can use Next Event Estimation, that is, at each bounce we look for a light, calculate its contribution and propagate that back through our path. How does that look?

[Figure: 1 Bounce BRDF Sampling + NEE (1 SPP)]

Better, but still, every time we sample the diffuse lobe the ray can go pretty much in any direction: the sky, the mountain in front of the building, some other part of the building, and we don't get a whole lot of light from those bounces, if any at all. Even if the ray goes in the right direction, that is, the floor, and we get a good amount of light, the next frame will end up following a different path and maybe not getting a whole lot of light back. Could we reuse a path in a future frame if we found it has a good amount of light? Even better, could we share that info with some neighboring pixels so they could also follow that path? Of course we can, if we use ReSTIR [2][3].

Unlike original ReSTIR, we won't be sampling and storing lights, we will store paths, and I can't emphasize this enough, we will be dealing with lambertian diffuse scattering, I'll explain why later on.

So we generate a new sample by: 1) sampling the Lambertian diffuse BRDF to get a direction, 2) shooting a ray in said direction, 3) getting the outgoing radiance at the surface we just hit (Lo).

[Figure: How to generate ReSTIR GI samples]

That's it. We just need to come up with a suitable RIS weight that can represent our path and that we can stream through our reservoir. A RIS weight looks like this:

[Figure: RIS weight]

What do we plug where? p(y) can be the PDF of sampling our BRDF, whether it is uniform sampling or cosine-weighted sampling.  p̂(y) can be:

[Equation: p̂(y) = ρ · Li · (N·L)]

Where ρ is the BRDF, Li is the incoming radiance from our sample (We call it Lo when leaving the surface we hit), N is the normal and L is the sample direction vector.

Now that we have our sample, we can stream it in our reservoir and enjoy all the ReSTIR goodness, temporal and spatial reuse.  

[Figure: ReSTIR GI spatio-temporal resampling]

What are the results?

[Figure: 1 Bounce ReSTIR GI + NEE (1 SPP)]

Much better, it's like the pixels know they should be shooting rays at the floor. Care must be taken when performing spatial reuse: you will need to calculate a Jacobian determinant to account for the geometric differences. Said equation and a great explanation are in the ReSTIR GI paper, section 4.3 [1].

Why did I begin with Lambertian? Think about it: the view vector has no influence on our samples, so we can move the camera around and everything (reservoirs, weights, paths) stays the same; reusing paths is easy. No stress.

Specular could benefit from ReSTIR GI too, even more so when dealing with low specularity, aka rough materials that have wider/bigger specular lobes and can have rays going in more directions. The next scene is what we will be working with:

[Figure: Gold pillars of different roughness (10k SPP)]

100% metallic gold pillars, arranged in rings of different roughness (20%, 40%, 60%, 80%, 100%). The light source is behind the camera and is partially occluded by a wall, so those pillars are lit indirectly.

Here's what happens when we combine BRDF sampling and NEE as we did earlier.

[Figure: 1 Bounce BRDF Sampling + NEE (1 SPP)]

Not great results, can't we just use ReSTIR GI as we did before? Specular is more problematic, it is influenced by the View vector. You have a lot of light coming from one direction and then you change the view vector and it can drastically change, even go to zero, how bad can that be?  Take a look again at our RIS weight:

[Figure: RIS weight]

We could plug the same equations, namely p(y) can be the PDF of sampling our specular BRDF and p̂(y) can be (ρ)(Li)(N.L) as we did before, however both equations are dependent on the View vector. Change it and both equations yield completely different results. So how do we tackle that? Do we reweigh our temporal reservoirs once the camera has moved a little bit? Do we discard them and start all over? If we constantly discard reservoirs, why bother with ReSTIR then? How do we resample spatially since the view vector is going to be slightly different for each reservoir? Do we need another jacobian?

[Figure: Effect of the view vector on a specular lobe]

My head hurts, stupid view vector, I guess we could tackle these issues head on and concoct some clever math or ... dance around it all. Guess which one I've chosen, so put on your dancing shoes because that's how we will do it.

One important characteristic that RIS has is that it can be used iteratively [4], so we can begin by having a sub-optimal p(y) that we use to sample an ok-ish p̂(y) and in turn we can use that to sample a better p̂'(y), so on and so forth. Also we can use a cheaper function as p̂(y), something like Phong instead of GGX. We can use these ideas to defang the view vector.

Let's begin by choosing a sub-optimal p(y), we need a sampling strategy whose PDF value remains the same despite the View vector changing, and that super-power belongs to the humble uniform sampling. Wait, what? Yes, if we uniform sample the specular lobe the view vector can move around and p(y) of our sample will not change, provided said sample is within the specular lobe. This is great even for spatial resampling since the view vector will be different at neighboring pixels. Where can we get such an odd tool? I came up with a strategy to uniform sample a specular lobe [5]. Now that post makes sense.

We have a p(y) that doesn't change but p̂(y) still does, can we come up with another function that is not affected as much by the view vector? Let's look at p̂(y) again:

[Equation: p̂(y) = ρ · Li · (N·L)]

It is only the BRDF part that changes when the view vector changes, neither N.L nor Li do. Hmmm, the ReSTIR GI paper suggests using Li as p̂(y), of course it is a sub-optimal function but it is unaffected by the view vector and even by the current surface normal so it will get the job done. Plugging that simple p̂(y) into our RIS weight means that our reservoirs do not change when the view vector changes.

[Figure: Uniform sampling a specular lobe]

So spatio-temporal resampling is much easier since the view vector has less impact on our reservoirs, we will call these uniform reservoirs. Of course we must check if the sample is in the specular lobe, if not then we reject it, I know I've written this like 3 times, it's very very important.

Does it all even work? Here's the result.

[Figure: ReSTIR GI (Diffuse + Specular) + NEE (1 SPP)]

Not bad, are we done? Not yet: both functions are sub-optimal, since they will over-sample the edge of the specular lobe and under-sample the center, bringing in some bias. The previous image doesn't really show it, but I've got a new one, and this is what the bias looks like:

[Figure: Bias introduced by our sub-optimal functions.]

At the right of the image I just have indirect illumination showing. You can notice how the reflections on the pipes are somewhat gone, they are under sampled. How do we fix that? We can use RIS iteratively and sample a better  p̂'(y) using our uniform samples. This new p̂'(y) function could use a real specular BRDF and N.L, the RIS weight would look like this:

[Figure: Improved RIS weight]

Where p̂'(y) is our new function with a specular BRDF, W is the Weight from a uniform reservoir. Having the new RIS weight, we can stream it through a new reservoir and the following would be our estimator:

[Figure: Improved estimator]

Where p̂'(Xi) is our new function with a specular BRDF, Xi is the sample in our uniform reservoir, Wi is the Weight from our uniform reservoir.

The thing is that we can't just use this new RIS weight in all our reservoirs and store it; that would bring back the effects of the dreaded view vector. How can we use it then? When performing the spatio-temporal resampling we could use two reservoirs: one where we just use our sub-optimal p̂(y), which we store for future use, and another, disposable one where we sample our better p̂'(y); this last one is the one we use when shading.

[Figure: How to use our Improved RIS estimator]
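
Here is a rough, self-contained sketch of that two-reservoir dance during resampling: the stored reservoir keeps being driven by the view-independent p̂ (Li only), while a throwaway reservoir re-weights the same candidates with the view-dependent p̂' and is the one used for shading. The struct layout and callbacks are simplified stand-ins; M bookkeeping from the source reservoirs and the in-lobe rejection test are omitted for brevity.

    #include <random>
    #include <vector>

    struct Candidate { int id; float W; };              // sample id + its uniform-reservoir weight
    struct Picked    { int id = -1; float wSum = 0, M = 0, W = 0; };

    void pick(Picked& r, int id, float w, float u) {    // one weighted-reservoir update
        r.wSum += w; r.M += 1;
        if (r.wSum > 0 && u < w / r.wSum) r.id = id;
    }

    template <class PHat, class PHatView>
    void resampleSpecular(const std::vector<Candidate>& cands, PHat pHat, PHatView pHatView,
                          Picked& stored, Picked& shading, std::mt19937& rng) {
        std::uniform_real_distribution<float> u01(0.0f, 1.0f);
        for (const Candidate& c : cands) {
            pick(stored,  c.id, pHat(c.id)     * c.W, u01(rng));   // weight = p_hat  * W
            pick(shading, c.id, pHatView(c.id) * c.W, u01(rng));   // weight = p_hat' * W
        }
        if (stored.id  >= 0) stored.W  = stored.wSum  / (stored.M  * pHat(stored.id));
        if (shading.id >= 0) shading.W = shading.wSum / (shading.M * pHatView(shading.id));
        // Shade with f(shading.id) * shading.W; write only 'stored' back to the history buffer.
    }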

Here are the results:

[Figure: Improved RIS estimator]

Reflections look much better.

My implementation uses separate reservoirs for the diffuse and specular lobes; they are selected at random with a 50% chance, unless the material is 100% metal, in which case it always goes specular. If the temporal reservoir is valid, it randomly decides to refresh that reservoir or to get a new sample, and the result is used to calculate light contribution and throughput. After that, spatio-temporal resampling happens and the result is stored for the next frame. I'm not using ReSTIR GI for materials with high specularity, nothing below 25% rough; there I just use BRDF sampling. When a sample runs into a light, I shade the result and discard it; there is no point in saving the reservoir, ReSTIR DI takes care of direct illumination quite well. You'll notice that this is a pretty watered-down implementation compared to the one described in the ReSTIR GI paper; of course that brings in noise, but I'm trying to keep interactive frame rates and I'm a hack.

[Figure: ReSTIR DI + GI, no denoiser (1 SPP)]

For future work I've been thinking about testing this algorithm in a more serious renderer, maybe Falcor. Also, there might be value in resampling specular reservoirs when dealing with the diffuse lobe, which could be particularly helpful on disocclusion.

ReSTIR GI works on screen space, how could we connect to more interesting paths that are off-screen? I have ideas, stay tuned.


Footnotes

[1] ReSTIR GI: Path Resampling for Real-Time Path Tracing

[2] Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting

[3] How to add thousands of lights to your renderer and not die in the process

[4] Part 1: Rendering Games With Millions of Ray Traced Lights

[5] Uniform Sampling Phong BRDF

]]>
<![CDATA[Uniform Sampling Phong BRDF]]>http://www.zyanidelab.com/uniform-sampling-phong/6230f1fad5d92d87292a62eaWed, 16 Mar 2022 05:17:51 GMT

Why would anyone need this? Phong is one of those things that refuses to go away.  It has been patched to work somewhat as a BRDF [1] and despite the fact we have physically based specular lobes that look much better we keep going back, I guess it is nostalgia, revisiting those good old days when a render required a real camera and building a cornell box out of plywood. Another reason might be that it is also super easy to use and yields good results.

When it comes to path tracing, we are always looking for better sampling techniques, we importance sample everything we can to minimize the computational cost and Phong is no exception[2] so uniform sampling feels like a regression, like doing things in the worst possible way, and I could not agree more.

Soooooo , why do this? Why bother with something worse? I swear I'm up to no good. I did look around for this and as I couldn't find a solution I tasked myself with deriving one.

If you made it all the way here, let's get to it, the stone age is waiting.

We'll be using the Phong BRDF specular lobe [1], here's the equation we'll be working with:

[Figure: Phong BRDF specular lobe]
  • s shininess, the higher the number the sharper the specular reflection looks
  • V view vector
  • R reflected vector of L

If we plot that dot product, which happens to be a cosine, using different shininess values we get the following:

[Figure: Different shininess values]

The higher the shininess, the steeper the curve gets, and it seems to reach zero faster; however, in reality the equation is only zero at ±π/2, everywhere else it has a value, albeit a very small one. So we could just uniform sample the hemisphere and be done. Cool, let's go do something else.

Wait, what if we uniform sample the interesting part, you know, where the specular highlight is, the specular cap if you will? This would be somewhat of a mixture of sampling strategies, we find the important part, aka the specular cap and uniform sample that. Let's try that strategy.

[Figure: Spherical Cap [3]]

We are interested in the blue part, where the specular action is happening, here are the equations we can use to uniform sample a spherical cap[4].

[Figure: Uniform sampling equations for a spherical cap]

Where u, v are random numbers in the range [0,1) and cosθmax is the cosine of the maximum angle θmax, i.e. the edge of the cap. That value is kind of intriguing, how are we going to determine it? It must be related to shininess somehow. I thought about this a lot, thinking of using derivatives or other tricks, but then, looking at the existing equations, I realized that we can pull that value from the importance sampling equations [2]. The cosθ equation looks something like this:

[Figure: This is the cosθ importance sampling equation.]

And if we plug a very small value in u, say 1e-10 , we will get a cosθ right where things start to get interesting. Hence the equation we are looking for is:

[Equation: cosθmax = ε^(1/(s+1))]

Where ε is a very small number, 1e-10 for example. Let me show you what cosθmax looks like when different shininess values are used:

[Figure: cosθmax represented by the red line]

Now that we have cosθmax, what does uniform sampling look like when plotted on a sphere?

[Figure: Top row: Importance Sampling, Bottom row: Uniform Sampling]

At this point we have almost everything we need to test this idea. We just need a PDF and we are good. The PDF would be 1 / cap area since we are uniform sampling that area and the PDF equation is [3]:

[Equation: pdf = 1 / (2π(1 - cosθmax))]

Alright, if there is something I've learned, it's that providing the Monte Carlo estimator is the right thing to do.

[Figure: Monte Carlo estimator]
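
Putting the pieces together, here is a self-contained sketch: compute cosθmax from the shininess, uniformly sample the cap around the reflection vector R using the equations above, and return the constant PDF for the estimator. The basis-building around R is the usual orthonormal-frame trick; variable names are mine.

    #include <algorithm>
    #include <cmath>

    struct Dir { float x, y, z; };

    // cos(theta_max): the Phong importance-sampling equation evaluated at u = epsilon.
    float cosThetaMax(float shininess, float eps = 1e-10f) {
        return std::pow(eps, 1.0f / (shininess + 1.0f));
    }

    // Uniformly sample the spherical cap of extent cosTMax around R (unit vector).
    // pdf = 1 / (2*pi*(1 - cosTMax)) is constant over the cap.
    Dir sampleCap(const Dir& R, float cosTMax, float u, float v, float& pdf) {
        const float kPi = 3.14159265358979f;
        float cosT = 1.0f - u * (1.0f - cosTMax);        // uniform in [cosTMax, 1]
        float sinT = std::sqrt(std::max(0.0f, 1.0f - cosT * cosT));
        float phi  = 2.0f * kPi * v;
        // Orthonormal basis (t, b, R) around the reflection vector.
        Dir a = (std::fabs(R.x) > 0.9f) ? Dir{0, 1, 0} : Dir{1, 0, 0};
        Dir b = { R.y * a.z - R.z * a.y, R.z * a.x - R.x * a.z, R.x * a.y - R.y * a.x }; // R x a
        float bl = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
        b = { b.x / bl, b.y / bl, b.z / bl };
        Dir t = { b.y * R.z - b.z * R.y, b.z * R.x - b.x * R.z, b.x * R.y - b.y * R.x };  // b x R
        pdf = 1.0f / (2.0f * kPi * (1.0f - cosTMax));
        float sx = sinT * std::cos(phi), sy = sinT * std::sin(phi);
        return { sx * t.x + sy * b.x + cosT * R.x,
                 sx * t.y + sy * b.y + cosT * R.y,
                 sx * t.z + sy * b.z + cosT * R.z };
    }

    // Monte Carlo estimator: average f(sampled direction) / pdf over many samples.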

Now we have it all. I plugged these equations into my mighty turtle-tracer to see how things look. (Note to self: I must finish porting it to Vulkan.)

[Figure: Top row: Importance Sampling, Bottom row: Uniform Sampling (5000 spp)]

So as expected, it's noisy and it takes longer for it to converge, remember, uniform sampling. If we add more samples, it gets there, somewhat.

[Figure: Left: Importance Sampling, Right: Uniform Sampling (30k spp)]

So here it is, it works! Kinda....

Just a note, while I was preparing this blog post I noticed that Ray Tracing Gems, Chapter 16[2] has a method to sample Direction in a cone, which looks a lot like what we just did here, and guess what? I tried those equations and they work as well.

[Figure]

Footnotes

[1] Using the Modified Phong Reflectance Model for Physically Based Rendering
[2] Ray Tracing Gems Chapter 16
[3] Spherical cap
[4] Points on surface of spherical cap

]]>
<![CDATA[How to add thousands of lights to your renderer and not die in the process]]>http://www.zyanidelab.com/how-to-add-thousands-of-lights-to-your-renderer/622a7b1abae361469b5121b4Sun, 13 Mar 2022 01:43:26 GMT

How many lights are too many lights in a scene? What if they are dynamic? What if we want them all to cast shadows? This question was answered in a recent paper from the University of Google Search and reveals that you can't have that many. It requires a lot of very clever rendering tricks (that for some reason always involve baking) but basically not all of those lights will cast shadows, some might light some objects in the scene and miss others, some might just look like a light and in the end emit no light at all and also there's light leaking and other issues, careful scene tuning is needed to make everything look good.

Can we do better? Can we free ourselves of these shackles and run freely on the prairie of physically correct dynamic lighting? Can we have our cake and light it correctly while we eat it? The answer to these questions is a resounding YES. How, though? Well, we will rely on path tracing, because that's what we do here. Wait a second, isn't that like crazy slow? Don't throw that cake at me just yet, we will do it all and at interactive frame rates. Bear with me while I explain how to get to the solution. This post builds on the idea of Direct Lighting, also called Next Event Estimation; you can check Peter Shirley's blog for an excellent explanation [1].

[Figure: Reference Image (2,000 Samples per pixel)]

This is the scene we will be working with. This image was rendered using 2,000 samples and the whole map has 6025 lights (I have not added them all yet, there will be more), of course this render took several seconds. How are we going to render this scene at interactive frame rates, like 60 FPS or higher (20 FPS if you grew up playing N64)? It sounds like a nightmare, but do not despair. One of the very first things that we can do is create smaller lists of lights based on which area within the map they are in and what other areas they could potentially light. Like knowing which room a light is in and which other rooms this room is connected to. Doing that we know this area has 222 possible lights. It still looks like an unwieldy number but we have some options, let's explore some.

We could simply use them all at each pixel, calculate their contribution and add them all up, of course we can skip many of them that are facing the opposite direction of a surface or that are behind said surface. However we'd still end up with quite a big number and calculating the contribution of each light would definitely bring the GPU down to its knees, the coil whine would be unbearable. On top of that there is another problem, what if some lights are behind a door or on the other side of a wall or occluded by an object? If we ignore this issue we will end up with light leaking:

[Figure: No shadow rays were used :(]

Can you notice how there is light coming from who knows where? Seems to come from under the walls, from under the door, shadows are gone, definitely looks really bad ... and familiar if you've been gaming for a while ... how do we tackle this? We use what are called shadow rays to test for visibility.

[Figure: Shadow rays]

What we do is we trace a ray from the pixel we want to light toward a point in the light. If it doesn't intersect anything we can consider that the light is visible and we can confidently calculate that light's contribution. If the shadow ray intersects something, well, it is not visible (occluded) and we ignore that light.  So going back to our idea of using all lights we'd have to trace hundreds of shadow rays, one for each light. That would yield very good results however tracing rays is a very expensive operation, we don't get that many per pixel, so kiss the interactive frame rate good bye. Unfortunately our idea is a no go but we've gathered an important observation, we definitely need shadow rays, and since they are so expensive we can only check visibility for one or two lights if we want to keep a good frame rate.

Given that, we could select one light at random, use our very valuable shadow ray to check for visibility and if everything is ok, we calculate that light's contribution. How does that look? Here it is:

[Figure: 1 light sampled at random per pixel]

Ok, it looks pretty noisy, dark and far from good, but hey, frame rate is good. Hmmm, can we use that shadow ray in a smarter way? Yes, look at the scene again.

[Figure]

I've highlighted two areas (red and blue squares),  it's clear that each is illuminated by different lights, that there are more important, influential lights for each pixel, if we select those we will get better results. That technique is called Resampled Importance Sampling (RIS) [2]

[Figure: Resampled Importance Sampling]

Don't let the name scare you, we can make it work for us as follows: 1) we randomly select some lights from our list since we can't process them all, we call them samples, 2) run them through a function that closely resembles the light contribution on our pixel, which as you might imagine will help us select those lights that contribute more, and using that information 3) select one sample at random and trace our shadow ray, selection will favor those lights that contribute more to the pixel. Here's the result:

[Figure: RIS Direct lighting, 1 sample per pixel]

Much better, right? And actually this is the technique that Q2RTX uses [3], and to be honest that game's visuals are astonishing. We could stop here, I mean, we have what we wanted, physically based lighting with hundreds of lights running at an interactive frame rate. However it is still somewhat noisy and we want to handle even more lights, Jedi Outcast has waaaay more lights than Quake 2.

How can we do that? Think about what happens with RIS: in one frame it might find an awesome light, maybe the next frame it selects a not-so-good one, and maybe the frame after that a terrible one; all of that brings in noise. Could we somehow reuse samples from previous frames to improve the overall quality? Yes, we can use Reservoir-based Spatio-Temporal Importance Resampling (ReSTIR) [4].

How to add thousands of lights to your renderer and not die in the process
ReSTIR basic functionality

Ohh noes, another acronym, and this one looks scarier! Don't sweat it, this works by sampling elements from a stream. Wait ... what?!?! Think of it as a streaming service like Netflix: whenever you watch a movie your TV downloads some frames ahead of what you are watching, and once presented they are discarded (yes, I know I'm oversimplifying, but bear with me); your TV doesn't store the whole movie. In the same manner, we can think of a reservoir as the TV and those RIS light samples as frames in the stream. As we stream a sample through our reservoir, the reservoir randomly keeps it or discards it; it only holds one sample, along with how many samples it has seen and the probabilities and weights needed to properly use that sample. So we can stream our RIS candidates through the reservoir and get the same result as plain RIS. At this point you might feel cheated, like we are accomplishing the same thing with extra steps, but no, reservoirs hold a super power: they can be combined.
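
If it helps, here's roughly what such a single-sample reservoir looks like in code. This is a sketch in the spirit of the ReSTIR paper's pseudocode, with illustrative names (LightSample, Rng) rather than any particular implementation:

```cpp
// Sketch of a single-sample reservoir. LightSample and Rng are illustrative names only.
struct Reservoir
{
    LightSample sample;            // the one sample we currently keep
    float       weightSum = 0.0f;  // running sum of RIS weights
    int         count     = 0;     // how many candidates have been streamed through
    float       W         = 0.0f;  // contribution weight, resolved after streaming

    // Stream one candidate through the reservoir; keep it with probability
    // proportional to its weight.
    void update(const LightSample& candidate, float weight, Rng& rng)
    {
        weightSum += weight;
        count     += 1;
        if (weightSum > 0.0f && rng.nextFloat() < weight / weightSum)
            sample = candidate;
    }

    // After streaming, resolve W exactly as plain RIS would.
    void finalize(float targetPdfOfSample)
    {
        W = (targetPdfOfSample > 0.0f) ? (weightSum / count) / targetPdfOfSample : 0.0f;
    }
};
```

The chance of a new candidate replacing the stored one shrinks as weightSum grows, which is exactly what makes the reservoir behave as if the whole stream were still inside it.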

How to add thousands of lights to your renderer and not die in the process
ReSTIR temporal reuse

When we combine reservoirs, the resulting reservoir can be treated as if all the samples of the previous reservoirs had been streamed through it. How is this good for us? Let's go back to our pixel in the scene: we create a reservoir, stream RIS candidates through it and then save it. On the next frame we create a new reservoir and stream more candidates, but this time we also combine in the previous frame's reservoir, and the result is a reservoir with twice the samples! Next frame, even more samples, and so on and so forth; if we keep doing this we end up with a reservoir holding a very high quality sample. A sketch of the combine step follows, and after that, the result of doing this:
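
This sketch continues the Reservoir one from above and follows the shape of the ReSTIR paper's combine step; targetPdf() is a hypothetical stand-in for evaluating a candidate's unshadowed contribution at the current pixel:

```cpp
// Sketch of combining the current pixel's reservoir with the one saved from the
// previous frame (temporal reuse). targetPdf() is a placeholder, not a real API.
Reservoir combineReservoirs(const Reservoir& current, const Reservoir& previous, Rng& rng)
{
    Reservoir out;

    // Stream each reservoir's surviving sample, weighted as if all of the
    // candidates it has seen were re-streamed here.
    out.update(current.sample,  targetPdf(current.sample)  * current.W  * current.count,  rng);
    out.update(previous.sample, targetPdf(previous.sample) * previous.W * previous.count, rng);

    // The combined reservoir has effectively seen both histories. In practice the
    // previous frame's count is usually clamped (e.g. to around 20x the current
    // one) so stale history can't dominate forever.
    out.count = current.count + previous.count;

    out.finalize(targetPdf(out.sample));
    return out;
}
```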

How to add thousands of lights to your renderer and not die in the process
ReSTIR using the previous frame reservoir

Much, much better, but we can do even better. We can combine not only the reservoir from the previous frame but also those of the neighboring pixels, like copying somebody else's homework, just like in the old days. Those neighbors might have found a better light and we want that info. This can really get crazy, since the neighbors' reservoirs also carry previous frames' info, so in just a few frames each pixel ends up with the equivalent of thousands of samples. Good stuff. Note: care must be taken when asking a neighboring pixel for its reservoir, as that pixel might be facing another direction or sit at another depth; if you disregard these differences you might end up sampling lights that have nothing to do with the current pixel, and all sorts of evil things will be unleashed upon your frames: bias will take over, pixels will go on strike, the ghost of NaN past will visit you. Just be careful. A sketch of such a neighbor check is below and, after that, the result of asking neighboring pixels for their homework:
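
Here's a minimal version of the kind of neighbor-rejection heuristic the ReSTIR paper mentions; the thresholds (roughly 25 degrees for normals, roughly 10% for depth) are typical starting values, not gospel, and ShadingPoint is again a hypothetical type:

```cpp
#include <cmath>

// Sketch of a neighbor-rejection test for spatial reuse: skip neighbors whose
// orientation or depth differ too much from the current pixel.
bool neighborIsCompatible(const ShadingPoint& center, const ShadingPoint& neighbor)
{
    const float normalThreshold = std::cos(25.0f * 3.14159265f / 180.0f);

    // Reject if the surface normals point in noticeably different directions.
    if (dot(center.normal, neighbor.normal) < normalThreshold)
        return false;

    // Reject if the (linear) depth differs by more than ~10%.
    if (std::fabs(center.depth - neighbor.depth) > 0.1f * center.depth)
        return false;

    return true;
}
```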

How to add thousands of lights to your renderer and not die in the process
ReSTIR using previous frame and neighboring reservoirs

This looks pretty good: the noise has been greatly reduced and we got everything we wanted, a bunch of lights correctly lighting a path-traced scene at interactive frame rates, shadows included. Interestingly, all of this is achieved using just one shadow ray per pixel.

Are you ready to use ReSTIR? Check the paper linked below [4]; a bunch of math and very important details that I left out are there. Don't feel like reading all that and just want the goods? RTXDI uses this same principle, so go check that out. [5]

We've talked about direct lighting and how to greatly improve it; however, we need indirect lighting as well. Can we use a similar technique? Stay tuned for a future post.


Footnotes

[1] What is direct lighting (next event estimation) in a ray tracer?

[2] Importance Resampling for Global Illumination

[3] Ray Tracing Gems II, Chapter 47

[4] Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting

[5] RTX Direct Illumination (RTXDI)

]]>
<![CDATA[Ray Traced Rabbit Hole]]>http://www.zyanidelab.com/real-time-ray-tracing/5fa756855b7d7672330fbdc7Mon, 09 Nov 2020 02:18:01 GMT

Have you heard of those Nvidia RTX cards? If so, you've heard of their new technology that allows real-time ray tracing and, of course, you witnessed how badly they failed to convey how big of an achievement that is. How so, you may ask? Well, it's like trying to explain to someone not into cars why a V8 is so awesome: they won't get it and will refuse to pay for a more expensive car just to drive around and pick up groceries. Ray tracing is the same; unless you are into computer graphics it feels like an expensive gimmick. And since computer graphics are not my bread and butter, how did I end up going down this ray traced rabbit hole? Let me elaborate.

I've known about ray tracing, and that it is really hard, since a very young age, thanks to a demo of Bryce I pulled from a CD that came with a magazine. You know, those CDs that PC Magazine and others would bundle with the magazine to lure young minds, like mine, into buying their otherwise business-oriented, and boring, publication. I have mixed feelings about those CDs: sometimes they were damaged, sometimes demos wouldn't work, but every now and then you'd get something cool. I digress. Anyway, that program allowed you to create a 3D landscape and have it rendered, or more accurately, ray traced. It was mind blowing to see those landscapes come to life, albeit after an all-night rendering session on a mighty 133 MHz Pentium. I clearly remember something that looked like a scan line adding more and more pixels to the final image. Since then, ray tracing and long waiting times were firmly cemented together in my mind; add to that the reports of Pixar's render farms working non-stop for hours to render a single frame of Toy Story.

Around that time I had decided I wanted to write a game. I guess that's a dream a lot of teenagers, and some adults, share, and with that goal firmly set in my mind I went ahead and started learning computer graphics. Of course it felt like learning some sort of dark art, with tutorials written in an arcane language that only an archaeologist could decipher and that could only be mastered after spending more than a few years in Tibet. Of course, none of that had anything to do with my overall ignorance of computers and coding. Among those ancient scrolls I found Dentor's tutorial, which dealt with a bunch of stuff, but one of the things I clearly remember was 3D graphics, and there I was, writing a rasterizer. It was awesome to see such a thing. I wonder if I still have the source for it; I'm sure I'd find that code horrifying. I mean, I embarrass myself even today with older code, imagine something I wrote 20-something years ago.

Fast-forward 5 years and I had another brush with 3D graphics, this time OpenGL was to blame. Now I didn't have to write a rasterizer from the ground up; there was an API that would let me make use of the mighty NVidia GeForce 256 that my new rig had, and off I went. This time it was easier: I had more years of coding under my belt, I could do pointer arithmetic, I could tie my shoes and I could do more complex math. However, life caught up with me and I had to set computer graphics aside once again.

In 2018 I was watching the NVIDIA event on YouTube, the one where they unveiled their new graphics cards. I don't even remember why; all those years I had been pretty much apathetic about graphics. There was nothing new, just faster cards drawing more polygons at higher resolutions and higher frame rates; it all felt the same, the same trick just done faster. I was bored, I guess. Anyway, when they announced real-time ray tracing I could not believe what I was hearing. On one hand I was super excited, on the other I was skeptical: could they really do those Bryce landscapes in real time? Could we see Toy Story rendered on the fly? I wanted to believe, I wanted it all to be real, but then they started talking about Battlefield V and how it had real reflections, and I felt disappointed, just like when you see the burger on the menu, it looks delicious and tasty, you order it, and what comes back looks like a Big Mac. Don't get me wrong, that game is basically rasterized with a little bit of ray tracing going on, which in itself is great, just as a Big Mac is better than no burger, but what about a game that was completely ray traced? I thought to myself that it would take longer for such a thing to happen and moved on.

Enter Quake 2 RTX. Some weeks later I heard of it: A COMPLETELY PATH TRACED GAME. I could not believe the reports, or the videos on YouTube for that matter. I had to see it with my own eyes, and after 10 years of utter apathy I went ahead and bought a graphics card. It had to be a 2080 Ti; I knew I would need as much computational power as I could get to run that thing, it was real-time ray tracing after all. When I first ran the game I felt like that kid back in the 90's running Doom on that Pentium machine. It was amazing: all the reflections, all the materials, dynamic lights, refraction, sunlight correctly simulated.

Ray Traced Rabbit Hole
Ray Traced Rabbit Hole

And after so many years I found myself pulled back into the computer graphics hole.

]]>