Using more information from earlier samples #63

Open
tom-adsfund opened this issue Feb 15, 2022 · 3 comments

@tom-adsfund

I've been experimenting with up-weighting the earlier samples on the Gltf_Viewer demo. I did this because I noticed everything started from 0/black whenever the samples were reset, and also because, since we're sampling, probability and learning theory tell us that most of the information arrives in the earliest samples.

I would have liked to provide a video or demo code to illustrate, but because of the difficulty of running experiments (as explained in the issue about "More Abstractions"), I think the specific parameters I chose while experimenting by hand, and having to move the camera by hand with controls I'm not too familiar with, might cloud the main point.

But my experiments did show the kind of effect I was hoping for: instead of starting black, the image was essentially the right kind of color instantly. I did this with a simple linear up-weighting of the pixelColor (decaying to multiply-by-one), just for testing, and while experimenting by hand I also noticed some issues I wouldn't have guessed at (though they are fairly easy to understand once you see them).
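Roughly, the kind of linear up-weighting I tried looks like the sketch below. This is only illustrative; the uniform name, the boost value, and the ramp length are my own placeholders, not the demo's actual code:

```glsl
// Illustrative sketch only: up-weight the earliest samples, with the weight
// decaying linearly back to 1.0 so later samples accumulate normally.
uniform float uSampleCounter; // samples accumulated so far (placeholder name)

vec4 accumulate(vec4 pixelColor, vec4 previousColor)
{
	float boost = 4.0;       // extra weight given to the very first sample
	float rampLength = 16.0; // samples over which the weight decays back to 1.0
	float w = mix(boost, 1.0, clamp(uSampleCounter / rampLength, 0.0, 1.0));
	return previousColor + w * pixelColor;
}
```

A different up-weighting schedule would then just change how w is computed.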

For example, the up-weighting method I used creates different effects depending on the brightness of the scene, etc. With a proper set of abstractions it would be really good to be able to switch or interpolate between up-weighting schedules, and also to switch easily between materials, numbers of objects, and so on.

From my understanding of probability and learning theory, I think it's clear that the rendering could be made extremely fast. On the Tesla V100 it was easily running at 60 fps with a 1:1 pixel ratio for at least a 1080p frame.

@tom-adsfund
Author

This is an example:
https://user-images.githubusercontent.com/3634745/154076575-40c22b72-6ba3-4916-942f-23ac3c0927cc.mp4

Notice that the bottleneck is the 60 fps cap. With the up-weighting and no cap, you'd have real-time rendering.

@erichlof
Owner

erichlof commented Feb 16, 2022

Hey @tom-adsfund

Thank you for bringing the 'starting at black' issue to my attention. I have not thought about this piece of code for literally years. In all the non-dynamic, static-geometry scenes (which is most of them), I found long ago that whenever the camera starts moving after sitting for a while, I couldn't just blend with the previous frame anymore. This is because the previous texture is an unbounded ping-pong buffer that keeps accumulating, without averaging the results (as we must do in Monte Carlo simulations). The 'screenOutput' shader is the only one responsible for averaging the unbounded linear pixel colors by the number of samples, and it then applies tonemapping to bring all colors into the 0.0-1.0 rgb range for monitor output.
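In rough form, that screenOutput stage does something like the sketch below. This is only a sketch of the idea as described; the texture/uniform names and the particular tonemap are placeholders, not the exact repo code:

```glsl
// Sketch of the screenOutput idea described above (names are placeholders).
precision highp float;

uniform sampler2D tAccumulatedTexture; // unbounded ping-pong accumulation buffer
uniform float uSampleCounter;          // number of samples accumulated so far
varying vec2 vUv;

void main()
{
	vec3 accumulated = texture2D(tAccumulatedTexture, vUv).rgb;

	// Monte Carlo estimate: divide the unbounded sum by the sample count
	vec3 averaged = accumulated / max(uSampleCounter, 1.0);

	// Simple Reinhard-style tonemap to bring linear colors into 0.0-1.0
	vec3 toneMapped = averaged / (averaged + vec3(1.0));

	gl_FragColor = vec4(pow(toneMapped, vec3(1.0 / 2.2)), 1.0);
}
```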

The clear to black (or previousPixel = vec4(0)) in all these static scenes was meant to clear the accumulation ping-pong buffer, which might be in the unbounded 1000s, depending on how long the camera had been sitting still before moving suddenly. Initially, when I didn't clear anything, I got a blast of white that settled back down as the camera kept moving - this was because I was averaging currentPixel and previousPixel (which is unbounded). I came up with a quick way around this: just set the previousPixel to black, so it won't blow out the next frame when it is averaged. This worked pretty well, but your post here has inspired me to find a better, more Monte-Carlo-correct solution - and I have!

I made a new uniform called uPreviousSampleCount: whenever the camera starts moving suddenly after being still for a while, it records the exact number of samples taken so far (while the camera was sitting still). All the static-scene path tracing shaders then handle this and correctly average the currentPixel color with the previousPixel color (the accumulated, unbounded one), but this time dividing the previousPixel color by the recorded uPreviousSampleCount. Now when the camera snaps into motion after being still, it blends perfectly with the previous frame (from when it was stationary), resulting in butter-smooth motion.
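The core of the change looks roughly like this. It's a simplified sketch: only uPreviousSampleCount is the uniform named in the fix; the other names and the 50/50 blend factor are stand-ins for illustration:

```glsl
// Simplified sketch of the fix: divide the accumulated previousPixel by the
// recorded sample count before blending, so it no longer blows out the frame.
uniform sampler2D tPreviousTexture; // accumulation buffer (placeholder name)
uniform float uPreviousSampleCount; // samples taken while the camera was still
uniform bool uCameraIsMoving;

vec4 blendWithPrevious(vec4 currentPixel, vec2 uv)
{
	vec4 previousPixel = texture2D(tPreviousTexture, uv);

	if (uCameraIsMoving)
	{
		// Bring the unbounded accumulation back to a per-sample average first
		previousPixel /= max(uPreviousSampleCount, 1.0);
		// Then blend with the current frame for smooth motion
		return mix(previousPixel, currentPixel, 0.5);
	}

	// Camera still: keep accumulating; screenOutput divides by the count later
	return previousPixel + currentPixel;
}
```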

If you check out the glTF Viewer demo again, as well as all the dozens of other demos that use static geometry, you should now see much smoother camera motion (especially after being still for a while).

Thanks again for bringing that seemingly small part of the code to my attention and giving me the spark to go fix it. Now the whole renderer, and all its users, will benefit! :)

-Erich

@tom-adsfund
Author

Yes, that's much better; I was just testing it.

But learning more from the earlier samples will still be useful; I'll probably play with that tomorrow. It would be best, though, if there were work on the abstractions so that I could do it more rapidly!
