There was an old compositing method that used a special camera and a monochromatic background: https://en.wikipedia.org/wiki/Sodium_vapor_process. The results were so good I wonder why it wasn't used more widely: light the background with monochromatic light, use a camera with a modified Bayer pattern that includes subpixels for that background wavelength, and you basically have perfect compositing.
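The core idea, pulling a matte from a dedicated narrowband channel, can be sketched in a few lines. This is a minimal NumPy sketch, not any real pipeline's code; the function name, threshold, and softness parameters are all hypothetical:

```python
import numpy as np

def pull_matte(rgb, na, threshold=0.1, softness=0.2):
    """Hypothetical matte pull from a dedicated narrowband channel.

    rgb : HxWx3 float array, the visible-light image
    na  : HxW float array, response of the extra 'sodium' subpixels
          (bright wherever the monochromatic backdrop is visible)
    Returns alpha (1 = foreground) and a premultiplied foreground.
    """
    # The backdrop reads strongly in the narrowband channel, so invert it
    # through a soft ramp to get foreground coverage.
    alpha = 1.0 - np.clip((na - threshold) / softness, 0.0, 1.0)
    # Premultiply so the result can be composited with a standard 'over'.
    fg = rgb * alpha[..., None]
    return alpha, fg
```

A composite would then be `comp = fg + (1.0 - alpha[..., None]) * new_bg`, the usual premultiplied "over" operation.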
Is it feasible for the Bayer mask to be swapped out as an aftermarket change on a modern digital sensor? Or does that require clean room manufacturing processes to repackage it?
I had a similar thought. Imagine a custom camera body to add a beam splitter for parallel sensors much like the original process with parallel film. One regular RGB sensor for the visible scene, and one monochromatic sensor for the narrow background spectrum.
But I have no idea whether the beam path could be devised to do this with commodity sensors and lenses, to minimize what must be bespoke.
You would have to adapt DSLR bodies for this, because they already have a mirror acting as a beam splitter in the optical path (some old Sony/Minolta DSLRs even had fixed, semi-transparent mirrors).
New mirrorless designs and cinema cameras would have a hard time with this.
It would be easier to do what Fuji used to do: have four sensor sites per cell, but instead of RGBW use RGBX, where X is filtered to your wavelength. Even then, demosaicing would create artifacts (like how you can get 10MP out of an old Nikon D1, but sharp edges are jaggy because the demosaic isn't uniform).
'At the time of its use, the sodium process yielded cleaner results than did bluescreen, which was subject to noticeable color spill (a blue tint around the edges of the matte). The increased accuracy allowed for the compositing of materials with finer detail, such as hair or Mary Poppins' veiled hat. It was also useful that the "sodium yellow" light (and its removal via the matte) had a negligible effect on human skin tones.[2] As the bluescreen process improved, the sodium vapor process was abandoned, its screen and lamps monopolizing huge studios and incurring a higher cost.'
Apparently only one working prism was ever made for the process. But it's not clear whether that is because it was so hard to make or because there were not many further attempts (or a combination of both). It is not simple to make a filter for a very narrow wavelength; dichroic filters are generally the main technique, and I don't know whether they could be made into elements of a Bayer-like filter.
There is a massive difference between a research project and doing something in production at scale. Also, Netflix doesn't shoot its own movies, so I wonder who this is being developed for. There are better ways to do background replacement and pull mattes without forcing actors to work under blue and red light.
Megapixel's Ghostframe is another interesting approach to mattes with LED walls which have their own spiky chroma issues with only RGB emitters. Adding more LEDs of other colors to improve the spectrum could alleviate that but I'm guessing there are cost and manufacturing hurdles there which make it more cost effective to fix it in post where a grade is usually applied anyways.
Netflix recently acquired one high-end VFX studio, Scanline. A lot of compositing work is edge integration, so this could in theory translate into cost savings in their pipeline.
Also, this research is under the supervision of Paul Debevec, who, more than 20 years ago, spearheaded the image-based lighting and high-dynamic-range workflow used in virtually every production now. If we see chrome balls on film sets, it is because of him!
That said, I don't think this will find its way onto sets, mostly because DOPs/directors are very precious about the way they capture their images. The alternative they offer in the paper, using a reduced shutter speed and adding synthetic motion blur, is also a big ask - with the added problem of syncing the cameras to the lights, which would slow production.
In my experience, extracting the alpha is not an issue - compositors can get close to the GT using old tools/algorithms. However, getting the original colour around the semitransparent areas is the biggest hurdle - and each artist uses a bunch of approximations and local fixes, which are very time-consuming. Recoloring the edges would be enough to revolutionize this workflow. Given their virtual stage + deep learning technology, they are in a good position to make this happen.
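The edge-colour problem described above comes down to unpremultiplying: semitransparent pixels carry foreground colour scaled by alpha (plus backdrop spill), and dividing by a small alpha is where the noise and approximations creep in. A hedged sketch of the naive recovery, assuming a premultiplied foreground and a pulled alpha are already in hand (names and epsilon are hypothetical):

```python
import numpy as np

def unpremultiply(fg_premult, alpha, eps=1e-4):
    """Recover straight (unpremultiplied) edge colour from a pulled matte.

    fg_premult : HxWx3 float array, foreground already multiplied by alpha
    alpha      : HxW float array in [0, 1]

    This is the naive division; in practice the semitransparent pixels
    also contain backdrop spill, which is what artists patch by hand
    with local fixes - hence the interest in recovering true edge colour.
    """
    # Clamp alpha away from zero so the division stays stable; pixels with
    # near-zero coverage have essentially no recoverable colour anyway.
    a = np.maximum(alpha, eps)[..., None]
    return fg_premult / a
```

The instability for alpha near zero is exactly why "recoloring the edges" is hard: the signal being divided out is tiny compared to the noise and spill.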
I imagine their interest in parallel channels is partly to avoid any motion artifacts where the visible image and the mask channel would diverge between the sample intervals. I think this is why sub-pixel color filters have won out over filter-wheel cameras in general, i.e. where a monochrome sensor reads the R, G, B fields in sequence.
But given that it comes from Netflix, it is probably also constrained by wanting to work with commodity cameras and optics. It seems they would want techniques that can scale to many productions at once with low marginal cost.
You will get 'rainbow' artifacts during motion, like you get with DLP projectors when you move your eyes. You'd need an incredibly high frame rate to get a small enough motion between frames (and thus a very expensive camera and a lot of light).
I don't think that is true. MKBHD just came out with a video demonstrating a video wall that updated in sync with the camera frame rate, allowing two cameras to see different backgrounds, e.g. two different colors or even two different images, as the demo showed with a parallax effect.
Actually, checking the paper, they tested that - reducing the shutter speed, realigning nearby frames and adding motion blur.
As a VFX compositor, while I'd be OK with this workflow, it doesn't come without its own issues and artifacts.
Thanks for looking at the paper. It forced me to go read it:
"We address this by increasing the repeating rate of the two lighting conditions at 72HZ, so that the lighting changes from one color to the next every 144th of a second. The lighting then appears nearly constant, with a remaining effect being that rapidly moving objects leave a trail of magenta/green outlines when seen against the screen, as in Figure 6 (bottom)."
"The remaining drawback of time-multiplexing in this manner is that the shorter shutter angle reduces the amount of motion blur, which is considered desirable for cinema."
Interesting that the complaint is that it reduces motion blur.
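The shutter-angle cost the paper mentions can be worked out with quick arithmetic (assuming a standard 24 fps production frame rate, which the excerpt does not state - the 72 Hz figure is a multiple of it):

```python
fps = 24.0          # assumed cinema frame rate (not stated in the excerpt)
light_rate = 144.0  # lighting changes every 1/144 s (two conditions at 72 Hz)

frame_period = 1.0 / fps          # ~41.7 ms per frame
max_exposure = 1.0 / light_rate   # must expose within one lighting condition

# Shutter angle: fraction of the frame period the shutter is open, in degrees.
shutter_angle = 360.0 * max_exposure / frame_period
print(shutter_angle)  # 60.0 - well below the cinematic norm of ~180 degrees
```

A 60-degree shutter captures a third of the motion blur of the usual 180-degree shutter, which is why the paper has to synthesize blur back in.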
Large LED walls with motion-tracking cameras are the next progression in chroma keying. You get all the benefits of correctly colored lighting without any of the green-screen issues like bleeding.
This may not be a fully "new" method. In the FMV video game Gothos, from 1997, the actors have visible purple light fringing. A commenter on YouTube gave this explanation:
> If the foreground actor or object was too close to the fabric or the green Kino banks were too bright, green light spilled onto them, creating headaches during keying.
> To counter this, a DP could set up Kinos with purple gels to fill the green spill. If done incorrectly, the actor or object would now have purple lighting on them instead of green.