Idea - Depth Map Channel Recording -- THE WINNER!

MrDorf
Place some kind of wide-scan, high-resolution rangefinding device on the Red, and actually record and process its data as a depth map channel internally, resulting in an extra channel of information: grayscale image depth.

Any compositor among us knows how difficult it is to rotoscope footage. A depth channel is something most 3D/motion applications output to help arrange objects front to back within a composition, and having an accurate, prerecorded depth map instantly would allow greenscreen-like manipulation much faster and more accurately than ever before.
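To make the idea concrete, here's a rough sketch (in Python/NumPy; the function and all names are hypothetical, since no camera actually records this) of what pulling a matte off a recorded depth channel could look like, assuming the camera delivered a normalized grayscale depth map alongside the color channels:

```python
import numpy as np

def depth_key(rgb, depth, near, far):
    """Matte any pixel whose depth falls between near and far.

    rgb   -- (H, W, 3) float image
    depth -- (H, W) depth channel, 0.0 = closest, 1.0 = farthest
    """
    matte = ((depth >= near) & (depth <= far)).astype(np.float32)
    # Premultiply the color channels by the matte, like a pulled key.
    return rgb * matte[..., None], matte

# Toy frame: a "subject" at depth 0.2 in front of a background at 0.9.
depth = np.full((4, 4), 0.9)
depth[1:3, 1:3] = 0.2
rgb = np.ones((4, 4, 3))
keyed, matte = depth_key(rgb, depth, 0.0, 0.5)
print(matte.sum())  # 4 subject pixels survive the key
```

The point is that the "key" is a single threshold comparison on the depth channel, with no chroma math and no green spill to fight.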

I've found this link to a white paper detailing a rangefinding CCD test done almost 10 years ago, so the technology is out there. Resolution from these tests would only make it possible to do something like instantly remove an interview subject from a background, but even that much control would be amazing. Who knows how detailed this kind of tech is currently.

I'll be honest, I have no idea what kind of rangefinding equipment or how much processing power this kind of operation would take, but if any camera has a chance to even come close, it's Red One.
 
I absolutely agree. Depth channel mapping would be fantastic on live footage. It would completely revolutionize the way we work in post production. Not only with regard to regular compositing, but think about the possibilities for depth-based color grading, 3D tracking, and digital depth of field. Whoa!

The way I've been thinking of the technology, it would have to be some kind of development of autofocus technology. I'm actually not certain how AF lenses measure the distance to the subject, but perhaps that would be a way.
 
I agree that this is a much-wanted feature for post production. Well, at least that's what I think. It would make it possible to accurately 'rotoscope' any shot, even where the foreground object and background have similar (dark) colors that make the edge hard to see.
Take a look here: http://www.3dvsystems.com/products/zcam.html
This actually does what we're discussing here, albeit at NTSC or PAL resolution.
 
I also agree

This feature would make all other digital cams obsolete within a few years, with all the CGI that's coming out these days.
 
I like it

I think this idea is great. Technically too challenging, perhaps, on RED One, but the idea would be super cool. Can you imagine the savings in time and the post workflow? Feed in the data and just select your depth for an almost automatic roto pull. WOW!!!! Please send me this for Christmas and I will be a really good boy!
 
Great idea. Probably not feasible for the RED One and for this contest, but a great idea nonetheless.

I've wondered about this for years. That white paper was interesting. I wonder why no one has implemented this yet. It would be a great way to remove the need for greenscreens, and it would make CG compositing much easier.
 
There are ways to get this information in post by running motion vector analysis on the footage. An on-camera rangefinder is a little out of the realm of possibility right now.

Typically on big-budget movies with lots of visual FX, 3D information about a set location is collected by LIDAR (laser radar) scanning. This isn't exactly the type of thing you can slap onto the front of the lens, nor is the information fine enough for something like pulling an actress with lots of little hairs flying around out of a scene. So keep dreaming of the day we say goodbye to greenscreens. :)

Emery
 
I had this same idea (http://www.dvxuser.com/V6/showthread.php?t=59232)
:)

I'm glad to see it's getting attention, because this device would be the most revolutionary thing since sliced bread (especially with so many movies going the way they are, with tons of effects).

With the 3dvsystems camera they were even able to get 3D models from the footage (unfortunately only at NTSC and PAL resolutions). They've changed their site a little, making it hard to find some of the examples (other than the gallery stuff), but they used to have a picture of a 3D-model Clinton caricature.

Pretty cool man.

visionmind
 
Another fun, simple benefit of this concept is the ability to digitally slide ND-gradient-like filters in behind your primary subjects/actors and control the background exposure in post, practically instantly and with little post experience. Just set your transparency, gradient, and z-distance, and even the most post-phobic cameramen can get much better, virtually instant control over exposure (assuming large amounts of the shot aren't clipped, of course) with little effort.
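The "set your gradient and z-distance" step could look something like this minimal sketch (Python/NumPy; the function and parameter names are made up for illustration), assuming a normalized depth channel: everything behind the chosen z-cut gets a vertical ND-style gradient, and the subject in front is left untouched.

```python
import numpy as np

def digital_nd(rgb, depth, z_cut, strength=0.5):
    """Darken everything behind z_cut with a vertical ND-style gradient."""
    h, w, _ = rgb.shape
    # Vertical gradient: full effect at the top of frame, none at the bottom.
    grad = np.linspace(1.0 - strength, 1.0, h)[:, None]
    behind = depth > z_cut
    out = rgb.copy()
    out[behind] = (rgb * grad[..., None])[behind]
    return out

# Toy frame: bright background at depth 0.9, subject row at depth 0.2.
rgb = np.ones((4, 4, 3))
depth = np.full((4, 4), 0.9)
depth[3, :] = 0.2
graded = digital_nd(rgb, depth, z_cut=0.5)
```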

If you were to film with a large DoF, the opportunities for digital depth of field effects become too many to mention.
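Digital depth of field from a deep-focus plate plus a depth channel is essentially "blur each pixel more the farther it sits from a chosen focus distance." A crude sketch under that assumption (Python/NumPy, hypothetical names; real defocus rendering is far more sophisticated):

```python
import numpy as np

def fake_dof(rgb, depth, focus, max_radius=3):
    """Pick, per pixel, a pre-blurred copy whose blur grows with defocus."""
    # Build box-blurred copies at increasing radii (radius 0 = sharp original).
    blurred = [rgb]
    h, w, _ = rgb.shape
    for r in range(1, max_radius + 1):
        k = 2 * r + 1
        pad = np.pad(rgb, ((r, r), (r, r), (0, 0)), mode="edge")
        acc = np.zeros_like(rgb)
        for dy in range(k):
            for dx in range(k):
                acc += pad[dy:dy + h, dx:dx + w]
        blurred.append(acc / (k * k))
    # Map defocus |depth - focus| onto a blur radius per pixel.
    radius = np.abs(depth - focus) / 0.5 * max_radius
    radius = np.clip(radius.round().astype(int), 0, max_radius)
    out = np.empty_like(rgb)
    for r in range(max_radius + 1):
        out[radius == r] = blurred[r][radius == r]
    return out
```

Pixels at the focus depth come back untouched; pixels half the depth range away get the maximum blur.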

I would imagine the best way to accomplish this would be to create an enhanced CCD with a 4th channel to receive whatever kind of ranging light was being flooded across the scene, on a per-pixel level, like the white paper suggests is possible. Luckily, it sounds like the Red's CCD is "upgradable", so maybe it could be in the future for Red One after all?
 
Think again guys.

This isn't possible right now. You can grab bits and bobs of 3D information in real time, but nothing suitable for the types of effects you guys are talking about.

If you were to gather 3d data via lidar or such, this information is best handled in post production, not in camera.

If this were possible today, it could be done in standalone devices; it's really not camera dependent. The $200 million movies painstakingly recreate the 3D environments and reverse-triangulate camera position data in post production not because they have fun doing it that way, but because that's the only way to currently do it. It's also not an automatic process. It would be one thing if this were all achievable in real time by putting a couple of quad-core Intel chips in RED, but it's not just about horsepower. There is all sorts of user interaction involved, which is why VFX studios have entire divisions devoted to camera tracking.
Of all the processes that would be required to achieve the system being discussed, optical flow is the most automated, and even then it requires a lot of very finely articulated masks to explain to the algorithm which pixels belong to which planes in 3D space.
Also, optical flow is extremely processor intensive, and on a 4K plate it would need some mega, mega processing power to handle in real time.

I work in VFX, so I'm just offering my professional insight. It's a nice idea in theory, but not practical today.
 
Emery Wells said:
If this were possible today, it could be done in standalone devices; it's really not camera dependent. The $200 million movies painstakingly recreate the 3D environments and reverse-triangulate camera position data in post production not because they have fun doing it that way, but because that's the only way to currently do it.

If that's the logic you want to follow, then I suppose Jim Jannard should just give up on the Red and its Mysterium sensor, since Sony already makes a $200,000 camera and that's currently the only way to do it. :)

The key to this whole idea is that it's NOT a stand-alone device; go check out the white paper in the original post. It's CCD-based ranging, which would result in a perfect match of the matte to the color channels, something not possible with ANY other technology. That aspect greatly cheapens the process. What you would need is a specialized light emitter that pulses energy in a form that a modified CCD with a 4th channel can pick up. That light is reflected from objects in the scene, returning to the CCD, which records the minute timing differences of the light's return and translates them into a grayscale value. The result: a depth map at the same resolution as the recorded color images, with no offset.
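The timing-to-grayscale conversion described above is just the round-trip time-of-flight relation, distance = c × Δt / 2 (the pulse travels out and back). A minimal sketch (Python/NumPy; function and parameter names are hypothetical) of turning per-pixel delays into a normalized depth map:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def delay_to_depth(delay_s, near_m, far_m):
    """Convert per-pixel round-trip light delay into a grayscale depth map.

    delay_s -- (H, W) array of round-trip times in seconds
    Returns values in [0, 1]: 0.0 at near_m, 1.0 at far_m.
    """
    dist = C * delay_s / 2.0  # halve it: the light travels out AND back
    return np.clip((dist - near_m) / (far_m - near_m), 0.0, 1.0)

# A subject 3 m away reflects the pulse back in ~20 nanoseconds.
delay = np.full((2, 2), 2 * 3.0 / C)
depth_map = delay_to_depth(delay, near_m=0.0, far_m=10.0)  # ~0.3 everywhere
```

The hard part, of course, is that 3 m of range difference is only about 20 ns of delay, which is exactly why the white paper's per-pixel timing circuitry is the interesting bit.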

A full-fledged Hollywood movie production may still want to scan in an entire set and make a 3D model to track and manipulate, but that's not what this idea is about. It's about a simple 2D depth map, resulting in a streamlined post process for those of us without a post house.

That's really the beauty of this whole discussion, though: Red is something entirely revolutionary that no one thought possible, let alone thought could be produced for less than several hundred thousand dollars a unit. Mr. Jannard has shattered the cost-to-quality ratio in a way none of us saw coming. We're just now seeing decent midrange HD cameras on the market, and now we have a 4K option for a moderate cost increase? It's unreal. We'd be cheating ourselves not to explore every angle of every possibility on this board... if even one of them becomes reality, then WE win. Slapping assumed limits on technology is a good way to go out of business, a long-standing truth that Mr. Jannard is now bringing hard and fast to the doors of the major camera manufacturers. So why not join the fun and think big and expensive? Then we can all hope it comes out even cooler and cheaper than we hoped, like Red One.
 
OK, I'm guilty of not reading the white paper before posting and only skimming the posts... I'm at work, guys, you can't blame me. :)

I'm certainly a proponent of new technologies and of economizing old, traditionally expensive ones. I was commenting more on the ability to get real-time 3D position data, as opposed to a simple depth map (which I now see is what the post is about).

Range-finding CCDs are an interesting concept. If the boys want to slap one in there, I'd be a happy compositor.
 
Maybe a more sophisticated technology could deliver similar or better results. Manufacturers? Anyone out there?

This would be an excellent combo with Red. I can also foresee this being used in 3D camera animation work, for match-ups in animation compositing.
 
Or was this a blah-blah contest and I didn't notice?..

It was an idea contest -- so I guess you didn't notice (although seeing as the title was Think Tank, I'm rather surprised you didn't).

I'd be interested to know what Jannard or any of the Red employees have to say about this little ditty of an idea. :)
 
They loved the ideas. Both Jim and Ted were very impressed with some of the suggestions. Ted was the one who read all the ideas and chose the winner, remember... He's been a little quiet the last couple of weeks because they are elbows-deep in engineering the camera. I will see if I can get Ted to break away tonight and chime in if he has the opportunity.
 
There was a device at IBC a few years ago that offered a Z-buffer with real-time depth information. The company was called Zcam, and they were based in Israel. They did a fantastic demo involving virtual objects on stage that the presenter could walk around, and they showed color effects based on distance from the camera, i.e. someone in black and white stepping forward into a world of color. They also showed keying based on depth.
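That black-and-white-into-color trick is trivial once a depth channel exists. A toy sketch (Python/NumPy, hypothetical names) that desaturates everything beyond a chosen distance:

```python
import numpy as np

def color_by_distance(rgb, depth, z_color):
    """Desaturate every pixel farther from camera than z_color."""
    # Rec. 601 luma weights for the grayscale conversion.
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    out = rgb.copy()
    far = depth > z_color
    out[far] = luma[far, None]  # replace RGB with its luma beyond the cut
    return out

# Toy frame: pure red everywhere; right column is "far" from camera.
rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 1.0
depth = np.array([[0.2, 0.9],
                  [0.2, 0.9]])
graded = color_by_distance(rgb, depth, z_color=0.5)
```

Step forward of the threshold and your pixels cross back into color, which is exactly the demo described above.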

This was in '99. The price was huge - over $100,000 for the add-on to the Sony camera, and it was a huge studio rig.

If you guys make this practical, it'll be huge.
 
I don't expect this to show up on any of the first 3 generations of the Red camera... the concept is what was great.

Too bad, though, that Zcam bailed... that would have made a lot of comp guys' dreams come true.
 