  1. Transcoding Methods from 8-bit 4:2:0 files to HDR Edit Timelines
    #1
    I never understood the science behind the information in a single frame of a compressed video file. I remember years ago, when the Canon 5D MK II started booming, users were "transcoding" their files to a better format. I always thought that rendering a compressed file, or rearranging its information, inevitably affected the original anyway; transcoding an MP4 to a ProRes MOV, for example. What is the best method to uncompress file information into a format more suited to color grading? For example, isn't it better to export an H.264 file to a DPX or TIFF sequence? That way each frame gets a constant, self-contained representation instead of information being predicted from frame to frame, as with variable-bit-rate encoding. I've always noticed more stability when color grading from a TIFF sequence than when playing with the original MP4 in the timeline.
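
    For context, this is the sort of export I mean (a rough sketch calling ffmpeg from Python; the file names are placeholders and rgb48le is just one way to ask for 16-bit TIFFs):

    Code:
    import os
    import subprocess

    # Sketch: unwrap an 8-bit 4:2:0 H.264 MP4 into a 16-bit RGB TIFF
    # sequence. Assumes ffmpeg is on the PATH; "input.mp4" and the output
    # pattern are placeholders.
    os.makedirs("frames", exist_ok=True)
    subprocess.run([
        "ffmpeg",
        "-i", "input.mp4",
        "-pix_fmt", "rgb48le",          # 16 bits per channel RGB
        "frames/frame_%06d.tif",
    ], check=True)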

    Any tips?



  2. #2
    NorBro (Senior Member)
    Transcoding for color-grading benefits was old ideology even back then; the only real benefit was lightening the load on the computer.

    Garbage in, garbage out...you can dress it up a bit but it still is what it is.

    As far as generational IQ loss goes, there is basically none these days, with so much source footage being extremely high-resolution, 10-bit and/or RAW (when transcoding to the better formats like ProRes).

    Apple's Compressor is a great tool for complete, specific control over your conversions, but I'm sure there are dozens of others out there.



  3. #3
    Imamacuser
    I think the 5D MK II shots in Act of Valor were run through DarkEnergy and exported to DPX; however, that step isn't needed in software like Resolve that operates in 32-bit floating point.



  4. #4
    Quote Originally Posted by Imamacuser:
    I think the 5D MK II shots in Act of Valor were run through DarkEnergy and exported to DPX; however, that step isn't needed in software like Resolve that operates in 32-bit floating point.
    So, in a way, in After Effects you can work in a 32-bit project? Basically high-bit-depth calculation on the 8-bit 4:2:0 file?

    For example, there's a famous YouTube video shot on a GH3 by an Italian guy, called "Watchtower of Turkey." He did an interview and an article about his workflow, and said he imported the footage into Photoshop! I believe he even converted everything to TIFF before coloring it. So I'm curious about image sequences. I assume it turns each video frame into a DPX-style print, since TIFF is 16-bit. I'm not sure what kind of sampling is happening, but he said he got better results from the TIFF sequence than from working with the H.264.



  5. #5
    Quote Originally Posted by NorBro:
    Transcoding for color-grading benefits was old ideology even back then; the only real benefit was lightening the load on the computer.

    Garbage in, garbage out...you can dress it up a bit but it still is what it is.

    As far as generational IQ loss goes, there is basically none these days, with so much source footage being extremely high-resolution, 10-bit and/or RAW (when transcoding to the better formats like ProRes).

    Apple's Compressor is a great tool for complete, specific control over your conversions, but I'm sure there are dozens of others out there.
    I've worked with old MXF files and converted them to uncompressed AVI in a 32-bit After Effects project. I know the files get huge, but my Thunderbolt RAID is 16TB, so it's no inconvenience. When applying minor grading and my S-curves, I could see better real-time depth of color than when applying the same technique to the original MXF file.



  6. #6
    NorBro (Senior Member)
    How'd you see the real-time depth difference? (Just genuinely wondering.)

    Like you saw it on your monitor (how big?) side-by-side with your own eyes by zooming in?

    There may truly have been a difference, I don't doubt it - but sometimes there's a difference because we look for one and want to find it.



  7. #7
    Image sequences are common in 3D and compositing, also for film clips and animation.

    But I think the rule of thumb people use is that you want to keep your image pipeline as fat as it needs to be, but as lean as possible.

    Taking some compressed long GOP footage and transcoding it won't improve its fidelity in itself, even if the destination format is of a higher quality—you'll just embed the same data in a larger file (less compression and/or higher bit depth). You can of course combine the transcoding with noise reduction, de-banding and re-grain techniques to get a "better" file, but that is another topic.
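
    A quick way to convince yourself of that (a toy numpy sketch, not any particular NLE's pipeline):

    Code:
    import numpy as np

    # Toy illustration: an 8-bit frame "transcoded" to 16 bits still
    # carries at most 256 distinct levels per channel. The container got
    # fatter; the information didn't.
    frame_8bit = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    frame_16bit = frame_8bit.astype(np.uint16) * 257   # 0..255 -> 0..65535

    print(np.unique(frame_8bit).size)    # 256
    print(np.unique(frame_16bit).size)   # still 256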

    Since a modern NLE can decompress H.264 on the fly and read it into memory, where all computations are made at 32-bit precision, you don't really need to transcode unless your files are heavier than your computer can handle. If your computer struggles with playback, transcoding can free up CPU resources for other things.
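
    And roughly what "all computations at 32-bit precision" buys you, as I understand it (again an illustrative sketch, not Resolve's actual internals):

    Code:
    import numpy as np

    # Sketch: chaining two adjustments on 8-bit integers rounds after every
    # step; doing the same math in float32 and quantizing once at the end
    # avoids the accumulated rounding. The halve/double pair is just an
    # example of a round trip that should be lossless.
    src = np.arange(256, dtype=np.uint8)

    # Integer pipeline: round and clip after each operation.
    a = np.clip(np.round(src * 0.5), 0, 255).astype(np.uint8)
    b = np.clip(np.round(a * 2.0), 0, 255).astype(np.uint8)

    # Float pipeline: promote once, quantize once.
    f = src.astype(np.float32) / 255.0
    g = np.clip((f * 0.5) * 2.0, 0.0, 1.0)
    c = np.round(g * 255.0).astype(np.uint8)

    print(np.abs(b.astype(int) - src.astype(int)).max())  # 1: rounding damage
    print(np.abs(c.astype(int) - src.astype(int)).max())  # 0: lossless
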
    @andreemarkefors



  8. #8
    Quote Originally Posted by NorBro:
    How'd you see the real-time depth difference? (Just genuinely wondering.)

    Like you saw it on your monitor (how big?) side-by-side with your own eyes by zooming in?

    There may truly have been a difference, I don't doubt it - but sometimes there's a difference because we look for one and want to find it.
    Thanks for asking. For example, I shot a scene in a low-lit room, an old abandoned factory warehouse, with lots of sunlight creeping in, so you'd get this warm look: cream-colored walls with one direction of sunlight. When I played with the S-curve in 32-bit on the MXF versus the TIFF sequence, as I pushed the exposure I kept more detail peaking into the highlights on the wall; the TIFF looked more dynamic and organic while the MXF broke up. I just assumed that TIFFs turn video frames into something more like "photos"? This is why I'm asking about the protocol, or color science, when rendering to different formats. Ten years ago storage space wasn't as practical as it is today, so could the old renders that were a burden in the past work now?
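
    To make what I saw concrete, here's my (possibly wrong) mental model of the highlight break-up as a sketch; the two-stop push is just an example number:

    Code:
    import numpy as np

    # Sketch: push a normalized 8-bit ramp up by two stops (multiply by 4).
    # Only a few original code values land in the top quarter of the output
    # range afterwards, so a smooth sunlit wall turns into visible steps.
    # A TIFF made from the same 8-bit source has exactly the same steps;
    # the extra bit depth only helps during processing.
    src = np.arange(256, dtype=np.float32) / 255.0   # all 8-bit code values
    pushed = np.clip(src * 4.0, 0.0, 1.0)            # +2 stops, hard clip

    levels = np.unique(pushed[(pushed >= 0.75) & (pushed < 1.0)])
    print(levels.size)   # 16 levels covering the top quarter of the range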



  9. #9
    NorBro (Senior Member)
    Yeah, I mean...if it's working for you and that's best, why not? Nothing else matters.

    But also keep in mind that your NLE/software might sometimes be creating issues and/or not supporting certain images/formats (a common problem in the industry over the years). Meaning sometimes people see artifacts in their images that they don't see in other software/applications.

    But as far as how many actually partake in those old renders/conversions, I would say not many.

    Most are importing their native high-resolution, high-quality RAW or best codecs straight into their NLE and either editing that, or temporarily editing ProRes (etc) proxies that the NLE creates and re-linking to the original import when mastering.

    Back in the day, a variety of formats had to be transcoded - regardless of whether storage was affordable - as they were so heavy on even decently-priced machines. Even H.265 was pretty rough on everything until recently, with Apple's new M1s.

    ___

    But to answer your main question...there's truly no simple answer to how inferior footage may react to a particular conversion.



  10. #10
    That's why some people used 5DtoRGB for transcoding DSLR footage: it was supposed to be superior to StreamClip and Canon's E1 plugin, plus it corrected for Canon's erroneous use of the BT.601 color matrix.
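
    For anyone curious why the 601/709 mix-up matters, here's a rough sketch decoding the same YCbCr sample with both sets of standard coefficients (plain Python, normalized ranges):

    Code:
    # Sketch: the same YCbCr value decoded with BT.601 vs BT.709
    # coefficients lands on noticeably different RGB. If footage is encoded
    # with one matrix but decoded assuming the other, every color shifts;
    # that mismatch is what tools like 5DtoRGB let you correct for.
    def ycbcr_to_rgb(y, cb, cr, matrix):
        """y in [0, 1]; cb and cr in [-0.5, 0.5]."""
        if matrix == "bt601":
            return (y + 1.402 * cr,
                    y - 0.344136 * cb - 0.714136 * cr,
                    y + 1.772 * cb)
        else:  # bt709
            return (y + 1.5748 * cr,
                    y - 0.187324 * cb - 0.468124 * cr,
                    y + 1.8556 * cb)

    # The same saturated sample, two interpretations:
    print(ycbcr_to_rgb(0.5, -0.2, 0.4, "bt601"))  # (1.0608, 0.2832, 0.1456)
    print(ycbcr_to_rgb(0.5, -0.2, 0.4, "bt709"))  # (1.1299, 0.3502, 0.1289)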


