HDMI Capture Problem SOLVED - AviSynth RULES!

Ralph, why is no one using AviSynth as you are? Is it just technical fear? Is it still worth it in your mind, given the quality of the AVCHD patches available now?
 
Actually there are people using it, but yes, there is a lot of technical fear. As to whether it's still worth it, that's a question that can only be answered by how much of an insane perfectionist you are. The gap between the HDMI picture and the hacked H264 is very small now, whereas once it was huge, as you know. Just the other day I was watching one of my comparison tests projected onto a 10-foot screen. Fine detail was basically equal, which is a major achievement for the hack. And yet, there were still differences. At high ISOs the noise in the HDMI picture was less intrusive than in the H264. The HDMI picture 'pops' more because of its slightly different gamma curve. And the HDMI picture has a certain feeling of solidness that the H264 doesn't quite have.

But is that worth going the HDMI route for? Damned if I know. I'm just being your reporter. For me personally, I'm more than happy with the hack. Nowadays I use the HDMI as a yardstick to see how far we've come. But I still maintain this thread for those few 'insane perfectionists' who may happen to come this way.
 
Hi! My name is Santi. I'm from Spain and I'm new here. I have a GH2 and a Ninja, and I can't use AviSynth with my videos; they always give me errors. Can somebody explain, step by step, the process to transform a ProRes Ninja video with VirtualDub? I'm sorry, but I can't understand why I can't use VirtualDub and AviSynth. I followed the steps in the first post, but I can't do anything with my videos. The only thing I can do is convert with MPEG Streamclip. Thank you very much for everything, and sorry for my English.
 

I'll try to help you. Exactly what type of error are you getting?
 
Hi! Thanks, Ralph. My problem is that when I open the AviSynth script in VirtualDub, it always gives an error, for example: syntax error, or an error in line 9. I've tried to modify the code, but the error remains. I'm on Windows 7 64-bit, and I tried with AviSynth 2.5 and 2.6. I don't know what I can do. Thanks again.
 

Please post the script and the EXACT error you're getting for that script.
 
HDR mode with FIRMWARE 1.1 and HDMI

I just tested the new HDR modes with HDMI output. Here's the scoop: 30P is a disaster, hopelessly mangled. Extra fields are added in, just like for 24P. However, there's good news for PAL users - 25P comes through cleanly, with no extra fields. There is still the issue of the jaggy chroma in the red channel, even in 25P. So if you want a perfect picture, you would still need to run it through AviSynth; the script would consist of only the chroma fix. I suspect most people won't bother. It's only really noticeable in areas of heavily saturated red. The one exception is if you're doing greenscreen keying with the HDMI picture. Then you must use the chroma fix, because the red jaggies affect the edges of flesh. Without the chroma fix, it's going to be very hard to pull a good key.

NOTE: I added this information into the first post.
 
Ok! Thanks, Ralph! One more question. I recorded 1080 50i on the Ninja. I can't record in 1080 60i; maybe that is the problem too, I think. I don't know.
 
On the Ninja, make sure the input rate is 1080i60, and not 1080i59.94. Remember, the GH2 is putting out a true 60.00i signal, not the standard 59.94i.
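As a quick illustration of why that distinction matters (my own arithmetic, not from the post above): the gap between a true 60.00i signal and the NTSC-standard 59.94i (60000/1001) looks negligible per second, but a recorder locked to the wrong rate accumulates a steady field drift.

```python
# Field-rate mismatch between a true 60.00i signal and the NTSC-standard
# "59.94i" rate of 60000/1001 fields per second. Hypothetical arithmetic
# sketch to show the scale of the drift; values are not from the post.

true_rate = 60.0            # fields/s, true 60.00i output
ntsc_rate = 60000 / 1001    # fields/s, the standard "59.94i"

drift_per_second = true_rate - ntsc_rate
drift_per_hour = drift_per_second * 3600

print(f"{ntsc_rate:.5f}")       # the exact NTSC field rate, ~59.94006
print(f"{drift_per_hour:.1f}")  # fields of drift accumulated per hour
```

Roughly 216 fields of drift per hour, which is why a recorder expecting 59.94i can choke on a true 60.00i input.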
 
Mmm... I can't put the Ninja or the camera in 1080 60i. I can only record in 1080 50i. What's the camera configuration to record in 1080 60i on PAL versions?
 
To change the camera to NTSC, you must use the hack. If you haven't hacked the camera, then 1080 60i will not be available.

BTW, did you solve the problem with the script errors?
 
HDMI vs AVCHD for Feature Film Production Quality


Hi Ralph,

Great work with putting together the solution for HDMI out recording. I'm currently working with an independent film group in Pre-Production for a Feature Film and we've looked at using the GH2 as we like the image quality, particularly using the hack/Driftwood settings.

Do you think using the HDMI out recording (then using AviSynth) yields a better image than using the hack/Driftwood settings, particularly when thinking about how it will project onto a large theater screen?

Also, do you think the workflow is rock solid - in other words, reliable for shooting important production footage?

Lastly, I had looked at using the Hyperdeck Shuttle 2, as I believe that can record uncompressed. Would this work with the Hyperdeck Shuttle 2, and do you believe that would be even better image quality vs. Ninja?

Thank you so much for your work and your response to my questions,

Matt
 
Astroimaging - video test


Hi

I have a question about the relative merits of this feature. I use my GH2 for astroimaging. There is a trick in astro in which images of planets (which are very small, e.g. 100 pixels +/-) are created by stacking the frames from video cameras (anything from 640-pixel webcams to dedicated 640-pixel CCD cameras shooting raw). The individual frames can look like mush, but when several hundred are stacked in software, the results can be amazing. The key aspect is to get clean, uncompressed data (see http://www.theimagingsource.com/en_US/products/cameras/usb-ccd-color/dfk21au618/ for a dedicated camera spec). The smaller pixels of the GH2 do offer some benefits, and for info, images are generally at ISO 160 to 320 - they are quite bright. The issue is atmospheric disturbance: you just need to find approx 400 images in a 3- or 4-minute window when the atmosphere is steady for a fraction of a second. So 24 minutes is plenty of video.

So the question I am asking myself is: do you think the quality of the individual images in ProRes would be noticeably more free from artefacts and detail loss (which is small and subtle) than one of the GH2 hacks? To be clear, the final result here is a static image composed from information in several hundred video frames (which is then post-processed in specialist software), not a video.

In terms of final image quality, is there any difference between the Ninja and the Blackmagic, or is it just an implementation difference?

regards
Steve

PS: On Canons, live view is the preferred way of doing image stacking.
 
To matt_gh2 and billhinge:

Both of you are asking whether the HDMI picture is better than the hacked picture. There is a difference, but it's subtle. The problem is each person has their own perception of what's acceptable. Mine may not match yours, so there's no way I can say do it or don't do it. The best advice I can give you is to get a hold of an HDMI recording device or capture card, and run your own tests. The good news is you can record simultaneously in-camera and externally, so you can compare the two later.
 
Thanks Ralph for quick response. I will test using Hyperdeck Shuttle 2 and compare/contrast with the Driftwood 176 Mbps hack. Will then take footage to theater screening for assessment. Thanks for all your work.

Matt
 
@billhinge

Just wondering why you would use video mode for astro pictures. Seems like you should shoot in still mode for the highest quality. And perhaps use the 40 frame burst mode for bright objects.
 

Not a silly question.

Deep-sky images like star fields tend to contain stars and nebulae, which are generally faint, and in the case of stars they are point objects. Here the name of the game is to take multiple single frames and stack them using clever software. Pros would use Peltier-cooled CCD astrocameras, but they are expensive; fortunately DSLRs hold up reasonably well, except that the normal Canons/Nikons usually have fairly severe internal UV/IR cut filters to maintain sharpness. The GH2's internal filter is lazy, being sensitive to some UV and IR, which softens the image. Adding an external UV/IR cut filter with a narrower range sharpens the image (on the GH2 - this works on cameras with weak internal filters) and improves the colour; the drawback is that a decent filter is expensive.


Anyway, planets and the moon differ because they are much brighter and have a visible area rather than a point image. Being bright, they only need a short exposure; e.g. Mars through a telescope could be 1/100s @ ISO 160, fl=2800mm, f10. The images are tiny, though: typically Jupiter will cover approx 100 pixels on the GH2, so surprisingly, shooting at 640x480 produces a larger-looking image. Now the problem is that the atmosphere causes the image to literally boil at such a high magnification, e.g. a few hundred to even x1000. But for, say, approx 1/100 of a second every second we may get some useful info, the rest being garbage, as the atmosphere steadies briefly.

If you shoot as many 640x480 frames as you can in, say, 3 minutes at the highest possible data rate without compression, you just might get some useful data (each individual still image will still look like crap), but by using special stacking software you take, say, 400 to a thousand of the best, stack them, and a useful still image appears. Then you use other software, such as deconvolution, wavelets etc., to post-process and bring out hidden detail. So really it's collecting 'bits' of data and averaging, not video as you would think of it (each data compression negatively affects the final result - that's why dedicated planetary cameras use a small 640x480 CCD). Hope that makes sense :)
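The select-and-average workflow described above can be sketched in a few lines of numpy. This is a hedged illustration with synthetic frames, not the specialist stacking software mentioned; the frame size, noise level, sharpness score, and keep-count are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a short planetary capture: a small bright disc
# (the "planet") buried in heavy per-frame noise (the atmospheric mush).
def make_frame(h=64, w=64, noise=0.5):
    y, x = np.ogrid[:h, :w]
    disc = ((y - h / 2) ** 2 + (x - w / 2) ** 2 < 10 ** 2).astype(float)
    return disc + rng.normal(0.0, noise, (h, w))

frames = [make_frame() for _ in range(400)]

# Score each frame (here: mean gradient energy, a simple sharpness proxy)
# and keep a best subset. With purely synthetic noise the score itself is
# not meaningful, but the select-then-average pipeline mirrors the real
# lucky-imaging workflow.
def sharpness(f):
    gy, gx = np.gradient(f)
    return (gy ** 2 + gx ** 2).mean()

scores = np.array([sharpness(f) for f in frames])
best = np.argsort(scores)[-100:]  # keep the "best" 100 of 400

# Averaging N frames cuts uncorrelated noise by roughly sqrt(N),
# which is why the stacked image looks far cleaner than any single frame.
stacked = np.mean([frames[i] for i in best], axis=0)

print(stacked.std() < frames[0].std())  # prints True
```

Real software also aligns and quality-weights the frames before averaging, but the core idea is just this: individual frames are noise-dominated, the average is signal-dominated.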

Here are some stacked images (split into raw images, RGB channels, and combined).
 

Attachments: 028_mpeg4_R.jpg, 028_mpeg4_B.jpg, 028_mpeg4_G.jpg, saturnmay.jpg, saturn5 0811.jpg, saturn5 0815.jpg