Page 2 of 2
Results 11 to 19 of 19
#11 | Noiz2, Sound Ninja (Detroit & SF; joined Jan 2006; 6,212 posts)
A lot of folks don't use Pro Tools :~), except in sound post for film, where 90+% use Pro Tools.
    Cheers
    SK


    Scott Koue
    Web Page
    Noiz on Noise


"It ain't ignorance that causes all the troubles in this world, it's the things that people know that ain't so"

    Edwin Howard Armstrong
    creator of modern radio



#12 | Noiz2, Sound Ninja (Detroit & SF; joined Jan 2006; 6,212 posts)
I know you are getting somewhat conflicting advice, but it is kind of the same advice: it depends on the project. A voice in "nowhere" can be lonely, or it can be intimate. The difference has more to do with the VO and the project. There really is no one-size-fits-all. I was advocating for going with a BG first because it's something the sound folks actually have control over. The right soundscape can make just about any visual feel lonely, but of course the VO can undo it in a heartbeat.

The thing is that what works for sound will have a LOT to do with the choices they make in picture. If picture nailed it, then a good VO may be all you need. If the picture department didn't totally sell it, then sound becomes really important.
    Cheers
    SK





#13 | Senior Member (Beverly Hills, CA; joined Mar 2012; 1,307 posts)
Quote Originally Posted by Noiz2:
    :~), except in sound post for film where 90+% use protools.
That's exactly right, and the main reason those folks use Pro Tools is so they can exchange assets with no compatibility issues. Pro Tools was a lot more popular years ago, when Digidesign's hardware enabled many more effects/tracks than a CPU alone could manage. I had a Protools|24 system and it did the job back then; later I owned a Digidesign Eleven Rack (I still have an MBox 2 Micro, something that needs to be sold...). What I found was that Pro Tools is really just a harness for plugins, and that I could use any tool I wanted as long as it provided the same capabilities, either built in or via plugins.

A few years ago, when Avid/Digidesign changed something with policy/pricing, many Pro Tools users bailed and went to Reaper. Pretty much every plugin available for Pro Tools is available as a VST or AU plugin, so most things transfer over. If one wants to work expressly in an audio field where assets are shared in Pro Tools format, one must bite the bullet and invest in Pro Tools (now down to $25/mo, or $600 for a box plus one year of software updates; I paid around $12,000 for Protools|24 and plugins). I see they're still using dongles. One advantage of using Logic X, Reaper, or Audition (which uses online activation/transfer, not really a limit) is no dongles when switching to the laptop or between Mac and PC.

Most of the folks on this site appear to be on a budget: lots of indie filmmakers and hobbyists. Given what's built into Audition (in-depth sample editing and decent multitrack editing), Logic X (many really excellent built-in filters/effects), and Reaper (mostly for recording/MIDI, not sample editing, and also for 3D/VR ambisonic editing), there's no need for Pro Tools on indie projects, since assets won't be shared.

In most cases no audio processing hardware is needed, even with Pro Tools; however, a good friend still needs every ounce of hardware-accelerated performance for his high-end film-level editing (a crazy number of tracks), and Avid will still sell you a $12,000+ system with hardware: https://www.sweetwater.com/store/det...o-16x16-analog .

The future for high-end audio processing is via the GPU (when the CPU isn't enough): https://www.pro-tools-expert.com/pro...ld-be-pictures . Some comments about GPU and latency: recent cards have crazy amounts of RAM, so the entire audio process could run on the GPU, sending only processed sample data across the PCI bus and resulting in essentially zero added latency (you still have to get samples to the audio device, sometimes over USB etc.). GPU API design can further address latency issues for audio applications.
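The buffer-size arithmetic behind latency claims like this is simple enough to sketch. A minimal illustration in plain Python (the buffer sizes below are typical DAW values, not figures from the post):

```python
def block_latency_ms(block_size: int, sample_rate: int) -> float:
    """Minimum latency (in ms) added by buffering audio into fixed-size
    blocks before processing: one full block must arrive first."""
    return 1000.0 * block_size / sample_rate

# Typical DAW buffer sizes at 48 kHz:
for n in (32, 64, 256, 1024):
    print(f"{n:5d} samples -> {block_latency_ms(n, 48_000):6.2f} ms")
```

Whatever device the processed samples ultimately travel through (PCI, USB) adds its own transport delay on top of this floor.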

EDIT: there's a free version of Pro Tools with up to 16 tracks, perhaps sufficient for indie projects: https://www.avid.com/pro-tools-first (you can always downmix multiple tracks as a workaround).
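The downmix workaround mentioned in that edit is just sample-wise summation with clipping protection. A minimal sketch in plain Python (real DAWs work on interleaved device buffers, but the arithmetic is the same):

```python
def downmix(tracks, gains=None):
    """Sum several equal-length mono tracks (float samples in [-1, 1])
    into one track, then hard-clip the result back into range."""
    if gains is None:
        gains = [1.0] * len(tracks)
    mixed = [
        sum(g * t[i] for g, t in zip(gains, tracks))
        for i in range(len(tracks[0]))
    ]
    # Hard clip so the summed signal stays in legal range:
    return [max(-1.0, min(1.0, s)) for s in mixed]

# Fold two stems into a single track slot, freeing one track:
combined = downmix([[0.2, 0.4, -0.3], [0.1, -0.2, 0.5]])
```

In practice you would lower the per-track gains (or normalize afterward) rather than rely on the hard clip, which is audible.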

Back to the OP's original question: experimenting with EQ/filters in post will give you a lot of flexibility to match the emotion of the performance in the shot, provided the original recording was made in a sound booth or other non-echoic space and there's no proximity effect or other artifact that significantly colors it. I think the future of audio is the new mics (currently around $1,500) that can very accurately emulate the top famous mics, including each mic's proximity effect. Being able to quickly grab a mic you know well, one that sounds good with the talent, is still valuable, especially if little or no EQ is needed in post. When it's not clear what the best final sound will be, being able to 'change mics' and/or apply (or skip) the proximity effect in post opens up many creative possibilities and can save a lot of time and money (e.g. not having to experiment with physical mics and mic distance, or bring talent back in to re-record).
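As a toy example of the kind of tonal reshaping described here, a first-order low-pass rolls off highs and makes a voice sound darker and more distant. This is a sketch of the simplest possible filter, not any particular plugin's algorithm:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """First-order low-pass: rolls off high frequencies, giving a
    darker, more 'distant' tone. cutoff_hz is roughly the -3 dB point."""
    # Smoothing coefficient derived from the analog RC time constant:
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)   # move a fraction of the way toward the input
        out.append(y)
    return out
```

Sweeping `cutoff_hz` down from around 8 kHz toward 1 kHz progressively muffles a voice; a matching high-pass or shelf would push in the opposite, brighter direction.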
    Last edited by jcs; 12-08-2018 at 11:04 PM.


    0 out of 1 members found this post helpful.

#14 | Noiz2, Sound Ninja (Detroit & SF; joined Jan 2006; 6,212 posts)
There are some valid points in there. Avid has gotten a lot of folks to look around. The problem is that there are not a lot of good alternatives. Logic's big problem, for professionals, is that Apple is one of the most fickle companies on earth; you never know when they are going to dump something or decide it should be a toy. Apple apps also have a bad rep for not "playing well with others".

Reaper is solid but has no OMF capabilities, and the only way around that is with a Windows-only app.

Audition might be OK; it at least is more post-related, but it's part of Adobe's rental suite. It may be available separately? I don't really know the app, but I'm going to assume that, like the rest of Adobe's apps, it is pretty solid and probably "plays well with others".

Two you didn't mention are Mixbus, which right now still needs some work on video functions but could become a player, and Resolve. Resolve right now is the best swap-out for PT from a professional standpoint (on the Mac side of things). I add that caveat because outside of the US, PCs are a much bigger chunk of the pie in film work, but I haven't been in that ecosystem in so long I don't really know what is good there. Resolve plays very well with others and now has some really solid sound-for-post features, and you can get all of that for $0.

Your definition of "indie" seems to be OMB (one-man-band) projects, where the director/editor/sound editor/etc. are all one person and it's all done on one machine with one app, or in one suite. If your sound app is not the same as your NLE, you do have to exchange data somehow, even if it's on the same computer.

My definition of "indie" stretches quite a bit further and includes what I call the "mini indies". Technically an "indie" is a film produced independently of the studios. So while that certainly includes no-budget shorts, it also includes multi-million-dollar projects. The vast majority of that range hands sound off to someone who does sound post and does not leave it up to the picture editor. Personally, for anything beyond OMB projects, I think you are shooting yourself in the foot by not getting a post sound person to at least polish what your picture editor did. I have worked the full range, including a ton of student shorts. If it's going anywhere but YouTube, you are probably hurting yourself by not having someone do a polish, or better yet a full sound post.

    RE: OP's post

There are basically two "schools" of thought on a lot of post situations. The first holds that you can create or alter anything with plugins etc. to sound like anything you want. This school is almost exclusively "young bucks" without a lot of experience. I was one, once upon a time. There was the "sure knowledge" that with the right number of oscillators and filters you could create any instrument, and it would be indistinguishable from the "real" thing. For some things we got pretty close, but the killer was always the performance. So people made controllers that made you act like you were playing the real thing, and that helped. But you eventually got to "what is the point? I might as well be playing the real thing!" Things have come a long way and the sounds are great now, mostly because of sampling and not trying to "reinvent the wheel". MIDI and the ability to record virtual performances also gave some serious advantages. But you are still constrained by your performance abilities.

    What is the point? Even with all the toys you are still limited by your experience. The longer you work at it the better you get. Supposedly it takes 10,000 hours of doing something to get really good at it.

    The other "school" is made up of the folks who have done the 10,000 hours.

Filters and plugins are amazing and very useful, but they are just another tool. There was a thread on a different forum a few years back about synthesizing ambiences: something like 12 pages on how to create believable wind. That is school-one thinking. School-two folks grab a recorder and go record some wind, or pull some wind they recorded out of the library. If it's not exactly what you need, you add another track or two and build up the wind you need. You are done a week before school one has something that almost sounds like real wind.

It's not an anti-plugin position; I have spent many thousands on plugins and use them a LOT. It is about knowing what is probably going to help and what is probably going to be a waste of time. I use plugins (1) to fix problems (NR, EQ issues, level issues), (2) for FX (intentionally clipping something, turning sounds into weird ambiences, pitching things up and down, adding beef with a subharmonic synthesizer), and (3) for general enhancement (reverb, echo, etc.). But 99% of post is slicing and dicing: cutting out what you do not want and adding things you do want. Other than some very "electronic" sounds, I never create a sound from scratch; it's always a "natural" sound that gets layered up or transformed.

In the above posts you have been told that nothing sounds lonelier than a voice in a dead space; that you should use reverb so it reads as a live space; that you should mic at a distance to get that "alone" sound and mic up close to get that intimate sound; that you should use lonely BG sounds, or contrasting "happy" ones.

It is all good advice, for the right project. Try them all and see what works. Where school two comes in is that an experienced sound editor can look at the scene and probably get closer on the first try. Not always; sometimes the thing you were sure would be killer just falls flat. But usually they will be in the ballpark. You will probably be able to take a pretty good stab at it since you have seen it; we, on the other hand, are taking shots in the dark based on assumptions.
    Last edited by Noiz2; 12-09-2018 at 12:54 PM. Reason: spelling and clarity
    Cheers
    SK




    1 out of 1 members found this post helpful.

#15 | Senior Member (Beverly Hills, CA; joined Mar 2012; 1,307 posts)
Quote Originally Posted by Noiz2:
    Reaper is solid but has no OMF capabilities and the only way around that is with a windows only app.

    Audition might be OK it at least is more post related but it's part of Adobe's rental suite. It may be available separately? I don't really know the app but I'm going to assume that like the rest of Adobe's apps it is pretty solid and probably "plays well with others".
Pro Tools, Reaper, and Logic X don't provide a native waveform editor (last I checked); that's where Audition excels (it added multitrack editing later). https://theproaudiofiles.com/fundame...eform-editing/
That said, Pro Tools plus some fairly expensive plugins can produce significantly better results than Audition, which sometimes produces just-passable results for YouTube/Instagram-level content (which is where our content lives). If there were a business reason to invest again in Pro Tools plus high-end plugins, I'd do it without reservation (when I owned Protools|24 I hired a full-time musician/sound guy; he was also an amazing 3D artist/animator). Currently we shoot once or twice a week and audio editing is typically very minimal: don't clip when recording directly into the camera (Schoeps CMC641 + C300 II, which has a digital limiter), remove extraneous sounds when needed, normalize, export, and upload. I post here to take a break from writing code (day job currently real-time 3D rendering, PBR/AR/VR).

Regarding indie: sure, there are lots of meanings, and for our small productions (typically 2-4 people), I 'get' to do all the technical work: camera, lights, sound recording, audio/video editing, stills and plates, color correction and grading, and sfx/vfx. I see many posts here from folks doing similar projects (including folks whose 'day job' is part of a larger production, e.g. as a DP, grip, or sound, doing closer to OMB for their own projects). Agreed that if there's a budget, or connections/favors, for specialized talent in all aspects of production, that is preferred and will tend to produce better results if the team is well managed by the acting director and/or producer.

    OP: each bit of feedback comes from a specific point of view based on personal experience. In the end, reading books on the subject, reading blog posts, watching examples on YouTube, asking on forums, asking friends in the business (specialists), and perhaps most important- doing your own tests and experiments will get you to your goal.



#16 | Noiz2, Sound Ninja (Detroit & SF; joined Jan 2006; 6,212 posts)
Hey, a moment of what is great about forums!

    It may be one of the few things we agree 90% on.

I'm guessing by "dedicated waveform editor" you mean something like Sound Designer, which was built to edit samples to send to a sampler (back when such beasts were all hardware). So yes, PT doesn't really do that. It does allow you to edit at the sample level, and for post that is all you are really going to use unless you are preparing a bunch of samples to send to a sampler. As a sample editor I don't gravitate to PT, since setting loop points is harder than in other apps.

I tend to do that with DSP Quatro, and I just started playing with WaveLab.
    Cheers
    SK





#17 | Senior Member (Beverly Hills, CA; joined Mar 2012; 1,307 posts)
DSP Quatro looks decent for basic waveform editing on OSX. Before I purchased Protools|24, we were 100% PC, no Macs. The primary audio/waveform editing tool was SoundForge (it changed hands over the years; it was previously owned by Sony). Even after Audition came out (it was previously known as Cool Edit, before Adobe bought them), SoundForge still did certain things better and/or had features not available elsewhere. WaveLab was around back then, and our music guy preferred Steinberg's Cubase for his MIDI composition work. I played with WaveLab years ago; back then I already had a license for SoundForge, so I never purchased it. Since WaveLab is from Steinberg (creators of the VST standard), I'd expect the quality/features to be pretty good in 2018. Steinberg's Nuendo made a splash a few years ago, with another mass migration from Pro Tools to Nuendo for many users. If one likes WaveLab, Nuendo is certainly a contender to replace Pro Tools: https://www.steinberg.net/en/product...roduction.html . Nuendo is now a bit more expensive than Pro Tools; however, it's a lot more capable out of the box (Pro Tools needs a lot of added plugins, and even then I'm not sure it has feature/usability parity with Nuendo).

WaveLab (full version) uses a dongle, apparently required even for the 30-day trial. WaveLab Elements (their lite version) doesn't appear to need one. Looking at WaveLab's features, I'd figure its spectral editing is probably as good as or better than Audition's (which is decent). WaveLab's new wavelet display could be useful too: https://www.steinberg.net/en/product...avelab_95.html . I do waveform editing mostly for special effects and video games, and very occasionally to deal with troublesome dialog.

Back on topic: at some point, when using these new 'modeling microphones', there'll be presets for dark, moody, bright, happy, etc. to apply to recordings to quickly hear variations; coupled with a real-time video display, the sound person and director will be able to quickly dial in the desired emotional/narrative goal. This will be limited to filtering/convolution-environment kinds of effects. A bit farther in the future it will be possible to remap a dry vocal recording with adjustable emotion (automatically changing inflection, cadence, etc.), or take an emotional recording and make it dry. How? AI and machine learning: there are demos already out there that remap to a completely different voice for so-called "deep fakes", which is a separate philosophical discussion. It's currently primitive, though in the future it will be hard to tell real from remapped.
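The "filtering/convolution" part of those presets is easy to sketch: at its simplest, a preset is an impulse response convolved with the dry recording. A minimal direct-form FIR convolution in plain Python (real modeling mics use proprietary DSP; this only illustrates the building block the post names):

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution: imprint the 'character' captured
    in an impulse response onto a dry recording."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h   # each input sample smears the IR forward
    return out

dry = [1.0, 0.5, -0.25]
assert convolve(dry, [1.0]) == dry   # identity IR leaves the signal untouched
```

Production code uses FFT-based convolution for long impulse responses, but the result is mathematically the same.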



#18 | Noiz2, Sound Ninja (Detroit & SF; joined Jan 2006; 6,212 posts)
I started earlier. There was a Windows app, I'm blanking on the name, that was doing twice the tracks of PT and was completely stable. PT at the time crashed a couple of times a day; this was PT 3.x. At the time I also had Sound Forge, a rock of functionality. But back then PT was 99.9% of the film sound post world in the US. So I spent my days in the Mac/PT ecosystem and the rest of the time in the Win 95-98 ecosystem. It was just too big a PITA, so I ended up moving to PT and Mac.

Video games are a PITA, or at least were when I was doing some. All kinds of whacko requirements and time constraints, plus you need a bunch of versions of everything... it gets very 9-to-5 quick.

The whole software-instead-of-acting thing I mostly disagree with. I'm sure that as time goes on it is going to get closer to a standard, but personally I liken it to CG. CG used in subtle ways completely fools everyone, but there are areas where it just doesn't work. With HUGE amounts of data and a lot of "real world" dirt added in, maybe? But if you just look at the Bond openings, the real stunts are clearly obvious vs. the CG openings. Star Wars has much the same problem; the new versions are not "dirty enough".
That said, one of the things about sound is that once you get past the information point for the audience, the "reality" level jumps a lot, so?

    +++
    I realized that last might be confusing.

Basically, once you add enough detail to the sound track, it suddenly sounds "real". I think it's the point where you exceed the audience's ability to keep track of everything and force them to focus on what they think is important. Which is what happens 24/7 in the real world: you are swamped with data and constantly filtering the important from the "noise". I'm not sure it is Murch's "three things", but it does happen, and it just kind of jumps out at you when you get there.

    ++++

When I started in theatre there was no computer automation, and sound and lights were basically a performance of their own. Actors hated computer lighting because it didn't interact with them; they felt they were playing to the technology. But it saved a lot of money because your operators could be much less experienced. It's kind of an MP3 analogy: it wasn't better, but it was OK and it was cheaper. The same thing eventually happened to sound in the theatre: great for the designer, not for the operator. As a designer I loved that I had the control, but the operators now had just about zero connection to the performance. So the show was as good as I designed it, but never better. A good operator was a joy to work with and made your design better than what you planned. They were also able to deal with changes on stage and "save" the show when things went sideways. None of that is possible with a computer-run show.

Now, film doesn't have live performances, so it's not as big a deal, but anything involving actors hurts when you remove them from the "performance".

In some ways it comes down to opposite views on the singularity. I am absolutely in the it-would-be-a-nightmare camp. But other smart people are in love with the idea, so? I take solace in the knowledge that I am very unlikely to be around when it becomes a serious issue and not a futurist's wet dream.

There are two things that keep some of this from happening. First, at least for now, it is much faster to hire an actor than to have a programmer try to "act". Second, there are at least a few folks who acknowledge that just because you can do something doesn't mean you should.

What erodes this, of course, is that people keep lowering the bar of expectation. I saw an article on how some big fashion models were actually just CG. They had some photos, but to me none of them looked real. Apparently they did look "real" to a lot of people, though. Maybe if you spend enough time playing video games, plastic starts looking like flesh?
    Last edited by Noiz2; 12-09-2018 at 08:16 PM. Reason: clarity
    Cheers
    SK





#19 | Senior Member (Beverly Hills, CA; joined Mar 2012; 1,307 posts)
I brought up automated emotion because I recently watched a video demonstrating changing an actor's inflection at the end of words so they'd sound more confident. This is relatively easy to do with modern spectral editors, including Audition (Melodyne created waves years ago, pun intended, when it provided very high-quality single-note pitch and formant editing for voice/singing: AutoTune on steroids). Here's a recent list: https://audioassemble.com/best-pitch...tion-software/ . After seeing what's happening in the machine learning space, it's clear that we'll see more automated tools (vs. manual) in the near future.
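The crudest form of pitch manipulation is a playback-rate change via linear-interpolation resampling. Unlike the Melodyne-style editing mentioned above, it alters duration along with pitch, but it shows the basic mechanism. (A hypothetical sketch, not any product's algorithm.)

```python
def resample_pitch(samples, ratio):
    """Naive pitch shift by playback-rate change with linear
    interpolation. ratio > 1 raises pitch and shortens the clip;
    real pitch editors shift pitch without changing duration."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighboring samples:
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

Applying an increasing `ratio` over only the last fraction of a word is the cheapest imaginable version of the "confident inflection" edit described above; serious tools do it with phase vocoders or PSOLA so timing is preserved.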

As for the singularity and/or merging with machines, right now it doesn't sound very appealing. After doing a lot of inner/spiritual work, one gets to a point where the goal is simply to live life as a human, perhaps following the Zen or Taoist paths, and simpler, analog things become more appealing. Perhaps it's also a side effect of having lived in an age of technology: after enough years one yearns to get back to nature, realism, and analog things. Though for OMB/ultra-low budgets, tech can help create works of art otherwise impossible.

Automation is coming to every field; the question is how far it will go before people push back and say: enough! One area where AI/machine learning will really help is personalized nutrition and health diagnostics, so AI doctors and doctor-assistants = cool. For performance art, I think we may see a resurgence of practical effects in film (that happens from time to time already), less green screen, less ADR (probably not, haha ;)), and a growing market for live performances, from plays to music. Part of the 'analog resurgence' is that it will bring people together in the real world vs. the isolation that has been growing due to cellphone and social-media addiction. At the root of the 'technology pressure' has been market competition for resources, fundamentally driven by energy. When ultra-efficient energy systems are made public and the technology pressure is lifted for basic survival, I think that's when the analog resurgence will really take off: a new renaissance for the arts.



