Logic X audio settings question
Moderators: admin, mdc, TAXIstaff
- ChrisEmond
- Getting Busy
- Posts: 55
- Joined: Tue Dec 22, 2009 8:04 am
- Gender: Male
- Location: Montreal, Canada
- Contact:
Logic X audio settings question
Greetings to everyone,
New to Logic X, and I'm trying to figure out how to set up my global and project settings, particularly the bit rate and sample rate.
Also, is it best to keep my recording preferences at AIFF 24 bit?
Chris
- andygabrys
- Total Pro
- Posts: 5567
- Joined: Sun Jan 02, 2011 10:09 pm
- Gender: Male
- Location: Summerland, BC by way of Santa Fe, Chilliwack, Boston, NYC
- Contact:
Re: Logic X audio settings question
It's a crapshoot whether your settings are going to line up with anyone else's, which is why it's easy to convert sample rate and bit depth.
The "accepted" standard for audio for picture (TV / video) has been 48 kHz, 16 bit depth.
The "red book" standard for mastering CDs has been 44.1 kHz, 16 bit depth.
As audio technology moves onward, some formats are asking for higher resolution than either of those.
What I do:
I record at 48 kHz, 24 bit.
For final masters for TV use I make 48 kHz 16 bit masters.
I use dithering at the limiting stage of my master to reduce 24 bit to 16 bit (built into Logic - I happen to use the UV22HR algorithm).
I use WAV files even though I use Logic predominantly and Apple's standard is AIF.
This covers 80% of uses.
In other cases, if a library wants 44.1 kHz 16 bit, I convert the files in Logic, Pro Tools, Compressor, or even iTunes; if they want AIF, I do the same.
In some cases they want 48 kHz 24 bit masters, and then you set your bounce options to 48 kHz / 24 bit and deselect dithering in Logic.
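Since the 24-to-16-bit step keeps coming up, here's a rough sketch of what truncation plus TPDF dither does to a single sample. This is plain Python with made-up sample values, just to illustrate the idea - in practice Logic's bounce dialog (or a batch converter) does all of this for you:

```python
import random

def reduce_24_to_16(sample_24: int, dither: bool = True) -> int:
    """Drop the low 8 bits of a signed 24-bit sample, optionally adding
    TPDF dither (two uniform random values summed gives triangular noise,
    spanning roughly +/- 1 LSB of the 16-bit target)."""
    if dither:
        sample_24 += random.randint(0, 255) + random.randint(0, 255) - 255
    out = sample_24 >> 8                   # truncate 24 bits down to 16
    return max(-32768, min(32767, out))    # clamp to the 16-bit range

print(reduce_24_to_16(1_000_000, dither=False))  # 3906
```

The dither noise randomizes the rounding so quiet material doesn't pick up correlated truncation distortion, which is exactly why you dither once, at the last stage, and not before.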
The main reason 24 bit depth recording became popular is that 16 bit depth gives you a lot less dynamic range, and unless you recorded hot levels to the track, the noise "floor" was noticeable. That might have been down to older converters / equipment etc., but most people use 24 bit depth now.
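That "a lot less" can be put in numbers: the theoretical dynamic range of an N-bit signal is the ratio of full scale to one quantization step, i.e. 20·log10(2^N) dB. A quick check of the textbook formula (nothing Logic-specific here):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit fixed-point signal:
    full scale over one quantization step, expressed in dB."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3
print(round(dynamic_range_db(24), 1))  # 144.5
```

So each extra bit buys roughly 6 dB, and the jump from 16 to 24 bits drops the quantization floor by about 48 dB - which is why tracking at conservative levels is safe at 24 bit.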
Irresistible Custom Composed Music for Film and TV
http://www.taxi.com/andygabrys
http://soundcloud.com/andy-gabrys-music
http://www.andygabrys.com
- Len911
- Total Pro
- Posts: 5351
- Joined: Mon Dec 07, 2009 4:13 pm
- Gender: Male
- Location: Peculiar, MO
- Contact:
Re: Logic X audio settings question
Bit rate is best set at 32-bit float or 64-bit float, whatever the highest available is. That means more accuracy in the processing: fewer errors, less rounding (the math), and more bits to work with, especially because you will be running effects (more math). But in the end, before rendering a WAV or AIFF, you will dither down to 16 bit or 24 bit.
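The rounding-error point can be demonstrated by running the same long chain of small gain calculations at 32-bit and 64-bit float precision. This is a toy sketch - it emulates float32 rounding with a `struct` pack/unpack round-trip, which is not how any DAW engine is actually built - but the accumulation effect is real:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (64-bit) to 32-bit float precision
    via a pack/unpack round-trip."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Push a signal value through many tiny gain changes at both precisions,
# the way audio passes through thousands of effect calculations.
gain = 1.000001
x64 = 0.1
x32 = to_float32(0.1)
for _ in range(10_000):
    x64 = x64 * gain
    x32 = to_float32(x32 * to_float32(gain))

print(x64, x32)  # the 32-bit path has drifted away from the 64-bit one
```

Each individual rounding is inaudibly small; the point is that errors compound with every operation, which is why mix engines moved to wider internal math even though the files going in and out stay at 16 or 24 bit.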
Sampling rate: either 44.1 kHz or 48 kHz. CD is 44.1 kHz, and so, typically, is mp3. There are also 96 kHz and 192 kHz, and there is some controversy over whether they're better or not: yes, if you keep that sampling rate all the way through, but if you have to downsample, probably not - the sample points would be the same, one format just samples more often than the other. Of course, if someone asks for a particular sample rate or bit rate, that is another story. And if you needed to upsample, say from 48 kHz to 96 kHz, you would still basically have a 48 kHz signal, with no benefit other than being able to play it on a 96 kHz setting.
The advantage of supplying 16 bit versus 24 bit, if you have the option, is that you control the dithering process when you dither to 16 bit yourself; with a 24-bit file, if they decide to dither (or not) down to 16 bit, it's out of your control.
Probably the advantage of AIF over WAV is metadata?? If you wanted a WAV to carry metadata you would use a Broadcast WAV, but it really depends on what software uses it. Some players use metadata (artist, song, album info), some don't. It's possible Logic could use included metadata to organize and find files more easily? I don't know. But I'm sure Logic can handle either, and many other formats.
- andygabrys
- Total Pro
- Posts: 5567
- Joined: Sun Jan 02, 2011 10:09 pm
- Gender: Male
- Location: Summerland, BC by way of Santa Fe, Chilliwack, Boston, NYC
- Contact:
Re: Logic X audio settings question
Ok - this is bound to be a hot thread now.
Bit depth and sample rate in the way that I was speaking of them are something quite different from what Len911 wrote.
Sample rate is the number of times per second the computer records amplitude, and bit depth is how finely that amplitude, +ve or -ve from 0, can be represented.
16 bit depth gives a dynamic range of 96 dB.
24 bit depth gives a max dynamic range of 144 dB.
The idea being, the greater the sample rate, the more measurements taken in a second, and the truer the resulting curve constructed by those data points is to the actual analog signal.
At this point there isn't a commercially available AD converter that works at greater than 24 bit (well, maybe a couple - I think Antelope Audio has one). There isn't much point, and not much to be gained.
I do agree though that the benefit of going above 96 kHz is debated as well.
What Len is talking about is related - but is actually the computational bit depth that is used in the summing mixer.
In the old days 24 bit fixed math was used in the Pro Tools TDM engine.
That gave way to almost every DAW using 32-bit floating point math.
Recently just about everybody has gone to 64 bit floating point math.
This gives a silly amount of headroom, and it's very hard to clip the internal workings of the summing mixer in your DAW unless you don't pay ANY attention to gain staging.
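That headroom point is easy to see in a toy sketch (not Logic's actual engine): in a floating-point mix bus, a signal peaking above full scale survives intact until you trim it back, whereas a fixed-point path hard-clips the overs permanently:

```python
def int16_clip(x: float) -> float:
    """Fixed-point style bus: anything past full scale (1.0) is hard-clipped."""
    return max(-1.0, min(1.0, x))

hot = 4.0      # a mix bus peaking 12 dB over full scale
trim = 0.125   # pull the master fader down 18 dB

print(int16_clip(hot) * trim)  # 0.125 -> the overs are gone for good
print(hot * trim)              # 0.5   -> float headroom: signal intact
```

In the fixed-point case the waveform was flattened at 1.0 before the trim, so turning it down just gives you a quieter clipped signal; in float, the information above full scale was never lost.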
Len911 wrote: Bit rate is best set at 32 bit float or 64 bit float, whatever the highest is.
That doesn't refer to capturing the signal - that refers to computing the math while mixing in your DAW.
Len911 wrote: The advantage of supplying a 16 bit versus a 24 bit if you have the option, is that you control the dithering process.
True, but there are few ears golden enough to tell the difference between a decent mp3 and a WAV or AIF file, so it's not much of an issue - just dither when you can, and if somebody asks for 24 bit, don't worry about it. They will handle it.
Len911 wrote: Probably the advantage of aif over wav is metadata??
True, AIF can retain metadata - but that doesn't matter much if the client specs WAV, or if they use Source Audio or Soundminer etc. (whatever all the platforms for cataloging music are), because you usually supply all that metadata on a spreadsheet or in a web form.
Irresistible Custom Composed Music for Film and TV
http://www.taxi.com/andygabrys
http://soundcloud.com/andy-gabrys-music
http://www.andygabrys.com
- Len911
- Total Pro
- Posts: 5351
- Joined: Mon Dec 07, 2009 4:13 pm
- Gender: Male
- Location: Peculiar, MO
- Contact:
Re: Logic X audio settings question
I don't disagree with what Andy said! *
I was referring to the settings in the DAW.
In simple terms, the A/D converter really only measures amplitude; frequency (Hz) and phase (degrees of a sine wave) are by-products - sort of like connecting the dots, where a sine wave and its phase appear. The sampling frequency just means how often per second an amplitude measurement is made. I think it's Helmholtz's theory that all sounds are sine waves; it's how a speaker moves in and out around rest, or zero. A chip uses voltage, usually only about 5 volts, to divide all of those dB amounts into ones and zeros, and how many bits it uses is the resolution. The clock setting controls the sampling frequency, and jitter is basically how accurate the clock is. And just for the heck of it, lol: to eliminate aliasing, or artifacts in the audio, you have to sample at at least twice the highest frequency you can hear - thus 44.1 kHz for 20 kHz. If you could only hear up to 16 kHz, then 32 kHz would do. That's the theory, anyway.
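The twice-the-highest-frequency rule (Nyquist) can be seen directly in code: a tone above half the sample rate produces exactly the same digital samples as a lower "alias" tone folded down from the sample rate. A sketch with made-up frequencies:

```python
import math

fs = 44100              # sample rate (Hz); Nyquist limit is fs/2 = 22050 Hz
f_high = 30000          # a tone well above the Nyquist limit
f_alias = fs - f_high   # 14100 Hz: where the energy folds down to

# Sample both tones (cosine phase): the sample values come out identical,
# so once captured, the 30 kHz tone is indistinguishable from 14.1 kHz.
n = range(64)
high  = [math.cos(2 * math.pi * f_high  * i / fs) for i in n]
alias = [math.cos(2 * math.pi * f_alias * i / fs) for i in n]

print(max(abs(a - b) for a, b in zip(high, alias)))
```

This is why converters put an anti-aliasing filter in front of the sampler: anything above fs/2 has to be removed before sampling, because afterwards it can't be told apart from the folded-down alias.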
* Except for maybe Antelope Audio, lol. They are not a chip maker as far as I know, and A/D chips are used in many applications besides audio where 32-bit or higher might be more useful.
Now the complicated part, lol: Helmholtz's theory and sine waves. It sounds simple, and it is, until you start thinking, wow, I'll just use an additive synth and add sine waves until I get, say, the sound of a violin. The problem is that, for a complex waveform, there are apparently so many sines, amplitudes, and phases that no computer or human has the computing power or time to do it properly.
Not to mention the limitations of FFT: it can't show every sine, merely a picture averaging the waveform, so you couldn't actually determine all the sines you would need just by looking. It shows how amazing an A/D converter really is. If you had a hex editor you could, in theory, edit all the ones and zeros in a WAV or AIF file directly and never need a DAW or an effect - if only you knew what you were doing. Theoretically possible, definitely not practical.
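The "so many sines" problem can be felt with a tiny additive-synthesis experiment: even a plain sawtooth (a mild waveform next to a violin) converges slowly as you pile on sine partials. A sketch (the partial counts are arbitrary; the series used is the classic Fourier series, which converges to 1 - 2t on the interval 0..1):

```python
import math

def saw_partial_sum(t: float, n_partials: int) -> float:
    """Additive synthesis of a descending sawtooth from sine harmonics:
    (2/pi) * sum of sin(2*pi*k*t)/k converges to 1 - 2t for t in (0, 1)."""
    return (2 / math.pi) * sum(math.sin(2 * math.pi * k * t) / k
                               for k in range(1, n_partials + 1))

target = 0.5  # ideal sawtooth value at t = 0.25, i.e. 1 - 2*0.25
for n in (4, 64, 1024):
    print(n, abs(saw_partial_sum(0.25, n) - target))
```

The error keeps shrinking but never quite disappears, and a sawtooth is a best case: its partial amplitudes follow a simple 1/k rule. A real instrument's partials have no such closed form, which is the point above.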
ChrisEmond wrote: how to setup my global and project settings particularly the bitrate and sample rate
andygabrys wrote: That doesn't refer to capturing the signal - that refers to computing the math while mixing in your DAW.
Right - it's not capturing the signal in the way we think of an A/D converter capturing the signal. But if you have a captured 24-bit signal and the setting in your DAW is 16 bit, you could lose 8 bits. It's basically like calculating 10 divided by 3 = 3.3333333333: how many of those 3s you carry out makes a difference, and more so the more calculations you do. If you never "effected" your signal it wouldn't matter, but the more effects you put the signal through, the more calculations (a single effect may involve thousands), and thus the more chance of calculation errors. Can you really tell the difference? Maybe, maybe not. That's where 32-bit or 64-bit float comes in: it allows more bits, thus more accuracy in the binary math, and of course it is truncated or dithered down to at most a 24-bit WAV or AIF before the file is rendered, because delivery specs call for 16- or 24-bit files - for now anyway, lol!
To quote from Antelope's own website (http://en.antelopeaudio.com/2015/05/add ... er-clocks/): "In spite of a wave of 32 bit converters - which employ marketing rather than audio bits - the performance of state of the art converters is currently about 21-22 bits. Personally I don't think more bits matter any more."
I've gone on way too long...

- andygabrys
- Total Pro
- Posts: 5567
- Joined: Sun Jan 02, 2011 10:09 pm
- Gender: Male
- Location: Summerland, BC by way of Santa Fe, Chilliwack, Boston, NYC
- Contact:
Re: Logic X audio settings question
I think we have beaten this one into the ground huh Len911?
Irresistible Custom Composed Music for Film and TV
http://www.taxi.com/andygabrys
http://soundcloud.com/andy-gabrys-music
http://www.andygabrys.com
- Len911
- Total Pro
- Posts: 5351
- Joined: Mon Dec 07, 2009 4:13 pm
- Gender: Male
- Location: Peculiar, MO
- Contact:
Re: Logic X audio settings question