
Why did people's voices from the 1920's to the 1950's sound funny?



posted on Jul, 19 2008 @ 12:06 AM
Early 8mm, 16mm, and 35mm film was shot at a frame rate much lower than
today's 24 frames per second, sometimes as slow as 16 to 18 frames per
second. When that footage is converted to the 29.97 fps of NTSC
(North American) television, the effective speedup of the action is roughly
24%, 33%, or 50% faster than real life, depending upon the original film
camera rate and the projection rate of the telecine film-to-video transfer machine.

There is a standard cadence used when converting 24 fps film to NTSC
video called 3:2 PULLDOWN, and other cadences are used for slower film
frame rates, in order to keep the action at the same speed as real life.
Using the wrong cadence is a MAJOR cause of speedup on early
film-to-video transfers.

In the early days, pre-1970s, much film was converted frame-for-frame
to NTSC TV by running a telecine film projector into a video deck without
applying the 3:2 pulldown cadence, which is why the speedup is so common
on early 16mm and 35mm film transfers.

I've transferred millions of feet of 8mm, Super-8mm, 16mm, and even
35mm film to video using computers, which is MUCH easier than it was in 1970.
And depending upon the accuracy of the original film camera motors,
there is a wide variation in projection rate and thus a variation in
"real action" speeds.

I've become quite expert at determining the original motion within
the original films and can thus adjust on the fly, by eye, to the real
speed of the original action.

For Example:

I can tell the original Patterson Bigfoot footage was probably shot
at 16 to 18 frames per second rather than 24 fps, but some TWIT
projected it at the wrong frame rate (i.e. 24 fps) on the telecine to
a 29.97 fps NTSC video deck, so we get a 33% to 50% speedup over the
ACTUAL real-world speed of a Sasquatch/Bigfoot walking through some
mountain terrain in Oregon in the 1960's.

Because of the original WRONG telecine transfer, the rest of the world
sees a Bigfoot walking FAST across the screen. For an experiment, look
on YouTube for the Patterson Bigfoot footage, and if you see it in its
sped-up form, download it and slow it down from 29.97 fps to 18 fps and
then to 16 fps using a video editing program like Adobe Premiere,
Final Cut Pro, or Ulead Video Studio. Only THEN
does the walking motion look natural.
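The slow-down suggested above is just a matter of reinterpreting the frames at a lower rate; a rough sketch of the arithmetic (the one-minute clip length is an invented example):

```python
# Reinterpreting video frames at a lower rate stretches the running
# time by old_fps / new_fps without dropping or blending any frames.

def retimed_duration(seconds: float, old_fps: float, new_fps: float) -> float:
    """New running time when the same frames are played at new_fps."""
    frames = seconds * old_fps
    return frames / new_fps

print(round(retimed_duration(60.0, 29.97, 18.0), 2))  # one minute -> ~99.9 s
print(round(retimed_duration(60.0, 29.97, 16.0), 2))  # one minute -> ~112.39 s
```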

Hope the explanation helps shed light on film vs video speedup problems.

posted on Jul, 19 2008 @ 12:08 AM
I highly doubt it's evolution; more likely a combination of old film equipment, old lighting, and makeup techniques and materials. The people who made those films managed to trick some of you into thinking evolution happened somewhat overnight.

No offense, but evolution takes thousands and maybe millions of years, not 80. What you're seeing and hearing is caused by old equipment, not the newfangled HD, Dolby Digital remastered stuff you see and hear now. But yes, people do change over time; people's ideals of beauty change over time, as does what's considered attractive and unattractive. The women in those films back then probably weighed about 130-140 pounds; now they weigh 100-110 and look almost like boneyards. That could be one thing you're noticing. The men were different in their own way too. People may have talked differently back then because of the fads and what was cool and hip at the time. Take now, for instance: could you imagine one of those guys from the '30s seeing a white kid talking like a black guy? He would probably freak out, or think he was in a foreign land or some horrible future.

posted on Jul, 19 2008 @ 12:10 AM
I've thought about that too.
I know what you mean, because the people in those old movies really had high-pitched, nasally-sounding voices.
Even the inflection was different; I figured it was because they were intentionally over-dramatizing their lines.
I know many silent movie actors lost their jobs when talking movies were introduced.
The story goes that it was because they sounded funny, and Hollywood didn't want to risk disappointing the fans by letting them hear what their favorite stars really sounded like.
Something tells me they probably sounded more like modern day folks.

posted on Jul, 19 2008 @ 12:14 AM
Why did they sound different?

Well, in the early days of radio, sound and film they did not have the processing gear yet.

When they recorded, they hung a mike, amplified it, and generally pumped it through a "horn" type speaker.

It was not until the mid 1950's that sound engineers had a crude form of sound processing.

If you look at Sam Phillips' old studio in Memphis, the first thing you see is the LACK of sound processing gear. SUN Studios...side note...before they were famous, Roy Orbison, Jerry Lee Lewis, Johnny Cash, and Elvis Presley all recorded a gospel song together!!! They were all signed to Sun and none had broken big yet! Cool huh!

So, without boring anybody too much, I will give part of the reason, as the other reasons have been voiced above. Notably, the type of voice training the actors had at the time had a lot to do with it.

It was not until the mid 1950's that a more "natural" vocal track was really sought after.

Many things influence a vocal track. Most notable, of course, is the mike and the room.

Better rooms and better mikes were made and used; mikes that could pick up not only the person's voice but the very resonance of his body. When a sound engineer is able to dial in a room and pick up the resonance properly, the voice sounds very natural.

Now, with the new gear, you can change a singer's pitch, tone, etc. in the studio. And of course everybody knows they have singers who "layer" or "patch" for many of the not-so-talented who are famous.

Of course there are many types of mikes: piezo mikes, dynamic mikes, capacitor (condenser) mikes, electret mikes, etc., and each has a role to play in both studio and live sound.

Also mikes have different pickup patterns.

The basic mike is omnidirectional; it picks up sound in an omni pattern.

Cardioids are another. Cardioids are (mostly) used by vocalists. A cardioid pattern is "heart shaped" and does not pick up background noise like a PZM would.

A PZM mike is used to pick up rooms with people speaking.

A figure-of-eight pattern is used in recording large orchestras (M&S stereo pair). As far as voice goes, you might expect a flat response right across the spectrum, but we have found that a good vocal mike should roll off at about 50 Hz.

Without this 50 Hz roll-off you get too many pops, loud low breaths, etc.

Also, with a vocal mike you want a 3 kHz to 6 kHz presence boost. This gives the vocal detail and tone.
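As a rough illustration of the EQ curve described above (the sample rate, the hard cutoff, and the +6 dB boost amount are my assumptions; a real EQ would use smooth shelving and peaking filters rather than a brick-wall mask):

```python
import numpy as np

# Vocal EQ sketch: roll off below 50 Hz, boost 3-6 kHz "presence",
# applied here as a simple frequency-domain gain mask.

fs = 48_000                       # sample rate in Hz (assumed)
n = fs                            # one second of mono audio
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

gain = np.ones_like(freqs)
gain[freqs < 50.0] = 0.0                         # hard roll-off below 50 Hz
presence = (freqs >= 3_000.0) & (freqs <= 6_000.0)
gain[presence] = 10 ** (6.0 / 20.0)              # +6 dB presence boost

def apply_eq(signal: np.ndarray) -> np.ndarray:
    """Apply the gain mask to a one-second mono signal."""
    return np.fft.irfft(np.fft.rfft(signal) * gain, n=len(signal))

# A 30 Hz rumble is removed; a 4 kHz component comes back ~6 dB louder.
t = np.arange(n) / fs
rumble = np.sin(2 * np.pi * 30 * t)
voice = np.sin(2 * np.pi * 4_000 * t)
out = apply_eq(rumble + voice)
```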

Then you need to train the vocalist in how to use proximity. Proximity allows the vocalist some control: if you see a vocalist kissing his mike, that allows the lower frequencies and tones to blossom; if a vocalist holds the mike away, it allows for the use of dynamics.

Standard mikes that are road worthy would be a Shure SM57 or SM58, an AKG D112, or a Sennheiser 421; all are dynamic microphones that are not that expensive, maybe 200 to 300, but these mikes are used at all major studios.

If you are a purist then you will probably want a tube driven mike. They sound very warm....but they are expensive and sometimes noisy.

Anyhow, I hope I didn't bore you. I didn't want to go into details.

[edit on 19-7-2008 by whiteraven]

posted on Jul, 19 2008 @ 12:26 AM

Originally posted by whiteraven
Why did they sound different?

Well, in the early days of radio, sound and film they did not have the processing gear yet.

When they recorded, they hung a mike, amplified it, and generally pumped it through a "horn" type speaker.

It was not until the mid 1950's that sound engineers had a crude form of sound processing.

It's definitely not that. They Might Be Giants made some recordings on an Edison wax cylinder machine and they just sound like tinny versions of themselves. They did have a lead-in announcement imitating the speech patterns of earlier times to introduce the song and it sounded pretty much right on (it's actually Flansburgh voicing it that way). Find "I Can Hear You" by TMBG - the "Factory Showroom" album version.

[edit on 19-7-2008 by EnlightenUp]

posted on Jul, 19 2008 @ 12:27 AM
Just different times, different people. 50 years from now someone might watch a few movies from the 90's and be like "why do those people have long hair, grungy clothes and talk so weird or those people have baggy clothes and horrible grammar?"

posted on Jul, 19 2008 @ 12:29 AM
This is Atlantican @ a friend's house:

The accent of the first settlers was strong. Over generations it was bred out and flattened, yet there are hybrid Southern accents and Dakota/Northern, Californian, and NY accents that are unique.
Over the years, as various races attained their freedom of speech and association, accents, slang, and looks all changed as those fine people associated. Even the Africans and Mediterraneans are lighter skinned to a degree over here after just a handful of generations. Very cool to witness our micro-evolution from our Euro/Afro ancestors, isn't it!?
Eventually, North Americans will be one race.

The mics used back then were largely of the ribbon type (RCA DX 44 pill/capsule, a la the Larry King splash screen) and D-104 types (old aircraft controller/military desk mics). There was little to no EQ or compression, and the medium was tape running on tube-powered equipment. The voices were mostly reproduced faithfully! The cadence you hear is quite accurate!

Cool huh?

posted on Jul, 19 2008 @ 12:30 AM
reply to post by whiteraven

I understand how the audio technology of that time could affect the quality of the recordings, but it didn't affect their style. Listen to old movies and you'll notice people back then also spoke differently. The main actors usually spoke in a whiny, abrupt way, as if they were in a race to get their words out. If you listen carefully you can almost detect a Northeastern accent.
It was usually the villains who had the lower, deeper, modern-sounding voices.

posted on Jul, 19 2008 @ 12:52 AM
reply to post by EnlightenUp

yes..I read about that.

I am not familiar with Edison Wax recordings.

I listen to Django a lot, and those recordings are from the 1930's.

When he does his arpeggio runs I can tell the sound is compressed.

Most of the early recordings are lacking in presence and tone because of the gear.

I have recorded and used analog and digital gear and I find I tend to lean toward analog gear.

Edison wax recording might be an analog man's wet dream!!

I would love to see and use one. Very cool.

[edit on 19-7-2008 by whiteraven]

posted on Jul, 19 2008 @ 01:10 AM
I would suggest a different direction, namely that it has more to do with the training of actors. Most of these people were trained in theaters and their audiences expected a certain clip from the Vaudeville shows they watched.

posted on Jul, 19 2008 @ 01:20 AM
I think it is because the old movies were really trying to make things look and sound "ideal." People cussed back then, and were coarse and nasty just like we are, but in the old movies, they were very "golly gee, take that" even in bar fight scenes.

Lots of the people, though they looked more mature for their age than many of us do today, SOUNDED like little kids when they talked. I don't think it was all due to the speed of the film; Lauren Bacall, Katharine Hepburn, and some of the other actresses and actors had lower, less childish-sounding voices. I think it more reflected the role that movies played in that day.

They were distractions from, and idealizations of, real life; not reflective of real life. Today we like things a little more gritty and raw, and we don't go in as much for the sugar coating in our films.

posted on Jul, 19 2008 @ 01:27 AM

Originally posted by Illusionsaregrander
Today we like things a little more gritty and raw, and we don't go in as much for the sugar coating in our films.

All the ridiculous CGI-enhanced car chases, crashes and stunts aren't a form of sugar coating? Is there a better substance with which to compare that? Personally I'd say they've gone way over the top.

The modern "gritty" and "raw" in movies today seems almost as idealized a form as the sugar coating of old.

posted on Jul, 19 2008 @ 01:29 AM
reply to post by EnlightenUp

It may be coated with something, but it isn't sugar. I am not saying we like things realistic, just grittier.

There is a difference.

posted on Jul, 19 2008 @ 01:52 AM
reply to post by Alxandro

Yes, I understand that in the last century accents were very regional.

East Boston, New York, etc. all have a nasal tone. If you speak and your voice resonates in that part of your head, then it will accent certain frequencies.

Try it: if you use your diaphragm you will get a lower tone. Listen to the vocals on Jerry Lee Lewis's "Great Balls of Fire". His voice is very compressed, with very little presence. Then listen to Jerry Lee Lewis's later stuff and you will hear presence in his voice; again, that is a 3 kHz to 6 kHz adjustment on an EQ.

Presence gives detail and tone. Early recordings very seldom had presence.

A really good example would be to listen to Judy Garland in The Wizard of Oz before it was remastered and afterward.

If you listen to the early release, Judy Garland's voice lacks detail and tone
(get a VHS release for this). Then listen to the remastered version: big difference in her presence and tone.

Also, listen to early Chuck Berry. Get something that is not remastered, then listen to something remastered from his box set.

Also, listen to old Johnny Cash recordings...same thing. I have an early Johnny Cash LP, and I also have his box set. The early LP sounds compressed and lacks detail; the remaster brings the detail back.

Also, the telephone: in 1950 the voice frequency range was 300 Hz to 3400 Hz. Now it is all digital, but still compressed; notice how compressed you sound on phones!

This also has to do with the fact that you are taking ACOUSTIC energy and transferring it into the ELECTROMAGNETIC domain. In the early days, the average guy doing the recording knew very little about how sound changes when it morphs from acoustic energy to electromagnetic energy.

We hear from 20 Hz to about 20 kHz. The typical male human voice, at least for the past 100 years or so (lol), has a FUNDAMENTAL frequency of 85 Hz to 155 Hz. SO the HUMAN MALE fundamental frequency does not even fit into the early microphones' pickup range, which was about 300 Hz to 3400 Hz.

Yeah...wrap your head around that...it gets better...

If you only have the ability to pick up 300 Hz to 3400 Hz, then how do you get somebody to sound anywhere near normal? This was the issue they had in the very early years.

The answer, of course, is that if enough of the harmonic series is present, the listener's ear fills in the MISSING fundamental, creating the impression of hearing the fundamental tone!! Cool huh!


Harmonic series: if you look at a vocal cord, it vibrates, something like the string on a guitar or piano.

A good piano or guitar (any acoustic instrument) is based on a harmonic oscillator. This simply means that you will have a variety of tones or oscillations going on at the same time.
These oscillations are called "standing waves".

Interaction with the surrounding air causes "traveling waves"..this is what you hear.

Because of the self-filtering nature of resonance, these frequencies are limited to integer multiples called harmonics, starting at the lowest possible frequency, and these multiples form the harmonic series!! Cool!!

Then this frequency determines the musical pitch...this also influences tone!

SO the frequency being expressed causes other cool things to take place. These are known as OVERTONES.

Overtones are the shorter-wavelength tones that give the instrument its "personality".
Each fundamental frequency has overtones that are developed through "room" resonance as well as instrument resonance.

Harmonics are ALL the partial waves within a sound or noise that are integer multiples of the fundamental frequency.

Among recording engineers, "harmonic" and "partial" mean the same thing, although now with some of the DSP stuff a partial can be a structure that is not related to the fundamental frequency.

That is because we are not dealing with "real" sound anymore; we are dealing with digital signal processing that can place a "strange" partial onto a fundamental.

Anyway, if you are dealing in analog, as the early guys were, the places where you might get these inharmonic partials would be on a tam-tam or a gong.

So then we need to use our knowledge of harmonic structure to build amplitude, timbre, and tonality.

The relative amplitude of various harmonics will determine the timbre of the instrument...including the human voice.

If the mike can only pick up 300 Hz to 3400 Hz, your timbre will suck.

That is why many early recordings sound compressed, or if you like, nasal or "high" pitched.

The harmonics fool you into "hearing" the fundamental frequencies of 85 Hz to 155 Hz through the harmonic series!
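A small sketch of this missing-fundamental effect (the 100 Hz fundamental and 8 kHz sample rate are my choices, not the poster's): build a tone whose harmonics all fit inside the old 300-3400 Hz telephone band, leave the fundamental out entirely, and the signal's periodicity, which is what the ear latches onto, still says 100 Hz.

```python
import numpy as np

fs = 8_000                                  # sample rate, Hz
f0 = 100.0                                  # fundamental (NOT present in the signal)
t = np.arange(fs) / fs                      # one second

# Harmonics 3..30 span 300-3000 Hz; the fundamental and 2nd harmonic are omitted.
signal = sum(np.cos(2 * np.pi * k * f0 * t) for k in range(3, 31))

# Crude pitch estimate via the autocorrelation peak (searching 50-200 Hz).
lags = np.arange(40, 160)
r = np.array([np.dot(signal[:-lag], signal[lag:]) for lag in lags])
pitch = fs / lags[np.argmax(r)]
print(pitch)   # -> 100.0, even though there is no energy at 100 Hz
```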

I hope that makes sense. I can go into more detail on amplitude and timbre if you want to really get juicy.

posted on Jul, 19 2008 @ 02:05 AM
reply to post by whiteraven

Hey! I definitely understand that stuff, since I've done some DSP programming.
It's called a "false fundamental", inferred from the overtone series. That frequency range (or something in the ballpark) I seem to recall having been called the "intelligence band" or something similar, since it was important for making speech intelligible over phone connections.

posted on Jul, 19 2008 @ 02:19 AM
reply to post by EnlightenUp

YES...that's right. With DSP tech in telephony today, they have tried to use something called HOS (higher order spectra) to influence the fundamental and its harmonic structure in order to make it sound "natural".

A telephone is a form of microphone and speaker combined, so we see a lot of the new stuff being used in that industry.

Yeah...a false fundamental can be inferred from the overtones; that's how some subwoofers work...

Although they are not "true subs", as Carver says...he invented the Sunfire True Subwoofer!!
Also, if you want to research low frequencies, go to the REL website.

The founder of REL was a British submariner who worked in communications. He became very interested in ULF, since subs communicate using ULF, and when he retired he designed the REL subwoofers. They are unbelievable.


I am not a brit but I love some of their gear!

posted on Jul, 19 2008 @ 02:22 AM
reply to post by StargateSG7

You are completely correct. I was going to post on film transfer, frames per second, and 3:2 pulldown, but you beat me to it!!

Good stuff!

posted on Jul, 19 2008 @ 02:33 AM
reply to post by EnlightenUp

Also, single sideband: the intelligence spectrum to be translated is applied to one input port of a balanced modulator, and the carrier is applied to the other port. What this does is translate the zero-reference spectrum up to the carrier frequency, producing upper and lower sidebands. Although, if you are working in telephony, I think they now have DSP gear that does this...

So were you working in that field as well?


I work in construction and I have built a lot of... at least I used to, until the money dried up in Atlanta and Tulsa. Now I do internet, data, telephone, and communications installs in new and existing homes and businesses up in Canada.

And I do Boilermaker work when I want to with Local 555!!

[edit on 19-7-2008 by whiteraven]

posted on Jul, 19 2008 @ 10:16 AM

Originally posted by pavil
As to people looking "different" back then, I would suggest that it is "us" who look different. Face it, in 1920 the world was a lot less diverse, meaning there wasn't as much mixing of the genetics of various nationalities; i.e., most nations in Europe had only their own nationality, and there wasn't a large immigrant population. Even in the U.S., we were still not the true melting pot of the world. The blending of physical traits from around the world didn't start in earnest till after WWII.

That's a great point, I hadn't thought of that. The early immigrants tended to stick with their own and not intermarry. A couple of generations later, and it's a free for all. From what I can tell, genetic diversity is a pretty cool thing.

posted on Jul, 19 2008 @ 04:46 PM

Originally posted by whiteraven
reply to post by EnlightenUp

Also, single sideband: the intelligence spectrum to be translated is applied to one input port of a balanced modulator, and the carrier is applied to the other port. What this does is translate the zero-reference spectrum up to the carrier frequency, producing upper and lower sidebands. Although, if you are working in telephony, I think they now have DSP gear that does this...

This sounds like amplitude modulation, where multiplying a signal by a carrier gives you frequencies fc+fsignal and fc-fsignal. I guess for a single sideband you filter out the one you don't want.
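That product-of-cosines arithmetic can be sketched numerically (the 1 kHz tone and 10 kHz carrier are illustrative values of mine): a balanced (suppressed-carrier) modulator leaves energy only at fc-f = 9 kHz and fc+f = 11 kHz, and SSB would then filter one of those sides off.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                       # one second
tone = np.cos(2 * np.pi * 1_000 * t)         # the "intelligence"
carrier = np.cos(2 * np.pi * 10_000 * t)
modulated = tone * carrier                   # balanced (suppressed-carrier) mix

# The spectrum shows only the two sidebands: no 1 kHz tone, no 10 kHz carrier.
spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(fs, d=1.0 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(float(p) for p in peaks))       # -> [9000.0, 11000.0]
```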

So were you working in that field as well?


I'm not working in any DSP-related field; I've just done some minor audio and MPEG stuff for fun. It's very limited, and I really want to get a better mathematical background at some point.

Have we derailed the thread by now?
I don't want to summon moderation of the posts. To keep it on topic, I'll mention that people I speak with on the tele don't sound like they're from the 1920's to 1950's, even with the restricted audio bandwidth.
