Producing Talk And Voice-Overs–Videos

Posted on October 15, 2010. Filed under: Acoustics, Audio, Communications, Digital Communication, Loudspeakers, Radio, Recordings, Sound, Speech, Television | Tags: , , , , , , , , , , |

How to Set Up PA Systems : Basic Microphone Placement for PA System Setup

School radio studio tour

How a Radio Station Works : Radio DJ Microphone Placement

Audio-Technica Studio Recording Microphones w/ AVGIANT at NAMM

1. The production chain (in non-music production) generally begins with the talking performer and therefore involves considerations that relate to producing speech.

2. How speech is produced depends on (1) the type of program or production; (2) the medium–radio, TV, film–and, in TV and film, whether the production technique is single- or multicamera; (3) whether it is done in the studio or in the field; and (4) whether it is live, live-on-tape, or produced for later release.

3. The frequency range of the human voice is not wide compared with that of other instruments. The adult male’s fundamental voicing frequencies are from roughly 80 to 240 Hz; for the adult female, they are from roughly 140 to 500 Hz. Harmonics and overtones carry these ranges somewhat higher. (Ranges for the singing voice are significantly wider.)
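As a quick illustration of these overlapping ranges, the sketch below checks where a measured fundamental frequency falls. The dictionary values come from the text; the function name and structure are my own illustrative choices.

```python
# Typical adult speaking-voice fundamental ranges, as given in the text (Hz).
VOICE_RANGES_HZ = {
    "adult male": (80, 240),
    "adult female": (140, 500),
}

def matching_voice_ranges(fundamental_hz):
    """Return the names of the speaking-voice ranges that contain the given
    fundamental frequency (both may match, since the ranges overlap)."""
    return [name for name, (lo, hi) in VOICE_RANGES_HZ.items()
            if lo <= fundamental_hz <= hi]
```

Note that a 200 Hz fundamental is ambiguous on frequency alone: it falls inside both ranges.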

4. Speech intelligibility is at a maximum when levels are about 70 to 90 dB-SPL. Certain frequencies, particularly in the midrange, are also more critical to speech intelligibility than others.

5. Acoustical phase refers to the time relationship between two (or more) sound waves at a given point in their cycles. Electrical phase refers to the relative electrical polarity of two signals in the same circuit. When these waves or polarities are in phase–roughly coincident in time–their amplitudes are additive. When these waves or polarities are out of phase–not coincident in time–their amplitudes are reduced.

6. Evaluation of a microphone for speech includes at least four criteria: clarity, presence, richness, and versatility.

7. The closer a microphone is placed to a sound source, the closer to the audience the sound source is perceived to be and the warmer, denser, bassier, drier, more intimate, and more detailed is the perceived sound.

8. The farther a microphone is placed from a sound source, the farther from the audience the sound source is perceived to be and the more distant, diffused, open, spacious, reverberant, and detached, and the less detailed is the perceived sound.

9. In selecting and positioning a mic, keep excessive sound that is reflected from room surfaces, furniture, and equipment from reaching the mic, or comb filtering can result. Choose a mic and position it to avoid sibilance, plosives, and breath sounds.

10. In monaural sound aural space is one-dimensional–measured in terms of depth–so perspective is near-to-far.

11. In stereo sound aural space is two-dimensional–measured in terms of depth and breadth–so perspectives are near-to-far and side-to-side.

12. In stereo miking the angle or distance between the two microphones (or microphone capsules) determines side-to-side perspective. The smaller the angle or distance between the mics, the narrower the left-to-right stereo image; the larger the angle or distance, the wider the left-to-right image.
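For a coincident (XY) pair of cardioid mics, the effect of the included angle on the stereo image can be sketched with the standard cardioid polar equation, gain = 0.5·(1 + cos θ). This idealized model and the function names are my additions, not from the text.

```python
import math

def cardioid_gain(offaxis_deg):
    """Idealized cardioid pickup: full gain on-axis, zero at 180 degrees."""
    return 0.5 * (1 + math.cos(math.radians(offaxis_deg)))

def xy_level_difference_db(included_angle_deg, source_angle_deg):
    """Interchannel level difference (dB, left minus right) for a coincident
    XY cardioid pair with the given included angle. source_angle_deg is
    measured from the pair's center axis, positive toward the left mic."""
    half = included_angle_deg / 2
    g_left = cardioid_gain(source_angle_deg - half)
    g_right = cardioid_gain(source_angle_deg + half)
    return 20 * math.log10(g_left / g_right)
```

A centered source yields no level difference; for the same off-center source, widening the included angle increases the interchannel difference, which is what widens the left-to-right image.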

13. In disc jockey, interview, and panel programs, the participants should sound as though they are coming from the front and center of the aural space. With more than one participant, using individual microphones, the loudness levels for the participants must be similar if the sound is to be perceived as coming from the front and center of the aural space.

14. The overall sound of a radio station involves the particular music or talk format, the announcer’s delivery style, the production style of the spot announcements and jingles, and how tightly presented they all are.

15. The techniques used to mike speech for picture in television and film (and to produce sound, in general) may depend on whether the production is broadcast live, or live-on-tape, or is taped/filmed for showing at a later date.

16. In radio microphones can be placed anywhere without regard for appearance so long as the participants are comfortable and the mics do not get in their way. If the radio program is also televised, some care for appearance should be taken. In television, if a mic is in the picture, it should be good-looking and positioned so that it does not obscure the performer’s face. If it is not in the picture, it must be positioned close enough to the performer so that the sound is on-mic.

17. Generally, for optimal sound pickup the recommended placement for a mini-mic is in the area of the performer’s sternum, about 6 to 8 inches below the chin.

18. Hiding a mini-mic under clothing requires that the mic and mic cable are or can be made insensitive to rustling sounds and that the clothing be made of material that is less likely to make those sounds.

19. In television a desk mic is often used as a prop. If the desk mic is live, make sure it does not block the performer’s face, interfere with the performer’s frontal working space, or pick up studio noises.

20. The handheld mic allows the host to control audience questioning and mic-to-source distance and, like the desk mic, helps generate a closer psychological rapport with the audience.

21. The boom microphone, like the mini-mic hidden under clothing, is used when mics must be out of the picture. Often one boom mic covers more than one performer. To provide adequate sound pickup, and to move the boom at the right time to the right place, the boom operator must anticipate when one performer is about to stop talking and another is to start.

22. Different techniques are used in controlling levels, leakage, and feedback of mic feeds from multiple sound sources: following the three-to-one rule, moderate limiting or compression, noise gating, or using an automatic microphone mixer.
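The three-to-one rule says the distance between two open mics should be at least three times each mic's distance from its own source, to keep leakage-induced comb filtering down. A trivial check (the function name is illustrative):

```python
def satisfies_three_to_one(mic_to_source_m, mic_to_mic_m):
    """True if the spacing between two open mics is at least three times
    the mic-to-source distance, per the three-to-one rule."""
    return mic_to_mic_m >= 3 * mic_to_source_m
```

For example, two mics each 0.3 m from their talkers need at least 0.9 m between them.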

23. If an audience is present, it must be miked to achieve an overall sound blend and to prevent one voice or group of voices from predominating.

24. Increasing audience laughter or applause, or both, by using recorded laughter or applause tracks adds to a program’s spontaneity and excitement.

25. Recording speech begins with good acoustics. Mediocre acoustics can make speech sound boxy, oppressive, lifeless, ringy, or hollow.

26. Recording speech generally involves either the voiceover–recording copy to which other sonic material is added–or dialogue. Voice-over material includes short-form material, such as spot announcements, and long-form material, such as documentaries and audiobooks.

27. Recording a solo performer and a microphone is a considerable challenge: there is no place to hide.

28. Among the things to avoid in recording speech are plosives, sibilance, breathiness, and tongue and lip smacks.

29. Three types of narration are direct, indirect, and contrapuntal.

30. It is often not so much what is said, but how it is said, that conveys the overall meaning of a message.

31. Voice acting involves “taking the words off the page” and making them believable and memorable.

32. Among the considerations a voice actor comes to grips with in bringing the appropriate delivery to copy are voice quality, message, audience, word values, and character.

33. Studio intercommunication systems are vital in coordinating the functions of the production team. Three types of studio intercom systems are the private line or phone line–PL; studio address–SA; and interruptible foldback–IFB.


Loudspeakers and Monitoring–Videos

Posted on October 8, 2010. Filed under: Acoustics, Audio, Loudspeakers, Psychoacoustics, Web | Tags: , , , , , , , , , , , |

A look at the iKey-Audio powered studio monitors, M Series 606

Yamaha MSP5 Studio Monitors Reviewed


Main Points To Remember

1. Loudspeakers are transducers that convert electric energy into sound energy.

2. Loudspeakers are available in moving-coil, ribbon, and capacitor designs. The moving-coil loudspeaker is by far the most common.

3. Loudspeakers that are powered externally are called passive speakers. Loudspeakers that are powered internally are called active speakers.

4. A single, midsized speaker cannot reproduce high and low frequencies very well; it is essentially a midrange instrument.

5. For improved response, loudspeakers have drivers large enough to handle the bass frequencies and drivers small enough to handle the treble frequencies. These drivers are called, informally, woofers and tweeters, respectively.

6. A crossover network separates the bass and the treble frequencies at the crossover point, or crossover frequency, and directs them to their particular drivers.

7. Two-way system loudspeakers have one crossover network, three-way system loudspeakers have two crossovers, and four-way system loudspeakers have three crossovers.

8. In a passive crossover network, the power amplifier is external to the speakers and precedes the crossover. In an active crossover network, the crossover precedes the power amps.
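The crossover point described in items 6 through 8 can be made concrete with the textbook first-order passive design: a series capacitor feeds the tweeter and a series inductor feeds the woofer. These standard formulas assume a purely resistive driver impedance, which is an idealization; real drivers are reactive.

```python
import math

def first_order_crossover(crossover_hz, driver_impedance_ohms):
    """Component values for an idealized first-order passive crossover:
    a series capacitor (high-pass, to the tweeter) and a series inductor
    (low-pass, to the woofer), assuming a purely resistive driver."""
    c_farads = 1 / (2 * math.pi * crossover_hz * driver_impedance_ohms)
    l_henries = driver_impedance_ohms / (2 * math.pi * crossover_hz)
    return c_farads, l_henries
```

For an 8-ohm driver crossed over at 2 kHz, this yields roughly a 9.95 uF capacitor and a 0.64 mH inductor.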

9. Each medium that records or transmits sound, such as a CD or a TV, and each loudspeaker that reproduces sound, such as a studio monitor or a home receiver, has certain spectral and amplitude capabilities. For optimal results audio should be produced with an idea of how the system through which it will be reproduced works.

10. In evaluating a monitor loudspeaker, frequency response, linearity, amplifier power, distortion, output-level capability, sensitivity, polar response, arrival time, and phase should also be considered.

11. Linearity means that frequencies being fed to a loudspeaker at a particular loudness are reproduced at the same loudness.

12. Amplifier power must be sufficient to drive the loudspeaker system, or distortion, among other things, will result.

14. Distortion is the appearance of a signal in the reproduced sound that was not in the original sound. Various forms of distortion include intermodulation, harmonic, transient, and loudness.

15. Intermodulation distortion (IM) results when two or more frequencies occur at the same time and interact to create combination tones and dissonances that are unrelated to the original sounds.

16. Harmonic distortion occurs when the audio system introduces harmonics into a recording that were not present originally.

17. Transient distortion relates to the inability of an audio component to respond quickly to a rapidly changing signal, such as that produced by percussive sounds.

18. Loudness distortion, or overload distortion, results when a signal is recorded or played back at an amplitude greater than the sound system can handle.

19. The main studio monitors should have an output-level capability of 110 dB-SPL.

20. Sensitivity is the on-axis sound-pressure level a loudspeaker produces at a given distance when driven at a certain power. A monitor’s sensitivity rating provides a good overall indication of its efficiency.
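Sensitivity ties directly to the 110 dB-SPL output requirement in item 19. A common back-of-the-envelope estimate (my sketch, using the free-field inverse-square idealization, not a formula from the text) combines the sensitivity rating (dB SPL at 1 W / 1 m) with drive power and listening distance:

```python
import math

def spl_at_listener(sensitivity_db, power_watts, distance_m):
    """Estimated on-axis SPL: sensitivity (dB SPL at 1 W / 1 m) plus
    10*log10 of the power, minus 20*log10 of the distance (free field,
    ignoring room reinforcement)."""
    return (sensitivity_db
            + 10 * math.log10(power_watts)
            - 20 * math.log10(distance_m))
```

By this estimate, a 90 dB-sensitivity monitor needs about 100 W to reach 110 dB-SPL at 1 m, and each doubling of distance costs about 6 dB.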

21. Polar response indicates how a loudspeaker focuses sound at the monitoring position(s).

22. The coverage angle is the off-axis angle or point at which loudspeaker level is down 6 dB compared with the on-axis output.

23. A sound’s arrival time at the monitoring position(s) should vary by no more than 1 ms; otherwise, aural perception is impaired.

24. Where a loudspeaker is positioned affects sound dispersion and loudness. A loudspeaker in the middle of a room generates the least-concentrated sound; a loudspeaker at the intersection of room surfaces, such as a corner at the ceiling or floor, generates the most.

25. Stereo sound is two-dimensional; it has depth and breadth. In placing loudspeakers for monitoring stereo, it is critical that they be positioned symmetrically within a room to reproduce an accurate and balanced front-to-back and side-to-side sonic image.

26. Loudspeakers used for far-field monitoring are usually large and can deliver very wide frequency response at moderate to quite loud levels with relative accuracy. They are built into the mixing-room wall above, and at a distance several feet from, the listening position.

27. Near-field monitoring enables the sound engineer to reduce the audibility of control room acoustics, particularly the early reflections, by placing loudspeakers close to the monitoring position.

28. Surround sound differs from stereo by expanding the depth dimension, thereby placing the listener more in the center of the aural image than in front of it. In the 5.1 surround-sound format, therefore, monitors are positioned front-left, center, and front-right, and the surround loudspeakers are placed left and right behind, or to the rear sides of, the console operator. A subwoofer can be positioned in front, between the center and the left or right speaker, in a front corner, or to the side of the listening position. Sometimes in the 5.1 surround setup, two subwoofers are positioned to either side of the listening position.

29. In adjusting and evaluating monitor sound, both objective and subjective measures are called for. Devices such as a spectrum analyzer measure the relationship of monitor sound to room sound. Although part of testing a monitor loudspeaker involves subjectivity, there are guidelines for determining performance.

30. In evaluating the sound of a monitor loudspeaker it is helpful to, among other things, use material with which you are intimately familiar and to test various loudspeaker responses with different types of speech and music.

31. Headphones are an important part of monitoring, particularly on location. Five considerations are vital in using headphones: (1) frequency response should be wide, flat, and uncolored; (2) you must be thoroughly familiar with the headphones’ sonic characteristics before you use them; (3) the headphones should be airtight against the head for acoustical isolation; (4) the fit should stay snug even when you are moving; and (5) stereo headphones should be used for monitoring surround sound.



Acoustics and Psychoacoustics–Videos

Posted on October 8, 2010. Filed under: Acoustics, Audio, Communications, Psychoacoustics, Radio | Tags: , , , , , , , |

Close – Part 1

Close – Part 2

Main Points To Remember

1. Acoustics is the science of sound, including its generation, transmission, reception, and effects. Psychoacoustics deals with the human perception of sound. The term acoustics is also used to describe the physical behavior of sound waves in a room; in that context psychoacoustics is concerned with our subjective response to those sound waves.

2. By processing the time and intensity differences of sound reaching the ears, the brain can isolate and recognize the sound and tell from what direction it is coming.

3. Processing these time and intensity differences also makes it possible to hear sound three-dimensionally. This is known as binaural hearing.

4. Direct sound reaches the listener first, before it interacts with any other surface. The same sound reaching the listener after it reflects from various surfaces is indirect sound.

5. A sound reflection arriving within about 30 ms of the direct sound is perceived with it as one sound; to be heard as a separate sound, the reflection must be about 10 dB louder than the direct sound. This is known as the Haas effect.

6. When hearing two sounds arriving from different directions within the Haas fusion zone, we perceive this temporal fusion of both sounds as coming from the same direction as the first-arriving sound, even if the immediate repetitions coming from another location are louder. This is known as the precedence effect.

7. The acoustic “life cycle” of a sound emitted in a room can be divided into three phases: direct sound, early reflections and reverberant sound.

8. Indirect sound is divided into early reflections (early sound) and reverberant sound.

9. Early reflections reach the listener within about 30 ms of when the direct sound is produced and are heard as part of the direct sound.

10. Reverberant sound, or reverb, is the result of the early reflections becoming smaller and smaller and the time between them decreasing until they combine, making the reflections indistinguishable.

11. Reverberation is densely spaced reflections created by random, multiple, blended reflections of a sound.

12. Reverberation time, or decay time, is the time it takes a sound to decrease 60 dB after its steady-state sound level has stopped–typically from an average loudness of 85 dB-SPL to an inaudible 25 dB-SPL.
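Reverberation time can be estimated with the classic Sabine equation, RT60 = 0.161·V/A (metric units), where V is room volume and A is the total absorption in metric sabins. The equation is standard acoustics, though not stated in the text; the classification thresholds below are the "live" and "dry/dead" figures the text gives.

```python
def rt60_sabine(volume_m3, absorption_sabins_m2):
    """Reverberation time in seconds from the Sabine equation,
    RT60 = 0.161 * V / A (metric units)."""
    return 0.161 * volume_m3 / absorption_sabins_m2

def room_character(rt60_s):
    """Rough classification per the text: 'live' at one second or more,
    'dry/dead' at one-half second or less, otherwise in between."""
    if rt60_s >= 1.0:
        return "live"
    if rt60_s <= 0.5:
        return "dry/dead"
    return "intermediate"
```

A 500 m3 room with 100 metric sabins of absorption comes out at about 0.8 s, between live and dead.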

13. If sound is delayed by 35 ms or more, the listener perceives echo, a distinct repeat of the direct sound.
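Because sound travels at roughly 343 m/s at room temperature, the 30 ms fusion limit and the 35 ms echo threshold correspond to path-length differences of about 10 and 12 meters. A small sketch (the category names are mine; the thresholds are from the text):

```python
SPEED_OF_SOUND_M_PER_S = 343  # approximate, at room temperature

def delay_ms_for_path(extra_path_m):
    """Delay in ms of a reflection traveling extra_path_m farther
    than the direct sound."""
    return extra_path_m / SPEED_OF_SOUND_M_PER_S * 1000

def perception(delay_ms):
    """Rough perceptual category: reflections within about 30 ms fuse
    with the direct sound; 35 ms or more is heard as a distinct echo."""
    if delay_ms <= 30:
        return "fused"
    if delay_ms >= 35:
        return "echo"
    return "transition"
```

A reflection traveling 3.43 m farther than the direct sound arrives 10 ms late and fuses; one traveling 17 m farther arrives about 50 ms late and reads as echo.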

14. Direct sound provides information about a sound’s origin, its size, and its tonal quality. Early reflections add loudness and fullness to the initial sound and help create our subjective impression of room size. Reverberation adds spaciousness to sound, fills out its loudness and body, and contains most of its tonal energy.

15. No one sound room is acoustically suitable for all types of sound. Therefore it is important to match studio acoustics to sonic material.

16. Rooms with reverberation times of one second or more are considered to be “live.” Rooms with reverberation times of one-half second or less are considered to be “dry” or “dead.”

17. Four factors influence how sound behaves in an acoustic environment: (1) sound isolation, (2) room dimensions, (3) room shape, and (4) room acoustics.

18. Noise is any unwanted sound (except distortion) in the audio system, the studio, or the environment.

19. The noise criteria (NC) system rates the level of background noise.

20. Sound isolation in a room is measured in two ways: (1) by determining the loudest outside sound level against the minimum acceptable NC level inside the room, and  (2) by determining the loudest sound level inside the studio against a maximum acceptable noise floor outside the room.

21. Transmission loss (TL) is the amount of sound reduction provided by a partition, such as a wall, floor, or ceiling. This value is given a measurement called sound transmission class (STC).
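A rough feel for transmission loss comes from the empirical "mass law," TL ≈ 20·log10(m·f) − 47 dB, where m is the partition's surface density in kg/m2 and f the frequency in Hz. This approximation is standard acoustics but not from the text, and it is only a field estimate; actual STC ratings come from standardized laboratory tests.

```python
import math

def mass_law_tl_db(surface_density_kg_m2, freq_hz):
    """Approximate single-partition transmission loss from the empirical
    mass law: TL ~= 20*log10(m * f) - 47 dB. Rough estimate only."""
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47
```

One useful consequence: doubling a partition's mass (or the frequency) buys only about 6 dB more isolation, which is why heavy multi-leaf walls are needed for serious studio isolation.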

22. The dimensions of a sound room–height, width, and length–should not equal or be exact multiples of one another. Such dimensions create additive resonances, reinforcing certain frequencies and not others and thereby coloring the sound.
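Why matching or multiple dimensions are a problem can be seen from the axial room-mode formula, f_n = n·c/(2L), which is standard acoustics rather than from the text. When one dimension is an exact multiple of another, their mode series coincide and reinforce the same frequencies:

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate

def axial_modes(dimension_m, count=3):
    """First few axial-mode frequencies (Hz) along one room dimension,
    f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * dimension_m)
            for n in range(1, count + 1)]
```

For a room 6 m long and 3 m wide, the width's first mode (about 57 Hz) lands exactly on the length's second mode, stacking energy at that frequency and coloring the sound.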

23. Resonance, another important factor in studio design, results when a vibrating body with the same natural frequencies as another body causes that body to vibrate sympathetically, thereby increasing the amplitude of both at those frequencies if the variables are in acoustical phase.

24. The shape of a studio is significant for good noise reduction and sound dispersion.

25. When sound hits a surface, one or a combination of five reactions occurs: it is absorbed, reflected, partially absorbed and reflected, diffracted, or diffused.

26. The amount of indirect sound energy absorbed is given an acoustical rating called a sound absorption coefficient, also known as a noise reduction coefficient (NRC).
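Strictly, the NRC is not a single-frequency coefficient but an average: the absorption coefficients at 250, 500, 1000, and 2000 Hz, averaged and rounded to the nearest 0.05 (the convention of the ASTM C423 standard, which the text does not detail). A sketch:

```python
def nrc(coeff_250, coeff_500, coeff_1000, coeff_2000):
    """Noise reduction coefficient: the average of the absorption
    coefficients at 250, 500, 1000, and 2000 Hz, rounded to the
    nearest 0.05 per the usual ASTM C423 convention."""
    avg = (coeff_250 + coeff_500 + coeff_1000 + coeff_2000) / 4
    return round(avg / 0.05) * 0.05
```

A material absorbing 20, 40, 60, and 80 percent of the incident energy in those bands rates an NRC of 0.50.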

27. Three classifications of acoustic absorbers are porous absorbers, diaphragmatic absorbers, and Helmholtz absorbers or resonators.

28. When sound reaches a surface, in addition to being partially absorbed and reflected, it diffracts–or spreads around the surface.

29. Diffusion is the uniform distribution of sound energy in a room so that its intensity throughout the room is approximately uniform.

30. To be more acoustically functional, many studios are designed with adjustable acoustics–movable panels, louvers, walls, and gobos (portable baffles)–to alter reverberation time.

31. Studios are designed for sound that is appropriate for microphone pickup. Control rooms are designed for listening through loudspeakers.

32. To accurately assess the reproduced sound in a control room, the main challenge is to reduce the number of unwanted reflections at the monitoring location so that it is a relatively reflection-free zone.

33. Four basic control room layouts are the cockpit and railroad styles, each in symmetrical and asymmetrical arrangements.

34. Ergonomics addresses the design of an engineering system with human comfort and convenience in mind.

