Signal Processors

Survey of Broadcasting: Assignment 1, Question 1: Describe the five general steps of signal processing–Videos

Posted on June 21, 2011. Filed under: Audio, Broadcasting, Communications, Digital Communication, Radio, Signal Processors, Sound, Television

1. Describe the five general steps of signal processing.

Roger Waters – Radio KAOS – Radio Waves

    The five general steps in signal processing are as follows:

  1. Signal generation
  2. Signal amplification and processing
  3. Signal transmission
  4. Signal reception
  5. Signal storage

Step 1 Signal Generation: Signal generation is the conversion, or transduction, of the sound or light waves from the source into electrical energy that corresponds to the frequency of the original source. The audio signal may be generated mechanically, using a microphone or a turntable playing an analog of the original sound such as a phonograph record or audio cassette; microphones transduce the physical energy of music and voice into electrical energy. The audio signal may be generated electromagnetically using tape recorders. The audio signal may also be generated digitally, using laser optics to create a binary or digital equivalent of the original sound. Television signal generation requires electronic line-by-line scanning of an image, with an electron beam scanning each element of the picture; the image is subsequently retraced by the television receiver.
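The digital case described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the course material: it samples a 440 Hz tone at an assumed 8 kHz rate and quantizes each sample to 8 bits, producing the "binary equivalent" of the original sound.

```python
import math

SAMPLE_RATE = 8000   # assumed sampling rate, samples per second
FREQ = 440.0         # assumed source frequency in Hz

def generate_samples(duration_s):
    """Sample an analog-style sine wave into a list of 8-bit values (0..255)."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        x = math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)  # -1.0 .. 1.0
        samples.append(int(round((x + 1.0) / 2.0 * 255)))   # quantize to 8 bits
    return samples

samples = generate_samples(0.01)  # 10 ms of tone -> 80 samples
```

Real converters add anti-alias filtering before sampling; this sketch shows only the sample-and-quantize core of the idea.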

Step 2 Signal Amplification and Processing: Audio and video signals are amplified and mixed using audio consoles and video switchers. After the audio signal has been converted from a physical sound wave into an electrical or digital facsimile, it must be amplified to boost the signal and processed, including mixing, combining, and routing for broadcast transmission and/or recording. Sound sources are combined at the mixing board, and the amplified sound may be fine-tuned using equalizers and special effects. The switcher is used to mix TV signals and put the desired picture on the air; a special-effects generator adds transitions, split screens, and keying. Digital video editing and effects can also be produced using computer software such as Adobe Premiere Pro and After Effects.
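The core of what a mixing board does (combine gain-scaled sources) can be sketched in Python. The source names and gain values below are hypothetical:

```python
def mix(sources, gains):
    """Sum gain-scaled sources sample by sample, hard-clipping at full
    scale, the way a mixing board combines and amplifies its inputs."""
    mixed = []
    for frame in zip(*sources):
        s = sum(g * x for g, x in zip(gains, frame))
        mixed.append(max(-1.0, min(1.0, s)))  # clip to -1.0 .. 1.0
    return mixed

voice = [0.2, 0.4, -0.3]   # hypothetical sample values
music = [0.1, -0.2, 0.5]
out = mix([voice, music], gains=[1.5, 0.5])  # boost voice, duck music
```

A real console adds per-channel EQ, panning, and bus routing on top of this sum-and-scale operation.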

Step 3 Signal Transmission: The electronic signal is superimposed by a modulation process on a carrier wave generated and propagated by the radio station on its assigned frequency. The transmitted wave may travel by ground, sky, and direct waves. Radio waves occupy a segment of the electromagnetic spectrum. AM radio channel frequencies are divided into three main types: clear channels, regional channels, and local channels. FM channel frequencies are classified by antenna height and power; the wide bandwidth of the FM channel makes stereo broadcasting and other nonbroadcast services possible. Digital radio is either satellite-based or in-band on-channel. Television signal transmission includes over-the-air broadcasting using electromagnetic radiation on the VHF and UHF portions of the spectrum, or distribution by wire through a cable system using coaxial cable that can carry programming on more than 100 channels. Newer technologies used for transmission and distribution include satellite and fiber optics for digital signals.
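The modulation step can be illustrated with amplitude modulation, the scheme AM radio uses. The carrier and audio frequencies below are hypothetical (a real AM carrier sits between 535 and 1705 kHz); the formula is the standard AM expression, output = (1 + m·audio) × carrier:

```python
import math

F_CARRIER = 1000.0   # hypothetical carrier frequency, Hz
F_AUDIO = 100.0      # hypothetical audio tone, Hz
RATE = 8000          # samples per second
M = 0.5              # modulation index (depth of modulation)

def am_modulate(n_samples):
    """Superimpose an audio tone on the carrier by amplitude modulation:
    s[i] = (1 + M * audio[i]) * carrier[i]."""
    out = []
    for i in range(n_samples):
        t = i / RATE
        audio = math.sin(2 * math.pi * F_AUDIO * t)
        carrier = math.cos(2 * math.pi * F_CARRIER * t)
        out.append((1.0 + M * audio) * carrier)
    return out

signal = am_modulate(160)  # 20 ms of modulated carrier
```

FM works by varying the carrier's frequency instead of its amplitude, but the superimpose-on-a-carrier idea is the same.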

Step 4 Signal Reception: After the radio signal has been transduced, modulated, and transmitted, the radio waves are picked up by a radio receiver, where the speaker system transduces or converts them back into sound waves. The characteristics of the electromagnetic spectrum and the modulation method used in transmission determine the type of radio receiver needed to convert the signal back into sound waves. There are several types of radio receivers, including AM, AM stereo, FM, shortwave, and multiband; these receivers can be equipped with either analog tuners or a digital system. For moving images, both large- and small-screen TVs now receive high-definition television signals.
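For an AM signal, the receiver's conversion back to audio can be sketched as envelope detection: rectify the signal (as a diode does) and smooth it with a low-pass stage. This is a simplified model, not a complete receiver design; the smoothing coefficient is an assumption:

```python
def envelope_detect(signal, alpha=0.9):
    """Recover the audio envelope from an AM signal the way a simple
    receiver does: rectify, then low-pass with a one-pole smoother."""
    env = []
    y = 0.0
    for s in signal:
        rectified = abs(s)                        # diode rectification
        y = alpha * y + (1 - alpha) * rectified   # RC-style smoothing
        env.append(y)
    return env

env = envelope_detect([1.0, -1.0, 0.5, -0.5])  # hypothetical input samples
```

An FM receiver needs a different demodulator (a frequency discriminator), which is why the modulation method determines the receiver type, as noted above.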

Step 5 Signal Storage: Both audio and video technology is used in the storage and retrieval of sounds and moving images. Audio or video signals are transduced or converted for storage and eventual playback or rebroadcast. Storage media have included glass discs, wire, vinyl, magnetic tape, compact discs, videotapes, and digital storage media such as digital versatile discs (DVDs) and computer hard drives, including high-capacity disc drives.
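As one concrete illustration of digital storage, Python's standard `wave` module can write PCM samples in WAV form, a common hard-drive storage format for the converted signal. The sample values are hypothetical, and the file is written to memory here instead of disk:

```python
import io
import struct
import wave

def store_samples(samples, rate=8000):
    """Store 16-bit mono PCM samples as WAV bytes, one digital storage
    format for a converted audio signal."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(struct.pack("<h", s) for s in samples)
        w.writeframes(frames)
    return buf.getvalue()

data = store_samples([0, 1000, -1000, 32767, -32768])
```

The stored bytes begin with the standard RIFF/WAVE header, followed by the sample data, and can be read back for playback or rebroadcast.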

Roger Waters – Radio KAOS – Tide Is Turning

Background Articles and Videos


Amplitude modulation tutorial & AM radio transmitter circuit 


The Professor – How does a radio work?


AM Modulation and Demodulation Part 1


Introduction to Radio Waves Training Course


High Definition Television (HDTV) : Difference Between High & Standard


Roger Waters – Radio KAOS – Intro


Signal Processors–Videos

Posted on October 8, 2010. Filed under: Audio, Communications, Digital Communication, Signal Processors, Sound, Synchronization

Sound Made Simple – Equalization – Definition


Adobe Audition 3.0 Introduction and New Features

Adobe Audition 3.0 Tutorial—-In English

Adobe Audition Editing and Effects

Main Points To Remember

1. Signal processors are used to alter some characteristic of a sound. They can be grouped into four categories: (1) spectrum, (2) time, (3) amplitude (or dynamics), and (4) noise.

2. The equalizer and the filter are examples of spectrum processors because they alter the spectral balance of a signal. The equalizer increases or decreases the level of a signal at a selected frequency by boost or cut (also known as peak and dip) or by shelving. The filter attenuates certain frequencies above, below, between, or at preset points.
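A minimal Python sketch of one such spectrum processor, a high-cut (low-pass) filter, follows. It uses a one-pole smoother with the standard RC mapping from cutoff frequency to pole coefficient; the sample rate and cutoff are assumptions:

```python
import math

def low_pass(samples, cutoff_hz, rate=8000):
    """A minimal high-cut (low-pass) filter: attenuates frequencies
    above the cutoff while passing those below it."""
    # Pole coefficient from the cutoff frequency (standard RC mapping).
    a = math.exp(-2 * math.pi * cutoff_hz / rate)
    out, y = [], 0.0
    for x in samples:
        y = a * y + (1 - a) * x   # smooth: high frequencies are averaged away
        out.append(y)
    return out

smoothed = low_pass([1.0, 0.0, 1.0, 0.0], cutoff_hz=500)
```

A high-pass (low-cut) filter inverts the idea, keeping what this filter removes; band-pass and notch filters combine the two around a center frequency.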

3. Two types of equalizers in common use are the fixed-frequency and the parametric.

4. The most common filters are high-pass (low-cut), low-pass (high-cut), band-pass, and notch.

5. Psychoacoustic processors add clarity, definition, and overall presence to sound.

6. Time signal processors affect the time relationships of signals. Reverberation and delay are two such effects.

7. The three most common types of reverberation systems used today are digital, plate, and acoustic chamber.

8. Digital reverb reproduces electronically the sound of different acoustic environments.

9. An important feature of most digital reverb units is predelay–the amount of time between the onset of the direct sound and the appearance of the first reflections.

10. Delay is the time interval between a sound or signal and its repetition.

11. Delay effects, such as doubling, chorus, slap back echo, and prereverb delay, are usually produced electronically with a digital delay device.
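The digital delay device in point 11 can be sketched as a delay line: each output sample adds a scaled copy of the signal from a fixed number of samples earlier, and a feedback path recirculates the delayed signal to produce repeating echoes. The delay length, feedback, and mix values below are hypothetical:

```python
def digital_delay(samples, delay, feedback=0.5, mix=0.5):
    """Delay line with feedback: out[i] = in[i] + mix * in[i - delay],
    with the delayed signal also fed back for repeating echoes."""
    buf = [0.0] * delay            # circular delay buffer
    out = []
    idx = 0
    for x in samples:
        delayed = buf[idx]
        out.append(x + mix * delayed)
        buf[idx] = x + feedback * delayed   # feedback recirculates echoes
        idx = (idx + 1) % delay
    return out

# A single impulse produces echoes every `delay` samples, each half as loud.
echoed = digital_delay([1.0, 0.0, 0.0, 0.0, 0.0], delay=2)
```

Short delay times mixed with the dry signal yield doubling and chorus; longer times give slapback echo, as listed above.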

12. Flanging and phasing split a signal and slightly delay one part to create controlled phase cancellations that generate a pulsating sound. Flanging uses a time delay; phasing uses a phase shifter.

13. Amplitude (dynamic) signal processors affect a sound’s dynamic range. These effects include compressing, limiting, de-essing, expanding, noise gating, and pitch shifting.

14. With compression, as the input level increases, the output level also increases but at a slower rate, reducing dynamic range. With limiting, the output level stays at or below a preset point regardless of the input level. With expansion, as the input level increases, the output level also increases but at a greater rate, increasing dynamic range.
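The three transfer curves in point 14 can be written out directly, working in decibels. The thresholds, ratios, and ceiling below are hypothetical settings, and the expander is modeled as a downward expander (levels below the threshold fall faster than the input):

```python
def compress(level_db, threshold_db=-20.0, ratio=4.0):
    """Above the threshold, output rises 1 dB per `ratio` dB of input,
    reducing dynamic range."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def limit(level_db, ceiling_db=-6.0):
    """Output never exceeds the preset ceiling, regardless of input."""
    return min(level_db, ceiling_db)

def expand(level_db, threshold_db=-40.0, ratio=2.0):
    """Below the threshold, output falls `ratio` dB per dB of input,
    increasing dynamic range."""
    if level_db >= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) * ratio
```

For example, a 0 dB input through the compressor above comes out at -15 dB, while a -50 dB input through the expander drops to -60 dB.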

15. A de-esser is basically a fast-acting compressor that attenuates high-frequency sibilance.

16. A noise gate is used primarily to reduce or eliminate unwanted low-level noise, such as ambience and leakage. It is also used creatively to produce dynamic special effects.
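In its simplest form, the gate in point 16 mutes any sample whose magnitude falls below a threshold. This sketch omits the attack/release smoothing a real gate applies, and the threshold is an assumed value:

```python
def noise_gate(samples, threshold=0.1):
    """Mute samples below the threshold, removing low-level noise such
    as ambience and leakage while passing the wanted signal."""
    return [x if abs(x) >= threshold else 0.0 for x in samples]

# Loud samples pass; the quiet "leakage" samples are silenced.
gated = noise_gate([0.5, 0.05, -0.3, 0.01])
```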

17. A pitch shifter uses both compression and expansion to change the pitch of a signal or the running time of a program.

18. Noise processors are designed to reduce or eliminate noise from an audio signal. Most are double-ended; they prevent noise from entering a signal. Single-ended noise processors reduce existing noise in the signal.

19. Noise reduction is also possible using the digital signal processing (DSP) in a hard-disk recording/editing system. Virtually any unwanted sound can be eliminated.

20. Multieffects signal processors combine several of the functions of individual signal processors in a single unit.

21. A voice, or vocal, processor can enhance, modify, pitch-correct, harmonize, and change completely the sound of a voice, even to the extent of gender.

22. Plug-ins incorporated in hard-disk recording/editing systems add to digital signal processing not only the familiar processing tools (EQ, compression and expansion, reverb, delay, pitch shifting, and so on) but also an array of capabilities that place at the recordist's fingertips a virtually limitless potential for creating entirely new effects, at comparatively little cost.

23. Plug-in programs are available separately or in bundles; a bundle includes a number of different plug-in programs.

