There is also a potential noise problem with digital equipment: cooling fans in computers and disk drives. If the equipment has to be housed in a studio with live microphones, this noise can be a real concern. Some production facilities solve the problem by putting computer CPUs in a separate equipment room, leaving only the keyboard, mouse, and monitor in the actual studio. If computer equipment must be in the studio, place it as far from the microphones as possible, and/or build it into the studio furniture to minimize the noise. Laptop computers may also be an option to consider, as their cooling components are usually much quieter.

Digital audio signals are streams of digits broken up into digital words. If two digital signals are out of sync, one may be just beginning a new digital word while the other is in the middle of a word; switching from one to the other would produce an audible tick or pop (the first sketch following this section illustrates the idea at the byte level). Synchronized audio signals start each new digital word at precisely the same time. Some digital production equipment is self-synchronizing, but digital audio consoles that accept many different types of digital inputs need to synchronize all of that audio to a common clock.

Most digital audio workstations, and many other pieces of digital broadcast equipment, can also incorporate MIDI and SMPTE synchronization. MIDI (Musical Instrument Digital Interface) is an interface system that allows electronic equipment, mainly musical instruments such as synthesizers, drum machines, and samplers, to “talk” to each other through a common electronic language (the second sketch below shows what a single MIDI message looks like). SMPTE (Society of Motion Picture and Television Engineers) time code is an electronic language, originally developed for videotape editing, that identifies each video frame with an individual address. The time code numbers consist of hour, minute, second, and frame; the frame digits correspond to the 30 video frames in each second (a conversion sketch follows this section). Both MIDI and SMPTE signals can be used to reference individual pieces of equipment and accurately start, combine, stop, or synchronize them.

Latency is the short delay incurred when analog audio is converted to digital, when a digital effect is added to audio, or when audio is moved from one location to another. All digital equipment and computers dealing with audio exhibit some latency. Within a single piece of equipment it is usually not audible, because the delay is often only a few milliseconds. However, latency is cumulative as sound passes through several normal audio processes. For example, if you add equalization (EQ) to an audio track, mix that track with several other audio tracks, and then monitor everything through an audio interface, the audio will have gone through several layers of software processing, each adding some latency (the final sketch below adds up a hypothetical chain). In broadcast situations, latency issues arise when a live broadcast is combined with audio from a satellite feed, telephone feed, or similar link; the linking equipment often has enough latency to produce a noticeable time delay between the live broadcast and the other audio. Computers with faster CPU processing have helped with latency, as has the development of audio drivers that bypass the Windows or Mac operating system and let audio signals connect directly with the sound card. Still, latency is an issue the audio production person must be aware of and willing to work around if it becomes noticeable in their production work.
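To see why a mid-word switch produces a tick or pop, here is a deliberately simplified sketch in Python. Real equipment switches a continuous stream against a word clock; this toy version merely splices two serialized 16-bit sample streams and shows that a splice that misses the word boundary decodes to a garbage sample at the switch point.

    import struct

    # Two short 16-bit sample streams, serialized to bytes (little-endian),
    # standing in for two digital audio signals.
    a = struct.pack("<4h", 1000, 1000, 1000, 1000)
    b = struct.pack("<4h", -1, -2, -3, -4)

    aligned = a[:4] + b[4:]      # switch exactly on a word boundary
    misaligned = a[:3] + b[3:]   # switch one byte early: a mid-word splice

    print(struct.unpack("<4h", aligned))
    # (1000, 1000, -3, -4) -- clean samples on both sides of the switch
    print(struct.unpack("<4h", misaligned))
    # (1000, -24, -3, -4) -- the spliced word decodes to a bogus sample: the "pop"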
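MIDI’s “electronic language” is simply a stream of small binary messages. As a concrete illustration, the sketch below builds the standard three-byte Note On message defined by the MIDI 1.0 specification; the helper function name is ours, not part of any library.

    def midi_note_on(channel: int, note: int, velocity: int) -> bytes:
        """Build a three-byte MIDI 1.0 Note On message.

        channel is 0-15 (shown to musicians as channels 1-16);
        note and velocity are 7-bit values, 0-127.
        """
        if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
            raise ValueError("out-of-range MIDI value")
        return bytes([0x90 | channel, note, velocity])

    # Note On for middle C (note 60) on channel 1, medium-loud velocity:
    print(midi_note_on(0, 60, 100).hex())  # '903c64'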
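SMPTE’s hour:minute:second:frame addressing is easy to make concrete. The sketch below converts between a running frame count and a time code address, using the nominal 30 frames per second cited in the text (actual NTSC gear runs at 29.97 fps and uses drop-frame code to compensate, which this sketch ignores). The function names are illustrative.

    FRAMES_PER_SECOND = 30  # nominal rate used in the text

    def frames_to_timecode(total_frames: int) -> str:
        """Convert a running frame count to an HH:MM:SS:FF address."""
        frames = total_frames % FRAMES_PER_SECOND
        seconds = (total_frames // FRAMES_PER_SECOND) % 60
        minutes = (total_frames // (FRAMES_PER_SECOND * 60)) % 60
        hours = total_frames // (FRAMES_PER_SECOND * 3600)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    def timecode_to_frames(timecode: str) -> int:
        """Convert an HH:MM:SS:FF address back to a running frame count."""
        hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
        return (hours * 3600 + minutes * 60 + seconds) * FRAMES_PER_SECOND + frames

    print(frames_to_timecode(108_003))        # 01:00:00:03
    print(timecode_to_frames("01:00:00:03"))  # 108003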
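To put rough numbers on cumulative latency, the final sketch converts each processing stage’s buffer size into milliseconds (buffer samples divided by sample rate) and sums the chain. The stage names and buffer sizes are hypothetical, chosen only to show how several small delays add up to something noticeable.

    SAMPLE_RATE = 48_000  # samples per second; a common professional rate

    def buffer_latency_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
        """Latency, in milliseconds, introduced by one processing buffer."""
        return buffer_samples / sample_rate * 1000

    # Hypothetical signal chain: each stage buffers some number of samples.
    chain = {
        "A/D conversion": 64,
        "EQ plug-in": 128,
        "mix bus": 256,
        "audio interface output": 512,
    }

    total = 0.0
    for stage, samples in chain.items():
        ms = buffer_latency_ms(samples)
        total += ms
        print(f"{stage:24s} {ms:6.2f} ms")
    print(f"{'cumulative latency':24s} {total:6.2f} ms")  # about 21 ms in all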
Editing normally begins after you’ve recorded and saved audio into the computer system, but previously recorded audio files can also be imported for editing. Although many different systems are available, to gain an understanding of digital audio editing, this chapter looks specifically at Adobe® Audition®.

In the Default view, Adobe® Audition® has seven main work areas (Files, Media Browser, History, Editor, Levels, Essential Sound, and Selection/View). For the purposes of this chapter, we will look at only four of them (Editor, Files, Levels, and Selection/View). You can close any unnecessary area by clicking the three lines next to the area’s name and selecting “Close Panel.”

The Editor screen (shown in Figure 3.5) is a single-waveform editor used to record, process, and edit mono and stereo audio segments; it displays the audio sound file and is where the sound is edited or otherwise processed. The transport buttons (located at the bottom of the Editor screen) control recording and playback. To the right of the transport buttons are several magnifying-glass icons that let you zoom in and out of the waveform on both the horizontal and vertical axes, and a “timeline display” at the top of the screen shows timing information.

The Files window lists the names of any files being worked on and displays pertinent information such as format type, run time, sample size, and number of channels. The Levels window acts like a standard VU (volume unit) meter, displaying audio levels during both playback and recording. Finally, the Selection/View window lets the producer see time information in a more specific numeric form.

There are also various other functions and features on these screens, reached through drop-down menus and shortcut buttons. As you gain experience with this (or other) editing software, you will find these basic tools incredibly helpful, so become familiar with them as soon as possible.