Using Acoustic Phonetics in Clinical Practice
Amy T. Neel
Department of Speech and Hearing Sciences, University of New Mexico, Albuquerque, NM
SIG 5 Perspectives on Speech Science and Orofacial Disorders, July 2010, Vol. 20, 14-24. doi:10.1044/ssod20.1.14

Acoustic phonetics deals with the physical aspects of speech sounds associated with the production and perception of speech. Acoustic measurement techniques can be used by speech-language pathologists to assess and treat a variety of speech disorders. In this article, we will review the source-filter theory of speech production, acoustic theory of vowels, and acoustic properties of consonants. We will examine how visual displays of acoustic information in the form of waveforms, amplitude spectra, and spectrograms can be used to analyze aspects of speech that might be difficult to hear and serve to provide biofeedback to clients to improve their speech production.

Robert Stetson, one of the early investigators in the speech sciences, described speech as “movements made audible.” In this article, we will review how movements of the speech mechanism are related to changes in the sound produced by the vocal tract, and we will examine how basic principles of acoustic analysis can be used in the work setting to study the consequences of speech subsystem motion on the acoustic signal emanating from the oral opening. With the proliferation of acoustic software that can be downloaded free of charge from the internet or purchased inexpensively, acoustic analyses can be accomplished in the clinic with hardware as simple and readily available as a laptop or desktop computer and a microphone. Several resources for internet-based acoustic applications are mentioned later in this article.
Basic Acoustics of Speech
Our exploration of acoustic phonetics begins with a review of the basic acoustics related specifically to speech production and perception. Generally speaking, sound for speech consists of variations in air pressure. Rapid increases and decreases in air pressure are propagated by vibrating air molecules and are eventually picked up by the auditory system and perceived. For speech-language pathologists, audiologists, and speech scientists, the human voice is of specific interest. Here, the variations in air pressure are caused by the vibrating vocal folds (as in vowels), by articulator positions that create aperiodic (noise) turbulence in the airflow (as in voiceless consonants), or by a combination of the two (as in voiced consonants).
Variations in air pressure over time can be visually represented as a waveform. The amplitude of the air pressure variations, which corresponds to the intensity of the sound, is displayed on the y (vertical) axis, with time represented on the x (horizontal) axis. If a waveform has large changes in air pressure, it will be perceived as loud; waveforms with small increases and decreases in air pressure will be perceived as soft. The shape of the waveform is also informative, telling us whether the sound is a pure tone consisting of only one frequency component or a complex tone consisting of several frequency components. Pure tones, like the sounds produced by an audiometer or tuning fork, appear as simple sine wave patterns because they consist of a single frequency. Speech sounds, like the vowel shown in Figure 1, have more complicated shapes because they contain many frequency components.
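To make the pure tone/complex tone distinction concrete, here is a minimal sketch in Python (assuming NumPy and Matplotlib are installed; the 100 Hz fundamental and five-harmonic mixture are illustrative choices, not values from this article) that synthesizes both kinds of signal and plots a few cycles of each waveform:

```python
# Sketch: synthesize and plot a pure tone vs. a complex tone.
import numpy as np
import matplotlib.pyplot as plt

fs = 16000                         # sampling rate (Hz)
t = np.arange(0, 0.03, 1 / fs)     # 30 ms of time points

pure = np.sin(2 * np.pi * 100 * t)                 # one component: 100 Hz
complex_tone = sum((1 / k) * np.sin(2 * np.pi * 100 * k * t)
                   for k in range(1, 6))           # harmonics at 100-500 Hz

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t * 1000, pure)
ax1.set_title("Pure tone: a simple sine wave")
ax2.plot(t * 1000, complex_tone)
ax2.set_title("Complex tone: five harmonics, more complicated shape")
ax2.set_xlabel("Time (ms)")
plt.tight_layout()
plt.show()
```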
Figure 1. Waveform and amplitude spectrum for a section of the vowel /u/
Waveform displays can depict the frequency of a sound through the number of vibratory cycles visible over a standard period of time (usually 1 second). Frequency, the number of cycles per second, is measured in units called hertz (Hz). The top graph of Figure 1 displays the vowel /u/ produced by an adult male whose vocal folds are vibrating about 100 times per second, or 100 Hz. This frequency is referred to as the fundamental frequency of the vowel. The fundamental frequency of a sound wave is related to the pitch that we perceive. The frequency of vocal fold vibration for this /u/ is somewhat low, so we perceive the pitch of the vowel as low; when the frequency of vibration increases, the pitch is perceived as higher.
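As a rough illustration of how the fundamental frequency relates to the repetition rate of the waveform, the sketch below estimates F0 from a sustained vowel recording by autocorrelation (the file name is hypothetical, and this is a simple demonstration, not a clinical-grade pitch tracker):

```python
# Sketch: estimate the fundamental frequency of a sustained vowel.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("vowel_u.wav")     # hypothetical mono recording
x = x[: fs // 2].astype(float)          # analyze the first half second
x -= x.mean()

# The autocorrelation of a periodic signal peaks at lags equal to
# whole pitch periods; the first strong peak gives 1/F0.
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
lo, hi = int(fs / 300), int(fs / 60)    # search 60-300 Hz (typical adult range)
period = lo + np.argmax(ac[lo:hi])
print("Estimated F0: about %.0f Hz" % (fs / period))
```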
Although we are able to ascertain quite a bit of information from the waveform graph, we generally are unable to glean from it information about the other frequency components that contribute to the complexity of a speech sound. In order to see all the frequencies that make up a complex speech sound, we have to consult a different graph called an amplitude spectrum. An amplitude spectrum, shown at the bottom of Figure 1, is obtained by selecting a section of the waveform and performing a mathematical procedure called a Fourier analysis. Fourier analysis is a fairly involved procedure for breaking a complex sound into its component sine wave frequencies. Fortunately, virtually all acoustic applications and software packages available today have automated Fourier analyses built in. Each component frequency is shown along the x-axis, and the relative amplitude, or amount of sound energy, of each frequency component is shown on the y-axis. Recall that the fundamental frequency for the vowel /u/ displayed in Figure 1 was 100 Hz. The higher frequency components visible in the spectrum are called harmonics and occur at whole-number multiples of the fundamental: for the /u/ produced in Figure 1, the harmonics are 200 Hz, 300 Hz, 400 Hz, and so on.
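The Fourier step described above can be reproduced with a few lines of code. This sketch (file name and section location are illustrative) windows a short section of the vowel and plots its amplitude spectrum, in which the harmonics appear as regularly spaced peaks:

```python
# Sketch: compute and plot an amplitude spectrum for a vowel section.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

fs, x = wavfile.read("vowel_u.wav")            # hypothetical mono recording
section = x[1000:1000 + 2048].astype(float)    # ~128 ms at 16 kHz, arbitrary spot
section *= np.hanning(len(section))            # taper the edges before the FFT

spectrum = np.abs(np.fft.rfft(section))
freqs = np.fft.rfftfreq(len(section), d=1 / fs)

plt.plot(freqs, 20 * np.log10(spectrum + 1e-12))   # relative amplitude in dB
plt.xlim(0, 5000)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Relative amplitude (dB)")
plt.show()
```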
The relative amplitudes of the harmonics also can vary. These amplitude differences are influenced by the positions of the articulators in the vocal tract. The one thing we are unable to see in the amplitude spectrum is how the frequency components of the speech sound change over time as the articulators move. This drawback can be resolved by using a specialized display known as a spectrogram.
We use spectrograms to show how the frequency components of speech sounds change over time. In the spectrogram of the vowels /i/, /ɑ/, and /u/ shown in Figure 2, time is displayed along the x-axis and frequency along the y-axis, moving from low frequencies at the bottom of the diagram to high frequencies at the top. The intensity of a given frequency component is represented by the darkness of the marks on the graph: intense or loud components are represented by dark marks, medium intensities by lighter marks, and silence by white space. In the next section we will examine the spectrographic characteristics of various vowels and consonants.
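A gray-scale spectrogram like the one in Figure 2 can be generated with SciPy's built-in short-time Fourier analysis; in this sketch the file name, window length, and frequency range are all illustrative choices:

```python
# Sketch: plot a spectrogram (darker marks = more intense components).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("vowels_i_a_u.wav")       # hypothetical mono recording
f, t, Sxx = spectrogram(x.astype(float), fs=fs,
                        nperseg=256, noverlap=192)  # short window: broadband display

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), cmap="gray_r")
plt.ylim(0, 5000)                              # the formant region of interest
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```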
Figure 2. Spectrogram of the vowels /i/, /ɑ/, and /u/ showing the first three formants (F1, F2, and F3) for each vowel.
Source-Filter Explanation of Speech Production
Recall from your university speech science course the acoustic theory, or source-filter theory, of speech production, which models speech production in terms of two components: a sound source and a filter that shapes the sound passing through the vocal tract. For vowels, the source is the glottal tone generated by the vibrating vocal folds. The rapid opening and closing movements of the vocal folds produce a mostly periodic, or regularly repeating, complex waveform. The amplitude spectrum of the glottal tone has a fundamental frequency that is directly related to the rate of vocal fold vibration and higher harmonics that occur at whole-number multiples of the fundamental.
The source tone produced by the vibrating vocal folds is transformed into recognizable speech sounds by the filtering action of the vocal tract. The vocal tract includes the pharynx, the nasal cavity, and the oral cavity, with all the articulators housed within these regions (tongue, lips, teeth, hard palate, velum, jaw). Speech scientists model the vocal tract as a tube that is closed at one end (the vocal folds) and open at the other (the oral opening). This open-closed tube resonates certain frequencies of the glottal source better than others, allowing some frequencies to “pass through” the vocal tract while other frequencies are filtered out, suppressing their acoustic energy and their contribution to the sound you hear.
The shape and length of the vocal tract tube determine which frequencies of the glottal tone will be emphasized and which will be filtered out. Speakers change the length and shape of the vocal tract through learned movements and positional changes of the articulators for a given sound. For vowels, for example, speakers change tongue height by moving the tongue and jaw up and down to produce high, mid, and low vowels, and alter tongue advancement by moving the location of the constriction between the tongue and the palate forward and backward in the mouth to produce front, central, and back vowels.
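The tube model lends itself to a worked example. A uniform tube closed at one end resonates at odd multiples of c/4L, so for a neutral adult male vocal tract of about 17.5 cm (a standard textbook approximation, not a value taken from this article) the first three resonances fall near 500, 1500, and 2500 Hz:

```python
# Worked example: resonances of a uniform tube closed at one end,
# F_n = (2n - 1) * c / (4 * L).
c = 35000.0   # approximate speed of sound in warm, moist air (cm/s)
L = 17.5      # illustrative vocal tract length for an adult male (cm)

for n in (1, 2, 3):
    print("F%d ~ %.0f Hz" % (n, (2 * n - 1) * c / (4 * L)))
# Output: F1 ~ 500 Hz, F2 ~ 1500 Hz, F3 ~ 2500 Hz
```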
Unlike vowels, consonants vary along several acoustic dimensions. Some consonant sounds are produced by shaping a “noise” source with the vocal tract filter. For example, voiceless fricatives are produced when air travels at high velocity through a constriction somewhere in the vocal tract, producing audible turbulence in the airflow. The frication noise for /h/ is created by holding the vocal folds relatively close together, while the frication noise for /s/ is produced by sending air through a narrow gap between the tongue tip and the alveolar ridge. A section of frication noise from the /s/ in the word “sip” is shown in Figure 3 (the region between 100 and 275 ms in the top panel). As can be seen, these noise sources have complex, aperiodic waveforms. Because the noise waveform does not repeat regularly in time like the glottal tone, the amplitude spectrum for frication noise does not contain a fundamental frequency (a correlate of vocal fold vibration) accompanied by harmonic frequency components (bottom of Figure 3). Instead, frequency components are spread at random intervals across the spectrum. The vocal tract tube shapes the frication noise into speech sounds just as it shapes the glottal tone produced by the vibrating vocal folds, emphasizing some frequencies while filtering out others.
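The difference between the two source types is easy to demonstrate numerically. The sketch below builds a glottal-like pulse train and a frication-like noise signal and compares how “peaky” their spectra are; the harmonic source shows sharp spectral peaks while the noise spectrum is comparatively flat (all parameter values are illustrative):

```python
# Sketch: periodic (glottal-like) vs. aperiodic (frication-like) sources.
import numpy as np

fs, dur = 16000, 0.5
n = int(fs * dur)

pulses = np.zeros(n)
pulses[:: fs // 100] = 1.0        # one impulse every 10 ms -> 100 Hz source
noise = np.random.randn(n)        # aperiodic, noise-like source

for name, sig in [("periodic source", pulses), ("aperiodic source", noise)]:
    spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
    # crude "peakiness" measure: harmonic spectra are spiky, noise is flat
    print(name, "peak-to-mean spectral ratio: %.1f" % (spec.max() / spec.mean()))
```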
Figure 3. Waveform and spectrogram for the word “sip” with the amplitude spectrum for the fricative /s/
Acoustic Characteristics of Vowels
Vowels have prominent frequency bands called formants, the frequency regions emphasized by the vocal tract filter. On the spectrogram, the formants appear as dark horizontal bands that last throughout the vowel. Formants for the vowels /i/, /ɑ/, and /u/ are shown in Figure 2 (labeled F1, F2, and F3). For the purposes of understanding speech production and perception, the first three formants of a vowel are generally the most important. F1, the formant with the lowest frequency, is related to the tongue height of the vowel. Vowels that are produced with the tongue positioned relatively high in the mouth, such as /i/ and /u/, generally have low first formant frequencies; low vowels, such as /æ/ and /ɑ/, have relatively high F1 frequencies. The frequency of the second formant, F2, is related to the advancement of the tongue in the oral cavity. Vowels that are produced with the tongue relatively far forward in the mouth, such as /i/ and /ɪ/, have high second formant frequencies, while back vowels such as /u/ and /ɑ/ tend to have low F2 values. F3 values are most important in rhotic, or r-colored, sounds such as /ɝ/.
Listeners use formant frequency patterns to determine which vowel they hear. The exact formant frequencies for each vowel differ across speakers because rates of vocal fold vibration and vocal tract sizes differ across speakers. However, formant frequency patterns for particular vowels tend to be similar across speakers. As listeners, we know that a vowel with a low F1 frequency and a high F2 frequency is generally the high, front vowel /i/, whereas a vowel with a high first formant and a low F2 value is most likely the low, back vowel /ɑ/. In English, we also use duration information to distinguish among vowels with similar formant patterns. Although the high, front vowels /i/ and /ɪ/ both have low F1 and high F2 frequencies, the lax vowel /ɪ/ is typically shorter in duration than the tense vowel /i/.
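Formant frequencies can be estimated automatically with linear predictive coding (LPC), a standard speech-analysis technique, although it is not stated to be the method behind the figures in this article. The sketch below (file name, frame location, and model order are illustrative, and the bandwidth checks a real formant tracker would apply are omitted) fits an LPC model to a steady vowel frame and reports the first few resonance candidates:

```python
# Sketch: rough formant estimation from a vowel frame via LPC.
import numpy as np
from scipy.io import wavfile
from scipy.linalg import solve_toeplitz

fs, x = wavfile.read("vowel.wav")                    # hypothetical mono recording
frame = x[2000:2000 + int(0.03 * fs)].astype(float)  # a 30 ms steady-state slice
frame *= np.hanning(len(frame))

order = 2 + fs // 1000                               # common rule of thumb
r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # LPC coefficients

# Poles of the LPC filter correspond to vocal tract resonances.
roots = np.roots(np.concatenate(([1.0], -a)))
roots = roots[np.imag(roots) > 0]                    # one of each conjugate pair
formants = sorted(f for f in np.angle(roots) * fs / (2 * np.pi) if f > 90)
print("Candidate formants (Hz):", ["%.0f" % f for f in formants[:3]])
```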
Visual displays of formant frequencies can provide valuable information in the clinic. Clients learning English as a second language may have difficulty producing vowels that are not in their native language inventories. For example, languages such as Spanish and Korean do not use the lax vowels /ɪ/ and /ʊ/. When learning English as a second language, Spanish and Korean speakers may assimilate these unfamiliar vowels to their closest spectral counterparts, /i/ and /u/. Acoustic displays of the subtle differences in formant patterns and duration for the /ɪ/-/i/ and /ʊ/-/u/ vowel pairs can provide powerful biofeedback for accent modification therapy. This acoustic biofeedback also may be useful in improving vowel production for speakers with profound hearing impairments. For example, hearing-impaired speakers can be guided to shape their articulatory movements to match a template visible on a monitor that corresponds to the formant pattern for a given sound class.
One helpful way of displaying formant pattern differences among English vowels is the F1/F2 plot provided in the Dr. Speech software package. Real-time spectrograms for comparing similar vowels are available in the KayPentax Multi-Speech software and Visi-Pitch IV instrument. Another source of free downloadable programs to display spectrograms, waveforms, and amplitude spectra for clinical use is the University College London (UCL) Department of Speech, Hearing, and Phonetic Sciences. The figures in this article were all created by the author using UCL's programs ESection and WASP. Clinicians also can use the free sound editor Audacity to display waveforms, spectra, and spectrograms. The creative clinician thus has numerous tools at hand to craft meaningful therapeutic experiences that tap into alternative sensory channels to foster changes in sound production.
Acoustic Characteristics of Consonants
Stops
Stop consonants are produced by occluding airflow in the vocal tract with the lips or tongue and then suddenly releasing the pressure build-up through the opening between the articulators. In English, we occlude the vocal tract by closing the lips for /p/ and /b/, by bringing the tongue tip to the alveolar ridge for /t/ and /d/, and by placing the back of the tongue against the soft palate for /k/ and /g/. Figure 4 shows a waveform and spectrogram of the words “top dog”; each release burst appears as a small vertical line. After the release burst occurs, it takes some time to bring the vocal folds together to produce phonation for the vowel that follows the stop. That interval of time is called voice onset time (VOT). The voiced stops /b/, /d/, and /g/ have short voice onset times (ranging from 0 to 35 ms) because it takes very little time for the vocal folds to begin vibrating after the release. Some voiced stops are pre-voiced, meaning that the vocal folds begin to vibrate even before the release burst occurs; in these cases, VOT values are negative and are measured backward from the release burst to the onset of phonation. The voiceless stops /p/, /t/, and /k/ have long voice onset times (ranging from 44 to 110 ms) because it takes more time for the vocal folds to return to the midline after they have separated to allow air to flow through for the voiceless stop. As listeners, we use voice onset time to distinguish between voiced and voiceless stops in the initial and medial positions of words.
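Once the release burst and the onset of voicing have been located on the display (here entered by hand; automatic detection is a harder problem), computing and categorizing VOT is simple arithmetic. The sketch below uses the voiced and voiceless ranges cited above; the example times are made up:

```python
# Sketch: compute VOT from marked burst and voicing-onset times (ms).
def classify_vot(burst_ms, voicing_ms):
    vot = voicing_ms - burst_ms        # negative VOT = pre-voicing
    if vot <= 35:
        category = "voiced-like (English /b d g/ range, <= 35 ms)"
    elif vot >= 44:
        category = "voiceless-like (English /p t k/ range, 44-110 ms)"
    else:
        category = "ambiguous (between the typical ranges)"
    return vot, category

print(classify_vot(100.0, 115.0))   # VOT = 15 ms -> voiced-like
print(classify_vot(100.0, 170.0))   # VOT = 70 ms -> voiceless-like
```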
Figure 4. Waveform and spectrogram for the words “top dog” showing release bursts, voice onset times, and stop gaps for the stop consonants
We can also observe the interval of time before a final stop consonant is released. It takes a finite amount of time to bring the articulators together to form the obstruction of the vocal tract for a stop consonant. On waveform and spectrogram displays, this period is called the stop gap, and it appears as a silent interval lasting about 50 to 150 ms before the stop burst. Stop gaps for voiced stops sometimes contain voice bars indicating that vocal fold vibration continues during the closure, before the stop burst is produced. Also, the length of the vowel preceding a final stop depends on whether the final consonant is voiced or voiceless: vowels that come before voiced stops are longer in duration than vowels preceding voiceless stops. As listeners, we use both vowel length cues and the presence or absence of voicing during the stop gap to determine whether final stops are voiced or voiceless. Acoustic displays of VOT, stop gap, and vowel duration can be used in the clinic to treat clients who have stop consonant voicing errors. For example, second language learners of English may benefit from acoustic feedback to produce more appropriate voice onset times and thereby reduce the perception of a foreign accent.
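A stop gap can be located as a stretch of near-silence in the intensity envelope. The sketch below flags silent runs that fall in the 50-150 ms range mentioned above; the file name, frame length, and silence threshold are illustrative, and results should always be checked against the waveform display:

```python
# Sketch: find candidate stop gaps as silent runs of 50-150 ms.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("top_dog.wav")          # hypothetical mono recording
x = x.astype(float) / (np.abs(x).max() + 1e-12)

win = int(0.005 * fs)                        # 5 ms analysis frames
rms = np.sqrt(np.array([np.mean(x[i:i + win] ** 2)
                        for i in range(0, len(x) - win, win)]))
silent = rms < 0.02                          # crude, illustrative threshold

start = None
for i, s in enumerate(list(silent) + [False]):   # sentinel closes a final run
    if s and start is None:
        start = i
    elif not s and start is not None:
        dur = (i - start) * win / fs
        if 0.05 <= dur <= 0.15:              # the 50-150 ms stop-gap range
            print("possible stop gap at %.2f s, %.0f ms long"
                  % (start * win / fs, dur * 1000))
        start = None
```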
Fricatives
Fricative consonants are produced by forcing air through a narrow constriction between articulators, causing the airflow to become turbulent and resulting in a hissing sound, or frication noise. In English, these narrow constrictions are created between the lower lip and upper teeth for /f/ and /v/, the tongue tip and teeth for /θ/ and /ð/, the tongue tip and alveolar ridge for /s/ and /z/, the tongue blade and palate for /ʃ/ and /ʒ/, and the vocal folds for /h/. The waveform and spectrogram for the word “scissors,” which contains both voiceless and voiced fricatives, are shown in Figure 5.
Figure 5. Waveform and spectrogram for the word “scissors” showing frication noise for initial /s/ and final /z/ and the voice bar for the medial /z/
The noise for fricative consonants continues for a relatively long time, anywhere from 50 to 200 ms. The strident (or sibilant) fricatives /s/, /z/, /ʃ/, and /ʒ/ have greater acoustic energy than the nonstridents /f/, /v/, /θ/, /ð/, and /h/, so their frication noise appears darker on spectrograms. Voiced fricatives are accompanied by vibration of the vocal folds. For some voiced fricative productions, that phonation is visible as a “voice bar” underneath the frication noise (bottom of Figure 5). Voiceless fricatives, because they are not accompanied by vocal fold vibration, have no energy in the frequency range of 100 to 300 Hz.
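The low-frequency voicing cue just described can be quantified directly: voiced fricatives carry voice-bar energy in roughly the 100-300 Hz band, while voiceless fricatives do not. In this sketch, the file name, the segment boundaries (which would come from hand-marking the display), and the decision threshold are all illustrative:

```python
# Sketch: use 100-300 Hz band energy as a fricative voicing cue.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("scissors.wav")                  # hypothetical mono recording
seg = x[int(0.10 * fs):int(0.25 * fs)].astype(float)  # hand-marked fricative span

spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
freqs = np.fft.rfftfreq(len(seg), d=1 / fs)

low_fraction = spec[(freqs >= 100) & (freqs <= 300)].sum() / (spec.sum() + 1e-12)
print("low-band energy fraction: %.3f" % low_fraction)
print("voiced?", low_fraction > 0.05)                 # illustrative threshold
```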
Place of articulation for fricatives is reflected in the acoustic spectrum of the frication noise. The strident fricative /s/ has a higher frequency spectral peak than /ʃ/. Spectral peak frequencies tend to be highest for the most forward places of articulation in the oral cavity; however, because the nonstrident fricatives are generally weak in acoustic energy, the higher spectral peaks for /f/ and /θ/ may be difficult to see. It is not completely understood how listeners use acoustic cues to distinguish the fricatives from one another, particularly /f/ and /θ/. In the clinic, presenting spectral peak differences using amplitude spectrum displays like the one shown at the bottom of Figure 3 may help clients who produce /s/ and /ʃ/ incorrectly. Spectral information also may be useful for tracking progress in therapy when clients are making small changes in /s/ production that cannot be heard by the clinician but can be seen in the acoustic signal.
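Spectral peak comparisons like the /s/-/ʃ/ contrast can be made with the same FFT machinery used earlier. In this sketch the file name and segment times are illustrative; /s/ should show a higher-frequency peak than /ʃ/:

```python
# Sketch: compare spectral peak frequencies of two frication segments.
import numpy as np
from scipy.io import wavfile

def spectral_peak(seg, fs):
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), d=1 / fs)
    keep = freqs > 1000                  # ignore low-frequency energy
    return freqs[keep][np.argmax(spec[keep])]

fs, x = wavfile.read("fricatives.wav")   # hypothetical mono recording
s_seg = x[int(0.10 * fs):int(0.25 * fs)].astype(float)   # hand-marked /s/
sh_seg = x[int(0.60 * fs):int(0.75 * fs)].astype(float)  # hand-marked "sh"
print("/s/ spectral peak:  %.0f Hz" % spectral_peak(s_seg, fs))
print('"sh" spectral peak: %.0f Hz' % spectral_peak(sh_seg, fs))
```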
Glides, Liquids, and Nasals
The glides /w/ and /j/, the liquids /l/ and /r/, and the nasals /m/, /n/, and /ŋ/ are classified as sonorant consonants. All are voiced phonemes with a formant structure similar to that of vowels (Figure 6). The glides are consonants in which the articulators move relatively slowly from a narrowed configuration to the position appropriate for the following vowel. The glide /w/ is produced with rounded lips and the back of the tongue raised toward the velum; its formant structure looks a great deal like that of /u/ at the beginning, and then the formants transition into their patterns for the vowel that follows. For the consonant /j/, the tongue approaches the palate behind the alveolar ridge; words beginning with /j/ have formant patterns that initially look like the pattern for /i/ and then change as the articulators move into position for the next vowel sound.
Figure 6. Waveforms and spectrograms for the sonorant consonants /w/, /j/, /l/, /r/, /m/, and /n/ in initial word position
The liquid /r/ can be distinguished from glides and other speech sounds by its low F3 frequency. This acoustic cue can be used to provide feedback about tongue position to clients who have difficulty with /r/ production. Older children with persistent /r/ distortions and second language learners of English, such as Japanese adults who confuse /l/ and /r/, can benefit from spectrographic or amplitude spectrum displays of the low third formant.
The lateral sound /l/ is made by allowing air to flow over the sides of the tongue. Because some sound energy is trapped beneath the tongue, antiformants are introduced into the sound spectrum. Antiformants represent frequency regions in which sound is filtered out; they appear as dips in the amplitude spectrum and as areas of silence or weak intensity in spectrograms (see Figure 6). Acoustic cues for /l/ are not straightforward: /l/ formant patterns vary with phonetic context, and /l/ can look quite similar to the nasal consonants.
The nasal consonants are produced with the soft palate lowered so that air can flow into both the nose and the mouth. They are usually less intense than the vowels that surround them. The acoustic signals of nasals have both vowel-like formants and antiformants (areas of reduced sound energy); the antiformants result from sound energy being trapped in the closed oral cavity. Nasal consonants usually have a relatively prominent low-frequency “nasal formant” around 300 Hz. The frequency locations of the antiformants depend on place of articulation. Vowels that occur close to nasal consonants can become nasalized; their acoustic appearance may take on some characteristics of nasals, such as antiformants.
Other Uses of Acoustic Analysis in the Clinic
In this article, we have concentrated on acoustic properties for vowels and consonants. There are many other ways to use acoustic techniques to assess and treat clients with speech and voice disorders. For example, we are able to obtain reliable diadochokinetic rates, calculate how many syllables per second a client produces in connected speech, and examine the location and duration of pauses in their utterances. In addition, we can examine a variety of characteristics related to voice including habitual pitch, pitch range, intonation patterns, intensity, and acoustic parameters related to hoarseness and breathiness. Waveforms and spectrograms make it easy to study speech timing as well. The availability of low-cost software and hardware makes using acoustic technology in the clinic an excellent option for the assessment and treatment of many different types of clients with sound production disorders or deficits.
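As one example of these timing measures, a rough diadochokinetic rate can be estimated by counting syllable-sized peaks in the intensity envelope. This is only a sketch: the file name and thresholds are illustrative, and the peak count should be verified against the waveform display before reporting a rate:

```python
# Sketch: estimate a diadochokinetic (DDK) rate from energy peaks.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

fs, x = wavfile.read("pataka.wav")           # hypothetical DDK recording
x = x.astype(float) / (np.abs(x).max() + 1e-12)

win = int(0.01 * fs)                         # 10 ms analysis frames
rms = np.sqrt(np.array([np.mean(x[i:i + win] ** 2)
                        for i in range(0, len(x) - win, win)]))

# one peak per syllable: require peaks to be reasonably loud and
# at least 100 ms (10 frames) apart -- both thresholds illustrative
peaks, _ = find_peaks(rms, height=0.1, distance=10)
dur = len(x) / fs
print("%d syllables in %.2f s = %.1f syllables/s"
      % (len(peaks), dur, len(peaks) / dur))
```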