
Micheal Dent

The cannabinoid receptor 1 (CB1R) is found at several stages in the auditory pathway, but its role in hearing is unknown. Hearing abilities were measured in CB1R knockout mice and compared to those of wild-type mice. Operant conditioning and the psychophysical Method of Constant Stimuli were used to measure audiograms, gap detection thresholds, and frequency difference limens in trained mice using the same methods and stimuli as in previous experiments. CB1R knockout mice showed deficits at frequencies above 8 kHz in their audiograms relative to wild-type mice. CB1R knockouts showed enhancements for detecting gaps in low-pass noise bursts relative to wild-type mice, but performed similarly in the other noise conditions. Finally, the two groups of mice did not differ in their frequency discrimination abilities as measured by the frequency difference limens task. These experiments suggest that the CB1R is involved in auditory processing and lay the groundwork for future physiological experiments.
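In the Method of Constant Stimuli used here, a fixed set of stimulus levels is presented many times each in random order, yielding a percent-correct score per level; the threshold is then read off the resulting psychometric function. A minimal sketch of that threshold computation (the function name, the 50% criterion, and the linear-interpolation approach are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def mcs_threshold(levels, hits, n_trials, criterion=0.5):
    """Estimate a detection threshold from Method of Constant Stimuli data.

    levels: stimulus levels (e.g., dB SPL), each presented n_trials times
            in random order; assumed sorted ascending.
    hits:   number of correct detections at each level; proportion correct
            is assumed to increase monotonically with level.
    Returns the level at which the psychometric function crosses
    `criterion`, found by linear interpolation between tested levels.
    """
    p = np.asarray(hits, dtype=float) / n_trials  # proportion correct
    return float(np.interp(criterion, p, levels))
```

For example, hit counts of [2, 5, 10, 15, 18] out of 20 trials at levels [10, 20, 30, 40, 50] give proportions [0.1, 0.25, 0.5, 0.75, 0.9], so the 50% threshold falls at 30.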
ABSTRACT The current project investigates how experience with human speech can influence speech perception in budgerigars. Budgerigars are vocal mimics and speech exposure can be tightly controlled in a laboratory setting. The data collected include behavioral responses from 30 budgerigars, tested using a cue-trading paradigm with synthetic speech stimuli. Prior to testing, the birds were divided into three exposure groups: passive speech exposure (regular exposure to human speech), no speech exposure (completely isolated), and speech-trained (using the Model-Rival Method). After the exposure period, all budgerigars were tested using operant conditioning procedures. Birds were trained to peck keys in response to hearing different synthetic speech sounds that began with either “d” or “t.” Sounds varied in voice onset time (VOT) and in the frequency of the first formant. Once training performance reached 80% on the series endpoints, budgerigars were presented with the entire series, including ambiguous sounds. The responses on these trials were used to determine which speech cues the birds use, if cue trading behavior was present, and whether speech exposure had an influence on perception. Preliminary data suggest experience with speech sounds is not necessary for cue trading by budgerigars.
Auditory filter shapes in the budgerigar (Melopsittacus undulatus) derived from notched‐noise maskers. The Journal of the Acoustical Society of America 101, 3124 (1997). Jian‐Yu Lin, Robert J. Dooling, Michael L. Dent.
ABSTRACT Detecting a signal embedded in noise is known to be enhanced by spatially separating the signal and noise in humans and other animals. This process is known as spatial unmasking and is a part of the larger phenomenon of the cocktail party problem. The exact mechanisms of unmasking are unknown, but binaural processes are thought to be at least partially involved. Most animals that exhibit unmasking are fairly adept at localizing pure tones in space. We wished to study spatial unmasking in an animal that is very poor at sound localization: the zebra finch. Zebra finches were trained using operant conditioning techniques and the psychophysical method of constant stimuli to peck keys for food reinforcement when they detected a tone embedded in a broadband noise masker. Thresholds were obtained for pure tones ranging from 500 Hz to 4000 Hz when the signal and the noise were emitted from the same speaker and when they were emitted from speaker locations separated by 180 deg. Zebra finches showed relatively little unmasking and there was large variation across subjects and frequencies, suggesting that the mechanisms underlying sound localization are related to those that result in spatial unmasking.
ABSTRACT Temporal waveform characteristics differentially affect masking by harmonic complexes in birds and humans. In birds, complexes with relatively flat temporal envelopes are more effective maskers than complexes with highly modulated temporal envelopes. Flat temporal waveforms constructed using positive and negative Schroeder‐phase algorithms result in large masking differences in humans, but little masking difference in birds. This effect may be related to differences in the cochlear phase characteristics in the two species. A psychophysical estimate of the phase characteristic has been reported in humans by finding the phase spectrum of a harmonic masker that produces the least masking. Here we estimated the phase characteristic for zebra finches using operant conditioning methods to measure masked thresholds for tones embedded in harmonic complexes with phases selected according to scaled modifications of the Schroeder algorithm. There was little difference between birds and humans when the scalar was between 0.0 and −1.0 (i.e., negative Schroeder phases), but humans showed less masking than birds when the scalar was positive. These results may reflect a more linear phase characteristic along the avian cochlea. [Work supported by NIH R01 DC00198 and NSRA DC00046.]
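The scaled Schroeder-phase maskers described above assign the n-th of N harmonics a starting phase of φ_n = C·π·n(n+1)/N, where sweeping the scalar C between −1 and +1 traces out waveforms from one flat-envelope extreme to the other. A minimal synthesis sketch (the fundamental frequency, harmonic count, and duration are illustrative assumptions, not the stimulus parameters of this study):

```python
import numpy as np

def schroeder_complex(f0=100.0, n_harmonics=30, scalar=-1.0,
                      dur=0.5, fs=44100):
    """Generate a harmonic complex with scaled Schroeder phases.

    Phase of the n-th harmonic: phi_n = scalar * pi * n * (n + 1) / N.
    scalar = +1 / -1 gives the classic positive / negative Schroeder
    waveforms (flat temporal envelopes); scalar = 0 gives a cosine-phase
    complex with a highly peaked envelope.
    """
    t = np.arange(int(dur * fs)) / fs
    N = n_harmonics
    wave = np.zeros_like(t)
    for n in range(1, N + 1):
        phi = scalar * np.pi * n * (n + 1) / N
        wave += np.cos(2 * np.pi * n * f0 * t + phi)
    return wave / np.max(np.abs(wave))  # normalize to unit peak
```

Comparing crest factors (peak divided by RMS) of the resulting waveforms makes the envelope difference concrete: the cosine-phase complex (scalar 0) is far peakier than the negative-Schroeder complex (scalar −1), even though the two share an identical amplitude spectrum.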
Mice are emerging as an important behavioral model for studies of auditory perception and acoustic communication. These mammals frequently produce ultrasonic vocalizations, although the details of how these vocalizations are used for communication are not entirely understood. An important step in determining how they might be differentiating their calls is to measure discrimination and identification of the dimensions of various acoustic stimuli. Here, behavioral operant conditioning methods were employed to assess frequency difference limens for pure tones. We found that their thresholds were similar to those in other rodents but higher than in humans. We also asked mice, in an identification paradigm, whether they would use frequency or duration differences to classify stimuli varying on those two dimensions. We found that the mice classified the stimuli based on frequency rather than duration.
The auditory scene is filled with an array of overlapping acoustic signals, yet relatively little work has focused on how animals are able to perceptually isolate different sound sources necessary for survival. Much of the previous work on auditory scene analysis has investigated how sequential pure tone stimuli are perceived, but how temporally overlapping complex communication signals are segregated has been largely ignored. In this study, budgerigars and humans were tested using psychophysical procedures to measure their perception of synchronous, asynchronous, and partially overlapping complex signals, including bird calls and human vowels. Segregation thresholds for complex stimuli were significantly lower than those for pure tone stimuli in both humans and birds. Additionally, a species effect was discovered such that relative to humans, budgerigars required significantly less temporal separation between 2 sounds in order to segregate them. Overall, and similar to previous beh...
Mice are a commonly used model in hearing research, yet little is known about how they perceive conspecific ultrasonic vocalizations (USVs). Humans and birds can distinguish partial versions of a communication signal, and discrimination is superior when the beginning of the signal is present compared to the end of the signal. Since these effects occur in both humans and birds, it was hypothesized that mice would display similar facilitative effects with the initial portions of their USVs. Laboratory mice were tested on a discrimination task using operant conditioning procedures. The mice were required to discriminate incomplete versions of a USV target from a repeating background containing the whole USV. The results showed that the mice had difficulty discriminating incomplete USVs from whole USVs, especially when the beginning of the USVs was presented. This finding suggests that the mice perceive the initial portions of a USV as more similar to the whole USV than the latter part...
Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and p...
ABSTRACT Auditory streaming is a phenomenon that has been documented in a wide variety of animal species. Recently, space, intensity, time, and spectral composition were all found to be important factors for auditory streaming of birdsong by budgerigars and zebra finches, although the cues varied in importance. Those experiments were extended here to further examine the role of frequency characteristics on the auditory streaming of birdsong. The birds were initially trained using operant conditioning procedures to differentially peck keys in response to either a synthetic zebra finch song consisting of five syllables (whole song) or to the same song with the fourth syllable omitted (broken song). Correct responses were reinforced with millet, and incorrect responses were punished with a lights-off timeout period. Once the birds reached high-performance levels with the training stimuli, probe trials were inserted on a small proportion of trials. The probe stimuli contained either a white noise burst in the missing syllable's location or narrowband pieces of the original fourth syllable. Results show that the different cues are differentially effective in eliciting streaming of birdsong by zebra finches and budgerigars, similar to the previous experiments and results from human speech experiments.
ABSTRACT Deciphering the auditory scene is a problem faced by humans and animals alike. However, when faced with overlapping sounds from multiple locations, listeners are still able to attribute the individual sound objects to their individual sound-producing sources. Here, we determined which characteristics of sounds are important for streaming versus segregating in birds. Budgerigars and zebra finches were trained using operant conditioning procedures on an identification task to peck one key when they heard a whole zebra finch song and to peck another when they heard a zebra finch song missing a middle syllable. Once the birds were trained to a criterion performance level on those endpoint stimuli, probe trials were introduced on a small proportion of all trials. The probe songs contained modifications of the incomplete training song's missing syllable. When a bird responded to a probe as if it were a whole song, this suggests it streamed the altered syllable together with the rest of the song; when it responded as if the song were incomplete, this suggests it segregated the altered probe from the rest of the song. Results show that some features, such as spectrotemporal similarity and location, are more important for streaming than other features, such as timing.
... Conditions. Barbara G. Shinn-Cunningham, Virginia Best, Micheal L. Dent, Frederick J. Gallun, Elizabeth M. McClaine, Rajiv Narayan, Erol Ozmeral, and Kamal Sen.
Frequency weighting functions in humans are widely used as a single-number estimate to assess noise problems and to aid decisions about noise limits when no other data exist. However, this use of frequency weightings invariably results in a loss of precision in assessing the likelihood that a sound will produce hearing damage or annoyance. There is
