Projections from the Temporal Cortex to the Basal Nucleus of the Amygdala in the Macaque

Ashley Bui, Sarah Friedman, Emily A. Kelly, and Julie L. Fudge

The amygdala is a complex brain structure involved in the emotional coding of sensory stimuli. In primates, including humans, visual information in the form of faces is one of the most important drivers of amygdala activity. We asked whether information from the temporal cortex – where visual information is hierarchically processed – shows a particular pattern of inputs to one of the main amygdala nuclei, the basal nucleus (BA). Following injections of neuronal tracers into the BA in four monkeys, we examined the temporal cortex for retrogradely labeled cells using immunocytochemistry and neuron-tracing software (Neurolucida). The most ventromedial injection in the BA had the most restricted distribution of labeled cells, found mainly in the entorhinal cortex. A slightly more dorsal injection resulted in additional dense labeling in the perirhinal cortex and moderate numbers of labeled cells in the inferotemporal cortex (TE). Increasingly dorsal injections resulted in heavy concentrations of labeled cells in the entorhinal and perirhinal cortices and TE, with additional labeled cells in the superior temporal gyrus (STG) and sulcus (STS). There is thus a topography of temporal cortical inputs to the BA, with the most ventral regions receiving restricted inputs from the entorhinal cortex. Inputs from the perirhinal cortex and, eventually, TE and the STS/STG progressively contribute additional information to more dorsal BA sites. The entorhinal cortex plays a role in episodic memory, the memory of highly personal, detailed information. The perirhinal cortex is intimately linked to the entorhinal cortex and is implicated in visual place recognition. TE and the adjacent STG and STS are more directly related to perception of ongoing visual information such as objects and faces. While memory-based information from the entorhinal and perirhinal cortices influences the entire BA, the dorsal regions are specialized to receive information about faces and objects in the immediate environment.

Reciprocal Effects of Glymphatic Function and the Experimental Autoimmune Encephalomyelitis (EAE) Model of Multiple Sclerosis

Hanna Vinitsky, Iben Lundgaard, Shane O'Neil, Wei Wang, Ben Reeves, Ezra Yang, Steven Goldman, and Maiken Nedergaard

Multiple Sclerosis (MS) is an autoimmune disease targeting myelin in the central nervous system. Lesions in MS patients and in the experimental autoimmune encephalomyelitis (EAE) model of MS are characterized by immune cell infiltration, often forming perivascular cuffs around blood vessels. Here we used the EAE mouse model of MS and investigated the dynamics of glymphatic function using a fluorescent cerebrospinal fluid (CSF) tracer. The glymphatic system is a brain-wide clearance system that uses perivascular pathways for transport. We found that glymphatic influx to the brain was reduced and influx to the spinal cord was severely diminished. The distribution of CSF tracer was inversely correlated with the number of lesions, suggesting that EAE tissue pathology affects the glymphatic system in both acute and chronic disease. Intriguingly, inhibition of glymphatic function using acetazolamide and cisterna magna puncture (CMP) in the pre-symptomatic phase significantly ameliorated EAE clinical symptoms. This shows that glymphatic function is affected in EAE, but that disease progression might be aided by the glymphatic system in the early phase. These preliminary data suggest that targeting the glymphatic system in the early phase of MS might be a novel mechanism to curb disease.
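As a minimal illustration of the inverse-correlation analysis described above (using placeholder numbers, not the study's measurements), the relationship between tracer distribution and lesion burden could be tested as follows.

```python
# Hypothetical sketch: is CSF tracer coverage inversely related to lesion count?
from scipy.stats import spearmanr

tracer_coverage = [0.42, 0.35, 0.28, 0.19, 0.12]   # placeholder fraction of tissue reached by tracer
lesion_count    = [3, 5, 8, 11, 14]                # placeholder lesion counts per animal

rho, p = spearmanr(tracer_coverage, lesion_count)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")    # rho < 0 indicates an inverse relationship
```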

Using fMRI to Explore the Neural Basis of Anticipation After Implicit Distributional Learning

Danlei Chen, Carol Jew, Benjamin Zinszer, and Rajeev Raizada

Distributional learning research has established that humans can track the frequencies of sequentially presented stimuli in order to infer the probabilities of upcoming events (e.g., Hasher & Zacks, 1984). We hypothesize that as people learn this frequency information, probabilistically weighted representations of the next stimulus are activated in the brain prior to each trial. We present behavioral evidence that these weighted representations are measurable in the response time of the subsequent trial, and we propose a further experiment to directly test the neural hypothesis. In the behavioral experiment, twelve adult participants viewed photographs of faces, tools, and buildings while performing a simple classification task. Each of these categories reliably evokes stronger responses in specific sets of brain areas compared to other categories (Chao & Martin, 2000; Epstein & Kanwisher, 1998; Kanwisher, McDermott, & Chun, 1997), allowing us to measure the intensity of brain activity separately and in parallel for each category in the MRI scanner. The frequency of each category (60%, 30%, 10%) was counterbalanced across six different frequency distributions. Using a two-way (Frequency-by-Category) linear mixed-effects model, we compared response times for the stimuli in each distribution to see whether the anticipation of a more frequent category reduced the response time. Response times significantly decreased with greater frequencies (t(6123) = -7.289, p < .0005), indicating that participants anticipated the stimuli in proportion to the probability of the category, thereby reducing response times for the more frequent categories. With this evidence of probabilistic anticipatory representations, we are now testing this effect using functional MRI. We hypothesize that anticipation of a category will evoke activity in category-specific regions proportional to the probability of that category. If the neuroimaging results are in line with this hypothesis, they will suggest that learned distributional information produces probabilistically weighted representations of possible outcomes.
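As an illustration of the kind of analysis described above, the sketch below fits a Frequency-by-Category linear mixed-effects model to response times in Python. The file name, column names, and random-effects structure are assumptions made for the example, not the authors' actual pipeline.

```python
# Minimal illustrative sketch of a Frequency-by-Category mixed-effects model
# on response times; the file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rt_data.csv")                   # assumed columns: subject, category, frequency, rt
df["frequency"] = df["frequency"].astype(float)   # e.g., 0.60, 0.30, 0.10

# Fixed effects: frequency, category, and their interaction;
# random intercept for each participant.
model = smf.mixedlm("rt ~ frequency * category", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())   # a negative frequency coefficient indicates faster
                          # responses to more frequent categories
```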

Generalized Adaptation to Novel Foreign Accents

Emily Simon, Lauren Oey, Crystal Lee, T. Florian Jaeger, and Xin Xie

Technology has made the world an increasingly interconnected sphere – one in which conversations can occur seamlessly while speakers sit oceans apart. However, with increasing globalization come increasing demands on listeners to comprehend extensive variability in speech, particularly that of foreign-accented speakers. Nevertheless, evidence suggests that listeners rapidly adapt to accented speech across varying speaker backgrounds, differing levels of intelligibility, and relatively brief exposures (Clarke & Garrett, 2004; Bradlow & Bent, 2008; Sidaras et al., 2009). After further exposure, listeners can generalize such adaptation to novel speakers with whom they have not previously interacted (Bradlow & Bent, 2008; Baese-Berk et al., 2013). The scope of this generalization, as well as its underlying mechanism, remains unknown, largely because of the inherent difficulty of quantitatively measuring variability within and across speakers. We examine the generalizability of adaptation to accented speech after exposure to multiple foreign accents. Using an online crowdsourcing paradigm, we will measure listeners' transcription accuracy after exposure to accented speech to assess generalized adaptation. During Exposure, listeners are assigned to one of three listening conditions: 5 speakers of native English, 5 Mandarin-accented speakers, or 5 speakers of varying language backgrounds (Korean, Thai, Hindi, Russian, and Mandarin). After Exposure, all listeners will be tested on a novel speaker of a familiar accent and, critically, a novel speaker of a novel accent. We hypothesize that transcription accuracy for novel foreign-accented utterances will be greatest when listeners have been exposed to the most systematic variability in accented speech. Under this assumption, we predict that listeners exposed to multiple foreign accents will perform best when tested on a novel accent.

Learning Adjective Meanings Through Variable Exemplars

Crystal Lee and Chigusa Kurumada

How do we learn the meaning of words like 'full' or 'straight'? As adults, we know 'full' means 'containing as much as possible without spilling over'. However, young learners often observe examples where 'full' is used to describe objects or situations that deviate from this prototypical definition yet are contextually appropriate; e.g., a 'full' cup could be only 90% full when transporting drinks. In fact, Syrett et al. (2010) found contrasting comprehension of absolute gradable adjectives (e.g., full, straight) between children and adults. When asked to give 'the full cup' with one 90% full cup and one 70% full cup present, four-year-olds were more willing to pass the 90% full cup. Contrastingly, adults were more likely to say that neither cup is 'full'. We hypothesize that accumulated experience allows adults to account for contextual contributions to word meaning: a 90% full cup is deemed full in an appropriate context (e.g., transporting drinks); otherwise, any deviation from the prototypical meaning prevents an instance from being judged 'full'. We test this hypothesis by teaching adult subjects a novel gradable adjective, 'pelty', roughly meaning 'tight-fitting'. Sixty subjects are randomly assigned to either a With-context or a Without-context condition. In Exposure, subjects watch 12 videos exemplifying the word's use with objects that are tight-fitting to varying degrees. Those in the With-context condition receive contextual justification (e.g., a moderately tight-fitting shoe is still 'pelty' because it has to be worn with a thick sock) and those in the Without-context condition do not. In Test, participants see two novel objects (one 90% pelty and one 70% pelty) and a 'neither' option. We predict that subjects in the With-context condition, like the adult subjects in Syrett et al. (2010), will be more willing to select 'neither' than those in the Without-context condition.
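As a hedged illustration (not the authors' analysis plan), the sketch below shows one way the predicted difference in 'neither' selections between the two conditions could be tested; the counts are placeholders rather than collected data.

```python
# Hypothetical sketch of the planned between-condition comparison:
# do With-context subjects choose 'neither' more often than Without-context subjects?
from statsmodels.stats.proportion import proportions_ztest

neither_counts = [22, 11]    # placeholder 'neither' selections (With-context, Without-context)
n_per_condition = [30, 30]   # 60 subjects split evenly across the two conditions

z, p = proportions_ztest(neither_counts, n_per_condition, alternative="larger")
print(f"z = {z:.2f}, p = {p:.4f}")   # one-sided test: With-context > Without-context
```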

Contextual Factors in Child Adjective Comprehension

Wesley Orth, Amanda Pogue, and Chigusa Kurumada

This research investigates preschoolers' understanding of adjectives (e.g., big, clean, metal). While adjectives are part of the very basic vocabulary acquired early in development, it is not clear whether children's conceptual understanding of them is equivalent to that of adults. In particular, I am interested in how they acquire subtle meaning differences across adjectives. Some adjectives require a listener to reason about other objects in order to verify that they are true (e.g., to say a cat is big, one needs to know how large cats usually are), whereas others do not (e.g., to say a cat is striped, one only needs to know whether that cat has at least one stripe). I created a guessing game to directly compare young children and adults in their comprehension of various adjective types. 16 preschoolers and 20 adults were asked to match a description of an object to a card, either a face-up or a face-down card. There were three trial types: 1) adjectives that require a comparison class (e.g., big), 2) adjectives that are binary in meaning and do not require a comparison class (e.g., striped), and 3) adjectives that denote a property of the noun (e.g., metal). Participants could flip the face-down card before making a match; the likelihood of flipping tells us whether they treated a given adjective as requiring a comparison class. The results show that adults seek out comparative information for the type 1 adjectives but not for the other adjective types (60%, 11.25%, and 9.16%, respectively), whereas children seek out comparative information less often when it is necessary yet still show a similar pattern (35%, 17.5%, and 20%). I conclude that children's understanding of these types of adjectives is qualitatively similar to adults', while there is a quantitative difference between them. I am currently running a follow-up experiment to investigate the nature of the difference.
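As an illustrative sketch only (not the authors' analysis, and with placeholder counts rather than the study's raw data), the example below shows how flip rates could be compared across the three adjective types within one group.

```python
# Hypothetical comparison of card-flip frequencies across the three trial types.
from scipy.stats import chi2_contingency

# Rows: trial type (comparison-class, binary, property-denoting);
# columns: [flipped, did not flip]. Counts are illustrative placeholders.
adult_flips = [
    [48, 32],   # e.g., "big"
    [9,  71],   # e.g., "striped"
    [7,  73],   # e.g., "metal"
]

chi2, p, dof, expected = chi2_contingency(adult_flips)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```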