In the human brain, glial cells outnumber neurons by about 50 to one, and they are the most common cells found in primary brain tumors. When a person is diagnosed with a brain tumor, a biopsy may be performed, in which tissue is removed from the tumor and examined by a pathologist. The pathologist identifies the types of cells present in this brain tissue, and the tumor is named according to the cell type from which it arises.
The type of brain tumor and cells involved impact patient prognosis and treatment. The brain is housed inside the bony covering called the cranium.
The cranium protects the brain from injury. Together, the cranium and the bones that protect the face are called the skull. Between the skull and the brain are the meninges, which consist of three layers of tissue that cover and protect the brain and spinal cord.
From the outermost layer inward they are: the dura mater, arachnoid and pia mater. Dura Mater: In the brain, the dura mater is made up of two layers of whitish, nonelastic film or membrane. The outer layer is called the periosteum. An inner layer, the dura, lines the inside of the entire skull and creates little folds or compartments in which parts of the brain are protected and secured. The two special folds of the dura in the brain are called the falx and the tentorium.
The falx separates the right and left halves of the brain, and the tentorium separates the upper and lower parts of the brain. Arachnoid: The second layer of the meninges is the arachnoid. This membrane is thin and delicate and covers the entire brain. There is a space between the dura and the arachnoid membranes that is called the subdural space.
The arachnoid is made up of delicate, elastic tissue and blood vessels of varying sizes. Pia Mater: The layer of meninges closest to the surface of the brain is called the pia mater. The pia mater has many blood vessels that reach deep into the surface of the brain. The pia, which covers the entire surface of the brain, follows the folds of the brain. The major arteries supplying the brain provide the pia with its blood vessels. The space that separates the arachnoid and the pia is called the subarachnoid space.
It is within this area that cerebrospinal fluid flows. Cerebrospinal fluid (CSF) is found within the brain and surrounds the brain and the spinal cord. It is a clear, watery substance that helps to cushion the brain and spinal cord from injury. This fluid circulates through channels around the spinal cord and brain, constantly being absorbed and replenished. It is within hollow channels in the brain, called ventricles, that the fluid is produced. A specialized structure within each ventricle, called the choroid plexus, is responsible for the majority of CSF production.
The brain normally maintains a balance between the amount of CSF that is absorbed and the amount that is produced; however, disruptions in this system may occur. The ventricular system is divided into four cavities called ventricles, which are connected by a series of holes, called foramina, and tubes. The two ventricles enclosed in the cerebral hemispheres are called the lateral ventricles (first and second). They each communicate with the third ventricle through a separate opening called the Foramen of Monro.
The third ventricle is in the center of the brain, and its walls are made up of the thalamus and hypothalamus. The third ventricle connects with the fourth ventricle through a long tube called the Aqueduct of Sylvius.
CSF leaving the fourth ventricle flows around the brain and spinal cord by passing through another series of openings. The brainstem is the lower extension of the brain, located in front of the cerebellum and connected to the spinal cord. It consists of three structures: the midbrain, pons and medulla oblongata.
It serves as a relay station, passing messages back and forth between various parts of the body and the cerebral cortex. Many simple or primitive functions that are essential for survival are located here. The midbrain is an important center for ocular motion while the pons is involved with coordinating eye and facial movements, facial sensation, hearing and balance.
The medulla oblongata controls breathing, blood pressure, heart rhythms and swallowing. Messages from the cortex to the spinal cord and the nerves that branch from the spinal cord are sent through the pons and the brainstem. Destruction of these regions of the brain will cause "brain death." The reticular activating system is found in the midbrain, pons, medulla and part of the thalamus. It controls levels of wakefulness, enables people to pay attention to their environments and is involved in sleep patterns.
Originating in the brainstem are 10 of the 12 cranial nerves that control hearing, eye movement, facial sensations, taste, swallowing and movements of the face, neck, shoulder and tongue muscles. The cranial nerves for smell and vision originate in the cerebrum. Four pairs of cranial nerves originate from the pons: nerves five through eight. The cerebellum is located at the back of the brain beneath the occipital lobes.
It is separated from the cerebrum by the tentorium fold of dura.
These predictions are made on the basis of prior proposals by Guenther et al. In addition to dissociating brain activation for speech and non-speech mouth movements, we also looked for activation that was common to both speech and non-speech mouth movements relative to finger tapping and visual fixation.
The involvement of these regions in non-speech as well as speech mouth movements has already been demonstrated, for example by Chang et al. This suggests a general role for these regions in orofacial movements and their auditory consequences.
By including a visual fixation baseline, we could also identify activation that was common to both finger and mouth movements, and control for inner speech that occurs independently of mouth movements during free thought. Functional imaging data were acquired using positron emission tomography (PET). For the current study of speech production, there are two advantages of using PET rather than fMRI: the PET scanning environment is quieter for recording the presence or absence of speech output, and the regional cerebral blood flow (rCBF) signals are not distorted by air flow through the articulators.
The study was approved by the local hospital ethics committee. We scanned 12 right-handed, native English speakers who had normal or corrected vision and hearing and no history of neurological disease or mental illness.
All gave written informed consent. One participant was subsequently excluded for reasons given below. The remaining 11 subjects (10 male) had a mean age of 34 years (range 19–). The predominance of male participants is a consequence of using PET scanning, which is not appropriate for women of child-bearing age.
Inter-subject variability in our results was investigated and reported (see Figure 2) to ensure consistency across participants, despite the wide range of ages and unequal distribution of males and females.
Figure 2. Activation during silent articulation of speech. Top: activation for speech more than non-speech mouth movements is illustrated in yellow in the pars opercularis (pOp) and the left posterior superior temporal sulcus (pSTS). Activation for speech and non-speech mouth movements relative to finger movements and fixation is illustrated in green. The blue area within this system corresponds to the location where activation was greater for tongue movements than for lip movements.
Activations for all movement tasks (mouth and finger) relative to fixation are illustrated in red. Within the red areas, we have marked activation that was located in the insula (INS) and the left planum temporale (PT). Below: activation for speech relative to non-speech mouth movements (percentage signal change on the y-axis) in each participant (1–11, on the x-axis) and the mean (M) at the peak co-ordinates for group activation in the frontal and temporal regions. This illustrates the consistency of the effect in the same voxels.
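The per-participant values in the lower panel could, in principle, be computed as in the sketch below. This is a minimal illustration with placeholder numbers: the condition means, the peak-voxel extraction, and the use of fixation as the normalizing baseline are all assumptions, not the study's actual pipeline.

```python
import numpy as np

# Hypothetical computation of the per-participant effect plotted in the
# lower panel of Figure 2. The condition means are simulated placeholders,
# and normalizing by the fixation baseline is an assumption about the
# exact percent-signal-change formula.
rng = np.random.default_rng(1)
n_subjects = 11
speech = rng.normal(102.0, 1.0, n_subjects)    # mean rCBF, silent speech, at the peak voxel
mouth = rng.normal(100.0, 1.0, n_subjects)     # mean rCBF, non-speech mouth movements
fixation = rng.normal(100.0, 1.0, n_subjects)  # mean rCBF, visual fixation

pct_change = 100.0 * (speech - mouth) / fixation
for subj, value in enumerate(pct_change, start=1):
    print(f"participant {subj:2d}: {value:+.2f}%")
print(f"mean (M): {pct_change.mean():+.2f}%")
```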
There were four conditions: silent speech, non-speech mouth movements, finger tapping, and visual fixation. Each condition was repeated in three different blocks, with one block equivalent to one 90 s PET scan. In all 12 scans, a black circle, presented at regular intervals, was used as an external stimulus to pace movement production. In the three silent speech scans, participants were specifically instructed to move their mouths as if they were speaking but without generating any sound (i.e., to mouth the words silently).
In the three non-speech mouth movement scans, participants pursed their lips in time with the stimulus in one scan, protruded and retracted their tongue in another, or alternated between pursing their lips and protruding and retracting their tongue in the third.
In the three finger-tapping scans, participants made a two-finger movement in one scan, a three-finger movement in another scan, and alternated between the two-finger and three-finger movements in the third scan. The two-finger movement involved a tap of their index finger followed by a tap of their middle finger on a table placed under their arm in the scanner.
The three-finger movement involved a tap of their index finger, followed by a tap of their middle finger, followed by a tap of their fourth finger. All responses, during all conditions, were video-recorded to ensure that the data collected were consistent with the experimental aims (e.g., that each condition was performed as instructed).
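For concreteness, the overall design (four conditions, each with three variants, one 90 s scan per variant) can be summarized as a simple table. The sketch below is illustrative only: the variant labels for the speech scans and the fixed ordering are assumptions, since the actual scan order would presumably have been counterbalanced across participants.

```python
# Illustrative summary of the 12-scan design (4 conditions x 3 scans each).
# The speech-scan variant labels and the ordering are assumed for illustration.
conditions = {
    "silent_speech":   ["speech_1", "speech_2", "speech_3"],
    "nonspeech_mouth": ["lip_pursing", "tongue_protrusion", "alternate_lip_tongue"],
    "finger_tapping":  ["two_finger", "three_finger", "alternate_two_three"],
    "fixation":        ["fixation_1", "fixation_2", "fixation_3"],
}

scans = [(cond, var) for cond, variants in conditions.items() for var in variants]
assert len(scans) == 12  # one 90 s PET scan per block
for i, (cond, var) in enumerate(scans, start=1):
    print(f"scan {i:2d}: {cond:15s} {var}")
```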
When a condition was performed incorrectly, the scan was repeated; this happened only once each for three different participants, and in each case the repeated scan replaced the faulty scan. There was no further behavioral analysis because, in the final data sets, each condition was accurately performed. Moreover, the functional imaging data showed no activation in the primary auditory cortex during any condition.
This is consistent with the participants performing all conditions silently. Statistical analysis used standardized procedures (Friston et al.). The condition and subject effects were estimated according to the general linear model at each voxel. The statistical model included 10 conditions: fixation (summed over three scans), the three finger-tapping conditions, the three non-speech mouth movement conditions, and the three speech conditions.
The statistical contrasts of interest identified activation that was greater for: (1) all speech than all non-speech (mouth and finger) conditions; (2) all speech than all non-speech mouth movements; (3) all speech and all non-speech mouth movements relative to all finger movements; (4) all movement conditions relative to fixation; (5) non-speech tongue movements relative to non-speech lip movements, or vice versa; and (6) alternating between two movements relative to repeating the same movement (e.g., alternating between lip and tongue movements versus lip movements alone).
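As a rough sketch of this voxel-wise analysis, a 12-scan design matrix with the 10 condition columns and the first two contrasts might look as follows. This is a toy ordinary-least-squares illustration on simulated data, not the SPM procedures (Friston et al.) actually used; the column ordering and contrast scaling are assumptions.

```python
import numpy as np

# Toy voxel-wise GLM: 12 scans x 10 condition columns.
# Column 0 = fixation (its three scans share one regressor);
# columns 1-3 = finger tapping; 4-6 = non-speech mouth; 7-9 = speech.
X = np.zeros((12, 10))
X[0:3, 0] = 1
for i in range(3):
    X[3 + i, 1 + i] = 1   # finger-tapping scans
    X[6 + i, 4 + i] = 1   # non-speech mouth-movement scans
    X[9 + i, 7 + i] = 1   # speech scans

rng = np.random.default_rng(0)
y = rng.normal(size=12)                        # simulated rCBF values for one voxel
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares parameter estimates

# Contrast (1): all speech > all non-speech (mouth and finger) conditions.
c1 = np.array([0, -1, -1, -1, -1, -1, -1, 2, 2, 2]) / 6.0
# Contrast (2): all speech > all non-speech mouth movements.
c2 = np.array([0, 0, 0, 0, -1, -1, -1, 1, 1, 1]) / 3.0
print("speech > all non-speech:", c1 @ beta)
print("speech > non-speech mouth:", c2 @ beta)
```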
There were two areas where activation was significantly higher for silent speech than for non-speech mouth movements: the left posterior superior temporal sulcus (pSTS) and the left dorsal pars opercularis within the inferior frontal gyrus, extending into the left middle frontal gyrus.
In each of these areas, activation was also higher for speech than for finger movements and for speech relative to the visual fixation baseline.
The loci and significance of these effects are shown in Table 1 and Figure 2. Both speech and non-speech mouth movements resulted in extensive activation in the bilateral precentral gyri relative to finger tapping and visual fixation (see Table 1 and the green areas in Figure 2 for details).
In addition, activation that was common to speech, non-speech mouth movements, and finger tapping relative to the visual fixation baseline was observed bilaterally in the postcentral gyri, superior cerebellum, inferior cerebellum, and putamen, with left-lateralized activation in the thalamus, insula, supratemporal plane, and supplementary motor area (see Table 1 and Figure 2, which shows a subset of these regions in red).
Common activation in these areas may relate to shared processing functions. For example, it has been proposed that activation in the anterior insula is related to the voluntary control of breathing during speech production (Ackermann and Riecker). It might therefore be the case that all three motor tasks (speech, non-speech mouth movements, and finger tapping) involve voluntary control of breathing in time with the motor activity.
Alternatively, common activation might reflect different functions that could not be anatomically distinguished in the current study. As the current study is concerned with differential activation for speech relative to non-speech mouth movements, we do not discuss the common activations further.
The only other significant effect was observed when non-speech tongue movements were compared with non-speech lip movements. These effects are shown in blue in Figure 2 and are consistent with recent functional imaging findings (Takai et al.). We did not see significantly increased activation for non-speech lip movements relative to tongue movements; nor did we see differential activation between any of the conditions that alternated between two movements (e.g., lip and tongue movements) and those that repeated the same movement.
We suggest that, during speech production, activation in these classic language areas is related to covertly generated auditory associations that are evoked automatically, and in synchrony with, highly familiar mouth movements that were previously intimately associated with sound production, and thus with auditory feedback. In contrast, lip pursing, tongue movements, and finger movements are less practiced actions that are not intimately associated with speech sounds, although they may have acoustic associations.
The location and function of these activations are discussed below, in the context of generative models of perception and active inference (Friston; Friston et al.). The activation in the dorsal pars opercularis extended anteriorly into the left inferior frontal sulcus (see Figure 2). It does not, therefore, correspond to the ventral premotor site of the speech sound maps proposed in the model by Guenther et al. It is also anterior to the more posterior premotor areas that respond during the observation of hand actions (Caspers et al.).
Nevertheless, it does correspond to the area that is activated during both inner and overt speech tasks, for example, silent phonological decisions on written words (Poldrack et al.).
Moreover, it is not differentially activated by articulating words silently (as in the current study) or saying them aloud (see Price et al.). Therefore, the activation is more likely to reflect a fundamental property of speech production than atypical task-specific processing. Given the minimal demands on conceptual, lexical, and auditory processing in the current study, we suggest that increased activation in the left dorsal pars opercularis for silently articulating words relative to non-speech mouth movements is related to higher-level representations of learnt words that predict the auditory consequences of well-learnt speech articulations.
Confirmation of this hypothesis requires a functional connectivity study with high temporal resolution to determine how activation in the left dorsal pars opercularis interacts with that in the superior temporal gyrus and sulcus.
The left pSTS activation that we observed during the silent articulation of speech is associated with the phonological processing of speech sounds (Scott et al.). The same STS area is also activated by written words in the absence of auditory inputs (Booth et al.). In addition, Leech et al. used a video game to train participants to associate novel, acoustically complex, artificial non-linguistic sounds with visually presented aliens.
After training, viewing the aliens alone, with no accompanying sound, activated the left pSTS, with activation in this area proportional to how well the auditory categories representing each alien had been learnt. In line with the account offered by Leech et al., the activation that we observe in the left pSTS may therefore reflect auditory associations of the articulated words. This might either be a consequence of auditory predictions from the left dorsal pars opercularis, or the left pSTS may, in turn, play an active role in generating the predicted acoustic input during articulation (see the generative model in Figure 1).
As acknowledged above, future functional connectivity studies using data with high temporal resolution will be required to distinguish these alternatives. We did not find speech-selective activation in the lower bank of the Sylvian fissure, which has been referred to as the planum temporale (PT), left supratemporal plane (SPT), or Sylvian parietal-temporal junction (Spt).
The Sylvian fissure is the sulcus above the superior temporal gyrus, but our speech-selective activation was in the pSTS, which is the sulcus below the superior temporal gyrus. In other words, as shown previously (Binder et al.), speech-selective activation lies in the superior temporal sulcus rather than in the Sylvian fissure. Traditionally, PT has been considered to be an auditory association area that is important for speech but is not more activated by speech than by tone stimuli (Binder et al.).
Alternatively, it might be the case that finger tapping and non-speech mouth movements have low-level acoustic associations that are predicted during movements that have previously been associated with such sounds. How do our results fit with the models illustrated in Figure 1? As emphasized above, a full answer to this question requires techniques with higher temporal resolution that can characterize how all the speech production areas interact and influence one another during articulation.
Nevertheless, our data do allow us to test the anatomical hypotheses from the different models. Specifically, the Tian and Poeppel model places the forward model of auditory processing in the sensory cortex, and the Guenther et al. model places the speech sound maps in the ventral premotor cortex.
In contrast, the effects that we observed for speech processing in the left dorsal pars opercularis and pSTS are in higher-level association areas, not in sensory areas or the ventral premotor cortex.
The Spt activation that we observed for speech, non-speech mouth, and finger-tapping movements might plausibly correspond to the model proposed by Hickok et al. However, the Hickok et al. model predicts responses in this area that are selective for vocal-tract actions, which is difficult to reconcile with the common activation we observed across all movement conditions, including finger tapping. Thus, the anatomical predictions of the previous models do not explain our data.
We therefore propose a new anatomical model. Future studies are now required to investigate the validity of this proposal and to test how higher-level systems predict inputs to lower levels, and how prediction error is used to optimize future predictions (Friston; Friston et al.). We speculate that, during overt speech production, top-down predictions from higher-level areas optimize auditory processing of the heard response by minimizing the prediction error (i.e., the mismatch between the predicted and the actual auditory input). In parallel, the prediction error is fed back to the higher-level regions and used to optimize future motor commands and auditory predictions.
In addition, we propose that the left dorsal pars opercularis and pSTS may be involved in generating and maintaining a forward generative model of expected speech, which can be used as a template for auditory prediction. Mismatches between the auditory predictions and the auditory feedback can then be fed back to the articulators to improve the precision of subsequent output.
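To make the proposed predict-and-correct loop concrete, here is a deliberately simplified numerical caricature. The scalar "auditory" signal, the learning rate, and the update rule are all assumptions made for illustration; this is not a claim about the neural implementation or about the formal active-inference machinery of Friston et al.

```python
# Toy predict-and-correct loop for speech output. A higher-level area
# issues an auditory prediction for each articulation; the mismatch with
# auditory feedback (the prediction error) is fed back and used to
# optimize the next prediction, so the error shrinks across repetitions.
target = 1.0         # auditory consequence of the intended word (assumed scalar)
prediction = 0.2     # initial, poorly calibrated auditory prediction
learning_rate = 0.5  # assumed step size for the update

for rep in range(8):
    feedback = target                      # heard auditory feedback (noise-free here)
    error = feedback - prediction          # prediction error fed back to higher levels
    prediction += learning_rate * error    # optimize the next auditory prediction
    print(f"repetition {rep}: prediction={prediction:.3f}, error={error:.3f}")
```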
These audio-motor interactions are particularly important during speech acquisition in childhood, in those with hearing loss, and when adults learn a new language. They are also needed to modify the intensity of speech output in noisy environments and when auditory feedback is altered.