Monday, March 31, 2008

Encephalon #42 is Up at Of Two Minds—“It’s hot,” says Paris

Your favorite brain blogging carnival is up at Of Two Minds. Paris Hilton makes a guest appearance.

Imaging Gene Expression in the Brain

An integral aspect of finding better treatments for some of our most intimidating neurological afflictions, like multiple sclerosis (MS) and Alzheimer’s disease (AD), is improving our ability to detect them early. Our capacity to do so has improved drastically with the advent of neuroimaging techniques. But even with neuroimaging, the early stages of these diseases may not be discernible if they have not yet caused considerable damage to the brain. What if, instead, we could image the expression of genes activated to repair damage to the brain, no matter how slight? That might be a way to start aggressive treatment of a disease without having to wait for the damage it wreaks to become evident on a brain scan, and without having to do an invasive biopsy.

That is just what researchers at Harvard have done recently. Their goal was to be able to detect gliosis in a living brain after damage to the blood-brain barrier (BBB). Gliosis is the accumulation of supporting neural cells called glial cells in areas of the brain where there has been an injury. A particular type of glial cell, called an astrocyte, is involved in gliosis. So, when an area of the brain is injured (in this case, the BBB), one can observe a proliferation of astrocytes in that area. Astrocytes contain a protein, glial fibrillary acidic protein (GFAP), which is integral to their supportive role (more on that in a second).

Since the researchers at Harvard were investigating the brain’s reaction to trauma, they induced BBB damage in mice through a number of different methods. The expectation was that the areas of damage would fill with astrocytes working to repair the injury. But astrocytes aren’t detectable with standard neuroimaging techniques.

So, the group developed a magnetic resonance (MR) probe attached to a short DNA sequence complementary to the mRNA of GFAP. Their reasoning was that if the DNA sequence runs into mRNA encoding GFAP, the two will anneal. Remember, GFAP is a protein found in astrocytes. Thus, the probe will accumulate in areas of astrocytic activity, which will be detectable with MRI and indicate spots where neurological damage has occurred.
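The probe’s targeting logic is plain Watson-Crick base pairing: an antisense DNA strand anneals wherever it exactly complements its target mRNA. Here is a minimal sketch of that check in Python; the sequence below is a made-up stand-in, not real GFAP mRNA:

```python
# Sketch of the annealing logic behind an antisense probe. The mRNA
# fragment is invented for illustration; it is not actual GFAP sequence.

def antisense_dna(mrna: str) -> str:
    """Return the DNA strand complementary to an mRNA, read 5'->3'."""
    rna_to_dna = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(rna_to_dna[base] for base in reversed(mrna))

def anneals(dna_probe: str, mrna: str) -> bool:
    """True if the probe is the exact antisense of the mRNA segment."""
    return dna_probe == antisense_dna(mrna)

mrna_fragment = "AUGGAGCGG"           # hypothetical stand-in sequence
probe = antisense_dna(mrna_fragment)  # the oligo such a probe would carry
```

A real probe must also tolerate partial mismatches and avoid off-target binding, which this toy exact-match check ignores.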

That’s exactly what happened. The scientists administered the probe—get this—through an eye drop. That’s about as noninvasive as it gets. The probe accumulated at the places where the BBB damage had been induced. Therefore, the probe seemed to indicate areas of acute neurological damage, before it could be measured otherwise without extremely invasive techniques.

The applications of this could be profound. They could include an improved ability to detect brain damage associated with AD, MS, stroke, and glioma (tumor), among other neurological problems. This could mean earlier detection, and better treatment, which in some of these disorders could mean a world of difference in quality of life after their onset.

Thursday, March 27, 2008

Beta-Blockers May Act on the Brain

When beta-blockers were discovered, they held such promise that the National Heart, Lung, and Blood Institute (NHLBI) halted trials of one such drug nine months early. They felt it was unethical to continue administering placebos to the control group in the study, based on the significantly improved survival rates they were seeing among those who were taking the beta-blockers. By the late 1990s beta-blockers had become a standard facet of therapy for patients suffering from congestive heart failure. They can be a very effective form of treatment, correlating with significantly reduced mortality rates in congestive heart failure patients.

Beta-blockers are so called because they block the action of epinephrine and norepinephrine on beta-adrenergic (BA) receptors. Epinephrine and norepinephrine are better known to some as adrenaline and noradrenaline. They are hormones responsible for modulating the “fight or flight” response of an organism. This response occurs when an organism is faced with a stressful situation, and results in an increase in both heart rate and force of myocardial contraction, along with the constriction of blood vessels in many parts of the body. Understandably, this puts a strain on the heart, a strain which beta-blockers seem to mitigate by blocking adrenaline from binding to BA receptors and reducing the intensity of the fight or flight response.

BA receptors are located throughout the body, however, and it has never been completely understood exactly where beta-blockers work to improve heart health. It was assumed (although not proven) that most of their action was on receptors in the heart. It was thought that the antagonistic effects of beta-blockers on these receptors inhibited the action of epinephrine, lowering heart rate, dilating blood vessels, and thus having an antihypertensive effect.

But a group of researchers at University College London recently demonstrated that beta-blockers may also have an influence on areas of the brain that regulate heart function. They studied the brains of rats that underwent infarction-induced heart failure and found that the beta-blocker metoprolol acted directly on the brain to slow the progression of the heart failure. The site of the action was an area the group had previously found to be associated with blood pressure and heart rate.

This doesn’t mean beta-blockers don’t work on the heart as well. It does, however, provide an impetus for further research into their mechanism. For, if they do have an effect on the central nervous system, understanding that influence could open the door for more comprehensive, and perhaps more specific, therapies to treat congestive heart failure, and heart disease in general.

Wednesday, March 26, 2008

Every Sweet Hath its Sour

People, along with many other animals, have a preference for sweet foods. This is putting it mildly, as our love of sugary sustenance has immensely influenced our culture, economy, and health. Even our vocabulary has been affected by an affinity for sugar, as the word “sweet” itself has a positive connotation, in English and other languages (e.g. la dolce vita).

But our predilection for sweetness has come at a cost, as evidenced by the more than 300 million people worldwide who are obese. Obesity is not, as is sometimes implied, a condition that suddenly appeared within the last fifty years. The rate of obesity is, however, growing at an alarming pace, and has been for several decades. This has led to a similar increase in obesity-related illnesses, like type 2 diabetes, hypertension, and heart disease.

Why would evolution have left us with a preference for sweet foods, when it is these very same foods that make us fat and unhealthy? Part of the answer may lie in the fact that the environment in which our evolutionary ancestors lived didn’t have 64 oz. fountain sodas, frosting-covered donuts, and candy aisles. In their hunter-gatherer societies, food was much more scarce, so while we spend our days counting calories, they spent theirs searching for them. Sweet-tasting foods are usually an indication of high caloric content. They are also generally not poisonous, making them doubly valuable to a primitive food gatherer. So, a predilection for them would have been adaptive at one time, and may be an evolved mechanism. This hypothesis is reinforced by a widespread partiality for sweetness throughout much of nature.

A group of researchers from Duke University Medical Center has conducted a study, however, that calls into question the idea that a penchant for high-sugar foods is based on the ability to taste their sweetness. The group genetically engineered a line of mice that lack the ability to taste sweetness. They then exposed the mice to sugar water and water containing sucralose, a noncaloric sweetener. The “sweet-blind” mice demonstrated a preference for the actual sugar water. The preference appeared to be based not on sweetness, but calorie content.

This still fits in with the idea that the proclivity for high-calorie foods is an adaptive trait, but without the ability to taste sweetness as an indicator of the water’s calorie content, how did the mice know which water to drink? The researchers examined the brains of the mice and found that their reward system was activated by the caloric level of the water—independent of taste. The high-calorie sugar water raised dopamine levels and stimulated neurons in the nucleus accumbens, an area of the brain thought to be integral in reinforcing the value of rewarding experiences.

This activation of the reward system seems to be separate from the hedonic aspect of pleasure. The affinity of the sweet-blind mice for high-calorie water may represent the involvement of metabolic awareness in the reward system. This implies the brain’s understanding of "reward" operates at a much deeper biological level than the one we normally associate with the word. It is also further indication of a separation between the hedonic and reinforcing aspects of the reward system (see the previous post on dopamine).

This finding could have real-world implications in helping to battle the obesity epidemic. If high-calorie foods are rewarding in and of themselves, it may help to explain our nation’s addiction to items that contain calorie-dense additives, like high-fructose corn syrup. Reducing the prevalence of such additives could decrease the rewarding value of the food they are in, and thus reduce consumption.

It’s imperative that something be done soon to curb the rising rates of obesity. Our propensity toward heftiness may be partly due to a once-adaptive evolutionary trait that has become maladaptive in our modern environment. Our difficulty in making the adjustment illustrates the power of genetics and evolution. But it should also remind us that evolution might be having a powerful effect right now. If so, it is being aided by fast food, mini-marts, billions of dollars of advertising, and a society that may be too complacent to pay attention to the ramifications.

Monday, March 24, 2008

The Many Faces of Dopamine

The history of dopamine is full of experimental surprises and paradigmatic shifts. For many years after its discovery, it was thought dopamine’s only role in the brain was in the synthesis of norepinephrine, which is made from dopamine with the help of the enzyme dopamine β-hydroxylase. Around the middle of the twentieth century, however, it began to be recognized as having important physiological effects in its own right, and by the mid-1960s it was found that low levels of it were correlated with Parkinson’s disease. Continued investigation of dopamine led to the realization that it is a neurotransmitter, with its own receptors and pathways, and that its influence on brain activity is profound.

When a link between dopamine transmission and rewarding experiences (e.g. eating, sex, drugs) was established, many understandably hypothesized that dopamine was responsible for our subjective experience of pleasure. This is perhaps when dopamine reached the height of its stardom, as it began to garner media attention as the “pleasure transmitter”. Its role was readily embraced by a populace anxious to discover what exactly was behind their persistent urge to sneak a piece of chocolate, pursue a one-night stand, or indulge in recreational drug use.

But science eventually caught up with the hype when researchers began to notice that dopamine didn’t correlate exactly with pleasure. For one thing, hedonic reactions could be sustained even after the administration of a dopamine antagonist (which inhibits dopamine’s effects). Additionally, dopamine transmission seemed to occur around the time a reward was being enjoyed, but not necessarily during the enjoyment itself. For example, dopaminergic neurons might be activated as a person reaches for a piece of chocolate, but not while it is in their mouth, indicating more of an anticipatory role than a hedonic one.

It was eventually suggested that dopamine’s role is not in experiencing pleasure per se, but in making the association between an external stimulus and a rewarding experience. In this way, dopamine could act as a reinforcer, prompting associative learning that allows an organism to remember stimuli that proved to be rewarding in the past, and attribute importance, or salience, to them. Dopamine may be involved in the “wanting” aspect of pleasurable things, but not necessarily the “liking” of them.

So what causes pleasure, then? Leknes and Tracey, in the April issue of Nature Reviews Neuroscience, summarize a current view of the neurobiological substrates of pleasure and pain. According to this perspective, dopamine is responsible for the motivation required to seek out a reward, while endogenous opioid systems account for our subjective experience of pleasure. Dopamine is needed for “wanting”, while opioids are necessary for “liking”.

These two substances interact in a comprehensive model of the experiences of reward and pain known as the Motivation-Decision Model. According to this paradigm, actions are accompanied by an unconscious decision-making process that is based primarily on 1) the homeostatic needs of an organism, 2) threats in the environment, and 3) the availability of rewards. In the Motivation-Decision Model, the importance of survival is weighed against possible pain, and a decision is made on whether to pursue a stimulus. Opioids are involved not only in the experience of the reward, but also in inhibiting pain to allow the achievement of a reward if it is considered valuable enough.
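As a rough illustration, the model’s trade-off can be caricatured in a few lines of Python. The function and all the numbers below are invented for the sketch; the Motivation-Decision Model is a qualitative framework, not a published equation:

```python
# Toy reading of the Motivation-Decision Model: pursue a stimulus when its
# expected reward outweighs expected pain, with opioid signaling discounting
# the pain term. All values and weights here are illustrative only.

def pursue(reward_value: float, expected_pain: float,
           opioid_analgesia: float = 0.0) -> bool:
    """Decide whether to pursue a stimulus under the reward/pain trade-off."""
    effective_pain = expected_pain * (1.0 - opioid_analgesia)
    return reward_value > effective_pain

# A painful but valuable reward is avoided without analgesia...
avoided = pursue(reward_value=5.0, expected_pain=8.0)
# ...but pursued when opioid-mediated pain inhibition halves the cost.
pursued = pursue(reward_value=5.0, expected_pain=8.0, opioid_analgesia=0.5)
```

The point the sketch captures is that opioids appear twice in the model: in experiencing the reward and in suppressing pain enough to obtain it.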

Thus, dopamine at the same time draws attention to a reward and cues opioid release to allow for the procurement of it. Attainment of the reward also results in opioid activity as a substrate of pleasure. It seems these two neurotransmitter systems have a complex interaction that may underlie our experience of pleasure, our motivation to obtain it, and also the ability to withstand some pain in order to achieve it.

Dopamine may even have a role in helping us to remember aversive stimuli, although this is still up for debate. Striatal dopamine neurons have been found to be inhibited below baseline levels during exposure to aversive stimuli. It is unclear, however, whether dopamine's role in pain processing involves perception of pain or pain-avoidance learning.

Opioid and dopaminergic systems are closely related anatomically, but exactly which regions mediate the pain and pleasure responses is not yet completely understood. The overall system seems to include the nucleus accumbens (NAc), pallidum, and amygdala. All three areas have been shown to be active in reward and/or pain processing.

Although the history of dopamine is not yet complete, its role has grown substantially from a humble precursor to norepinephrine to one of the major neurotransmitters involved in the experiences of pain and pleasure, experiences that we find inseparable from our understanding of life. A deeper comprehension of this system will have great implications for areas like addiction. It also may prove beneficial in treating disorders like depression or chronic pain, where an ongoing affliction results in a diminished ability to experience pleasure. And, it will probably reveal dopamine’s part in the activity of the brain to be an extremely diverse one, a far cry from a neurotransmitter precursor, or even the “pleasure transmitter”, of the brain.

Leknes, S., Tracey, I. (2008). A common neurobiology for pain and pleasure. Nature Reviews Neuroscience, 9(4), 314-320. DOI: 10.1038/nrn2333

Thursday, March 20, 2008

fMRI and Counterterrorism

Bioethicists have for years been debating the ethics of using neuroimaging techniques outside of a clinical setting, such as in courtroom situations or interrogations. These discussions are inevitable, as an fMRI visualization of brain activity seems, at least potentially, to be a much more precise indicator of hidden thoughts and emotions than the standard polygraph. Jonathan Marks, an associate professor of bioethics at Penn State, recently drew attention to this debate by asserting that fMRI is being used by the United States government in the interrogation of terrorist suspects.

fMRI is a neuroimaging technique that was developed in the 1990s and has since become the preferred method of imaging brain activity. It involves placing the subject’s head in a donut-shaped magnetic device that can detect subtle changes in local magnetic fields. When an area of the brain is in use, blood flow is directed to that region. Hemoglobin, an oxygen-transporting protein in red blood cells, exhibits different magnetic properties when oxygenated than when deoxygenated. The scanner detects the flow of oxygenated blood and, from the resulting magnetic signal, produces an image of which areas of the brain are being used. fMRI has been used to help us gain a better understanding of which brain regions are active during a number of different states, such as happiness, sadness, fear, and anger.

Marks, however, points out that fMRI technology is not reliable enough to be used as a lie detector, and warns our government using it could lead to further abuse of prisoners and human rights violations. He claims that the U.S. is using fMRI not only as a lie detector, but also to single out terror suspects for aggressive interrogation if it indicates they recognize certain names or stimuli (e.g. the name of a terrorist sect leader). Marks bases this allegation on previously unpublished statements made by a U.S. interrogator.

While fMRI may have the potential to one day be an accurate lie detector, most experts in the imaging field would agree that right now it isn’t reliable enough to be used outside of a clinical or laboratory setting. There are a number of reasons for this. One is that, although fMRI images currently have the best resolution neuroimaging can offer, they don’t provide a complete view of the intricacies of brain activity. They are made up of pieces known as “volume elements,” or voxels. The smallest voxels an fMRI can resolve are about the size of a grain of rice, and would encompass the activity of tens of thousands of neurons. While this is helpful in determining the stimulation of brain regions, it is not precise enough to break that activity down into the interaction of very small groups of neurons. Thus, it is far from providing a complete image of neuronal activity, and certainly far enough to make it an ethically questionable basis for condemning or exculpating those whose brains are scanned by it.
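The “tens of thousands of neurons” figure follows from simple volume arithmetic. The density used below is an assumed ballpark (cortical estimates run on the order of tens of thousands of neurons per cubic millimeter), not a number from the original study:

```python
# Back-of-the-envelope estimate of how many neurons fit in one fMRI voxel.
# The neuron density is an assumed round number for illustration.

def neurons_per_voxel(voxel_edge_mm: float, density_per_mm3: float) -> float:
    """Neurons contained in a cubic voxel with the given edge length."""
    return voxel_edge_mm ** 3 * density_per_mm3

# A ~1 mm voxel at an assumed ~50,000 neurons per cubic millimeter:
estimate = neurons_per_voxel(1.0, 50_000)  # tens of thousands of neurons
```

Even doubling the voxel edge multiplies the cell count eightfold, which is why voxel-level activity says little about small neuronal circuits.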

More important, especially in the use of fMRI with terror suspects, is the fact that fMRI data are drawn from averages of groups of people in laboratory settings. Individual differences in brain activity could be significant, and could be even more drastic across cultures. Additionally, a subject’s baseline fMRI measurements could change over time or by setting. Marks suggests it could take weeks of testing to determine any one person’s baseline neural activity. Also, a subject who is undergoing the stress of being held by a hostile party may exhibit brain activity that is already much different from that obtained from a participant in a laboratory experiment.

This is not to say fMRI might not be able to one day be used as a lie detector. A group of researchers from the University of Pennsylvania conducted an fMRI study in 2002 to determine which areas of the brain were active while participants gave deceptive responses to questions. The results indicated that the anterior cingulate cortex and superior frontal gyrus were specifically associated with lying, causing the researchers to conclude there are specific neural correlates to deception that are recognizable by fMRI. Even the lead author of that study, however, cautions that fMRI technology is not at the point where it could be used to identify a liar with certainty. He points out that the slightest changes in the wording of a question could produce different neural responses.

Marks’ fear is that fMRI will be used to screen suspected terrorists, resulting in their subjection to aggressive interrogation techniques. These types of techniques can cause a suspect to admit to crimes he/she had no part in, just to end the torture. Thus, Marks asserts, the fMRI won’t aid in finding out the truth; it will just allow interrogators to feel more justified in using whatever tactic they feel necessary to extract the information they are looking for. He cites President Bush’s veto this month of legislation that would have banned aggressive interrogation by the CIA as an indication that this misuse of fMRI could go on unchecked.

fMRI is an amazing technology, and its value will probably continue to be reinforced over the next several decades (or until it is displaced by an even more precise method). But its current limitations in determining whether someone is lying or not are clear. It’s depressing to think that, while neuroimaging has led to better medical care and a significantly improved understanding of the brain, it may also have led to the torture of individuals, some of whom may have been erroneously singled out due to a misunderstanding (or disregard) of the limits of the technology.

Monday, March 17, 2008

Encephalon #41 is Ready for Your Viewing Pleasure

Check out Encephalon #41 at Pure Pedantry. The blog carnival includes a number of great neuroscience posts from a similar number of different bloggers. Have a look...

Sunday, March 16, 2008

Sea Cucumbers on the Brain

Advancements in biomedical science come from the study of all different sorts of organisms, from humans to roundworms, fruit flies, and yes, even sea cucumbers. The authors of a recent study in Science suggest research involving the sea cucumber has potential for improving treatments for Parkinson’s disease, stroke, and spinal cord injuries. They speculate it may conceivably even be used in the development of flexible body armor or bullet-proof vests.

You might be wondering what it is about the sea cucumber that would make it an interesting organism to study, and how such an ostensibly mundane creature could possibly lead to intriguing breakthroughs in science. The answer lies in the skin of this echinoderm, which can transform from soft and pliable to rigid and inflexible in a matter of seconds.

The sea cucumber gets its name from its appearance, which consists of an elongated, stocky body that resembles the precursor of the pickle. It is a scavenger, sliding along the sea floor in tropical waters and living off plankton and debris. Normally, it is pliable, using its flexibility to slither around rocks or position itself along lines of current to suck up potential food particles that may float by. If touched, however, its skin goes from supple to stiff. This defensive mechanism provides it with a temporary sort of body armor to protect it from predators. The transformation is enabled by a complex matrix of collagen fibrils and fibrillin microfibrils whose interaction can be changed by the release of macromolecules from effector cells. The effector cells are activated by touch.

A group of researchers at Case Western Reserve University in Cleveland investigated the mechanism underlying this transformation in the hopes of creating a nanocomposite material analogous to the sea cucumber dermis. Nanocomposites are products of nanotechnology that involve the insertion of nanoparticles into macroscopic materials. This can alter the function or diversity of the macroscopic material, for example by making it more conductive or, in this case, adjusting its rigidity.

The group used an elastic polymer and inserted into it a rigid cellulose nanofiber network, made up of cellulose whiskers taken from other sea creatures known as tunicates. The authors note that, once the mechanism is perfected, cellulose from renewable resources like wood and cotton could also be used. The interaction between the cellulose fibers is made through hydrogen bonds, which keep the material rigid when it is dry. When soaked in water, however, the cellulose fibers are separated as water preferentially forms hydrogen bonds with them. This causes the substance to become malleable.

Neat, right? But how does it apply to the brain? Well, currently there is a lot of interest in using intracortical microelectrode implants to measure and influence the brain’s electrical activity. This brain pacemaker method has shown a great deal of promise in treating Parkinson's disease, pain, stroke, and spinal cord injuries, among other disorders. Unfortunately, with current procedures the electrode signals tend to diminish after a few months, giving the treatment questionable long-term usefulness. It is hypothesized that the signal decays because of the rigidity of the electrode, which damages surrounding cortical tissue and leads to the electrode’s corrosion when glial cells respond to the threat.

Thus, the authors of this study suggest the use of an electrode that resembles the nanocomposite material they designed, which could be made rigid for penetration of the outer layers of the brain, then more flexible when implanted in cortical tissue to avoid doing harm to its environment. The aqueous makeup of the cortex could be suitable to displace the hydrogen bonds made between cellulose fibers and cause the electrode to become pliable.

This is a valuable find for the promising area of deep-brain stimulation. The authors suggest its potential may extend beyond such biomedical applications if the mechanism can be designed to react to nonchemical stimuli, like electrical or optical triggers. This is where technology like body armor could be involved—flexible at one moment yet rigid and protective at the next. That type of application involves all sorts of other dimensions, however, and is a long way off. Regardless, this is quite a bit of potential to come out of an organism many people have only heard of due to its seemingly incongruous comparison to its vegetable counterpart.


Capadona, J.R., Shanmuganathan, K., Tyler, D.J., Rowan, S.J., Weder, C. (2008). Stimuli-Responsive Polymer Nanocomposites Inspired by the Sea Cucumber Dermis. Science, 319(5868), 1370-1374. DOI: 10.1126/science.1153307

Saturday, March 15, 2008

Unraveling the Mystery of Mania

Bipolar disorder (BPD) is one of the most prevalent psychiatric disorders in the world, affecting close to six million people in the U.S. alone. It is characterized by severe shifts of mood between stages of depression and mania. The depression involves traditional symptoms of a depressive episode, such as hopelessness, loss of interest in daily activities, and disruption of sleeping patterns. The manic episodes are what you might consider the exact opposite of depression, manifesting as drastically increased energy, euphoria, lack of inhibition, and delusions of grandeur. Many psychiatrists believe BPD is underdiagnosed, but it is also a term that is overused colloquially (much like depression), at least in my experience. Often someone will refer to an acquaintance as bipolar, meaning he or she has frequent mood swings. BPD is marked by severe changes in mood that last for several days at a time, quite unlike the sudden shift your significant other might experience when they haven’t had their coffee yet.

BPD involves a spectrum of symptoms, and sorting out the mechanisms behind its occurrence has been expectedly complicated. No single gene has been identified as being responsible for BPD, and its complexity has prevented scientists from reliably recreating the disorder in animals for study. Recently, however, a group of scientists from the National Institutes of Health (NIH) has identified a gene that seems to be related to manic states in mice.

The gene, GRIK2 (glutamate receptor, ionotropic, kainate 2), encodes a glutamate receptor, specifically glutamate receptor 6 (GluR6). Glutamate is the predominant excitatory neurotransmitter in the central nervous system. GRIK2 has attracted a lot of attention in BPD research since it was found to be associated with suicidal ideation brought on by antidepressant treatment. People with BPD are more prone to treatment-induced suicidal ideation, leading the group at NIH to investigate this gene in relation to BPD.

The group created knockout (KO) mice that lacked the GRIK2 gene and compared their behavior with control mice. KO organisms are those that have been genetically engineered to carry an inoperable version of a gene. This allows researchers to juxtapose their behavior with that of other animals that have the gene, and thus isolate the effect that the gene has.

The KO mice exhibited behavior that was consistent with mania. This was measured with a battery of tests, which showed the mice to be more aggressive, more active, and less inhibited. They were also overly sensitive to amphetamine administration, and their hyperactivity was mitigated by the administration of lithium, a mood stabilizer and common treatment for BPD.

This research is important, as scientists may now have an animal model for the manic episodes of BPD. The group does point out, however, that it is unknown whether GRIK2 is involved in the cyclic nature of BPD, or if it causes the euphoric and mind-altering aspects of a manic episode. While there is still much to be understood about the disorder, this may be an integral step toward elucidating its perplexing mechanism.

Thursday, March 13, 2008

Daisy, Daisy, Give Me Your Answer Do

Even the most successful attempts at artificial intelligence (AI) always seem to lack certain essential qualities of a living brain. It is a formidable task to create a robotic or computerized simulation of a human that seems to display original desires or beliefs, or one that truly understands the desires and beliefs of others in the way people can. This latter ability, often referred to as “theory of mind”, is considered an integral aspect of being human, and the extent to which it has developed in us may be one thing that sets us apart from other animals. Reproducing theory of mind in AI is difficult, but a semblance of it has been demonstrated before with physical robots (click here for an example). Until now, however, it has never been recreated in computer generated characters.

A group of researchers at Rensselaer Polytechnic Institute (RPI) have developed a character in the popular computer game Second Life who uses reasoning to determine what another character in the game is thinking. The character was created with a logic-based programming architecture RPI calls RASCALS (Rensselaer Advanced Synthetic Character Architecture for “Living” Systems). The program involves several levels of cognition: simple systems for low- and mid-level cognition (like perception and movement), and advanced logical systems for abstract thought. The group believes they can eventually use RASCALS to create characters in Second Life that possess all the qualities of a real person, such as the capacity to lie, believe, remember, or be manipulative.

Second Life is a life-simulating game, similar in some ways to the popular game The Sims. Unlike The Sims, however, Second Life involves a virtual universe (metaverse) where people can interact with one another in real-time through avatars they create for use in the game.

The character created by the group at RPI, Edd, appears to have reasoning abilities equivalent to those of a four-year-old child. To test these abilities, Edd was placed in a situation with two other characters (we’ll call them John and Mike). Mike places a gun in briefcase A in full sight of John and Edd. He then asks John to leave the room. Once John is gone, Mike moves the gun to case B, then calls John back. Mike asks Edd which case John will look in for the gun.

Does this sound familiar? It's an actual psychological test developed in the 1980s, originally known as the Sally-Anne test. The Sally-Anne test plays out the same scenario described above, only with dolls and a marble or ball (since its inception the test has been done with human actors as well). A child watches the Anne doll take a marble from Sally’s basket and put it in her box while Sally is not in the room. If, after watching the interaction, the child can predict that when Sally returns she will look in her basket for the marble, it demonstrates that he or she has begun to form a theory of mind. The child is able to understand that other people have thoughts and beliefs different from his or her own. He or she realizes that when Sally re-enters the room she is unaware the marble has changed positions, so she will look in the spot where the marble originally was. The ability to make these types of attributions of belief usually develops at around age three to four in children.

Edd, the character from Second Life, is able to do the same. When Mike asks him which case John will look in for the gun, he will say case A, the case John saw the gun placed in (for the demonstration click here). And Edd is not programmed specifically to make this choice. Instead he “learns” from past mistakes that, if John cannot see the gun being moved, he will not know it is in the other briefcase.
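The belief bookkeeping behind this kind of false-belief reasoning can be sketched in a few lines. This is a toy illustration, not RPI’s actual RASCALS code: each agent’s belief about the gun’s location is updated only for events that agent witnesses, so John’s belief goes stale the moment he leaves the room.

```python
# Toy sketch of false-belief tracking (not RPI's RASCALS architecture):
# the world has one true state, and each agent keeps a private belief
# that is updated only when the agent witnesses an event.

def run_scenario():
    world = {"gun": None}                 # the actual state of affairs
    beliefs = {"John": {}, "Edd": {}}     # each agent's private model

    def event(new_location, witnesses):
        """Change the world; only witnesses update their beliefs."""
        world["gun"] = new_location
        for agent in witnesses:
            beliefs[agent]["gun"] = new_location

    # Mike puts the gun in case A while both John and Edd watch.
    event("case_A", witnesses=["John", "Edd"])
    # John leaves; Mike moves the gun to case B. Only Edd sees it.
    event("case_B", witnesses=["Edd"])

    # Where will John look? Answer from John's (now outdated) belief,
    # not from the true state of the world.
    return beliefs["John"]["gun"], beliefs["Edd"]["gun"]

john_answer, edd_knowledge = run_scenario()
```

Passing the test means answering from John’s stale belief (`case_A`) even though the reasoner itself knows the gun is really in case B, which is exactly the distinction a child without theory of mind fails to make.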

The research group at RPI sees Edd as a first step in the creation of avatars in Second Life that can interact with humans in a manner unlike that of any simulated characters before: able to understand and predict the actions of others, and to act virtually autonomously. The group sees potential benefits of this technology in education and defense, as well as entertainment. IBM, a supporter of the research, has visions of creating holographic characters for games like Second Life, which could interact with humans directly.

This is all pretty amazing stuff, but for some reason HAL singing “Daisy Bell” keeps eerily replaying in my head as I write it.

Saturday, March 8, 2008

This Study Sponsored by Krispy Kreme

The brain’s motivational processes always provide an interesting area for research, as they underlie all of our “voluntary” behavior. Much progress has been made in understanding motivational areas of the brain since the advent of sophisticated neuroimaging techniques. Recently, a group of researchers using fMRI attempted to identify specific activity in the brain that takes place when a person shifts their attention to a relevant object in their environment (the first step in developing motivation to obtain the object).

The group focused on hunger, testing subjects on two separate occasions: once after eating as many Krispy Kreme donuts as they could (eight was the record), and once after fasting for eight hours. In each experimental condition, the subjects were then shown pictures, some of tools and others of donuts, while being scanned with fMRI.

As you might expect, the subjects who had just gorged themselves on donuts didn’t show increased activity in response to the donut pictures. But in those who fasted, images of donuts caused rapid activity throughout the limbic lobe, an area of the brain thought to be involved in identifying salient objects in one’s environment. Immediately after the donut was recognized, attentional mechanisms in the brain, involving the posterior parietal cortex, were also stimulated, demonstrating that the subject’s attention had been turned to the relevant object. These mechanisms seemed to work in conjunction with those used to gauge the importance of the object. Thus, the authors of the study suggest the posterior parietal cortex and limbic lobe play an interactive role in identifying salient stimuli and immediately focusing one's attention on them.

This experiment provides further evidence for the concept that our brains are inherently organized to recognize aspects of our environment that are beneficial to us. Many believe the significance of certain types of stimuli is evolutionarily ingrained, meaning that our brains evolved to place importance on those that promote survival, such as food, water, or sex (which leads to dissemination of genetic information). This study goes a bit further to elucidate the mechanisms involved in the distribution of attention among salient and non-salient stimuli. If a hungry brain sees food, it will activate those attentional mechanisms to focus itself on that food, providing motivation to obtain it.

I suppose the greater task in our corpulent society right now, however, is learning how to get people to avoid those Krispy Kreme donuts rather than understanding exactly how our brains focus attention on them.

Friday, March 7, 2008

Genes and Happiness, or Free Will Revisited

As I begin writing this post I can’t help but be reminded of the one I wrote a few weeks ago about the troubles one runs into when trying to reconcile present-day understandings of neuroscience and genetics with the traditional concept of free will. A team of researchers from the University of Edinburgh and the Queensland Institute of Medical Research recently conducted a study to investigate how much our subjective sense of happiness is dependent upon our genetic makeup (and thus personality style). Is our ability to be happy solely up to us ("us" being defined as hypothetical beings with complete free will), or is it constrained by the type of person we are, which is determined to a large extent by our genes?

To find out, the researchers studied a sample of 973 pairs of twins (365 monozygotic, or identical, and 608 dizygotic, or fraternal). Twin studies are an experimental method used in behavioral genetics to isolate the influence of genes on personality. Since monozygotic twins share 100% of their genes, behavior that is based primarily on genetic makeup should appear in both members of a pair. The actual observations can be compared with the phenotype of dizygotic twins, who share only about 50% of their genetic information. Similarities between monozygotic twins that aren’t as significant in the dizygotic twins can be assumed to have a prominent genetic basis. In this model, environmental effects are also considered, but the composition of the sample allows the effects of genes and environment to be separated more easily.
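The arithmetic behind that separation is often summarized with Falconer’s classic formula: doubling the gap between the MZ and DZ twin correlations gives a rough estimate of heritability. Here is a minimal sketch using illustrative correlations, not the figures reported in the actual study:

```python
# Falconer's approximation for a twin study: MZ twins share ~100% of
# their genes and DZ twins ~50%, so twice the difference between the two
# correlations estimates heritability (h^2). The correlation values
# below are made up for illustration, not taken from Weiss et al.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Shared-environment component: MZ similarity genes can't explain."""
    return r_mz - falconer_h2(r_mz, r_dz)   # algebraically 2*r_dz - r_mz

h2 = falconer_h2(0.40, 0.22)          # hypothetical trait correlations
c2 = shared_environment(0.40, 0.22)   # remaining shared-environment share
```

With these toy numbers, heritability comes out to about 0.36 and the shared environment to about 0.04, illustrating why a bigger MZ-versus-DZ gap is read as a stronger genetic contribution.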

The researchers used a questionnaire called the Midlife Development Inventory (MIDI) to assess the personality of the participants. The scores were averaged across five dimensions that describe overall personality characteristics, known as the Five Factor Model (FFM). It consists of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. The FFM is a personality assessment framework that was developed over the course of the twentieth century. It has been refined numerous times, and has been shown to be a reasonably accurate instrument for making generalized appraisals of personality.

Certain traits measured by the FFM have also been correlated with one’s sense of subjective well-being, especially Neuroticism, Extraversion, and Conscientiousness. Exactly why is uncertain; it could be due to any number of factors involving how these traits affect one’s social interactions and lifestyle. For example, low Neuroticism may indicate emotional stability, and high Conscientiousness may denote self-restraint, both qualities often considered important in leading a contented life.

The researchers also conducted an interview to assess well-being, asking participants how satisfied they were with their life at present, how much control they felt they had over their lives, and how satisfied they were with life overall.

The group found that, as has been seen in the past, subjective well-being was correlated with the personality traits of the FFM. Specifically, it was negatively correlated with Neuroticism, and positively correlated with Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. In addition, the correlation between the FFM characteristics in monozygotic twins was significantly higher than in dizygotic twins, suggesting a genetic basis for the formation of these personality traits.

Subjective well-being was shared between the twins at a level correlated with that of their positive personality traits. What this suggests is the following: we are born with a particular genetic makeup that is deeply ingrained and difficult to change, regardless of experience. This makeup is translated into personality traits that can be broadly generalized into categories such as neurotic, extraverted, etc. Some of these traits end up being conducive to our happiness and well-being. The less neurotic an individual is, for example, the happier he or she tends to be. Since these attributes are genetically prescribed and predictive of our happiness, some would say the amount of happiness we are able to experience in life is limited to a great extent by our genetic makeup.

It is easy, however, to take this argument a bit too far. One headline today, for example, reads “Genes Hold the Key To How Happy We Are, Scientists Say”. This is not really what the authors of this study are claiming. They instead are suggesting our genes provide us with a starting point, a set-point, of emotional stability, which we end up moving from in one direction or another based on our experiences. While it is important for some of us to understand how the limitations of our genetic makeup might handicap us when it comes to the enjoyment of life, it’s also necessary to point out that the environment can drastically shape who we become, whatever our original constitution. People born with what might be considered an unfavorable personality profile according to the FFM often find innovative ways to improve their life, and their outlook on it. So, do genes alone hold the key to how happy we are? I don’t believe so. But they may provide us with a rough outline, albeit one that we are able to constantly revise throughout our lives.

Of course, I may just be in a good mood today. I don’t think the post I wrote about neuroscience and free will a few weeks ago was so optimistic. Those revisions we make in that outline may themselves be constrained by genetic limitations on the options we are able to imagine…and the argument can go on and on…


Weiss, A., Bates, T.C., Luciano, M. (2008). Happiness Is a Personal(ity) Thing: The Genetics of Personality and Well-Being in a Representative Sample. Psychological Science, 19(3), 205-210. DOI: 10.1111/j.1467-9280.2008.02068.x

Wednesday, March 5, 2008

Reading Minds With fMRI

Well, it may not be mind reading just yet, but a computer model developed by a group of neuroscientists at the University of California, Berkeley, is perhaps one (tiny) step closer to that sort of technology. In a study to be published in tomorrow’s issue of Nature, the group describes the use of the computer model to accurately identify which photograph—out of a group of many—a subject had just looked at, based only on fMRI data. Even more impressively, the model worked with photographs the participants had never seen before.

Studies of this sort have been conducted in the past with success, but involved only simple patterns or basic object recognition. In the current study, two participants were shown 1750 photographs of various scenes and objects while their brain activity was measured with fMRI. Using the data from these fMRI scans, the researchers created a computer model to distinguish patterns of activity in the visual cortex that occurred in response to specific features of the photographs. For example, the model could be used to determine which areas are typically activated in response to lines, spherical shapes, or spots of dark shadowing. To do this, they divided the fMRI representation of the visual cortex into small cubes and used the model to examine how activity in each subsection changed in relation to different photographs.

After the initial fMRI data was analyzed with the computer model, the two subjects (also co-authors of the study) then viewed 120 photographs they had never seen before while being scanned again. The researchers used the model to predict what the brain activity of each subject would be as they viewed the novel pictures. For one subject, the model’s prediction matched the actual brain activity 92% of the time. For the other, it was accurate 72% of the time. By chance alone, it would have made the correct match only 0.8% of the time. One of the subjects then viewed a set of 1000 pictures with scenes more similar to one another to further test the specificity of the model. It was still accurate 82% of the time.
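The identification step can be sketched as a nearest-pattern match. This is a toy illustration, not the Berkeley group’s actual encoding model: each candidate image gets a predicted voxel pattern, and the observed scan is assigned to whichever prediction correlates with it best. Random guessing among 120 candidates would succeed only 1 time in 120, which is where the 0.8% chance level comes from.

```python
import random

# Toy sketch of fMRI image identification (not the actual Berkeley
# model): match an observed voxel pattern to the most similar of 120
# predicted patterns, using Pearson correlation as the similarity score.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def identify(observed, predictions):
    """Index of the predicted pattern most correlated with the scan."""
    return max(range(len(predictions)),
               key=lambda i: pearson(observed, predictions[i]))

# Simulate 120 candidate images with random predicted voxel patterns;
# the "observed" scan is prediction #42 plus a little measurement noise.
rng = random.Random(0)
preds = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(120)]
observed = [v + rng.gauss(0, 0.1) for v in preds[42]]
best = identify(observed, preds)
```

In this noiseless-enough toy setting the matcher recovers the correct image; the real study’s 92% and 72% hit rates reflect how much messier actual fMRI signals are.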

While the mention of mind reading above is, of course, a bit sensationalistic, this technology is still amazing, and perhaps a harbinger of strange things to come. If we can eventually predict patterns of brain activity in response to visual stimuli with precision, who is to say we will not one day be able to dissect more complex thought processes, or at least identify sharp distinctions, like when one is telling a lie vs. telling the truth? Such technology, if determined to be accurate, could have interesting ramifications.

This is all speculation about things that may happen in the distant future, however, and only tenuously related to the computer model discussed above. After all, the model is still limited to pictures from a known set. It cannot interpret fMRI data and reconstruct a semblance of what a person has seen; it can only match the activity to photographs it has been exposed to previously. Regardless, it is an area of research worth following closely, as it involves perhaps the most precise elucidation of cognitive processes we have yet been privy to.

Experimental Evidence Supports Runner's High; Aromatherapy...Not So Much

Athletes have long taken for granted that a “runner’s high” occurs after prolonged exercise. The physiological reasons behind it, however, have been much more of a mystery to scientists. The most prominent explanation over the last twenty years or so has been the endorphin hypothesis, which suggests that prolonged strenuous activity releases endorphins, causing an elevation of mood and a decrease in the perception of pain. The word endorphin comes from “endogenous”, meaning produced within the body, and “morphine”, an opiate known for its pain-mitigating properties. Thus, endorphins are like opiates created by our own bodies, and can act as natural painkillers or induce euphoric feelings under certain circumstances.

The endorphin hypothesis has received a fairly high degree of support over the years, although it has never been confirmed experimentally—until now. A group of researchers from the Technical University of Munich and the University of Bonn recently conducted an experiment with ten athletes. They ran PET scans on the athletes at two separate times: at rest, and after a two-hour bout of endurance running.

In the PET scans, they used a radioactive opioidergic ligand, 6-O-(2-[18F]fluoroethyl)-6-O-desmethyldiprenorphine ([18F]FDPN), which binds to opiate receptors in the brain. If opiate receptors are already occupied by endorphins, the [18F]FDPN is unable to bind to them. Thus, if intense exercise produces endorphins, they will occupy opiate receptors, and, compared with the scan at rest, less [18F]FDPN should be found bound in the brain, since its binding sites are filled by endorphins.
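The inference here reduces to a simple occupancy calculation: if the ligand’s binding drops after exercise, the fraction of receptors occupied by endorphins is one minus the ratio of the two binding measures. A minimal sketch, with made-up binding-potential values (the study reports changes in receptor availability, not these numbers):

```python
# The logic of the rest-vs-exercise PET comparison as an occupancy
# calculation. The binding-potential (BP) values are hypothetical,
# chosen only to illustrate the arithmetic.

def endorphin_occupancy(bp_rest, bp_exercise):
    """Fractional receptor occupancy implied by reduced ligand binding."""
    if bp_exercise > bp_rest:
        raise ValueError("binding should not rise if endorphins are released")
    return 1 - bp_exercise / bp_rest

# If ligand binding falls from 2.0 at rest to 1.5 after the run,
# a quarter of the opiate receptors are inferred to be occupied.
occ = endorphin_occupancy(bp_rest=2.0, bp_exercise=1.5)
```

The same ratio logic underlies the study’s key correlation: the bigger the post-run drop in [18F]FDPN binding, the more endorphin occupancy, and, as the athletes reported, the stronger the euphoria.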

This is just what the researchers found. After the exercise, opioid receptors showed decreased availability (meaning they were occupied by endorphins produced during the run). The euphoria reported by the athletes was significantly increased, and it was inversely correlated with the availability of opiate receptors. The brain regions most affected were primarily prefrontal and limbic areas, regions commonly associated with emotion.

The researchers hope to expand upon this study by searching for practical uses for the improved understanding of activity-induced endorphin binding. This could include investigating the specific benefits of exercise for those suffering from chronic pain, depression, or anxiety. They also are very interested in how genetic makeup affects opiate receptor distribution in the brain, and how this might affect addiction.

Another concept (albeit a less substantiated one than the occurrence of runner’s high) under investigation of late is the benefit of aromatherapy. Aromatherapy is an alternative medical practice that has been around for centuries, but has regained a great deal of popularity over the last couple of decades. It involves the inhalation of certain scents, such as lavender or lemon, which are purported to have a number of positive effects on one’s mood and health.

Researchers from Ohio State University put aromatherapy to the test in what is probably the most comprehensive study on the practice to date. Using 56 participants, some of whom advocated the use of aromatherapy and others of whom had no opinion, the researchers measured blood pressure, heart rate, healing ability, stress hormone levels, and reaction to pain, and recorded self-reports of mood over a three-day period of aromatherapy use. The participants were exposed to an odor suggested to be a stimulant (lemon), another purported to be relaxing (lavender), and water with no odor. Some of the participants were told what odors they would be subjected to, and what changes they might expect, while others were randomly placed in a blind category where no such information was given. The experimenters were all kept in a blind condition.

The lemon oil did induce an enhancement of mood based on self-report, although the lavender oil did not. Neither of the oils, however, had any effect on the numerous biochemical markers used to measure stress, healing ability, immune response, or pain tolerance.

The researchers who conducted the study are quick to point out this is not conclusive evidence there is no benefit to aromatherapy. As one of the authors stated, however, “…we still failed to find any quantitative indication that these oils provide any physiological effect for people in general”. It's something to keep in mind if you are thinking of buying any alternative medicine products that tout their immune-boosting, stress-relieving, and mood-enhancing qualities: are their claims backed by science, or just anecdotal evidence?

Saturday, March 1, 2008

Diltiazem Reduces Cocaine Craving in Rats

A seemingly unlikely candidate in the battle against cocaine addiction has emerged from work being done by researchers at Boston University School of Medicine and Harvard Medical School. The group administered diltiazem, a drug commonly used to treat hypertension, to cocaine-addicted rats, and found that it significantly reduced their cravings for cocaine.

Diltiazem is a type of drug known as a calcium (Ca2+) channel blocker. Ca2+ channels are voltage-gated ion channels, which when activated allow an influx of Ca2+ into a cell. This inflow of Ca2+ can exert any number of effects, depending on the cell. In neurons, it is often the trigger for the release of neurotransmitter-filled synaptic vesicles, which is the basis of communication between nerve cells. Ca2+ channels also can be involved in hormone release and gene expression. In the heart, they contribute to muscle contraction. Diltiazem limits the activity of Ca2+ channels, which relaxes blood vessels and reduces the force of the heart’s contractions, lowering blood pressure and the amount of oxygen the heart needs. This is how it alleviates hypertension.

So what does this have to do with cocaine? Current models indicate drug addiction occurs due to neural reconfigurations caused by the memory-encoding activity of two neurotransmitters: glutamate and dopamine. Glutamate, the main excitatory neurotransmitter in the brain, is thought to encode specific sensory and motor information in cortical and thalamocortical areas, while dopamine seems to react in a more general sense to rewarding stimuli. The interaction of these two chemicals is believed to be responsible for intensifying the memory of drug use and all the stimuli related to it, leading to craving, repeated use of the drug, and addiction.

Ca2+ channels play an integral role in allowing these two neurotransmitters to work together. When the channels are blocked and Ca2+ influx is decreased, the process is disrupted, which may account for the reduction of cocaine cravings in the rats.

This study is a promising one for the addiction field, as there are no effective drug therapies currently available for cocaine dependency. It also speaks volumes about how far our understanding of addiction has come. Once it was regarded simply as a choice made by degenerates who had no motivation to live a normal life. Eventually scientists found there are biological mechanisms underlying it that seem to, for the most part, preclude consciously choosing it as a lifestyle. As we come to understand those mechanisms more, the concept of addiction becomes at the same time more tangible and complex.

For example, dopamine was originally thought to be the substrate of pleasure in the brain. It is now understood to have a more complicated role: helping us to identify rewarding stimuli in our environment, a skill that is evolutionarily adaptive for obvious reasons. Fairly recently it was learned that glutamate is involved in the addiction process as well. Now that our understanding of the interaction between dopamine and glutamate is growing, we are beginning to understand that addiction involves a series of synaptic changes made through associative learning, which result in an almost indelible imprinting of an addictive memory. And this expanding knowledge of the neuroscience of addiction must all be viewed against a backdrop of identified genetic patterns, which predispose certain people toward the disorder.

Although this view of addiction as having a neurobiological and genetic basis is commonly accepted science, it has yet to be embraced by a large portion of society, including governments that continue to use incarceration of addicts as their preferred method of dealing with the issue of drug use. Most of us have an addiction of some sort. It may be food, or cigarettes, shopping, or heroin. It’s important to remember, though, that all addictions, from the most minor to the most severe, are probably due to a similar process in the brain that has caused too much importance to be placed on the pursuit of a once-rewarding stimulus. So a crack addict is victim to the evolution and organization of the human brain’s reward system in the same way a shopaholic is. And for those who are lucky enough not to have an addiction, or at least a very damaging one, you might want to hesitate before crediting yourself too much or denigrating someone with an addiction. If the addict had been born with your genes and your brain, chances are they wouldn’t be an addict either. It’s time we begin, as a society, to recognize addiction as a disorder instead of a reprehensible and prosecutable offense in and of itself.