Surveillance: Tracking Brainwaves to Protect Our Borders

Ever wonder what it would be like if the person you are talking to could suddenly read your true thoughts? I know I have, and I have been very thankful that my thoughts were my own private treasure trove, not open for public scrutiny. Well, that one private area of our lives that we have often thought untouchable by others may become a target of our war on terrorism.

One might wonder why this is necessary, since it seems that Homeland Security has done a fairly good job of securing our borders since 9/11. In fact, our nation is deploying increasingly vigilant military personnel, as well as private contractors equipped with the latest cameras and sensors, to actively watch our borders. Unfortunately, given the drug supply chains and the human trafficking run by “coyotes,” the current system is proving costly and often ineffective, as sensors are frequently set off by desert animals. This makes it difficult for border agents to determine whether they are going to encounter wildlife, trespassers, or terrorists.

To address this issue and to add an additional level of security for our patrol units, researchers have developed a new technology that relies on a person’s brainwaves. This technology has been designed to prevent the approximately 1,000 false alarms that agents respond to by making it easier to distinguish man from animal.

It has been reported that, in some sectors, sensors generate false reports every hour of the day and night, making it impossible to stay vigilant enough to actually detect criminal entry across our borders. In fact, it is easy to see how sitting at a monitor with a view of the monotonous desert terrain would become so boring that an agent’s mind would drift to other things rather than what a particular camera angle was homing in on. Multiply this by mile upon mile of border and one can see how providing security could become a huge problem. I can easily see how an operator could miss an illegal entry event or terrorist activity.

This is why the new system has been developed to display images in quick succession alongside the monitored brainwave patterns. Apparently, when the system detects a human brainwave pattern, the operator can concentrate on that area of the border while ignoring the patterns associated with animals. In fact, in their studies, researchers have found that the new system eliminates over 90% of false alarms and can reliably distinguish human brainwaves from those of animals.

After reading about this new technology and how well it works, it made me wonder about other applications for the technology. On the positive side, I could see how the military could use the technology to locate enemy soldiers or snipers hiding inside of buildings or in rough terrain. I can even see how this technology could be used by police departments to locate suspects hiding in the area.

However, for the conspiracy theorists among us, I can already hear the outcries of “Big Brother is watching” (or, in this case, listening). In this instance, I can see how they could come to this conclusion, but at some point, one has to decide to trust those in power. In other words, do we want to see our nation, and therefore our borders, protected from terrorist entry, or are we willing to take the chance that a dangerous drug cartel or terrorist group will find a gaping security hole along our border defense? I know I wish that we didn’t find ourselves in this position, but the powers that be feel that the best way to protect us and those who serve us is by patrolling our long borders. As always, I am curious to hear your input, so please feel free to comment.

Source: NBC News FutureTech

CC licensed Flickr photo above shared by zigazou76

Obama Vs. Romney: Why Your Brain is Irrational About These Two Candidates

Most of us are already leaning toward one presidential candidate or another, but the real question is which candidate — Barack Obama or Mitt Romney — is the more capable? Could it possibly come down to who has the deeper voice or which one is best coached on what is appropriate to say to a particular audience? One could easily parrot the words “who cares?” while missing an extremely valuable point about how we humans process information. We may also be missing the point on how our emotions can override our rational brains and lead us to a conclusion that one could only label as shallow.

I am personally glad to see this year’s presidential battle finally coming to an end. This is largely because I made my decision months ago about which man will get my vote, a decision from which I have no intention of being swayed. Once this election is over, the flood of email that I receive from some of my overzealous friends who want to change my mind about whom to vote for will stop. These friends and acquaintances seem to think that they, and they alone, possess the wisdom to decide for everyone they know who should and shouldn’t be elected.

One thing I have realized after six-plus decades on this planet is that, while many of us would like to believe we elect our officials solely on their policies, there is an alternative theory that I believe makes more sense: most of our elected officials are chosen for their appearance and the sound of their voice. To test this theory in your own mind, consider some of our previous political figures. Did you vote for John Edwards? How about Jimmy Carter? In your opinion, how did they live up to your expectations?

A real-life example of this type of decision can be seen in Missouri Governor Jay Nixon. This man has proven himself, and I was fortunate enough to attend a concert at Missouri State University in August where Governor Nixon and his wife were guests of honor. In fact, the event was notable enough to include John Goodman, a Missouri State alumnus, in the audience.

The event honored the 50th anniversary of the Tent Theater, a performing arts division of Missouri State University. I know that, while sitting in the audience while Governor Nixon made a brief speech about the celebration, I couldn’t help but be struck by his overall stature and the way that he presented himself. To me it was amazing how he was able to grab the audience’s attention and how easy it was to see the respect he commanded. The man is tall — over 6 feet — he was well-groomed, well-dressed, and had a low, resonant voice. For me, this was the first time I had seen the Governor in such a close forum; after having done so, it is no surprise to me that, though Missouri is a predominantly Republican state, this Democrat most likely will be re-elected as governor for a second term.

With this as a frame of reference, I was interested in an article that I read at Scientific American concerning how our brains process certain information about people. Some of the information was not surprising, such as the finding that humans are influenced by another person’s tone of voice. It seems that those who speak in a higher-pitched voice are judged as being nervous and less truthful than those with lower-pitched voices. However, a person with a lower-pitched voice who speaks slightly faster and slightly louder, with fewer pauses, will leave the audience feeling that the person is more energetic, intelligent, and knowledgeable.

In the article, the author referenced the first broadcast presidential race that was viewed on live television on September 26, 1960. The debate between John F. Kennedy and Richard M. Nixon was not presented in Technicolor, but in black and white. Upon reading the article, I recalled watching this debate with family members and, even at 12 years old, I immediately took to John F. Kennedy. To me it seemed that Kennedy came across as more energetic, thoughtful, and likeable, whereas Nixon appeared anemic — almost sickly — speaking in a quivering, high-pitched tone that made him appear unsure of himself.

I also know that my wife believes that a person’s eyes are a mirror to their soul. This can be a little nerve-racking since, if a person has gentle and kind-looking eyes, she is prone to trust them, whereas if their eyes are hard to read, she automatically distances herself from them.

So, I don’t know what triggers your reactions; I can only hope that you are one of the people who can logically concentrate on a candidate’s policies, abilities, previous missteps, and the like. I also hope that you can look at the entire picture of what is happening at the time, who the person really is, and what they can do to better our country. I believe that the personalities of the candidates have an underlying effect; therefore, our decision about whom to vote for may be determined by an emotional, rather than a political, reason.

Comments welcome.

Source: Scientific American

CC licensed Flickr photo above shared by Cain and Todd Benson

Television Online: Can Reruns Help Your Mind?

Did you know that reruns can be good for your brain? One recent survey showed that watching old programming — programs that you have seen before — actually increases your ability to take on difficult tasks. When I first read about these findings, I was not surprised.

About six months ago, my wife and I were visiting our daughter and her husband, who live in Waterford, Connecticut. One evening, our son-in-law cranked up the television and asked what we wanted to watch. In our home, my wife and I contemplate which programming to watch by how hard the programming will strain our brains. Don’t laugh! We have come to realize that some programming requires a great deal of effort and concentration. Other programming is basically simple and is just for entertainment purposes.

When my wife said to our son-in-law, “Let’s watch something that we don’t have to think about,” he kind of laughed. He is a very smart, highly educated guy, yet he didn’t understand the simplicity of what my wife said. The survey suggests that watching something we are familiar with does improve our minds. So it seems to validate our thinking: watching something that doesn’t require much thought does, in fact, help to relax your brain.

In the survey that was conducted, a group of folks were asked to keep a diary, which consisted of:

  • A task that required a great deal of effort
  • What type of media was consumed
  • How the subject perceived their energy level or lack of energy

In what turned out to be a surprising result, researchers found that those who watched current television programming wrote a lot about their favorite shows. These people expended a great deal of energy and effort explaining the shows and why they liked them. Those who wished to relax and slow down a bit watched old reruns of programs, which seemed to relax their minds.

Watching old reruns is what is commonly referred to as vegging out, which my wife and I define as watching a program that doesn’t require much use of one’s brain. The survey also showed that those who watched reruns were able to tackle tasks better than those who watched their current favorite programs. The researchers also found that watching a new episode of your favorite television program for the first time did not provide the same benefit.

In my opinion, watching a familiar rerun is like slipping on a pair of old, comfortable shoes. It just fits right and feels good. In addition, this research suggests that watching older television reruns does help one to relax and makes doing tasks later just a little easier.

What about you? Does watching reruns help you to relax? Please share your thoughts with us.

Comments, as always, are welcome.

Source: LiveScience

CC licensed Flickr photo above shared by Lord Jerome

How Can Google Affect Your Brain?

TechCrunch had a great title for a recent article: “Just Because Google Exists Doesn’t Mean You Should Stop Asking People Things.” We get annoyed when co-workers or bosses ask us silly questions that they themselves could have Googled. However, the bigger issue is whether we are so reliant on Google that we are dumbing ourselves down. The Pope thinks that the Internet is increasing the risk of a “sense of solitude and disorientation” and basically numbing us, calling it an “educational emergency.”


Lamarr has to agree with TechCrunch AND the Pope. Google has become a verb, and it’s making us all a little “dumber.” We are supposed to actually learn things. Instead, we are relying on Google to tell us the answer without ever knowing how to arrive at that answer ourselves. How is this expanding our minds?

What are your thoughts?



When was the last time that you forgot something that you wanted to remember? The chances are that it didn’t happen too long ago. All of us are forced to remember more things now than ever before, and because of that, our brains can get overloaded and just give up on us. Keeping good notes is a smart idea, and with the Internet, you don’t need a pen and paper to make this happen. Springpad will help you to remember what you need and want to remember.

This service works in your browser and on your mobile devices, so you’re covered wherever you go. You can enter information directly or grab it from the Web, and since Springpad works well with the Internet, it can provide additional information about what you’re saving and even offer the latest news and deals that are related to that content. Organization is simple and configurable, and custom reminders help you to make sure that you don’t forget what you’re trying to remember when the time comes to remember it. If it wasn’t already, the cloud will now become your second brain.

Neurons Cast Votes To Guide Decision Making

We know that casting a ballot in the voting booth involves politics, values, and personalities. But before you ever push the button for your candidate, your brain has already carried out an election of its own to make that action possible. New research from Vanderbilt University reveals that our brain accumulates evidence when faced with a choice and triggers an action once that evidence reaches a tipping point.

The research was published in the October issue of Psychological Review.

“Psychological models of decision-making explain that humans gradually accumulate evidence for a particular choice over time, and execute that choice when evidence reaches a critical level. However, until recently there was little understanding of how this might actually be implemented in the brain,” Braden Purcell, a doctoral student in the Department of Psychology and lead author of the new study, said. “We found that certain neurons seem to represent the accumulation of evidence to a threshold and others represent the evidence itself, and that these two types of neurons interact to drive decision-making.”

The researchers presented monkeys with a simple visual task of finding a target on a screen that also included distracting items. The researchers found that neurons processing visual information from the screen fed that information to the neurons responsible for movement. These movement neurons served as gatekeepers, suppressing action until the information they received from the visual neurons was sufficiently clear. When that occurred, the movement neurons then proceeded to trigger the chosen movement.

The researchers also found that the movement neurons mediated a competition between what was being seen — in this case, the target and distracting items — and ensured that the decision was made to look to the proper item.

“What the brain seems to do is for every vote it receives for one candidate, it suppresses a vote for the other candidate, exaggerating the differences between the two,” Jeffrey Schall, E. Bronson Ingram Chair in Neuroscience and co-author of the study said. “The system that makes the response doesn’t listen to the vote tally until it’s clear that the election is going towards one particular candidate. At that point, the circuitry that makes the movement is triggered and the movement takes place.”
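The accumulate-to-threshold mechanism Schall describes can be sketched as a race between two mutually inhibiting accumulators. This is only an illustrative toy model (the parameter values and the simple Gaussian noise are my own assumptions, not the study's actual model), but it shows the two ingredients the researchers identify: noisy evidence piling up over time, and each "candidate" suppressing its rival so that no movement is triggered until one tally clearly pulls ahead.

```python
import random

def race_to_threshold(drift_target=0.12, drift_distractor=0.05,
                      inhibition=0.4, noise=0.05, threshold=1.0,
                      max_steps=10_000, seed=0):
    """Race two mutually inhibiting accumulators until one reaches
    threshold; the winner determines which movement is triggered.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0  # evidence tallies: target vs. distractor
    for step in range(1, max_steps + 1):
        # Each unit gains its own noisy evidence and is suppressed in
        # proportion to its rival's tally ("every vote it receives for
        # one candidate suppresses a vote for the other").
        da = drift_target + rng.gauss(0, noise) - inhibition * b
        db = drift_distractor + rng.gauss(0, noise) - inhibition * a
        a = max(0.0, a + da)  # neural firing rates cannot go negative
        b = max(0.0, b + db)
        if a >= threshold:
            return "target", step
        if b >= threshold:
            return "distractor", step
    return "no decision", max_steps

choice, rt = race_to_threshold()
```

Because the target item feeds in stronger evidence, it almost always wins the race, and the step at which it crosses threshold plays the role of the (variable) reaction time.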

The findings offer potential insights into some psychological disorders.

“Impairments in decision-making are at the core of a variety of psychological and neurological impairments. For example, previous work suggests that ADHD patients may suffer deficits in controlling evidence accumulation,” Purcell said. “This work may help us to understand why these deficits occur at a neurobiological level.”

An important piece of this research is the novel model the researchers used in the study. The new model combined a mathematical prediction of what they thought would transpire with actual data about what the neurons were doing.

“In a model, usually all the elements are defined by mathematical equations or computational expressions,” Thomas Palmeri, associate professor of psychology and a co-author of the study, said. “In our work, rather than coming up with a mathematical expression for the inputs to the neural decision process, we defined those inputs with actual recordings from neurons. This hybrid model predicts both where and when the eyes move, and variability in the timing of those movements.”

“This approach provides insight between psychological processes and what neurons are doing,” Schall said. “If we want to understand the mind-brain problem, this is what solutions look like.”

[Photo above by ewedistrict / CC BY-ND 2.0]

Melanie Moran @ Vanderbilt University


How Do Some People Find A Way To Win – No Matter What?

Whether it’s sports, poker, or the high-stakes world of business, there are those who always find a way to win when there’s money on the table.

Now, for the first time, psychology researchers at Washington University in St. Louis are unraveling the workings of a novel brain network that may explain how these “money players” manage to keep their heads in the game.

Findings suggest that a specific brain area helps people use the prospect of success to better prepare their thoughts and actions, thus increasing odds that a reward will be won.

The study, published Aug. 4 in the Journal of Neuroscience, identified a brain region about two inches above the left eyebrow that sprang into action whenever study participants were shown a dollar sign, a predetermined cue that a correct answer on the task at hand would result in a financial reward.

Using what researchers believe are short bursts of dopamine — the brain’s chemical reward system — the brain region then began coordinating interactions between the brain’s cognitive control and motivation networks, apparently priming the brain for a looming “show me the money” situation.

“The surprising thing we see is that motivation acts in a preparatory manner,” says Adam C. Savine, lead author of the study and a doctoral candidate in psychology at Washington University. “This region gears up when the money cue is on.”

Savine and colleague Todd S. Braver, PhD, professor of psychology in Arts & Sciences, tested 16 subjects in an experiment that required appropriate preparation for one of two possible tasks, based upon advance information provided at the same time as the money cue. Monetary rewards were offered on trials in which the money cue appeared (which happened randomly on half the trials), provided that the subjects answered accurately and within a specified timeframe. Obtaining the reward was most likely when subjects used the advance task information most effectively.

Using functional magnetic resonance imaging (fMRI), the researchers detected a network of eight different brain regions that responded to the multitasking challenge and two that responded to both the challenge and the motivational cue (a dollar sign, the monetary reward cue for a swift, correct answer).

In particular, Savine and Braver found that the left dorsolateral prefrontal cortex (DLPFC), located in the brain approximately two inches above the left eyebrow, is a key area that both predicts a win, or successful outcome, and prepares the motivational cognitive control network to win again.

Simply flashing the dollar-sign cue sparked immediate activation in the DLPFC region and it began interacting with other cognitive control and motivational functions in the brain, effectively putting these areas on alert that there was money to be won in the challenge ahead.

“In this region (left DLPFC), you can actually see the unique neural signature of the brain activity related to the reward outcome,” Savine says. “It predicts a reward outcome and it’s preparatory, in an integrative sort of way. The left DLPFC is the only region we found that seems to be primarily engaged when subjects get the motivational cue beforehand; it’s the region that integrates that information with the task information and leads to the best task performance.”

The researchers actually observed increased levels of oxygenated hemoglobin in the blood flow to these regions.

The finding provides insight into the way people pursue goals and how motivation drives goal-oriented behavior. It also could provide clues to what might be happening with different populations of people with cognitive deficiencies in pursuing goals.

Savine and Braver sought to determine the way that motivation and cognitive control are represented in the brain. They found two brain networks — one involved in reward processing, and one involved in the ability to flexibly shift mental goals (often referred to as “cognitive control”) — that were coactive on monetary reward trials. A key question that still needs to be answered is exactly how these two brain networks interact with each other.

Because the brain reward network appears to center on the brain chemical dopamine, the researchers speculate that the interactions between motivation and cognitive control depend upon “phasic bursts of dopamine.”

They wanted to see how the brain works when motivation impacts task-switching: how it heightens the importance of one rewarding goal while inhibiting the importance of non-rewarding goals.

“We wanted to see what motivates us to pursue one goal in the world above all others,” Savine says. “You might think that these mechanisms would have been addressed a long time ago in psychology and neuroscience, but it’s not been until the advent of fMRI about 15-20 years ago that we’ve had the tools to address this question in humans, and any progress in this area has been very, very recent.”

In this kind of test, as in the workplace, many distractions exist. In the midst of a deadline project with an “eye on the prize,” the phone still rings, background noise of printers and copying machines persist, an interesting world outside the window beckons and colleagues drop in to seek advice. A person’s ability to control his or her cognition — all the things a brain takes in — is directly linked to motivation. Time also plays a big factor. A project due in three weeks can be completed with some distraction; a project due tomorrow inhibits a person’s response to interrupting friends and colleagues and allows clearer focus on the goal.

The researchers intend to explore the left DLPFC more as a “uniquely predictive measure of pursuing rewarded outcomes in motivated settings,” Savine says. “Another key research effort will seek to more directly quantify the involvement of dopamine chemical release during these tasks.”

And they may test other motivators besides money, such as social rewards, or hunger or thirst, to see “if different motivators are all part of the same reward currency, engaging the same brain network that we’ve shown to be activated by monetary rewards,” Savine says.

[Photo above by shoobydooby / CC BY-ND 2.0]

Todd Braver @ Washington University in St. Louis


Scientists Find New, Inexpensive Way To Predict Alzheimer's Disease

Your brain’s capacity for information is a reliable predictor of Alzheimer’s disease and can be cheaply and easily tested, according to scientists.

“We have developed a low-cost behavioral assessment that can clue someone in to Alzheimer’s disease at its earliest stage,” said Michael Wenger, associate professor of psychology, Penn State. “By examining (information) processing capacity, we can detect changes in the progression of mild cognitive impairment (MCI).”

MCI is a condition that affects language, memory, and related mental functions. It is distinct from the ordinary mental degradation associated with aging and is a likely precursor to the more serious Alzheimer’s disease. Both MCI and Alzheimer’s are linked to a steady decline in the volume of the hippocampus, the area of the brain responsible for long term memory and spatial reasoning.

MRI — magnetic resonance imaging — is the most reliable and direct way to detect hippocampal atrophy and diagnose MCI. But for many, the procedure is unavailable or too expensive.

“MRIs can cost hundreds of dollars an hour,” Wenger said. “We created a much cheaper alternative, based on a memory test, that correlates with hippocampal degradation.” Wenger and his collaborators at the Mayo Clinic College of Medicine, Rochester, Minn., detail their findings in a recent issue of the Journal of Mathematical Psychology.

From a computer model of an atrophying hippocampus, the researchers determined how to estimate capacity with a statistical measure of how quickly tasks are completed. Applying this analysis to a memory test for people with MCI, the researchers were able to gauge their hippocampal capacity and compare it to the progression of their ailment.

“My collaborators at the Mayo Clinic backed up this study with MRIs for the MCI group,” Wenger said. “These capacity measures we developed showed a reliable relationship to the hippocampal volume measurements, so we know we are on the right track.”

The scientists began by modeling the hippocampus as a complex electrical circuit. Equations governing electric current and voltage mimicked the electrical firing of neurons within the circuit. The researchers switched off neurons in the simulation to model atrophy of the hippocampus.

With fewer cells available to process electrical signals, the model hippocampus slowed down, but its capacity for processing information decreased at an even faster rate. Capacity, more than average processing speed, proved the most sensitive measure of how the hippocampus was deteriorating.

“We then applied this to the gold standard of the field — the Free and Cued Selective Reminding Test (FCSRT),” Wenger said. “This is a test that can discriminate between normal age-related memory changes and changes caused by impairment.”

The researchers gave this test to five groups of participants: college students, healthy middle-aged adults, healthy elderly individuals, people with diagnosed cases of MCI, and a control group of age-matched individuals without MCI. The first three groups each had 100 members and the last two each had 50.

During the FCSRT, the researchers showed the participants descriptive words, such as “part of the body” and “artery” and asked the participants to choose the picture that fit these cues from a set of 24 images, in this case, a picture of a heart. The psychologists then asked the subjects to recall as many items as they could. For objects they failed to remember, the psychologists provided the category cues, providing more information and testing the limits of the subjects’ capacity.

The researchers analyzed the response times for the tasks and the number of items that were recalled, with and without additional cues. The MCI group showed the greatest sensitivity to added cues – the additional input either substantially helped or inhibited their performance. But like the computer model, estimates of capacity highlighted the greatest cognitive difference between the MCI group and the others.

This study’s approach to defining processing capacity is unusual. The scientists combined disparate principles of engineering and statistics, mathematically translating processing capacity into what is called the “hazard function.”

The hazard function is well known in engineering, but relatively new for fields like psychology. It gives the probability that a task that is not yet completed will be completed in the next interval of time.
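The hazard definition above can be made concrete with a small sketch (my own illustration, not the authors' actual model): the discrete hazard at the start of each time bin is the fraction of trials still unfinished at that point that then finish within the bin.

```python
def empirical_hazard(response_times, bin_width=0.5):
    """Discrete empirical hazard: of the trials not yet completed at
    the start of each time bin, what fraction complete within it?"""
    t, hazards = 0.0, []
    remaining = sorted(response_times)
    while remaining:
        finished = [rt for rt in remaining if rt < t + bin_width]
        hazards.append((t, len(finished) / len(remaining)))
        remaining = [rt for rt in remaining if rt >= t + bin_width]
        t += bin_width
    return hazards

# Hypothetical recall times (in seconds) for two participants: quick
# recall concentrates completions early, so its hazard is high in the
# first bins; slowed processing spreads completions out and flattens it.
fast = [0.4, 0.6, 0.7, 0.9, 1.1]
slow = [1.4, 1.8, 2.3, 2.9, 3.6]
```

Fitting a parametric model of this function to each participant's recall times is what lets a single number summarize that person's processing capacity, which is the quantity the researchers compare across groups.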

By measuring how long it takes a participant to recall the objects during the FCSRT, the psychologists fit a model based on the hazard function to each participant and obtain a measure of his or her capacity for the memorization task.

The difference in hazard function measures between the MCI group and all other groups was statistically much more pronounced than the differences between all groups in the number of items they recalled. These hazard function differences also outweighed the contrasts between all groups in their response times. The hazard function model proved to be the most sensitive diagnostic for cognitive distinctions in the groups, making it a reliable indicator of capacity and a better signal of the underlying hippocampal atrophy than processing speed alone.

The researchers’ results hold for each individual, not just for the group as a whole. Since the modified FCSRT relies on personal reaction times, hazard analysis, and performance, it can track the progression of MCI for anyone, anywhere there is access to a computer.

“These results are still preliminary, but very encouraging,” Wenger said. “We plan to study what this approach can tell us about mental impairments related to other conditions, like iron deficiencies, in the future.”

A’ndrea Elyse Messer @ Penn State

[Photo above by Jen / CC BY-ND 2.0]


Scientists Find First Physiological Evidence Of Brain's Response To Inequality

The human brain is a big believer in equality — and a team of scientists from the California Institute of Technology (Caltech) and Trinity College in Dublin, Ireland, has become the first to gather the images to prove it.

Specifically, the team found that the reward centers in the human brain respond more strongly when a poor person receives a financial reward than when a rich person does. The surprising thing? This activity pattern holds true even if the brain being looked at is in the rich person’s head, rather than the poor person’s.

These conclusions, and the functional magnetic resonance imaging (fMRI) studies that led to them, are described in the February 25 issue of the journal Nature.

“This is the latest picture in our gallery of human nature,” says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech and one of the paper’s coauthors. “It’s an exciting area of research; we now have so many tools with which to study how the brain is reacting.”

It’s long been known that we humans don’t like inequality, especially when it comes to money. Tell two people working the same job that their salaries are different, and there’s going to be trouble, notes John O’Doherty, professor of psychology at Caltech, Thomas N. Mitchell Professor of Cognitive Neuroscience at the Trinity College Institute of Neuroscience, and the principal investigator on the Nature paper.

But what was unknown was just how hardwired that dislike really is. “In this study, we’re starting to get an idea of where this inequality aversion comes from,” he says. “It’s not just the application of a social rule or convention; there’s really something about the basic processing of rewards in the brain that reflects these considerations.”

The brain processes “rewards” — things like food, money, and even pleasant music, which create positive responses in the body — in areas such as the ventromedial prefrontal cortex (VMPFC) and ventral striatum.

In a series of experiments, former Caltech postdoctoral scholar Elizabeth Tricomi (now an assistant professor of psychology at Rutgers University) — along with O’Doherty, Camerer, and Antonio Rangel, associate professor of economics at Caltech — watched how the VMPFC and ventral striatum reacted in 40 volunteers who were presented with a series of potential money-transfer scenarios while lying in an fMRI machine.

For instance, a participant might be told that he could be given $50 while another person could be given $20; in a second scenario, the participant might have a potential gain of only $5 and the other person, $50. The fMRI images allowed the researchers to see how each volunteer’s brain responded to each proposed money allocation.

But there was a twist. Before the imaging began, each participant in a pair was randomly assigned to one of two conditions: One participant was given what the researchers called “a large monetary endowment” ($50) at the beginning of the experiment; the other participant started from scratch, with no money in his or her pocket.

As it turned out, the way the volunteers — or, to be more precise, the reward centers in the volunteers’ brains — reacted to the various scenarios depended strongly upon whether they started the experiment with a financial advantage over their peers.

“People who started out poor had a stronger brain reaction to things that gave them money, and essentially no reaction to money going to another person,” Camerer says. “By itself, that wasn’t too surprising.”

What was surprising was the other side of the coin. “In the experiment, people who started out rich had a stronger reaction to other people getting money than to themselves getting money,” Camerer explains. “In other words, their brains liked it when others got money more than they liked it when they themselves got money.”

“We now know that these areas are not just self-interested,” adds O’Doherty. “They don’t exclusively respond to the rewards that one gets as an individual, but also respond to the prospect of other individuals obtaining a reward.”

What was especially interesting about the finding, he says, is that the brain responds “very differently to rewards obtained by others under conditions of disadvantageous inequality versus advantageous inequality. It shows that the basic reward structures in the human brain are sensitive to even subtle differences in social context.”

This, O’Doherty notes, is somewhat contrary to the prevailing views about human nature. “As a psychologist and cognitive neuroscientist who works on reward and motivation, I very much view the brain as a device designed to maximize one’s own self interest,” says O’Doherty. “The fact that these basic brain structures appear to be so readily modulated in response to rewards obtained by others highlights the idea that even the basic reward structures in the human brain are not purely self-oriented.”

Camerer, too, found the results thought provoking. “We economists have a widespread view that most people are basically self-interested, and won’t try to help other people,” he says. “But if that were true, you wouldn’t see these sort of reactions to other people getting money.”

Still, he says, it’s likely that the reactions of the “rich” participants were at least partly motivated by self-interest — or a reduction of their own discomfort. “We think that, for the people who start out rich, seeing another person get money reduces their guilt over having more than the others.”

Having watched the brain react to inequality, O’Doherty says, the next step is to “try to understand how these changes in valuation actually translate into changes in behavior. For example, the person who finds out they’re being paid less than someone else for doing the same job might end up working less hard and being less motivated as a consequence. It will be interesting to try to understand the brain mechanisms that underlie such changes.”

Lori Oliwenstein @ California Institute of Technology

[Photo above by Jason Pier / CC BY-ND 2.0]


A Midday Nap Markedly Boosts The Brain's Learning Capacity

If you see a student dozing in the library or a co-worker catching 40 winks in her cubicle, don’t roll your eyes. New research from the University of California, Berkeley, shows that an hour’s nap can dramatically boost and restore your brain power. Indeed, the findings suggest that a biphasic sleep schedule not only refreshes the mind, but can make you smarter.

Conversely, the more hours we spend awake, the more sluggish our minds become, according to the findings. The results support previous data from the same research team that pulling an all-nighter — a common practice at college during midterms and finals — decreases the ability to cram in new facts by nearly 40 percent, due to a shutdown of brain regions during sleep deprivation.

“Sleep not only rights the wrong of prolonged wakefulness but, at a neurocognitive level, it moves you beyond where you were before you took a nap,” said Matthew Walker, an assistant professor of psychology at UC Berkeley and the lead investigator of these studies.

In the recent UC Berkeley sleep study, 39 healthy young adults were divided into two groups — nap and no-nap. At noon, all the participants were subjected to a rigorous learning task intended to tax the hippocampus, a region of the brain that helps store fact-based memories. Both groups performed at comparable levels.

At 2 p.m., the nap group took a 90-minute siesta while the no-nap group stayed awake. Later that day, at 6 p.m., participants performed a new round of learning exercises. Those who remained awake throughout the day became worse at learning. In contrast, those who napped did markedly better and actually improved in their capacity to learn.

These findings reinforce the researchers’ hypothesis that sleep is needed to clear the brain’s short-term memory storage and make room for new information, said Walker, who is presenting his preliminary findings on Sunday, Feb. 21, at the annual meeting of the American Association for the Advancement of Science (AAAS) in San Diego, Calif.

Since 2007, Walker and other sleep researchers have established that fact-based memories are temporarily stored in the hippocampus before being sent to the brain’s prefrontal cortex, which may have more storage space.

“It’s as though the e-mail inbox in your hippocampus is full and, until you sleep and clear out those fact e-mails, you’re not going to receive any more mail. It’s just going to bounce until you sleep and move it into another folder,” Walker said.

In the latest study, Walker and his team have broken new ground in discovering that this memory-refreshing process occurs when nappers are engaged in a specific stage of sleep. Electroencephalogram tests, which measure electrical activity in the brain, indicated that this refreshing of memory capacity is related to Stage 2 non-REM sleep, which takes place between deep sleep (non-REM) and the dream state known as rapid eye movement (REM) sleep. Previously, the purpose of this stage was unclear, but the new results offer evidence as to why humans spend at least half their sleeping hours in Stage 2 non-REM sleep, Walker said.

“I can’t imagine Mother Nature would have us spend 50 percent of the night going from one sleep stage to another for no reason,” Walker said. “Sleep is sophisticated. It acts locally to give us what we need.”

Walker and his team will go on to investigate whether the reduction of sleep experienced by people as they get older is related to the documented decrease in our ability to learn as we age. Finding that link may be helpful in understanding such neurodegenerative conditions as Alzheimer’s disease, Walker said.

Yasmin Anwar @ University of California – Berkeley

[Photo above by Arby Reed / CC BY-ND 2.0]


Should Cell Phones Carry A Cancer Warning?

For the past few years we have all heard reports that cell phone use could cause brain cancer. The debate has frustrated users who are not sure whom to believe: on one side are studies claiming that cell phones could cause brain cancer, and on the other is the cell phone industry, which insists the phones are safe.

Now we have a new situation to deal with. It seems that one state legislator in Maine actually wants a warning label on cell phones. Not to be outdone, the Mayor of San Francisco wants his city to be the first to require the warnings. One article also states the following:

The Federal Communications Commission, which maintains that all cell phones sold in the U.S. are safe, has set a standard for the “specific absorption rate” of radio frequency energy, but it doesn’t require handset makers to divulge radiation levels.

“Although research has not consistently demonstrated a link between cellular telephone use and cancer, scientists still caution that further surveillance is needed before conclusions can be drawn,” according to the Cancer Institute’s Web site.

It seems to me that until there is concrete proof that cell phones actually do cause brain cancer, why bother with a warning at all?

So let me ask you. Do our cell phones need this warning or is it just a waste of time?

Comments welcome.


The Internet Increases Brain Activity?

I have to say, I am not all that shocked that using our favorite search engine is doing something positive for our brain activity. With all of the reading and thinking that goes into a search, anything less than a sign of life in the brain would be a real surprise, truth be told. What I do not see happening, however, is the Internet, in any capacity, making us smarter. Seriously, you know you spend too much time in front of a computer when you set up your online bank’s bill payer to submit a payment to your own home address. I felt really weird walking to my mailbox to discover that the city was apparently cutting me a check for the exact same amount I had watched being paid just a few days earlier! Shocking…

The moral of the story is that too much of anything, even something as helpful as the Internet, can send your brain into a bit of a tizzy!

Computers 'to match human brains by 2030'

I always enjoy reading these future projections by would-be prophets who offer us a look into what technology may bring. I also know that their projections can be overzealous and may not come true, but they do make for good reading. I recently read an article over at The Independent by Steve Connor, Science Editor in Boston, which predicts that by 2030 computers will be on par with the human brain. The article states:

Computer power will match the intelligence of human beings within the next 20 years because of the accelerating speed at which technology is advancing, according to a leading scientific “futurologist”.

There will be 32 times more technical progress during the next half century than there was in the entire 20th century, and one of the outcomes is that artificial intelligence could be on a par with human intellect by the 2020s, said the American computer guru Ray Kurzweil.

Machines will rapidly overtake humans in their intellectual abilities and will soon be able to solve some of the most intractable problems of the 21st century, said Dr Kurzweil, one of 18 maverick thinkers chosen to identify the greatest technological challenges facing humanity.
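
The “32 times” figure is easy to sanity-check with a bit of exponential arithmetic. As a rough sketch (my own illustration, not a calculation from the article), assume that the rate of technical progress doubles every decade and integrate that rate over each period:

```python
from math import exp, log

# Hypothetical assumption (roughly the shape of Kurzweil's model):
# the rate of technical progress doubles every ten years, i.e. it
# grows like 2**(t / 10), with t = 0 at the year 2000.

def progress(t0, t1):
    """Total progress accumulated between years t0 and t1,
    found by integrating the rate 2**(t / 10) over [t0, t1]."""
    k = log(2) / 10  # continuous growth constant for doubling each decade
    return (exp(k * t1) - exp(k * t0)) / k

c20 = progress(-100, 0)   # the entire 20th century
next50 = progress(0, 50)  # the next half century
print(round(next50 / c20))  # prints 31 -- close to the article's "32 times"
```

Under that assumption, the next fifty years pack roughly 31 times the progress of the whole 20th century, which lines up with the quoted claim.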

There is one point in the article that I agree with: in the next 20 years, technical progress should be absolutely amazing. But will computers be just as smart as we are? Some could argue that computers might already be this smart. :-)

What do you think?

Comments welcome.

Full article is here.

[tags]computers, brain, power, 2030, technology, progress[/tags]

Google – Artificial Brain – Coming Soon

Google is in the spotlight once again, now that it has confirmed it is working on artificial intelligence, which may not be too far off. In a speech to the American Association for the Advancement of Science conference being held in San Francisco, Google co-founder Larry Page stated:

“We have some people at Google who are really trying to build artificial intelligence and to do it on a large scale. It’s not as far off as people think.”

I certainly hope Google has better luck developing AI than Microsoft has had developing speech recognition, which, by the way, still doesn’t work correctly in Vista; a patch for a discovered flaw still hasn’t been released. It should be interesting to see exactly what the folks at Google have in store for us.

Hopefully this new technology will still follow the company doctrine of ‘don’t be evil.’

Watch the video of his speech here.

Homepage of the AAAS here.

[tags]google, brain, artificial, intelligence, soon[/tags]

Develop A High Performance Mind

I know they say that a cluttered desk is the sign of a clean mind – but I just don’t believe it. Out of chaos comes order? No, out of order comes chaos – and somewhere in between those two extremes sits your mind. We all work differently (naturally). We all think differently, too. If you don’t know where something is at any given moment, are you in true control – and if you’re not in control, is that something you’ve given up for any given reason? Just think about it for a moment. Or don’t? Even so, sometimes it can be difficult to communicate one concept to someone else if they don’t have a full understanding of what it is you’re asking for. That’s where frustration can set in if left unchecked. The brain is a wonderful thing.

There are things I don’t want to think about – or can’t possibly think about. I do my best to surround myself with people much smarter than myself (which includes all of Lockergnome’s primary task force – Ponzi the CEO, Bob the editor, Matt the lead writer, and Shayne the developer). It’s amazing what we’re able to accomplish, because each one of us thinks differently from one another. The mind is a wonderful thing.
Continue reading “Develop A High Performance Mind”