It’s a well-established fact that a small percentage of society causes a disproportionately massive amount of problems and suffering for the rest of us. We’ve always wondered what holds humans back from creating the kind of society we’d all love to see. We used to call it evil, but now we know that some people simply don’t have the capacity to empathize or feel remorse.
The documentary I Am Fishead: Are Corporate Leaders Psychopaths? examines whether the people at the top are more likely to be psychopaths than the rest of us.
The neurological definition of a psychopath is someone with a small amygdala (producing a weaker fear response) and often fewer connections to the frontal lobe, the center for reasoning. There are many people like this, and the thinking is that they are more disposed to taking risks and are therefore more often in a position to take advantage of an opportunity. A psychopath is a risk taker, often fearless of most consequences.
It really boils down to how you define evil. Some derive pleasure from causing others pain, and this is the litmus test for purely evil behavior. If you don’t derive pleasure from killing the man behind the counter because he won’t hand over the money, or from signing the paper that privatizes the water supply of a small nation, but you do it anyway because you want the reward, this still constitutes evil, though to a lesser extent. In the first case, the thrill and positive emotions of performing the destructive act are your reward. In the second case, the reward lies in the payoff from the act, and the suffering you cause is simply unimportant to you.
This is looking at it from an individual level, however. From a societal perspective, the ‘lesser’ evildoers, the sociopaths, are probably more dangerous, because they differ less from the average person. They are just ready to go that one extra step to attain what they crave. Put them in situations where that kind of behavior is rewarded and responsibility is diffused or non-existent, and they are likely to thrive.
Lost languages leave traces on the brain
Babies’ brains adjust to listening to a language, even if they never learn it.
by Cathleen O’Grady Nov 21 2014, 10:45pm CET
Our brains start soaking in details from the languages around us from the moment we can hear them. One of the first things infants learn of their native languages is the system of consonants and vowels, as well as other speech sound characteristics, like pitch. In the first year of life, a baby’s ear tunes in to the particular set of sounds being spoken in its environment, and the brain starts developing the ability to tell subtle differences among them—a foundation that will make a difference in meaning down the line, allowing the child to learn words and grammar.
But what happens if that child gets shifted into a different culture after laying the foundations of its first native language? Does it forget everything about that first language, or are there some remnants that remain buried in the brain?
According to a recent PNAS paper, the effects of very early language learning are permanently etched into the brain, even if input from that language stops and it’s replaced by another language. To identify this lasting influence, the researchers used functional magnetic resonance imaging (fMRI) scans on children who had been adopted to see what neural patterns could be identified years after adoption.
Because not all linguistic features have easily identifiable effects on the brain, the researchers decided to focus on lexical tone. This is a feature found in some languages that allows a single arrangement of consonants and vowels to have different meanings that are distinguished by a change in pitch. For example, in Mandarin Chinese, the word “ma” with a rising tone means “hemp”—the same syllable with a falling tone means “scold.”
People who speak tone languages have differences in brain activity in a certain region of the brain’s left hemisphere. This region activates in response to pitch differences that are used to convey a difference in linguistic meaning; non-linguistic pitch is processed in the right hemisphere. Tone information is learned very early in life: infants learning Chinese languages (including Mandarin and Cantonese) show signs of recognizing tonal contrasts as early as four months.
The researchers focused on 21 Chinese children who had been adopted early in life. The average age of the children at adoption was 12.8 months, which meant that they were likely to have learned to recognize tone before being adopted. Since adoption, the children had been exposed exclusively to French, had grown up as French monolingual speakers, and had no remaining conscious knowledge of Chinese.
As controls, the researchers used 11 children who spoke only French, as well as a third group of 12 children who spoke both Chinese and French. The children, all between 9 and 17 years old, completed a task involving tone discrimination while in the fMRI scanner. They heard pairs of phrases made up of nonsense words using Chinese speech sounds (like “brillig” or “strint” in English), or hummed phrases with nothing but tone information. Each pair of terms was either identical or had a difference in tone on the last syllable. The children were asked to press a button to show whether the final syllable was different or the same.
All of the children were able to answer with very high accuracy, and there were no differences between the groups on either accuracy or reaction times. However, their fMRI scans showed a difference in how they processed the information.
Chinese-French bilingual children used the specialized left-hemisphere brain region found in speakers of tone languages, while French monolingual speakers used only their right hemispheres, as they would for processing any complex sound. The adopted children, who no longer spoke any Chinese, showed the same pattern as the Chinese-French bilinguals: their brains showed activation in the specialized tone region in the left hemisphere.
There was also a stronger activation among children who had been older when they were adopted. The researchers suggest that this indicates that the representation of lexical tone in the brain gets strengthened with more exposure to it. However, the length of time since the children had been adopted made no difference to the amount of activation in the brain, possibly indicating that, once the representation of tone in the brain has been established, time doesn’t weaken or erase it.
What makes this study particularly useful, says Dr Cristina Dye, a researcher who studies childhood language acquisition, is that lexical tone is very well suited to probing this question. Previous studies tackling the same question used tasks that required more complex linguistic knowledge, which children are less likely to have learned at a very young age. Lexical tone also has the benefit of being very difficult for adults to learn, meaning that traces of it are most likely from early childhood.
As with many fMRI studies, the sample sizes are small. This is due to the expense of the technology, as well as the stringent criteria for participants. Nevertheless, the results corroborate behavioral studies that have shown similar traces of lost languages, says Dye.
The next thing to determine, write the researchers, is whether the neural traces of the first forgotten language can affect how subsequent languages are learned or processed by the brain. There may also be implications for learning the lost languages: people with forgotten exposure to languages may be able to learn that language faster, or more completely, than people with no exposure at all.
Is the War on Drugs doing more harm than good? In a bold talk, drug policy reformist Ethan Nadelmann makes an impassioned plea to end the “backward, heartless, disastrous” movement to stamp out the drug trade. He gives two big reasons we should focus on intelligent regulation instead.