
The World’s First Island Powered by an Off-Grid Renewable Energy System*


By Lorraine Chow

A tiny, scenic island lying off Scotland’s west coast is truly a model for sustainable, off-grid living. With no mainland electricity connection, the Isle of Eigg gets its electricity from the water, the wind and the sun.

After decades of using diesel generators, in February 2008 the residents of Eigg officially switched to their own renewable electricity supply, becoming the world’s first community to launch an off-grid electric system.

The 12-square-mile island, with its small population of 105 residents, gets ’round-the-clock power via a combination of hydroelectric generators, wind turbines, a photovoltaic array and a bank of batteries. On days when renewable resources are low or during maintenance, two 80kW diesel generators provide backup.

“The set-up that we’ve got now will carry the island all day and put charge into the batteries for the evening,” John Booth, the former director of the community-owned Eigg Electric company, told the BBC.

On days when there is a surplus of power—like when it’s particularly windy or rainy—electric heaters automatically switch on in Eigg’s church and community hall, which is ideal for keeping shared spaces warm throughout the winter.

This means “virtually no central heating in the system at all,” Booth pointed out.

“We don’t charge for it because the whole community benefits.”

As the BBC detailed, before making the transition to renewables, the island relied on noisy and expensive diesel generators that could only run for a few hours a day. But with the new power system, energy is available 24 hours a day.

Eigg residents are encouraged to use their power responsibly. Each house has a maximum use limit of 5kW at any one time, which is enough to run an electric kettle and a washing machine simultaneously, or fifty 100W light bulbs. Businesses get 10kW. Meters help residents keep their electricity use on track, and anyone who exceeds the limit is fined.
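The household cap is simple arithmetic, and a short sketch makes the trade-off concrete. The appliance wattages below are illustrative assumptions, not figures from Eigg Electric:

```python
# Sketch of Eigg's per-connection power caps: does a set of simultaneous
# appliance loads fit under the limit? Wattages are illustrative guesses.
HOUSEHOLD_CAP_W = 5_000   # per-household limit at any one time
BUSINESS_CAP_W = 10_000   # per-business limit

def within_cap(loads_w, cap_w=HOUSEHOLD_CAP_W):
    """Return True if the simultaneous loads stay within the cap."""
    return sum(loads_w) <= cap_w

# A kettle (~3 kW) plus a washing machine (~2 kW) just fits...
print(within_cap([3_000, 2_000]))          # True
# ...as do fifty 100 W light bulbs.
print(within_cap([100] * 50))              # True
# Adding a 1 kW heater on top of kettle and washer would trip the limit.
print(within_cap([3_000, 2_000, 1_000]))   # False
```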

“The whole thing is run by and for the island,” Booth said.

Researchers from all around the world—Brazil, Alaska and Malawi—have visited the isle to learn how the unique system can be adapted elsewhere.

Source*


Rarest Blood Types On Earth – How Unique is Your Blood?*


There are certain blood types that are very rare on Earth, and the people who carry them can be found in all corners of the world. Are you perhaps one of them?

What are the rarest blood types on Earth?

To answer this question, we must first understand that a blood type is classified as rare if fewer than 1 in 1,000 people have it. There are currently 33 recognized blood group systems, and more may yet be discovered. Some of these blood groups are extremely rare, making the people who carry them truly unique.

What Determines a Human’s Blood Type?

There are 8 basic blood types: A, B, AB and O, each in a positive or negative variation. These 8 types can be divided into millions of varieties, which makes blood research genuinely complicated.

The classifications are derived from the antigens on a person’s blood cells – antigens being molecules on the surface of the cells that the immune system uses to recognize them. If a particular high-prevalence antigen is missing from your red blood cells, then you are “negative” for that blood group.

The most common of these blood types is O. The origin of our blood types still remains a great scientific riddle.

According to the American Red Cross, the rarest of the basic blood types is AB(-), present in about 1% of Caucasians and even rarer among African Americans. B(-) and O(-) are also very rare, each accounting for less than 5% of the world’s population.

Not all ethnic groups have the same mix of these blood types. Latino-American people, for example, have a relatively high number of O types, while Asian people have a relatively high number of B types. The mix of the different blood types in the U.S. population is shown in the accompanying image. Image credit: American Red Cross


Bombay Blood Group – One of The World’s Rarest Blood Groups

A blood type few have heard of is the Bombay blood group. It is one of the world’s rarest, and people who carry it can accept blood only from another Bombay-type individual – not from anyone of type O, A, B or AB.

It is estimated that the Bombay blood group occurs in about 1 in 10,000 people in India and 1 in 1,000,000 in Europe.

The h/h blood group, also known as Oh or the Bombay blood group, was discovered in 1952 in Bombay, India by Dr. Y. M. Bhende.

Finding a donor for someone with Bombay blood is very difficult.

It was first identified when two patients needed blood transfusions but none of the blood types known at the time worked for them: the moment their blood samples were mixed with any of those types, the blood coagulated, or clumped up.

The Bombay (Oh) phenotype is characterized by the absence of A, B, and H antigens on red cells.
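The transfusion rule described above can be sketched as a toy compatibility check. The antigen and antibody sets below are a deliberate simplification covering only the ABO/H system; real cross-matching involves many more blood group systems:

```python
# Toy model of red-cell compatibility: transfused blood clumps if the donor's
# cells carry an antigen the recipient has antibodies against. Simplified to
# the ABO/H antigens discussed above.
RED_CELL_ANTIGENS = {
    "O":      {"H"},
    "A":      {"H", "A"},
    "B":      {"H", "B"},
    "AB":     {"H", "A", "B"},
    "Bombay": set(),          # Oh phenotype: no A, B or H antigens
}

def antibodies(blood_type):
    """Recipients make antibodies against the ABO/H antigens they lack."""
    return {"H", "A", "B"} - RED_CELL_ANTIGENS[blood_type]

def compatible(donor, recipient):
    """True if none of the donor's antigens meet a recipient antibody."""
    return not (RED_CELL_ANTIGENS[donor] & antibodies(recipient))

# A Bombay recipient reacts even to type O blood (it carries the H antigen)...
print(compatible("O", "Bombay"))       # False
# ...and can accept only blood from another Bombay-type donor.
print(compatible("Bombay", "Bombay"))  # True
```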

Rh-Null – A “Universal Blood”

What makes Rh-null very rare and special is that it lacks all antigens in the Rh system. There are only 9 active donors with Rh-null blood in the worldwide community of rare blood donors, which means that people with Rh-null can have difficulty obtaining lifesaving blood.

Rh-null was first described in 1961 in an Aboriginal Australian woman. At the time doctors were rather surprised because it was assumed an embryo missing all Rh blood-cell antigens would not survive, let alone grow into a normal, thriving adult. By 2010, about 43 people with Rh-null blood had been reported worldwide.

The Lu(a-b-) phenotype or Lunull – Extremely Rare Blood Group

The Lutheran blood group was initially described in 1945. It got its name because of a misinterpretation of the patient’s name, Luteran.

The Lu(a−b−) phenotype is extremely uncommon and is known to have three genetic backgrounds. Tests on 250,000 blood donors put the frequency of Lu(a−b−) at approximately 1 in 3,000. So this blood type is very rare!
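Taken at face value, those two figures imply roughly how many Lu(a−b−) individuals such a screen would have turned up:

```python
# Expected count of Lu(a-b-) donors in the screen described above,
# assuming the reported frequency of roughly 1 in 3,000.
donors_tested = 250_000
frequency = 1 / 3_000
expected = donors_tested * frequency
print(round(expected))   # about 83 donors out of a quarter of a million
```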

Rh Negative – One of the Most Unexplained Blood Types

Rh negative is not rare, but it is nevertheless a real scientific mystery. No one has been able to explain where people with Rh-negative blood came from. Most of those familiar with blood factors admit that these people must be the result of a random mutation, if not descendants of a different ancestor. What caused the mutation? Who was this unknown ancestor?

There is some evidence suggesting that the Rh-negative blood group may have appeared about 35,000 years ago. Its appearance did not follow the usual evolutionary path; in fact, evolution would seem to be ruled out as a possible cause of the anomaly. It has been argued that blood is among the least likely things to mutate and that there are no other comparable blood mutations – that the introduction of the Rh-negative blood type was not a naturally occurring part of human evolution.

The blood type is not rare, but it still poses a great scientific mystery.

Source*


Pesticide Residues Detected in Almost all European Foods*


By Marine Jobert

More than 97% of European food products contain pesticide residues, according to analyses carried out by the E.U.’s national authorities. EURACTIV’s partner Journal de l’Environnement reports.

The European Food Safety Authority’s (EFSA) annual compilation of results from studies across the E.U. on the presence of pesticides in food products held no surprises. Of the 84,341 samples of produce from conventional agriculture analysed, 97.2% contained traces of one or more of 774 pesticides.

Highly contested limit values

53.3% of the samples tested in 2015 were “free of quantifiable residues” – which does not mean they were pesticide-free – while 43.9% contained residues “not exceeding legal limits”. Meanwhile, 99.3% of organic food was free from residues or within legal limits.

To meet E.U. standards, the residues of any pesticide present in a product must not exceed two times the legal limit. But these values are highly contested, particularly for endocrine disruptors, which can be active at very low concentrations.

Bananas, the multi-residue champions

In 2015, the analysts’ shopping basket included bananas, aubergines, broccoli, virgin olive oil, orange juice, peas, peppers, raisins, wheat, butter and eggs. While some samples of each product were found to contain residues of multiple pesticides, the results for bananas (58.4%) and raisins (58.3%) were the most striking, followed by peppers (24.4%).

Unauthorised pesticides

Three quarters of the sample batch came from E.U. countries (plus Norway and Iceland), with the other quarter coming from unspecified third countries. These imports pose the greatest risk to consumers, with 5.6% found to contain pesticide residues above the E.U. limits. Among E.U.-sourced produce, 1.7% of samples were over the legal limits. One third of all the pesticides detected are illegal in the European Union.

Source*


Your Brain is not a Computer*


By Robert Epstein

No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
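The encoding described here can be seen directly in a few lines of Python, which print the actual bit pattern standing for each letter of dog:

```python
# Each letter of "dog" becomes one byte (8 bits) under the ASCII encoding.
word = "dog"
encoded = word.encode("ascii")
for letter, byte in zip(word, encoded):
    print(letter, format(byte, "08b"))   # the 8-bit pattern for each letter
# d 01100100
# o 01101111
# g 01100111
```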

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.


Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.

This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.

Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
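The invalidity of that syllogism can even be checked mechanically. In the toy universe below (membership is purely illustrative), both premises hold while the conclusion fails, because one intelligently behaving entity is not an information processor – exactly the possibility the faulty conclusion rules out:

```python
# A toy universe for the syllogism, modelled with Python sets.
# Subset (<=) encodes "all X are Y". Membership is illustrative only.
computers = {"laptop", "server"}
intelligent = {"laptop", "server", "human"}
info_processors = {"laptop", "server"}

premise1 = computers <= intelligent        # all computers behave intelligently
premise2 = computers <= info_processors    # all computers are info processors
conclusion = intelligent <= info_processors

print(premise1, premise2)   # True True
print(conclusion)           # False: both premises hold, the conclusion fails
```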

Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, someday, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.

If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):

And here is the drawing she subsequently made with a dollar bill present:

Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?

Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.


A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.

The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?

So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.

The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.

Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.

From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.

As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types:

(1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens);

(2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars);

(3) we are punished or rewarded for behaving in certain ways.

We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.

A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.

A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
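This optical property can be illustrated with a small, assumption-laden simulation: for drag-free projectile motion, the tangent of the ball's elevation angle, as seen from the landing point, grows exactly linearly in time, so a fielder who keeps that optical rise constant ends up where the ball comes down. The launch parameters below are arbitrary:

```python
# Simplified physics sketch (no drag, fielder standing on the landing spot):
# the tangent of the ball's elevation angle, seen from the landing point,
# grows linearly in time.
import math

g, v, launch_deg = 9.81, 30.0, 45.0     # assumed launch parameters
vx = v * math.cos(math.radians(launch_deg))
vy = v * math.sin(math.radians(launch_deg))
T = 2 * vy / g                          # time of flight
R = vx * T                              # landing distance

def tan_elevation(t, fielder_x=R):
    """tan(angle) from the fielder's eye to the ball at time t."""
    x = vx * t                          # ball's horizontal position
    h = vy * t - 0.5 * g * t * t        # ball's height
    return h / (fielder_x - x)

# Sampled at equal intervals, tan(angle) rises by a constant amount each step,
# so keeping the ball's optical rise constant is enough to make the catch.
samples = [tan_elevation(0.2 * i) for i in range(1, 5)]
diffs = [b - a for a, b in zip(samples, samples[1:])]
print(all(abs(d - diffs[0]) < 1e-9 for d in diffs))   # True: linear growth
```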


Two determined psychology professors at Leeds Beckett University in the U.K. – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.

One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.

Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.

Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.

Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
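To get a feel for the scale being described, here is a toy back-of-envelope calculation – illustrative only, using the round numbers quoted in the paragraph above, and ignoring everything the author says a real account would also need (connection strengths, moment-to-moment dynamics, life history):

```python
# Round numbers quoted in the text (all approximate):
neurons = 86 * 10**9                  # ~86 billion neurons
connections = 100 * 10**12            # ~100 trillion interconnections
proteins_per_connection = 1000        # >1,000 proteins at each connection point

# Even a single static "snapshot" would need at least one value per
# protein per connection:
state_values = connections * proteins_per_connection   # 10**17 values

# At one byte per value, that is on the order of 100 petabytes --
# for one frame, of one brain, with no dynamics captured at all.
snapshot_bytes = state_values
print(snapshot_bytes / 10**15, "petabytes")
```

Even this deliberately crude estimate leaves out the 86 billion neuron states themselves, which is the author's point: the bookkeeping alone is staggering before any question of meaning arises.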

Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, E.U. officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.

We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.

Source*

Related Topics:

Your brain does not process information, or…*

The Brain-Shrinking Effects of a Junk Food Diet*

Neuroscientists Discover New ‘mini-neural computer’ in the Brain*

Parents Told Five Times to Abort Boy with ‘no brain’ and Now He’s a Thriving 4-year-old*

MRI Study Shows Spaceflight Physically Changes Astronauts’ Brains*

How Lying Takes our Brains Down a slippery slope*

Apple’s New ‘Wireless’ Headphones Emit Radiation … Right Next to Your Brain*

Mental Illness and the Gut-Brain Connection*

Common Drugs, Including Benadryl And Xanax, Cause Brain Atrophy And Increase The Risk Of Alzheimer and Dementia*

Dissected Open Brains of Nazi Victims Discovered in German Psychiatric Institute*

Music Training Speeds Up Brain Development in Children*

Scientists Discover what Traditional and Alternative Health Practitioners Know, the Immune System is Connected to the Brain*

MKUltra in Norway: Researcher Put Electrodes in People’s Brains for the U.S. Government*

CIA Mind Control: The Philadelphia Experiment on Americans

U.K. Scientists Use Brain Stimulation to ‘Make You Stop Believing In God’*

Study Finds Antipsychotic Drugs Shrinks the Brain*

’Brain-eating amoeba’ kills Texas Teen Training for the Olympics*

Modern Parenting is Preventing Brain Development*

Aborted Baby’s Heart was Beating as the Brain was Harvested*

Similarities between the Brain and the Universe*

$63 Million to Brain-Damaged Victims of Swine Flu Vaccine*

Media Multi-Tasking Shrinks the Brain and Causes Mood Swings*

Antidepressants Change the Functionality of the Brain*

It’s a Myth that We Only Use 10% of Our Brains*

 

20,000 Pakistani Schools to Go Solar*

20,000 Pakistani Schools to Go Solar*

By Lorraine Chow

About 20,000 schools in the province of Punjab in Pakistan will convert to solar power, according to government officials.

Punjab chief minister Muhammad Shahbaz Sharif reviewed the progress of the “Khadim-e-Punjab Ujala Programme” to install solar rooftop systems on the area’s schools at a recent meeting.

The project will kick off in Southern Punjab schools and expand in phases across the province, according to a local report.

The Asian Development Bank and France’s AFD Bank are backing the program, Cleantechnica reported. This is the first program of its kind in the country.

In Pakistan, nearly half of all residents are not connected to the national grid. Residents who are connected to the grid regularly experience rolling blackouts and power outages. And the problem is only expected to get worse in the coming years.

Renewable resources can help mitigate this growing energy crisis. Pakistan happens to be rich in solar, as the Express Tribune described:

“With eight to nine hours of sunshine per day, the climatic conditions in Pakistan are ideal for solar power generation. According to studies, Pakistan has 2.9 million megawatts of solar energy potential besides photovoltaic opportunities.

“According to figures provided by FAKT, Pakistan spends about $12 billion annually on the import of crude oil. Of this, 70 percent oil is used in generating power, which currently costs us Rs18 per unit. Shifting to solar energy can help reduce electricity costs down to Rs 6-8 per unit.”
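Taking the Express Tribune's quoted figures at face value, the implied saving per unit is easy to check; a minimal sketch (illustrative only, using the article's numbers):

```python
def per_unit_saving(current_cost=18, solar_low=6, solar_high=8):
    """Return the (min, max) rupee saving per unit of electricity,
    given the quoted current cost and the quoted solar cost range."""
    return current_cost - solar_high, current_cost - solar_low

lo, hi = per_unit_saving()
# A saving of Rs 10-12 per unit -- roughly a 56-67% reduction
# relative to the quoted Rs 18 per unit.
print(lo, hi)
```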

Solar energy has made great strides in Pakistan in recent years. In February 2016, its parliament became the first national assembly in the world to be powered entirely by solar energy. The legislative body, known as the Majlis-e-Shoora, is in the capital city of Islamabad.

One of the world’s largest solar farms is currently under construction in Punjab. Developers of the 1,000-megawatt Quaid-i-Azam Solar Park in Bahawalpur have already added hundreds of megawatts of energy to the national grid.

Source*

Related Topics:

Pakistan Gv’t Warns the Country to Prepare for Global Cooling*

Thailand Introduces World’s First Solar Powered Hydrogen Houses*

India has Built the World’s Largest Solar Power Plant*

South Africa Joins India with a Solar-powered Airport*

U.S. Google-owned Solar Plant Incinerates 6,000 Birds per Year*

15 More Nations Ready to Sign onto the China’s AIIB*

Trauma and the Lineage of Illness*

Trauma and the Lineage of Illness*

Delphinium staphysagria

By Carina Lopez

Tolle Totum

Hahnemann writes in paragraph 78 of the Organon of Medicine that “true natural chronic disease arises from a chronic miasm.” A miasm is a series of reactions to abuses in life. These include dietary passions, habits, and environmental factors that affect generations of families through chronic illness.

Inherited and Suppressed Anger

“The women in my family are the martyrs for the men in my family,” Teresa stated, after recounting the generations of sexual and physical abuse the women in her family had suffered silently. Teresa had come to my office for help with her frequent panic attacks and debilitating anxiety and depression, from which she had suffered for as long as she could remember. She had already tried anxiety and depression medications, but they caused her to feel even more apathetic and disconnected from the world around her.

On the surface, Teresa had a very sweet and happy disposition. She smiled all the time, she was warm, and her coworkers adored her; however, she found it difficult to stand up for herself and remained a pushover on the job until a number of transgressions occurred, at which time she would explode.

Teresa was born with jaundice. In traditional Chinese medicine (TCM) ideology, jaundice relates to a perturbation of the liver, the organ considered “the seat of anger.” An unhealthy liver spurs an angry human being, and excessive anger damages the liver further, creating a vicious cycle. In addition, when Teresa became anxious, her heart fluttered away with palpitations. In TCM, the heart relates to joy. A morose, anxious person, according to ancient medical traditions as far back as Hippocrates, bears an unhealthy heart.

From a homeopathic perspective, the trauma endured by Teresa’s mother and earlier generations of women in her family was now ingrained in Teresa’s very being, perpetuating early and chronic illness. Teresa had developed a miasmatic reaction due to the sustained abuse of her ancestors and herself.

Elisabeth Kübler-Ross describes five common experiences of grief that may occur in any order after trauma:

  • Denial, or shock and disbelief, regarding the trauma
  • Anger, often misplaced onto anything and anyone nearby
  • Bargaining, as a way to negotiate, find excuses, and displace blame
  • Depression, a deep despair of ever overcoming the trauma
  • Acceptance, when the sufferer comes to terms with her trauma

These aspects of grieving are multi-faceted and manifest differently in every individual. Teresa’s inability to appropriately express her anger at work, leading to panic and explosive anger, indicated an urgent need to process her response to her grief and trauma.

I chose the homeopathic remedy Delphinium staphysagria, a beautiful purple flower that has been used medicinally for centuries. To many people, the color purple represents congealed blood, as when our blood boils from anger. The flower is toxic in its whole form, but has been used homeopathically to treat depression and hysteria with much success. William Boericke, MD, in his Homeopathic Materia Medica, describes Staphysagria as “necessary for those showing violent outbursts of passion.”

Teresa took a 200C potency BID along with herbs such as mimosa (Mimosa pudica), passion flower (Passiflora incarnata) and hawthorn (Crataegus oxyacantha). She changed her diet based on her TCM constitution, received frequent acupuncture treatments, and had the time and safe space during consultations to express herself and be fully heard.

Soon enough, Teresa’s anxiety decreased markedly, and she described feeling more centered, more calm and more in control. In social interactions, she found she was more readily speaking up for herself and not losing her temper. Alternative medicine had touched her miasm and triggered her progression through her underlying grief, shining a light toward her healing.

Like Mother, Like Daughter

Teresa then brought in her mother, who suffers from a depression she attributed to many years and generations of abuse. Her mother told me that, during countless years of abuse, she never once cried. She was stuck in a depressed state, and even after leaving the abusive situation, she had not been able to cry or come to a place of acceptance.

Four days after a single dose of Natrum muriaticum, 200C, she called me from the emergency room. She had begun crying the day after taking the remedy and had not stopped, which made her scared that something was wrong with her. I explained that crying was pivotal to freeing herself from suffering and that she was now moving toward acceptance. I could hear the smile in her voice even as her tears rolled down.

As time passes and Teresa and her mother move closer to their healing, I pause and wonder if Teresa has averted the passing on of family trauma to the next generation. Only time will tell, but my hopes are high. It is beautiful to see a mother and daughter working on their grief together in unity.

The Intergenerational Impact of Trauma

The young field of epigenetics has linked cancer, heart disease, respiratory diseases, and autoimmune conditions in offspring with environmental exposures in the parent. Working with the adult offspring of Holocaust survivors, researcher Rachel Yehuda demonstrated the trans-generational transmission of cortisol dysregulation and the increased risk of post-traumatic stress disorder (PTSD) in those born to mothers who experienced PTSD compared to those born to mothers without PTSD.

This leads me to speculate whether the Holocaust had an epigenetic impact on genes associated with breast cancer and high breast cancer susceptibility in Ashkenazi Jews, and whether the historic trauma of slavery and Jim Crow terror have a role in the higher prevalence of hypertension among African Americans. I watch with interest as research in this field develops.

A Happy Ending

Today I received a call from Elsie, a young woman I saw quite some time ago, who was abandoned as an infant and has since suffered extended abuse. She came to see me for her horrible dysmenorrhea and uncontrollable anger. Staphysagria really moved her case, too. She cried for 6 months and was debilitated by the crying for 3 of them.

Today, she was concerned that her remedy had gotten old sitting in the sun. She hadn’t touched it in months but wondered if she needed a new bottle. Since our visits, Elsie has developed the strength to leave a dead-end job and move to California. She established healthier boundaries in her relationships and began to pursue a vocation as a spiritual healer, living on a ranch and using the healing power of horses. She called me happy, spirited and free. I told her she was fine now and not to worry about needing the remedy at this point. We both laughed in agreement. She has worked through her stages of grief, and my job with her is done. I look forward to the possibility of seeing her healthy children one day.

Source*

Related Topics:

Behind the Masks of the Feminine XIII: Nux Vomica

Behind the Masks of the Feminine IIX: Thuja

Behind the Masks of the Feminine IX: Arsenicum Album

Behind the Masks of the Feminine X: Phosphorus

Behind the Masks of the Feminine IX: Sulphur

Behind the Masks of the Feminine VIII: Calcarea carbonica

Behind the Masks of the Feminine VII: Sepia

Behind the Masks of the Feminine VI

Behind the Masks of the Feminine V

Behind the Masks of the Feminine IV

Behind the Masks of the Feminine III

Behind the Masks of the Feminine II

The Feminine Connection to the Homeopathic Sea of Life

Swiss to Recognise Homeopathy as Legitimate Medicine*

Deadline to Keep Homeopathy as a Health Choice*

Federal Government Works with Pharmaceutical Companies to Prevent Natural Cures*

The Oldest Known Modern Man in Ethiopia*

The Oldest Known Modern Man in Ethiopia*

Depiction of what the ancient ‘Herto Man’ may have looked like. His skull dates to 160,000 years ago. (Bradshaw Foundation)

 

The El Niño weather phenomenon of 1996-97 wreaked havoc on many parts of the world; however, it also enabled one team of scientists to make an incredible discovery. When the skies cleared and the floodwaters dried, a group of palaeontologists in Ethiopia’s Afar region unearthed three human skulls as well as numerous other human bone fragments. After years of reconstruction and analysis, the remains were dated to approximately 160,000 years ago. The so-called ‘Herto skulls’ were thus older than their closest competitors by tens of thousands of years. Some experts believe they deserve their own subspecies classification: Homo sapiens idaltu.

The Afar Research Site: Home of the Herto Skulls

The team consisted of researchers from the University of California, Berkeley, and from the Ethiopian Rift Valley Research Service. The state of Afar is located in the northeastern corner of Ethiopia and covers 27,820 square miles (72,053 sq km). Yet the area that has long held the palaeontologists’ particular interest is the Afar Triangle (or Afar Depression), a geological depression formed at the junction of three diverging tectonic plates: the Nubian, Somalian, and Arabian. It is one of the lowest places in Africa and frequently holds the title of the hottest place on Earth. It also contains the world’s largest lava lake, formed by the most continuously active volcano, Erta Ale. The region is home to the Afar people, considered to be “the toughest people in the world” (Onuh, 2016).

From this region, one of the earliest known hominin fossils was discovered in 1974: a female Australopithecus afarensis known affectionately as Lucy. And here, in 1997, the Herto team discovered the oldest Homo sapiens remains.

Erta Ale is an active shield volcano located in the Afar Region of northeastern Ethiopia, within the Danakil Desert. (CC BY-SA 2.0) Remains such as the skulls known as the ‘Herto fossils’ were discovered here.

 

The Serendipitous Discovery of the Skulls

In 1996-97, El Niño caused punishing rains to fall throughout much of Eastern Africa. The deluge drove many of the semi-nomadic Afar people, including those of the Herto village, to abandon the Depression for higher ground. The rains washed a good deal of soil into the Awash River, exposing numerous fossils. Because the people and their herds had moved to higher ground, these newly unearthed bones were not trampled and remained undamaged, waiting to be discovered.

“When the scientists returned 11 days later, it took them only minutes to find the skulls of two adults, probably male. Six days after that, Dr. Berhane Asfaw of Ethiopia’s Rift Valley Research Service found a third, the skull of a 6- or 7-year-old child, shattered into about 200 pieces. After years of painstaking cleaning, reassembly, and study, the team was confident enough to tell the world that it had found the earliest true Homo sapiens — older by at least 1,000 generations than anything previously discovered” (Lemonick and Dorfman, 2003).

Although the child’s skull appeared almost identical to modern human children skulls, the adults showed marked differences. “Each of the adult skulls was remarkably big. ‘We compared this with skulls of 6,000 modern humans, and still after that comparison not one was as big and robust as the Herto male,’ said Tim White, a University of California, Berkeley paleontologist and co-leader of the international team that found and studied the skulls. ‘These were very, very large robust people.’” (Joyce, 2003)

Nonetheless, in every feature the skulls are those of modern humans. “The face is flat with prominent cheekbones, but without the protruding brow ridge of pre-human ancestors or Neanderthals. And the braincase is rounded, like a soccer ball, rather than the football shape of earlier human ancestors.” (Joyce, 2003) For this reason, the team proposed classifying the remains as a subspecies of modern humans, Homo sapiens idaltu – ‘idaltu’ meaning ‘elder’ in the Afar language.

A Herto skull, Homo sapiens idaltu

 

Features of the Skulls

The similarity in features finally puts to rest the long-standing controversy over the origin of modern humans. While it is known that pre-human species left Africa and settled in Europe, the Middle East, and Asia, for decades it was unclear how these scattered populations could all have developed into the same species, Homo sapiens. The answer is now clear: modern humans also developed in Africa and also left (most likely due to climate change). This second wave of African migrants interbred with and/or displaced the pre-human species, as can be seen in the well-studied case of the Neanderthals (one of the species that left Africa in the first wave).

“What this discovery in Ethiopia shows is that the shared features of modern humans – our high-rounded brain case, small brow ridges – originated in Africa,” said Chris Stringer from the Museum of Natural History in London (Joyce, 2003).

Comparison of Modern Human and Neanderthal skulls from the Cleveland Museum of Natural History. (CC BY-SA 2.0)

 

A Post Mortem on the Ancient Skulls

Perhaps more interesting to the casual reader of paleo-discoveries is the treatment the skulls received immediately after their owners’ deaths 160,000 years ago. Each of the three intact skulls, as well as the (possibly) 10 skull fragments found at the Herto site, bore marks of deliberate modification after death – not, the evidence suggests, cannibalism. Rather, the Herto fossils show the earliest known evidence of mortuary practices.

“Cut marks on the skulls indicate that the overlying skin, muscles, nerves and blood vessels were removed, probably with an obsidian flake. Then a stone tool was scraped back and forth, creating faint clusters of parallel lines. The modification of the child’s skull is even more dramatic. The lower jaw was detached, and soft tissues at the base of the head were cut away, leaving fine, deep cut marks. Portions of the skull were smoothed and polished.” (Lemonick and Dorfman, 2003)

Skull of the six to eight-year-old child, found in 1997, shows evidence of cut marks and polish after death. (CC BY-SA 3.0)

 

“‘The cut marks aren’t a classic sign of cannibalism,’ White said while showing the skulls to a TIME reporter in Addis Ababa. ‘If you wanted to get at the brain in order to eat it, you’d just smash open the skull.’ Instead, he suspects, the scratches might be a form of decoration. As for the polished areas, he says, ‘we know they weren’t caused by the environment, because the marks go across the breaks between the recovered pieces. The child’s skull looks as though it has been fondled repeatedly.’

“‘This,’ concludes White, ‘is the earliest evidence of hominids continuing to handle skulls long after the individual died.’” (Lemonick and Dorfman, 2003)

 

Source*

Related Topics:

A 200,000 Year-Old City in Southern Africa pre-Dates Sumer*

A Field View of Reality to Explain Human Interconnectedness*

Hidden Human History*

Human DNA Tied Mostly to Single Exodus from Africa Long Ago*

Reflections on the Idea of a Common Humanity*

The Cosmic Joke behind Human Genetics*

Secret Meeting at Harvard Discusses Synthetic Humans*

The Human Body Emits, Communicates with, and is Made from Light*

The Hidden History of the Human Race*

Humanity at the Crossroads: The Crisis in Spiritual Consciousness

DNA Study Finds Ice Age Europeans Predominantly Had Dark Complexions and Brown Eyes*

The Genocide of the Peoples of Europe*