Monday 14 December 2015

20 Things That Help Us Understand A Highly Creative Mind....

There’s no argument anymore. Neuroscience confirms that highly creative people think and act differently than the average person. Their brains are literally hardwired in a unique way. But that gift can often strain relationships.
If you love a highly creative person, you probably experience moments when it seems like they live in a completely different world than you. Truth is, they do. But trying to change them isn’t nearly as effective as trying to understand them.
It all begins by seeing the world through their lens and remembering these 20 things:
1. They have a mind that never slows down.
The creative mind is a non-stop machine fuelled by intense curiosity. There is no pause button and no way to power it down. This can be exhausting at times but it is also the source of some crazy fun activities and conversations.
2. They challenge the status quo.
Two questions drive every creative person more than any others: What if? Why not? They question what everyone else takes at face value. While this can be uncomfortable for those around them, it's this ability that enables creatives to redefine what's possible.
3. They embrace their genius even if others don’t.
Creative individuals would rather be authentic than popular. Staying true to who they are, without compromise, is how they define success, even if it means being misunderstood or marginalized.
4. They have difficulty staying on task.
Highly creative people are energized by taking big mental leaps and starting new things. Existing projects can turn into boring slogs when the promise of something new and exciting grabs their attention.
5. They create in cycles.
Creativity has a rhythm that flows between periods of high, sometimes manic, activity and slow times that can feel like slumps. Each period is necessary and can't be skipped, just as the natural seasons are interdependent and necessary.
6. They need time to feed their souls.
No one can drive cross-country on a single tank of gas. In the same way, creative people need to frequently renew their source of inspiration and drive. Often, this requires solitude for periods of time.
7. They need space to create.
Having the right environment is essential to peak creativity. It may be a studio, a coffee shop, or a quiet corner of the house. Wherever it is, allow them to set the boundaries and respect them.
8. They focus intensely.
Highly creative people tune the entire world out when they’re focused on work. They cannot multi-task effectively and it can take twenty minutes to re-focus after being interrupted, even if the interruption was only twenty seconds.
9. They feel deeply.
Creativity is about human expression and communicating deeply. It's impossible to give what you don't have, and you can only take someone as far as you have gone yourself. A writer once told me that an artist must scream at the page if they want a whisper to be heard. In the same way, a creative person must feel deeply if they are to communicate deeply.
10. They live on the edge of joy and depression.
Because they feel deeply, highly creative people often can quickly shift from joy to sadness or even depression. Their sensitive heart, while the source of their brilliance, is also the source of their suffering.
11. They think and speak in stories.
Facts will never move the human heart like storytelling can. Highly creative people, especially artists, know this and weave stories into everything they do. It may take them longer to explain something, but explaining isn't the point. The experience is.
12. They battle Resistance every day.
Steven Pressfield, author of The War of Art, writes:
“Most of us have two lives. The life we live, and the unlived life within us. Between the two stands Resistance.”
Highly creative people wake up every morning, fully aware of the need to grow and push themselves. But there is always the fear, Resistance as Pressfield calls it, that they don’t have what it takes. No matter how successful the person, that fear never goes away. They simply learn to deal with it, or not.
13. They take their work personally.
Creative work is a raw expression of the person who created it. Often, they aren’t able to separate themselves from it, so every critique is seen either as a validation or condemnation of their self-worth.
14. They have a hard time believing in themselves.
Even the seemingly self-confident creative person often wonders, Am I good enough? They constantly compare their work with others and fail to see their own brilliance, which may be obvious to everyone else.
15. They are deeply intuitive.
Science still fails to explain the How and Why of creativity. Yet, creative individuals know instinctively how to flow in it time and again. They will tell you that it can’t be understood; only experienced first hand.
16. They often use procrastination as a tool.
Creative people are notorious procrastinators because many do their best work under pressure. They will subconsciously, and sometimes purposefully, delay their work until the last minute simply to experience the rush of the challenge.
17. They are addicted to creative flow.
Recent discoveries in neuroscience reveal that “the flow state” might be the most addictive experience on earth. The mental and emotional payoff is why highly creative people will suffer through the highs and lows of creativity; it's what gives them staying power. In a real sense, they are addicted to the thrill of creating.
18. They have difficulty finishing projects.
The initial stage of the creative process is fast moving and charged with excitement. Often, they will abandon projects that are too familiar in order to experience the initial flow that comes at the beginning.
19. They connect dots better than others.
True creativity, Steve Jobs once said, is little more than connecting the dots. It’s seeing patterns before they become obvious to everyone else.
20. They will never grow up.
Creative people long to see through the eyes of a child and never lose a sense of wonder. For them, life is about mystery, adventure, and growing young. Everything else is simply existing, and not true living.

Wine + 1p - How To Save A Spoiled Bottle of Wine....


Good news wine lovers: You can revive a stale bottle of your favourite vintage with a simple chemistry experiment. More good news: It'll only cost you a penny.

A new video from the American Chemical Society (ACS) explains how to do this super cheap, wine-saving "life hack" at home.

Simply pour a glass of spoiled wine (you'll know it's spoiled if it has a funky, sulphuric smell, akin to burnt rubber or rotten eggs) and drop in a clean copper penny. Be sure to give the penny a good scrub before you toss it in with the wine to clean off any grime. Stir the penny around in the glass; then remove it, and take a sip. If all goes as it should, your penny-infused wine will have lost its rotten-egg tinge.

There's a scientific explanation for how this simple trick works. The copper in the penny interacts with thiols, or stinky sulphur compounds, in your glass of wine. The wine has thiols as a result of a common part of the grape-fermentation process known as reduction, in which fermenting grape-juice sugars are kept from interacting with oxygen, the ACS said. Reduction is a complementary process to oxidation, which involves exposing these same fermenting sugars to oxygen.

Sometimes the reduction process can go into "overdrive," and that's when stinky thiols are produced. Not sure what a thiol smells like? Well, ethyl mercaptan is one thiol that might be present in your wine bottle. It smells like burnt rubber. Hydrogen sulphide, a closely related sulphur compound, smells like rotten eggs. And another thiol, methyl mercaptan, smells a whole lot like a burnt match.

But when these compounds interact with copper, the reaction produces an odourless compound known as copper sulphide. A similar reaction occurs if you stir the wine with a silver spoon, which forms odourless silver sulphide instead. Replacing smelly thiols with copper sulphide is a clever (and inexpensive) way to revamp your spoiled wine.
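For the chemically curious, here is a simplified sketch of what is thought to happen in the glass: trace copper ions at the coin's surface bind the smelly sulphur compounds and pull them out of solution as insoluble, odourless salts. This is an illustration of the general idea rather than a full mechanism, and the exact species will depend on the wine:

    Cu2+  +  H2S (rotten eggs)       ->  CuS (odourless, insoluble)  +  2 H+
    Cu2+  +  2 R-SH (other thiols)   ->  Cu(SR)2 (odourless, insoluble)  +  2 H+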

The ACS' video is part of a YouTube series called "Chemistry Life Hacks," in which viewers can learn other useful, science-inspired fixes to everyday problems. Among their other clever hacks, ACS chemists tackle how to sharpen a cutting knife using just a porcelain plate and how to check if your oven is reaching the correct temperature.

Thursday 8 October 2015

Word of the Day: Brimborion (plural brimborions) - A useless or valueless object…

It’s not a word that rises unbidden to the lips of English speakers today, nor — if the record is to be trusted — at any time. It means a thing without value or use. It was borrowed from French, where it may still be found in dictionaries, though firmly marked as literary. According to the lexicographer Emile Littré, who compiled a famous dictionary of French in the middle decades of the nineteenth century, it’s a bastardised form of the Latin breviarium, the source of breviary for the service book used by Roman Catholic priests.

The link had been explained by another lexicographer two centuries earlier. Randall Cotgrave wrote in his French-English dictionary of 1611 that the word came to mean “foolish charms or superstitious prayers, used by old and simple women against the toothache, and any such threadbare and musty rags of blind devotion”, hence something valueless. A rare appearance is in a letter of 1786 by the writer Fanny Burney, in which she refers to “Talking to your royal mistress, or handing jewels ... and brimborions, baubles, knick-knacks, gewgaws”.

It is much less weird in German, in which the closely connected Brimborium, also borrowed from French but given a Latinate ending, is an informal term for an unnecessary fuss. The sentence “du machst viel zu viel Brimborium um eine Kleinigkeit” might be translated as “you’re making a lot of fuss about nothing”.

Wednesday 7 October 2015

Word of the Day: Adoxography - Skilled Writing on an Unimportant Subject....

Few dictionaries, not even the Oxford English Dictionary, give room to this word, so it is left mostly to non-lexicographers to define it, which they often do in terms such as “good writing on a trivial or base subject”. Near, but not quite right.

It’s a modern word to describe an ancient way to train young people in the art of rhetoric. They would be challenged to compose a speech praising an unpleasant idea such as poverty, ugliness, drunkenness or stupidity. So a better definition would be “rhetorical praise of things of doubtful value”. Anthony Munday published a book on the method in 1593, a translation of an Italian work, under the title The Defence of Contraries. It contained brief disquisitional examples on topics such as “ignorance is better than knowledge” and “it is better to be poor than rich”. Its preface claimed that it would be particularly useful to lawyers.

The root is Latin adoxus, paradoxical or absurd, but not from the classical language. It was first used by the Dutch scholar Erasmus around 1536, who took it from an identical ancient Greek word that meant inglorious. It was based on the root doxa, opinion or belief, which is also the basis of doxology, a formula of praise to God, and also of paradox.

The noun was first used in 1909 in The Conflict of Religions in the Early Roman Empire by Terrot Glover, though it was preceded by the adjective, adoxographical, which appeared in the American Journal of Philology in 1903. Dr Alex Leeper, the Warden of Trinity College, Melbourne, commented in Notes and Queries that year that it was an “ungainly word” and that it “will not, it is to be hoped, take root in the language.” His hope wasn’t fulfilled, though it remains rare.

New Research Has Found an Instinct for Fairness and Generosity in Toddlers….

Anecdotally, anyone who's spent time around toddlers knows that they mostly don't like sharing their toys. Together with research showing that toddlers, like adults, get pretty attached to their things and are reluctant to give them up, this has led to a popular belief that toddlers are selfish by nature.

But a team of developmental psychologists led by Julia Ulber has published new evidence in the Journal of Experimental Child Psychology that paints a more heart-warming picture. These psychologists point out that most past research has focused on how much toddlers share things that are already theirs. The new study looks instead at how much they share new things that previously no one owned. In such scenarios, toddlers frequently show admirable generosity and fairness.

There were two main experiments. The first involved 48 pairs of 18-month-old or 24-month-old toddlers sitting together at a table, in the middle of which was a small container holding four marbles. If a toddler took a marble and placed it in a nearby jingle box, it made a fun noise. The point of the set-up (repeated four times for each toddler pairing) was to see how the pairs of toddlers would divvy up the marbles between them.

The most common outcome (44 per cent of the time) was that the toddlers divided the marbles up fairly; 37 per cent of the time the split was unequal (i.e. one child took three marbles), and 19 per cent of the time one child took all the marbles. This all took place pretty calmly, with marble steals happening only rarely. Overall, the experiment "rarely left one peer empty-handed," the researchers said, "and thus [the results] do not match the picture of the selfish toddler."

In a follow-up experiment with 128 pairs of two-year-olds, the set-up was more complex and this time, unlike the first experiment, none of the toddlers knew each other. Again, the children sat at opposite sides of a table with marbles on offer, but this time they had to pull a board sticking out of their side of the table to get the marbles to roll down into a reachable tray (marbles could again be used to make a jingle box play music). When the apparatus was designed so that there was one shared tray between the two toddlers, the toddlers shared the marbles equally about half the time. And this rose to 60 per cent if they'd had to collaborate by pulling the boards together to release the marbles.

In another variation of the set-up – possibly the most illuminating – the children had separate trays, and sometimes the researchers made it so that one child received three marbles in their tray and the other child just one. On about one third of these occasions, the results were delightful – the "lucky child" with three marbles gave up one of their marbles to their partner, willingly and unprompted. "This is the youngest age ever observed at which young children make sacrifices in order to equalise resources," the researchers said.

These acts of fairness were greater when the marbles were colour-coded so that two marbles matched the colour of one child's jingle box (located behind them) and the other two matched the other child's.  This colour-coding effect on generosity might be due to the children interpreting the colours as a sign of ownership (i.e. the idea being that this or that marble belongs to the other child because it matches their jingle box), or the colours might simply have helped the children, with their limited numerical skills, to identify a fair split in the numbers of marbles.

The researchers said their results showed that "young children are not selfish, but instead rather generous" when they're sharing resources among themselves, and that more research is needed to establish "in more detail the prosocial or other motives that influence the way in which young children divide resources."

New Genetic Evidence Suggests Face Recognition is a Very Special Human Skill…


A new twin study of the genetic influences on face recognition ability, published this month in PNAS, supports the idea that face recognition is a special skill that's evolved quite separately from other aspects of human cognition. In short, face recognition seems to be influenced by genes that are mostly different from the genes that influence general intelligence and other forms of visual expertise.

The background to this is that, for some time, psychologists studying the genetics of mental abilities have noticed a clear pattern: people's abilities in one domain, such as reading, typically correlate with their abilities in other domains, such as numeracy. This seems to be because a person's domain-specific abilities are strongly associated with their overall general intelligence and the same genes that underlie this basic mental fitness are also exerting an influence on various specific skills.

Nicholas Shakeshaft and Robert Plomin were interested to see if this same pattern would apply to people's face recognition abilities. Would they too correlate with general intelligence and share the same or similar genetic influences?

The researchers recruited 2,149 participants, including 375 pairs of identical twins, who share the same genes, and 549 pairs of non-identical twins, who share roughly half their genes, just like typical siblings (overall the sample was 58 per cent female, with an average age of 19.5 years). The participants completed a test of their face processing skills, including memorising unfamiliar faces, as well as tests of their ability to memorise cars and of their general intelligence, in terms of their vocabulary size and their ability to solve abstract problems.

Comparing the similarities in performance on these different tests between identical and non-identical twin pairs allowed the researchers to estimate how much the different skills on test were influenced by the same or different genes.

All the abilities – face recognition, car recognition and general mental ability – showed evidence of strong heritability (being influenced by genetic inheritance), with 61 per cent, 56 per cent, and 48 per cent of performance variability in the current sample being explained by genes, respectively.
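To give a feel for how twin comparisons translate into heritability figures like these, here is a minimal back-of-the-envelope sketch in Python using Falconer's classic formula. The correlation values are invented for illustration, and the study itself will have used more sophisticated modelling than this:

    # Falconer's approximation: identical (MZ) twins share ~100% of their genes,
    # non-identical (DZ) twins ~50%, so doubling the gap between their
    # correlations gives a rough estimate of genetic influence.
    def falconer(r_mz, r_dz):
        h2 = 2 * (r_mz - r_dz)    # heritability (genetic influence)
        c2 = 2 * r_dz - r_mz      # shared-environment influence
        e2 = 1 - r_mz             # non-shared environment + measurement error
        return h2, c2, e2

    # Hypothetical twin-pair correlations for a face-memory test
    h2, c2, e2 = falconer(r_mz=0.60, r_dz=0.30)
    print(f"heritability ~{h2:.0%}, shared environment ~{c2:.0%}, other ~{e2:.0%}")
    # -> heritability ~60%, shared environment ~0%, other ~40%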

Crucially, performance on face recognition was only moderately correlated with car recognition ability (r = .29 where 1 would be a perfect correlation) and modestly correlated with general mental ability (r = .15), and only 10 per cent of the genetic influence on face recognition ability was the same as the genetic influence on general mental ability (and likewise, only 10 per cent of the genetic influence on face memory was shared with the genes affecting memory for cars).

Essentially, this means that most of the genetic influences on face recognition ability are distinct from the genetic influences on general mental ability or on car recognition ability. Shakeshaft and Plomin said this "striking finding" supports the notion that there is something special about human facial recognition ability. These results add to others that have suggested face recognition is a special mental ability – for instance, some have argued that faces alone trigger brain activity in the so-called "fusiform face area" (although this claim has been challenged); and unlike our ability to recognise other objects or patterns, our ability to recognise faces is particularly impaired when faces are inverted, consistent with the idea that we use a distinctive "holistic" processing style for faces.

The story is complicated somewhat by the researchers' unexpected finding that recognition ability for cars was also linked with distinct genetic influences that mostly did not overlap with the genetic influences on general mental ability. Perhaps, the researchers surmised, the tests of general mental ability used here (a vocabulary test and the well-used Raven's Progressive Matrices) did not adequately tap the full range of what we might consider general mental abilities. Whatever the reason, it remains the case that this new research suggests that face recognition ability is influenced by a set of genetic influences that are largely distinct from those implicated in a similar form of visual recognition (for cars) and implicated in vocabulary ability and abstract reasoning. Based on this, the researchers concluded they'd shown for the first time that "the genetic influences on face recognition are almost entirely unique."


Friday 1 May 2015

The Reasons We Cheat - And Why It Doesn't Need To Mean The End Of A Relationship....

People tend to have very firm rules about monogamy in a relationship and, generally, are fairly non-negotiable in their reactions to infidelity.
 
It's crap when someone cheats on you. You're likely to be hurt, angry, and of course, take the betrayal personally. But there is more than one reason for infidelity and cheating doesn't always need to mean that a relationship is over.
 
Who's to even say that monogamy is natural? Couldn't the idea of a person vowing to have sex with one person for a lifetime be seen as less natural than a person having a number of different sexual partners throughout their life, as their tastes, interests and maturity change?
 
A person's reason to cheat is individual and not always as cut and dried as many people would think. Of course women cheat too, but when considering men and infidelity, the truth is that often when men are offered sex, they take it - according to research, this is because men are less likely to be propositioned and so more likely to take advantage of an opportunity when it arises. Nothing boosts a man's ego like a person who isn't his wife suggesting a quickie in the stationery cupboard, and despite being possibly the most hopeless of reasons to cheat, it is often the root of the infidelity.
 
Just because a man gets married, it doesn't mean that he stops wanting to be desired. So even if the stationery cupboard isn't on the cards, the thrill of the chase doesn't necessarily disappear the minute he walks down the aisle. And sometimes it can be the familiarity and monotony of a relationship that drives a man to cheat, stirring the fear of 'is this all there is?' and the temptation of one last adventure.
 
It hurts people, it's selfish and it's potentially devastating for everyone involved. However, it doesn't matter how you wrap it up: what makes an affair so tempting is the excitement and thrill that comes with it.
 
There will never be the same level of 'naughtiness' in an honest relationship, for the simple reason that it's not a secret. And, for a lot of men, an affair is less about the person or even the sex; it's about the thrill - of the chase, of the secrecy and, yes, even the deceit. This being the case, it has absolutely nothing to do with you and everything to do with a compulsion that needs to be satisfied.
 
I'm in no way blaming the 'victim' for the infidelity, but sometimes, when in a relationship, it's easy to get comfortable and stop making an effort. Sex, appearance, communication - all the things that we invest so much time in at the beginning of a relationship but that fall by the wayside the more 'comfortable' we become.
 
Relationships need effort, whether you've been together for three months or three decades, and domesticity isn't always the golden chalice of happiness. Similarly, raising children, paying mortgages and arguing about Sunday lunches at the in-laws' isn't always the sexiest of things to have on your mind when trying to keep things alight in the bedroom.
 
Yes, keeping the romantic side of a relationship alive in this situation is the better thing to do, but it isn't always the easiest - and that is, arguably, one of the main reasons men cheat.
 
Infidelity is often a symptom of something much bigger, a problem within the relationship that has manifested itself in one person feeling that they are unable to communicate. It's amazing to think that a couple who have lived under the same roof for 10 years feel unable to voice a concern about their relationship. But sometimes, burying your head in the proverbial sand (or something more fun) is easier than a face-to-face conversation about your emotions. 
 
Having an affair can be either a way of escaping the problems or a way of finding comfort and reassurance from someone who will prioritise you - it can make the person who strays feel valid and needed again, or give them space to breathe, away from the tension and animosity.
 
Does this mean every relationship can survive an affair? No, not always - all situations are unique, and one person's reason for cheating will always be different from another's. But an affair doesn't have to mean the end of a relationship and in some cases can act as the trigger to turn things around.
 
Thinking beyond the sex, learning to forgive the betrayal and trying to understand the reasons behind a person cheating can lead to a level of communication and understanding you might never have had before - yes, it's a hard way to get there, but if you think it's something worth saving, don't let people tell you it's black and white - grey is a colour too.

Saturday 25 April 2015

Pando; The Trembling Giant - The 80,000-Year-Old Aspen Grove That Clones Itself....

The oldest living organism in the world is 80,000 years old, and clones itself. Known as Pando, and nicknamed The Trembling Giant, this organism is a single grove of Quaking Aspen trees in Utah.

The grove is called Pando, which is Latin for "I spread" - and spread it does. The grove is actually a single clonal colony of a male Quaking Aspen. Simply put, it is essentially one massive root system that began life an estimated 80,000 years ago. The root system currently has somewhere around 47,000 stems that create the grove of trees that keep the root system going.

Pando is not only considered the oldest living organism but also possibly the heaviest. The colony has spread over about 106 acres and experts think in all it weighs about 6,600 short tons. However, some experts think that chunks of the root system have died off leaving parts of the colony separated, making it effectively more than one organism. And other less-studied clonal colonies of aspen may be contenders for the title of heaviest.

Pando exists in part because frequent fires have kept conifers out of the area, and because a shift to a semi-arid climate has kept other aspen seedlings from taking root. This has left plenty of space for the ancient root system of Pando to spread and thrive. The fact that Pando is one giant organism wasn't discovered until the 1970s, by Burton V. Barnes of the University of Michigan. Currently, experts are worried that a range of factors are threatening the life of this ancient organism.

While Pando's estimated age of 80,000 years may be staggering, even more amazing is the possibility that experts have underestimated its age. Because the age of the organism cannot be determined from tree rings (the average age of the individual stems is only around 130 years), its age has to be estimated from factors such as the history and climate of the local environment over millennia. Taking different factors into account, some experts think that Pando could be closer to 1 million years old! There is a lot of debate and speculation around Pando, but one thing is certain: this organism is mind-blowing.

10 Of The World’s Most Remarkable Trees....

From the oldest to the tallest, to the most sacred and more, as a celebration of Arbor Day I present a brief who's who of arboreal heroes.

There are so many reasons we should thank the trees that we share this planet with. They are the gentle giants who seem to have gotten the short end of the stick, so to speak. They are generally afforded few rights and little deep respect by many, yet meanwhile we are so incredibly reliant on their existence: they pump out the oxygen we need to live and they absorb carbon dioxide; they remove pollution; they provide shade; they create food, control erosion, and the list goes on. So with that in mind, here's a list of a handful of remarkable trees we have in our midst.

Methuselah

 
Considered the world’s oldest tree, the ancient bristlecone pine named Methuselah lives at 10,000 feet above sea level in the Inyo National Forest, California. Hidden amongst its family in the Ancient Bristlecone Pine Forest of the White Mountains, Methuselah is somewhere around 5,000 years old. For its protection, the location is kept a secret by the forest service – which means that nobody is exactly sure what Methuselah looks like, but the ancient bristlecone pine pictured above could be it. Then again, maybe not. It's a mysterious Methuselah.

Jomon Sugi


With a height of 83 feet and a 53-foot girth, Jomon Sugi is the largest conifer in Japan. This Cryptomeria japonica grows in a foggy, old-growth forest at an elevation of 4,200 feet on the north face of the tallest mountain on Yakushima island. Estimates based on sample analysis and size put the tree at between 2,170 and 7,200 years old. Visitors can hike to see Jomon, but the trek takes four to five hours each way - which doesn't seem to keep people from making the pilgrimage to this old, moody beauty.

Hyperion


The tallest living tree is a towering 379.1-foot coast redwood (Sequoia sempervirens) discovered by Chris Atkins and Michael Taylor in California's Redwood National Park in 2006. Hyperion is a trooper; it survives on a hillside, rather than the more-typical alluvial flat, with 96 percent of the surrounding area having been logged of its original coast redwood growth. The tree-discovering duo had earlier found two other coast redwoods in the same park – Helios (376.3 feet) and Icarus (371.2 feet) – which both also beat the previous record held by Stratosphere Giant.

The Tree of A Hundred Horses


Located on the eastern slope of Mount Etna in Sicily, the Hundred Horse Chestnut (Castagno dei Cento Cavalli) is not only the largest but also the oldest known chestnut tree in the world. The Sweet Chestnut is thought to be anywhere from 2,000 to 4,000 years old, with the higher end of the range coming from botanist Bruno Peyronel. This giant beauty holds the Guinness World Record for "Greatest Tree Girth Ever", with a circumference of 190 feet when it was measured in 1780, but because the tree has since split into three parts, it no longer holds the current record. The tree got its name from a legend in which a queen of Aragon and her company of one hundred knights took refuge under its protective boughs during a thunderstorm.

El Arbol del Tule


While the Tree of the Hundred Horses holds the record for the greatest tree girth historically, the current record holder is El Arbol del Tule, which lives inside a gated churchyard in the town of Santa Maria del Tule in Oaxaca, Mexico. This Montezuma cypress (Taxodium mucronatum) measures around 119 feet in circumference, with a height of only 37 feet – what a squat cutie! To get a sense of the girth, it would take 10 mid-size cars placed end-to-end to circle del Tule.

Endicott Pear


In 1630, an English Puritan named John Endicott – serving as the first governor of the Massachusetts Bay Colony – planted one of the first cultivated fruit trees in America. Upon planting the pear sapling imported from across the pond, Endicott proclaimed, "I hope the tree will love the soil of the old world and no doubt when we have gone the tree will still be alive." Indeed, 385 years later, the tree lays claim to the title of oldest living cultivated fruit tree in North America ... and still offers its pears to passers-by.

General Sherman


How do you say majestic? How about "the General Sherman Tree." This hulking grand dame in California's Sequoia National Park is the largest, by volume, known living single stem tree in the world. This giant sequoia (Sequoiadendron giganteum) is neither the tallest known living tree, nor is it the widest or oldest – but with its height of 275 feet, diameter of 25 feet and estimated bole volume of 52,513 cubic feet, it's the most voluminous. And with a respectable age of 2,300–2,700 years, it is one of the longest-lived of all trees on the planet to boot.

Jaya Sri Maha Bodhi


While it could be argued that all trees should be considered sacred, Jaya Sri Maha Bodhi truly is. This sacred fig tree in Anuradhapura, Sri Lanka is said to be the southern branch of the historical Bodhi tree in India under which Lord Buddha attained Enlightenment. It was planted in 288 BC, and is thus the oldest living tree planted by humans in the world. It is considered one of the most sacred relics of the Buddhists in Sri Lanka and is adored and visited by Buddhists all over the world.

Old Tjikko


At a mere 16 feet in stature, this Norway spruce on Fulufjället Mountain in Sweden may not seem that impressive at first glimpse, but don't judge a book by its cover. Old Tjikko is 9,550 years old. It is not the oldest tree on the planet per se, but it is the oldest single-stemmed clonal tree – meaning that while the trunk may have died off here and there, the same roots have endured for all this time. For millennia the brutal tundra climate kept Old Tjikko and its neighbors in shrub form, but as the climate has warmed, the shrub has sprouted into a full-blown tree.

Pando


Pando (Latin for "I spread") is not a single tree but rather a clonal colony of Quaking Aspen, and with an age of 80,000 years it is the oldest living organism in the world. Residing in Utah and nicknamed the "trembling giant," this 105-acre colony is made up of genetically identical trees connected by a single root system. Remarkably, by some estimates, the woodland could be as old as 1 million years, predating the earliest Homo sapiens by 800,000 years. Pando holds another impressive record as well: at 6,615 tons, it is also the heaviest living organism on earth.


 

Ecosystem Services Lost To Oil and Gas in North America....

Advanced technologies in oil and gas extraction coupled with energy demand have encouraged an average of 50,000 new wells per year throughout central North America since 2000. Although similar to past trends, the space and infrastructure required for horizontal drilling and high-volume hydraulic fracturing are transforming millions of hectares of the Great Plains into industrialized landscapes, with drilling projected to continue. Although this development brings economic benefits and expectations of energy security, policy and regulation give little attention to trade-offs in the form of lost or degraded ecosystem services. It is the scale of this transformation that is important, as accumulating land degradation can result in continental impacts that are undetectable when focusing on any single region. With the impact of this transformation on natural systems and ecosystem services yet to be quantified at broad extents, decisions are being made with few data at hand.
                 
Here is a first empirical analysis to move beyond the common rhetoric and speculation around oil and gas development, combining high-resolution satellite data on vegetation dynamics with industry data and publicly available data on historical and present-day oil and gas well locations for central North America. In addition to this broad-scale assessment of satellite-derived net primary production (NPP), a fundamental measure of a region's ability to provide ecosystem services, it also evaluates patterns of land-use change and water use. Before this work, little had been done to examine these types of data and their relation to ecosystem services at broad scales.
                 
Ecosystem service trade-offs: NPP is the amount of carbon fixed by plants and accumulated as biomass. It is a fundamental and supporting ecosystem service that is the basis for all life on Earth.

As such, the dynamics of NPP affect regional ability to provide a host of other essential ecosystem services (e.g., food production, biodiversity, wildlife habitat), which makes it a robust metric for broad evaluation of ecosystem services. Oil and gas activity reduces NPP through direct removal of vegetation to construct oil pads, roads, and so on.
 
These satellite-derived measurements of NPP began in 2000 and are produced annually; they capture interannual dynamics. To match the spatial scale of NPP measurement (∼1 km2), we determined annual density of oil and gas activity at the same resolution and estimated annual loss of NPP relative to such densities. Direct loss of vegetation resulting from oil and gas activity was validated at medium and fine spatial scales (∼250 m2 and 30 m2, respectively) by examining vegetation and disturbance trends before and after drilling [see supplementary materials (SM)]. We categorized annual reductions in NPP relative to land cover type (e.g., cropland and rangeland). As NPP is measured in grams of carbon per year, we convert to equivalent biomass-based measurements to provide context and discussion.
 
[Chart: the number of oil and gas wells drilled within the central provinces of Canada (Alberta, Manitoba and Saskatchewan) and the central U.S. states (Colorado, Kansas, Montana, Nebraska, New Mexico, North Dakota, Oklahoma, South Dakota, Texas, Utah and Wyoming), 1900–2012.]
 
We estimate that vegetation removal by oil and gas development from 2000 to 2012 reduced NPP by ∼4.5 Tg of carbon, or 10 Tg of dry biomass, across central North America. The total amount lost in rangelands is the equivalent of approximately five million animal unit months (AUM; the amount of forage required for one animal for 1 month), which is more than half of the annual available grazing on public lands managed by the U.S. Bureau of Land Management (BLM). The amount of biomass lost in croplands is the equivalent of 120.2 million bushels of wheat, ∼6% of the wheat produced in 2013 within the region and 13% of the wheat exported by the United States (see SM for equivalency calculations).
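As a rough illustration of how such equivalencies are calculated (the authors' own conversion factors are in their supplementary materials; the constants below are approximate, commonly used values, and the cropland share is hypothetical), a quick sketch in Python:

    # Rough conversion of lost NPP (reported in carbon) into more familiar units.
    CARBON_FRACTION = 0.45       # assume ~45% of plant dry biomass is carbon
    KG_PER_WHEAT_BUSHEL = 27.2   # a US bushel of wheat weighs ~60 lb (~27.2 kg)

    npp_loss_tg_carbon = 4.5                                 # reported loss, Tg of carbon
    dry_biomass_tg = npp_loss_tg_carbon / CARBON_FRACTION    # -> ~10 Tg of dry biomass
    print(f"~{dry_biomass_tg:.0f} Tg of dry biomass lost")

    # If roughly 3.3 Tg of that biomass came from croplands (a hypothetical split),
    # the wheat-bushel equivalent lands near the reported figure:
    cropland_biomass_kg = 3.3e9                              # 3.3 Tg = 3.3 billion kg
    bushels = cropland_biomass_kg / KG_PER_WHEAT_BUSHEL
    print(f"~{bushels / 1e6:.0f} million bushels of wheat")  # -> ~121 million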

The loss of NPP is likely long-lasting and potentially permanent, as recovery or reclamation of previously drilled land has not kept pace with accelerated drilling (SM). This is not surprising because current reclamation practices vary by land ownership and governing body, target only limited portions of the energy landscape, require substantial funding and implementation commitments, and are often not initiated until the end of life of a well. Barring changes from existing trends and practices, it is likely that NPP loss and its effects (i.e., further loss of forage) will continue to parallel drilling trends and, potentially, may create unforeseen conflicts among agriculture, conservation, and energy.

Additional ecosystem functions, including wildlife habitat and landscape connectivity, are arguably as important as NPP. We estimate that the land area occupied by well pads, roads, and storage facilities built from 2000 to 2012 is ∼3 million ha, the equivalent land area of three Yellowstone National Parks. Although small in comparison with the total land area of the continent, this important land use is not accounted for and creates additional pressures for conserving rangelands and their ecosystem functions. The distribution of this land area has negative impacts: increasing fragmentation that can sever migratory pathways, alter wildlife behavior and mortality, and increase susceptibility to ecologically disruptive invasive species. As competition for arable land intensifies because of food and bioenergy demand, oil and gas may further expand into native rangelands.
 
The hydraulic fracturing technology underlying the current expansion of oil and gas drilling in the region has profound implications for hydrological, water-quality, and water-use regimes. High-volume hydraulic fracturing uses 8,000 to 50,000 m3 of water per well for the initial fracturing event, which amounts to 7,187 to 33,903 million m3 for wells drilled across this region during 2000 to 2012 (see SM). Nearly half of the wells drilled in this period were in already highly or extremely water-stressed regions. As refracturing becomes more common to yield greater production, oil and gas development adds to an already fraught competition among agriculture, aquatic ecosystems, and municipalities for water resources, in addition to concerns over water quality.
                  
Avoiding broad-scale loss: The capacity for insight into land-use decisions has improved substantially since the last major episode of widespread land-use change across the Great Plains. In the early 20th century, rapid agricultural expansion and widespread displacement of native vegetation reduced the resilience of the region to drought, ultimately contributing to the Dust Bowl of the 1930s. It took catastrophic disruption of livelihoods and economies to trigger policy reforms that addressed environmental and social risks of land-use change.
                 
Fortunately, data and information are now far less of a barrier in understanding and addressing continental and cumulative impacts. However, the scale and focus of most land-use decision-making discourages comprehensive assessment of trade-offs implied in oil and gas development.

Recent planning efforts by U.S. federal management agencies demonstrate potential to balance demand for energy development with the need to protect other values, but the scope is limited to lands under federal jurisdiction. About 90% of oil and gas infrastructure in this region occurs on private land (United States only; see SM). Provinces, states, and municipalities that permit the majority of oil and gas development lack the capacity and mandate to address continental or regional consequences that transcend political boundaries; this lack leads to fragmented and piecemeal policies.                 

Decision-makers and scientists must work together to ensure that the best available information guides development of policies at the water-energy-food nexus. Traditional laws and regulations may have limited application, as oil and gas can be exempt from key environmental regulations, or such regulations isolate features of systems—e.g., a single species—while failing to capture interrelated impacts. Active synthesis and consolidation of data will improve accessibility and monitoring.

Integration of these data into land-use planning and policy across scales and jurisdictions is necessary to achieve energy policies that minimize ecosystem service losses.

Friday 24 April 2015

Sustainable Surfboards Part 1: Become One With The Ocean With This Algae-Based Surfboard.....

Would you surf algae? Not in it, on it. A team of scientists and surfboard makers in California claim to have created the world’s first sustainable surfboard made from algae. Researchers at the University of California, San Diego partnered with the leading polyurethane surfboard manufacturer Arctic Foam and worked out a way to turn algae oil into the polyurethane foam core at the heart of a surfboard. The foam cores of most surfboards currently come from petroleum.
Steven Mayfield, a professor of biology and algae geneticist at UCSD, led researchers in creating the new board. The team worked to chemically change the oil derived from laboratory algae and “morph” it into types of “polyols” to form the core of the new surfboard. “In the future, we’re thinking about 100 percent of the surfboard being made that way—the fiberglass will come from renewable resources, the resin on the outside will come from a renewable resource,” Mayfield said in a statement.
The board, which looks just like any other surfboard, was crafted at Arctic Foam’s headquarters in Ensenada, Mexico, and then brought to Oceanside, Calif. The difference, of course, is that this board is sustainably made. A surfer himself, Mayfield said he had often felt conflicted about riding the waves on something produced in such an unsustainable way.

The board was presented to San Diego Mayor Kevin Faulconer in the hope that Faulconer will display it and show others how innovation can bring about sustainable change. The board is a perfect fit for San Diego, with its reputation for the ocean and surfing as well as for biotechnology and innovation, Mayfield said.


 

Are We Watching A Paradigm Shift? 7 Hot Trends In Cognitive Neuroscience.....

In the spirit of procrastination, here is a list of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. Most of these are not actually new concepts; it’s more about the way they are being used that makes them trendy areas.

7 Hot Trends In Cognitive Neuroscience:

Oscillations

Obviously oscillations have been around for a long time, but the rapid increase in technological sophistication for direct recordings (see for example high-density cortical arrays and deep brain stimulation + recording), coupled with the greater availability of MEG (plus rapid advances in MEG source reconstruction and analysis techniques), has placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a core topic of research in areas ranging from conscious awareness to memory and navigation.
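As a concrete (if toy) illustration of the kind of cross-frequency analysis this trend involves, here is a minimal Python sketch of phase-amplitude coupling. It assumes the data have already been band-pass filtered into a slow 'theta' and a fast 'gamma' component, and it uses a mean-vector style modulation index; everything here is simulated, not real MEG data:

    import numpy as np
    from scipy.signal import hilbert

    # Simulate 10 s of band-limited data at 1 kHz in which the amplitude of a
    # 60 Hz "gamma" rhythm waxes and wanes with the phase of a 6 Hz "theta" rhythm.
    fs = 1000
    t = np.arange(0, 10, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)
    gamma = (1 + 0.8 * theta) * np.sin(2 * np.pi * 60 * t)

    # Instantaneous theta phase and gamma amplitude via the Hilbert transform
    theta_phase = np.angle(hilbert(theta))
    gamma_amp = np.abs(hilbert(gamma))

    # Mean-vector modulation index: near 0 when gamma amplitude is unrelated to
    # theta phase, larger when the two are systematically coupled (here ~0.4).
    mi = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
    print(f"phase-amplitude modulation index: {mi:.2f}")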

Complex systems, dynamics, and emergence

Again, a concept as old as neuroscience itself, but this one seems to be piggy-backing on several trends towards a new resurgence. As neuroscience grows bored of blobology, and our analysis methods move increasingly towards modelling dynamical interactions (see above) and complex networks, our explanatory metaphors more frequently emphasize brain dynamics and emergent causation. This is a clear departure from the boxological approach that was so prevalent in the 80’s and 90’s.

Direct intervention and causal inference

Pseudo-invasive techniques like transcranial direct-current stimulation are on the rise, partially because they allow us to perform virtual lesion studies in ways not previously possible. Likewise, exponential growth of neurobiological and genetic techniques has ushered in the era of optogenetics, which allows direct manipulation of information processing at a single neuron level. Might this trend also reflect increased dissatisfaction with the correlational approaches that defined the last decade?

You could also include steadily increasing interest in pharmacological neuroimaging under this category.

Computational modelling and reinforcement learning

With the hype surrounding Google’s £200 million acquisition of DeepMind, and the recent Nobel Prize awarded for the discovery of grid cells, computational approaches to neuroscience are hotter than ever. Hardly a day goes by without a reinforcement learning or similar paper being published in a glossy high-impact journal. This trend takes many forms, but it is undeniable that model-based approaches to cognitive neuroscience are all the rage. There is also a clear surge of interest in the Bayesian Brain approach, which could almost have its own bullet point. But that would be too self-serving.
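For readers who haven't met these models, the core of most reinforcement-learning accounts is nothing more exotic than a delta rule: nudge your expectation toward each outcome by a fraction of the prediction error. A toy Python sketch (the learning rate and rewards are arbitrary, purely for illustration):

    # Rescorla-Wagner / temporal-difference-style value update
    alpha = 0.1                          # learning rate (arbitrary)
    V = 0.0                              # current value estimate (expectation)
    rewards = [1, 1, 0, 1, 1, 1, 0, 1]   # made-up outcomes of a repeated choice

    for r in rewards:
        prediction_error = r - V         # "surprise": outcome minus expectation
        V += alpha * prediction_error    # move the expectation toward the outcome
        print(f"reward={r}  prediction error={prediction_error:+.2f}  new V={V:.2f}")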

Gain control

Gain control is a very basic mechanism found throughout the central nervous system. It can be understood as the neuromodulatory weighting of post-synaptic excitability, and is thought to play a critical role in contextualizing neural processing. Gain control might for example allow a neuron that usually encodes a positive prediction error to ‘flip’ its sign to encode negative prediction error under a certain context. Gain is thought to be regulated via the global interaction of neural modulators (e.g. dopamine, acetylcholine) and links basic information theoretic processes with neurobiology. This makes it a particularly desirable tool for understanding everything from perceptual decision making to basic learning and the stabilization of oscillatory dynamics. Gain control thus links computational, biological, and systems level work and is likely to continue to attract a lot of attention in the near future.
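A cartoon of the idea, with all numbers invented: if a unit's output is a gain-scaled version of its input, then a neuromodulatory 'context' that rescales, or even sign-flips, the gain changes what the very same input ends up encoding, loosely mirroring the prediction-error 'flip' described above.

    # Toy gain modulation: response = baseline + gain * input, rectified at zero.
    def response(inp, gain, baseline=5.0):
        return max(0.0, baseline + gain * inp)   # firing rates can't go negative

    prediction_error = 2.0
    print("context A, positive PE:", response(+prediction_error, gain=+3.0))  # 11.0
    print("context B, positive PE:", response(+prediction_error, gain=-3.0))  # 0.0 (silenced)
    print("context B, negative PE:", response(-prediction_error, gain=-3.0))  # 11.0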

Hierarchies that are not really hierarchies

Neuroscience loves its hierarchies. For example, the Van Essen model of how visual feature detection proceeds through a hierarchy of increasingly abstract functional processes is one of the core explanatory tools used to understand vision in the brain. Currently however there is a great deal of connectomic and functional work pointing out interesting ways in which global or feedback connections can re-route and modulate processes from the ‘top’ directly to the ‘bottom’ or vice versa.

It’s worth noting this trend doesn’t do away with the old notions of hierarchies, but instead just renders them a bit more complex and circular. Put another way, it is currently quite trendy to show that ‘the top is the bottom’ and ‘the bottom is the top’. This partially relates to the increased emphasis on emergence and complexity discussed above. A related trend is the extension of what counts as the ‘bottom’, with low-level subcortical or even first-order peripheral neurons suddenly being ascribed complex abilities typically reserved for cortical processes.

Primary sensations that are not so primary

Closely related to the previous point, there is a clear trend in the perceptual sciences of being increasingly liberal about how ‘primary’ sensory areas really are. I saw this first hand at last year’s Vision Sciences Society which featured at least a dozen posters showing how one could decode tactile shape from V1, or visual frequency from A1, and so on. Again this is probably related to the overall movement towards complexity and connectionism; as we lose our reliance on modularity, we’re suddenly open to a much more general role for core sensory areas.

Interestingly, I didn’t include things like multi-modal or high-resolution imaging, as I think they are still emerging and have not quite fully arrived yet. But some of these trends – computational and connectomic modelling, for example – are clearly part and parcel of the contemporary zeitgeist. It’s also very interesting to look over this list, as there seems to be a clear trend towards complexity, connectionism, and dynamics. Are we witnessing a paradigm shift in the making? Or have we just forgotten all our first principles and started mangling any old thing we can get published? If it is a shift, what should we call it? Something like ‘computational connectionism’ comes to mind.

6 Steps You Can Take Today to Stop Worrying.....

Mark Twain once said “I am an old man and have known a great many troubles, but most of them never happened.” That is a tremendously profound statement. Let us try to think about most of the things we worry about. We think about our future families, our jobs, our health, potential dangers we might face in traffic and other things, while we’re seated at our desks. Some of us even make up things to worry about. If you’ve ever thought about elaborate scenarios where you were being mugged, or situations where a spouse you haven’t even met yet disagreed with you and things got out of control, you aren’t alone.
 
Worry is like a virus. It takes hold of your thoughts with an innocuous little concern, and before you know it, you are strapped along for a ride you don’t remember getting on. If all of the time and energy you spend worrying could instead be put to productive use, think of how much better your life would be.
 
Dale Carnegie talked about worrying quite often in his work. He encouraged people to look back at all of the time they spent worrying over the years, and then asked them if their lives would have been different if they did something else with that time instead. We’re all humans, and worrying is a normal human tendency. However, we don’t need to let worry run our lives. Too many of us spend sleepless nights thinking about work, relationships, finances and a dozen other things. Instead of lying in bed, looking up at the wall pondering, what if we just got up and did something about the problem? If you can’t sleep, that’s perfectly fine. Since you’re already awake, why don’t you actually do something that will alleviate the situation?
 
Let’s talk about some things you can do to stop yourself from worrying incessantly.
 
1. Define the problem.
 
Instead of ruminating on an intangible fear in your mind, grab a piece of paper and write it down. Write down the exact nature of your fear. Now, divide the paper in half. On one side, list the things you know are factual about this problem. On the other, list the things that you are making up in your mind. Once you’ve read the exact nature of the problem in detail, you will be less likely to worry about it.
 
2. Write down your worst-case scenario.
 
If you’re really worried about something, grab a piece of paper and let your imagination run wild. Write down the absolute worst thing that could happen if every single thing you were worrying about came true.
 
For example, if you’re worried about your finances, write down a scenario where this problem comes true in the worst possible way. You went broke, and subsequently lost your house. Your friends and family deserted you because you didn’t have any money. You had to find shelter in an alley, and pull a tarp blanket over your head on rainy nights while you tried to sleep. Maybe the odd stray dog nipped at your heels as you lay there. You get the idea – stretch your scenario to its limit.
 
Now, read your worst-case scenario back out loud. You may even be amused at some of the things you came up with. There’s a curious thing that happens when you put your fears in writing. They suddenly aren’t that scary anymore. In the scenario above, even though you were broke, homeless and had nothing to eat, you would still have your skills. You would still have everything you’ve learned in your life. You could walk into an interview and get hired. Your road back would be slow, but it would be worth it.
 
That worst-case scenario doesn’t seem too bad now, does it?
 
3. Forgive yourself for your mistakes.
 
Unless you’re a superior species descended from an unknown corner of the galaxy, you are allowed to make mistakes. Thinking about what you did or what you said helps no one. The past is gone. All you have is right now – this moment. In this moment, choose to forgive yourself, and move on as a wiser person.
 
4. Accept that the future is uncertain.
 
There are very few things in this world that you can be certain are going to happen. Yes, you can predict that the sun will rise in the east tomorrow. But other than that, everything else is pretty much up in the air. Don’t try to worry about a time in the future. It hasn’t happened yet. You can’t engineer the details of your life exactly. None of us can. The element of uncertainty is what makes life fun and exciting.
 
Do what you can in the present moment to give yourself the best possible odds, and then be content in the knowledge that you have done all you can.
 
5. Other people’s opinions of you don’t matter.
 
Too many of us spend our lives trying to mould ourselves according to other people’s expectations of us. This is a colossal waste of time. None of us are perfect. We all have our strengths and weaknesses. Why would you want to shape your personality into something that isn’t a genuine representation of yourself?
 
Stop worrying about what your boss or your peer group might think of you. You only need your own approval.
 
6. Make a plan.
 
Now that you’ve figured out all the bits and pieces of the things you’re worried about, you need to do something to prevent those things from happening. If your financial situation isn’t perfect, and you constantly worry about it, make a plan of action for earning more money. Ask your superiors for a raise, learn a new skill that is more valuable on the market or look for a higher-paying job. If you don’t think you’re going to ever make enough money as a salaried employee, start your own business.
 
As soon as you map out concrete steps you can begin to take immediately, your worries will subside.
 
Taking action is a huge tool for conquering your worries.
 
In summation, worrying is an activity that most of us have become so used to that we are unable to imagine an alternative. But there is a way to live a life where you don’t succumb to your fears. A life where you use any concerns or worries that might pop up as catalysts for positive change. Start implementing these changes into your thinking process, and the results will be life-changing.

Autistic Children's Sensory Experiences In Their Own Words......

Children diagnosed with autism often have distinctive sensory experiences, such as being ultra sensitive to noise, or finding enjoyment in repeated, unusual sensory stimulation. However, much of what we know about these experiences comes from the testimony of parents, researchers and clinicians. Now Anne Kirby and her colleagues have published the first report of autistic children's sensory experiences, based on these children's own accounts. As the authors say, "children's voices are still rarely heard or taken seriously in the academic arena," so this is an innovative approach.

Twelve autistic children aged 4 to 13 were interviewed in their homes. The children's autism varied in severity, but all were capable of taking part in verbal interviews. The researchers used a range of techniques to facilitate the interviews, such as playing family video clips of the children to prompt discussion of specific episodes. Kirby and her team said their first important finding was to demonstrate the feasibility of interviewing young children with autism.

Careful analysis of the transcripts from the interviews revealed three key themes. The first of these – "normalising" – showed how the children considered many of their experiences to be just like other people's, as if rejecting the notion that there was something distinct or odd about their behaviour, and also showing a certain self-consciousness (contrary to existing research that suggests self-consciousness is impaired in autism).

Interviewer: What about things you don't like to touch or feel on your skin?
Child: Um, sharp stuff.
I: Sharp stuff? (smiles) Yeah, exactly.
C: Um, like most people do
I: Yeah
C: Um (pause), hot stuff.
I: Yep.
C: Like, burning hot, like pizza that just came out of the oven.
----
I: Do you have a favourite thing that you like to eat?
C: Uh, pizza.
I: Yeah? When it's not too hot, right?
C: Right. That's what most people say.

The children also expressed satisfaction at learning to cope with problematic sensory sensitivities – such as a dislike of having their hair brushed. "What's different about having your hair brushed now?" the interviewer asked. "That I look beautiful," the thirteen-year-old replied. The children appeared motivated to adapt to their sensitivities so as to participate in normal daily activities. The researchers said this runs contrary to past findings suggesting that people with autism don't want to be "neurotypical" (perhaps such feelings emerge later in life).

Another theme was the methods the children used to recount their experiences, including telling anecdotes, demonstrating (e.g. by imitating the noise of a car engine, or mimicking a disgust reaction), repeating their own inner speech from particular experiences, and, in the case of two children, using similes. On that last point, one child likened eating spinach to eating grass, another likened loud voices to a lion's roar. "The use of simile as a storytelling method seemed to suggest a sort of perspective-taking that is not expected in children with autism," the researchers said.

The final theme concerned the way the children frequently talked about their sensory experiences in terms of their responses to various situations and stimuli. For example, the children spoke of their coping strategies, such as covering their ears, watching fireworks through a window, and watching sport on TV rather than in the arena. They also told the interviewers about their uncontrollable physical reactions, such as the pain of loud noises or of teeth brushing. One little boy described hearing loud music: "it feels like my heart is beating, and um, my, uh, my whole body's shaking. Mmm and uh, and my eyes, uh, they start to blink a lot." The children's reactions were often tied to their fear of particular situations or objects, such as inflated balloons. Another child said it feels like "the unknown is gonna come."

The study has obvious limitations, such as the small sample and the lack of a comparison group, so we can't know for sure that children without autism wouldn't give similar answers. However, the research provides a rare insight into autistic children's own perspective on their sensory worlds. "Through exploration of how children share about their experiences, we can come to better understand those experiences," the researchers said, ultimately informing "how we study, assess, and address sensory features that impact daily functioning among children with autism."

Thursday 23 April 2015

Optimism and Pessimism Are Separate Systems Influenced by Different Genes....

Optimists enjoy better health, more success, more happiness, and longer lives than pessimists. No surprise, then, that psychologists are taking an increasing interest in our outlook on life. An unresolved issue is whether optimism and pessimism are two ends of the same spectrum, or whether they are separate traits. If they are separate, then in principle some people could be both highly optimistic and highly pessimistic – to borrow the poet Gibran's analogy, they would be keenly aware of both the rose and its thorns.

Timothy Bates at the University of Edinburgh has turned to behavioural genetics to help settle this question. He analysed data on optimism and pessimism gathered from hundreds of pairs of identical and non-identical twins, participants in a US survey with an average age of 54. The twins rated their agreement with statements designed to reveal their optimism and pessimism, such as "In uncertain times, I usually expect the best" and "I rarely count on good things happening to me." They also completed a measure of the "Big Five" personality traits: extraversion, neuroticism and so on.

The reasoning behind twin studies like this is that if optimism and pessimism are highly heritable (i.e. influenced by inherited genetic factors), then these traits should correlate more strongly between pairs of identical twins, who share all their genes, than between non-identical twins, who share approximately half their genes. And if optimism were found to be more heritable than pessimism, or vice versa, that would point to different genetic influences on the two traits.
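
To make that logic concrete, twin researchers often use the classic Falconer estimates as a back-of-envelope decomposition of trait variance. The sketch below illustrates the general approach only; it is not necessarily the statistical model Bates fitted, and the example numbers are hypothetical rather than figures from his paper:

h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad e^2 \approx 1 - r_{MZ}

Here r_{MZ} and r_{DZ} are the within-pair correlations for identical (monozygotic) and non-identical (dizygotic) twins, h^2 is the share of variance attributed to genes (heritability), c^2 the share attributed to the shared environment, and e^2 the share attributed to the unique environment. For instance, if identical twins' optimism scores correlated at 0.4 and non-identical twins' at 0.2, heritability would be estimated at roughly 2 × (0.4 − 0.2) = 0.4, with no shared-environment effect and the remaining 0.6 attributed to unique environment (plus measurement error).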

Twin studies also make it possible to disentangle the relative influence of shared and unique environmental factors – that is, the aspects of a twin's upbringing they share with their sibling, such as parenting style, and those that are unique to them, such as the friends they keep.

Bates' analysis indicates that optimism and pessimism are subject to shared genetic influences (with each other, and with other personality traits), but also to independent genetic influences, thus supporting the notion that optimism and pessimism are distinct traits, not simply two sides of the same coin.

"Optimism and pessimism are at least partially biologically distinct, resulting in two distinct psychological tendencies," Bates said. He added that this dovetails with neuroscience evidence that's indicated there are separate neural systems underlying optimism and pessimism.

The new findings also suggested there is a "substantial" influence of upbringing on optimism and pessimism (i.e. raising one and lowering the other, or vice versa). This raises the intriguing possibility that optimism might, to some extent, be a malleable trait that can be encouraged through a child's upbringing.

Saturday 28 March 2015

Why It's Important That Employers Let Staff Personalise Their Workspaces....

The sparring mitt, yellow stitches spelling "SLUGGER", lying casually on the desk. The Mathlete trophy on a high shelf. A Ganesh statue, slightly chipped. Why do people bring these kinds of personal objects into the workplace?

Researchers Kris Byron and Gregory Laurence found answers by interviewing 28 people in a range of jobs and workplaces. They used the "grounded theory" approach, starting with a clutch of relatively open-ended interviews and then pursuing the lines of inquiry that emerged, in every case inventorying the person's workspace and exploring the significance of each object.

The conventional understanding is that personal objects are territorial markers used to communicate who we are to co-workers. And indeed many interviewees emphasised this function, a "unique fingerprint" that expresses difference. This might be an indicator of character - I'm a happy-go-lucky person - but participants also used objects to emphasise their organisational roles. A framed MBA certificate reminds others that this cubicle bunny is made of management material, thank you, whereas doodles show that the person is part of the creative class. An event planner explained that the thank-you notes pinned to her board were there to reassure others of her reliability - a core requirement in her role.

As well as signalling difference, personalisation can also affirm shared identity. Star Wars memorabilia across multiple desks shows that "a lot of us have, you know, that techie background". Similarly, some items were inside jokes, with meanings apparent only to those who shared in their history. And although personalisation could emphasise status - think of that MBA certificate - some managers attempted to de-emphasise status differences by displaying everyday objects that made them seem more approachable.

Interviewees raised another reason for personalisation: to build relationships. These items were seen as icebreakers or ways to find "common ground", whether through the contents of a bookshelf or a photo denoting parenthood. Byron and Laurence photographed every desk set-up from the perspective of an outside visitor, and found that 75 per cent of such conversation-starters were positioned to be clearly visible from that viewpoint. Many participants felt that these personalisation functions were vital and that companies prevent them at their peril: "They want to have such strong relationships with customers but they're taking away the personal elements that I think can lend towards building those types of relationships with clients."

In contrast, a certain proportion of personalisation objects - about a third in all - were positioned to be visible only to the owners themselves. These exemplify a final function of personalisation - not to communicate to others, but to remind ourselves of our own identity.

This could be an aspirational symbol - the poster put up by a designer that showed "the kind of design I eventually want to do", or the gift from an inspiring role model. Or it might be a way to put work into a larger context, so on the tough days, "you can look at your picture (of children) and realise this is only a job."

Many objects had multiple functions - communicating difference, starting conversations, and reminding oneself of identity. Byron and Laurence conclude that "organisations would be unwise to put excessive limits on employees’ personalisation of their workspaces," as an innocuous paperweight may turn out to carry a lot inside.