Newsletter – September 2014



     “Is there any way I can avoid becoming an entitled little shit?”
     “To heal the world emanate love, not hate.”
     “‘Judge not that ye be not judged’ does not mean that if you judge others, they will judge you back. It means that whatever you do outwardly, you will do with greater intensity to yourself inwardly. If you want to stop being so hard on yourself, stop being so judgmental of others.”
September greetings, Dear Friends…
About 30 years ago on a personal retreat I wrote a piece called “YOU ARE WHERE YOU PUT YOURSELF,” a play on the ’70s slogan “YOU ARE WHAT YOU EAT.” It was, in my late 40s, a reflection upon and codification of a profound and humbling understanding I finally arrived at – we really do become the environments we put ourselves in. This recollection was inspired by all the articles included in this month’s newsletter, especially Michael Lewis’ piece “OCCUPATIONAL HAZARDS OF WORKING ON WALL STREET,” which calls for a repeat of William Deresiewicz’s quote from last month’s newsletter:
     “Is there any way I can avoid becoming an entitled little shit?”
Lewis’ article expands Deresiewicz’s “entitled little shit” disease from elite education to soul-corroding careers – and beyond. We do literally become where we put ourselves. And where others put us.
Increasingly these environments are becoming more virtual than real, more digitized brainwashing than physical coercion. In scanning tonight’s headlines and political cartoons on the web, I recognized how addicted I am to infecting my spirits with images of negativity presented to me in “the news.” Finally I am making a connection to the wisdom of “love thy neighbor” that has been around me all my life. Ram Dass put it this way in this recent quote from…
     “To heal the world emanate love, not hate.”
Most of what I ingest as “news” emanates hate, not love. More importantly, after reading and watching such “news,” I am more prone to emanate hate myself.
It does seem we humans are programmed to attend to drama. As far back as I can remember, before there was any technology except the dial telephone and AM radio, I, and almost everyone I knew, was drawn to drama. What occurs to me now is to be interested in the few who were not. How many of those few who approach non-judgmentalness can I remember clearly?
Beverly in high school comes to mind. Dick in college. Harlan in my first job. Sarah, Barney and Stef as I began my teaching career. It’s hard to remember many from the late 60s through the 70s, an environment of ego-driven activism and righteousness for me and those I was trying so hard to emulate and be valued by.
Why? Because the righteous activist environment I put myself in surrounded me with the company of others who were cynically judgmental as well. And I didn’t just choose their company, I chose them as the audience I needed to impress. That choice was a source of hate-emanation that has stuck with me for a number of decades.
For all of the effort I put into becoming a spiritual and loving being, it is only now at 76 years of age that I see what I’m writing about here. I have had the words and concepts since my first communion at six – and I have managed to misunderstand and misuse them as judgments for all this time. One of my great teachers, John Enright, said this to me in 1971:
     “‘Judge not that ye be not judged’ does not mean that if you judge others, they will judge you back. It means that whatever you do outwardly, you will do with greater intensity to yourself inwardly. If you want to stop being so hard on yourself, stop being so judgmental of others.”
Forty-three years later I’m beginning to transform this concept into behavior.
So what could I possibly offer to a younger me from this perspective? What could I say or write to my younger self that he could hear? I have no idea, but I suspect many older folks offer little because we can’t imagine anyone getting through to us “back then” – when we were so caught up in trying to fit “the right way to be”…
And once again the Universe provides – I’m reading a marvelous novel right now, “Where the Rēkohu Bone Sings” by Tina Makereti. It is about the tribal history of the Moriori and the Maori in New Zealand, and it is not a pretty history. The Moriori, a people committed to peace and emanating love, created and lived in a unique culture on Rēkohu, now known as the Chatham Islands. Two Maori tribes, overpowered on the New Zealand mainland by other warring tribes, invaded Rēkohu in the 1800s. The Moriori welcomed them with love. The invading Maori, whose history was one of dominate or be dominated, enslaved those Moriori they didn’t exterminate. This novel is not just a history of injustice and hate-emanation, but opens a totally different way of thinking about human beings and their possibilities. I cannot recommend it highly enough, especially to those of us who have known only cultures that believe they must dominate or be dominated.
In the book an ancestor, Imi, watches from the “no-time” space as his descendants try to cope with the now “civilized world” they find themselves in – and rediscover parts of their history that might bring them back to the possibilities of “emanating love.” Their journey is helping me with mine even though their roots are not mine to return to. And with their struggles, Ram Dass’ offering of “emanate love” and my own maturing, I, too, am feeling the possibility of returning to “emanating love.” It is a feeling that is making sense of everything for me now.
So, while focusing September’s newsletter on the dangers of unawareness in choosing our environments, let’s begin with “NINETY-SIX WORDS FOR LOVE”…
Much love, FW
PS: Because some of the articles are long, I’m trying to give you a glimpse of why I find them valuable; this takes the form of an ‘FW NOTE’ at the beginning of each – and remember, this is meant to be a month’s worth of relaxed reading…

PS#2: You’ll notice we’re using a new format – we’d welcome any feedback you have…

FW NOTE:  I greatly respect the unconscious power of words – 76 years of experience have made clear to me that “the way we phrase our lives is the way we end up living them.” And I also admire Robert Johnson who wrote many books, among them HE, SHE, WE and TRANSFORMATION. The trio helped me greatly reduce my Masculine-Feminine confusion, and the latter gave me another glimpse of how First, Second and Third Ages, rightly understood, can light our way home to “elderhood”.
The first difficulty we meet in discussing anything concerning our feelings is that we have no adequate vocabulary to use. Where there is no terminology, there is no consciousness. A poverty-stricken vocabulary is an immediate admission that the subject is inferior or depreciated in that society.
Sanskrit has ninety-six words for love; ancient Persian has eighty, Greek three, and English only one. This is an indication to me of the poverty of awareness or emphasis that we give to that tremendously important realm of feeling. Eskimos have thirty words for snow, because it is a life-and-death matter to them to have exact information about the element they live with so intimately. If we had a vocabulary of thirty words for love … we would immediately be richer and more intelligent in this human element so close to our heart. An Eskimo probably would die of clumsiness if he had only one word for snow; we are close to dying of loneliness because we have only one word for love. Of all the Western languages, English may be the most lacking when it comes to feeling.
Imagine what richness would be expressed if one had a specific vocabulary for the love of one’s father, another word for the love of one’s mother, yet another for one’s camel (the Persians have this luxury), still another for one’s spouse, and another exclusively for the sunset! Our world would expand and gain clarity immeasurably if we had such tools.
It is always the inferior function, whether in an individual or a culture, that suffers this poverty. One’s greatest treasures are won by the superior function but always at the cost of the inferior function. One’s greatest triumphs are always accompanied by one’s greatest weaknesses. Because thinking is our superior function in the English-speaking world, it follows automatically that feeling is our inferior function. These two faculties tend to exist at the expense of each other. If one is strong in feeling, one is likely to be inferior in thinking — and vice versa. Our superior function has given us science and the higher standard of living — but at the cost of impoverishing the feeling function.
This is vividly demonstrated by our meager vocabulary of feeling words. If we had the expanded and exact vocabulary for feeling that we have for science and technology, we would be well on our way to warmth of relatedness and generosity of feeling.

FW NOTE:  As I said in my Musings, all the articles included here caused me to reflect again on “WE BECOME WHERE WE PUT OURSELVES,” this one in particular. For a current example of how powerfully seductive, and dangerously self-destructive, certain environments can be, read on…
A few times in the past several decades it has sounded as if big Wall Street banks were losing their hold on the graduates of the world’s most selective universities: the early 1990s, the dot-com boom and the immediate aftermath of the global financial crisis (Teach for America!). Each time the graduating class of Harvard and Yale looked as if it might decide, en masse, that it wanted to do something with its life other than work for Morgan Stanley.
Each time it turned out that it didn’t.
Silicon Valley is once again bubbling, and, in response, big Wall Street banks are raising starting salaries, and reducing the work hours of new recruits. But it’s hard to see why this time should be any different from the others.
Technology entrepreneurship will never have the power to displace big Wall Street banks in the central nervous system of America’s youth, in part because tech entrepreneurship requires the practitioner to have an original idea, or at least to know something about computers, but also because entrepreneurship doesn’t offer the sort of people who wind up at elite universities what a lot of them obviously crave: status certainty.
“I’m going to Goldman” is still about as close as it gets in the real world to “I’m going to Harvard,” at least for the fiercely ambitious young person who is ambitious to do nothing in particular.
The question I’ve always had about this army of young people with seemingly endless career options who wind up in finance is: What happens next to them? People like to think they have a “character,” and that this character of theirs will endure, no matter the situation. It’s not really so. People are vulnerable to the incentives of their environment, and often the best a person can do, if he wants to behave in a certain manner, is to choose carefully the environment that will go to work on his character.
One moment this herd of graduates of the nation’s best universities are young people — ambitious yes, but still young people — with young people’s ideals and hopes to live a meaningful life. The next they are essentially old people, at work gaming ratings companies, and designing securities to fail so they might make a killing off the investors they dupe into buying them, and rigging various markets at the expense of the wider society, and encouraging all sorts of people to do stuff with their capital and their companies that they never should do.
Not everyone on Wall Street does stuff that would have horrified them, had it been described to them in plain English, when they were 20. But enough do that it makes you wonder. What happens between then and now?
All occupations have hazards. An occupational hazard of the Internet columnist, for instance, is that he becomes the sort of person who says whatever he thinks will get him the most attention rather than what he thinks is true, so often that he forgets the difference.
The occupational hazards of Wall Street are more interesting — and not just because half the graduating class of Harvard still wants to work there. Some are obvious — for instance, the temptation, when deciding how to behave, to place too much weight on the very short term and not enough on the long term. Or the temptation, if you make a lot of money, to deploy financial success as an excuse for failure in other aspects of your life. But some of the occupational hazards on Wall Street are less obvious.
Here are a few that seem, just now, particularly relevant:
— Anyone who works in finance will sense, at least at first, the pressure to pretend to know more than he does.
It’s not just that people who pick stocks, or predict the future price of oil and gold, or select targets for corporate acquisitions, or persuade happy, well-run private companies to go public don’t know what they are talking about: what they pretend to know is unknowable. Much of what Wall Street sells is less like engineering than like a forecasting service for a coin-flipping contest — except that no one mistakes a coin-flipping contest for a game of skill. To succeed in this environment you must believe, or at least pretend to believe, that you are an expert in matters where no expertise is possible. I’m not sure it’s any easier to be a total fraud on Wall Street than in any other occupation, but on Wall Street you will be paid a lot more to forget your uneasy feelings.
— Anyone who works in big finance will also find it surprisingly hard to form deep attachments to anything much greater than himself.
You may think you are going to work for Credit Suisse or Barclays, and will there join a team of professionals committed to the success of your bank, but you will soon realize that your employer is mostly just a shell for the individual ambitions of the people who inhabit it. The primary relationship of most people in big finance is not to their employer but to their market. This simple fact resolves many great Wall Street mysteries. An outsider looking in on the big Wall Street banks in late 2008, for instance, might ask, “How could all these incredibly smart and self-interested people have come together and created collective suicide?” More recently the same outsider might wonder, “Why would a trader rig Libor, or foreign exchange rates, or the company’s dark pool, when the rewards for the firm are so trivial compared with the cost, if he is caught? Why, for that matter, wouldn’t some Wall Street bank set out to rat out the bad actors in their market, and set itself up as the honest broker?”
The answer is that the people who work inside the big Wall Street firms have no serious stake in the long-term fates of their firms. If the place blows up they can always do what they are doing at some other firm — so long as they have maintained their stature in their market. The quickest way to lose that stature is to alienate the other people in it. When you see others in your market doing stuff at the expense of the broader society, your first reaction, at least early in your career, might be to call them out, but your considered reaction will be to keep mum about it. And when you see people making money in your market off some broken piece of internal machinery — say, gameable ratings companies, or riggable stock exchanges, or manipulable benchmarks — you will feel pressure not to fix the problem, but to exploit it.
— More generally, anyone who works in big finance will feel enormous pressure to not challenge or question existing arrangements.
One of our financial sector’s most striking traits is how fiercely it resists useful, disruptive entrepreneurship that routinely upends other sectors of our economy. People in finance are paid a lot of money to disrupt every sector of our economy. But when it comes to their own sector, they are deeply wary of market-based change. And they have the resources to prevent it from happening. To take one example: in any other industry, IEX, the new stock market created to eliminate a lot of unnecessary financial intermediation (and the subject of my last book) would have put a lot of existing players out of business. (And it still might.) The people who run IEX have very obviously found a way to make the U.S. stock market — and other automated financial markets — more efficient and, in the bargain, reduce, by some vast amount, the take of the financial sector. Because of this they now face what must be one of the best organized and funded smear campaigns outside of U.S. politics: underhanded attacks from anonymous Internet trolls; congressional hearings staged to obfuscate problems in the market, by senators who take money from the obfuscators; op-ed articles from prominent former regulators, now employed by the Wall Street machine, that spread outright lies about the upstarts; error-ridden pieces by prominent journalists too stupid or too lazy or too compromised to do anything but echo what they are told by the very people who make a fortune off the inefficiencies the entrepreneurs seek to eliminate.
The intense pressure to conform, to not make waves, has got to be the most depressing part of all, for a genuinely ambitious young person. It’s pretty clear that the government lacks the power to force serious change upon the financial sector. There’s a big role for Silicon Valley-style scorched-earth entrepreneurship on Wall Street right now, and the people most likely to innovate are newcomers to the industry who have no real stake in the parts of it that need scorching.
As a new employee on Wall Street you might think this has nothing to do with you. You would be wrong. Your new environment’s resistance to market forces, and to the possibility of doing things differently and more efficiently, will soon become your own. When you start your career you might think you are setting out to change the world, but the world is far more likely to change you.
So watch yourself, because no one else will.


FW NOTE:  But what about the environments we are born into, that imprint us with their values before we can make any choices for ourselves? How do we recover from the cultural programming that shuts down possibilities outside our conscious awareness? In this piece, Kasey Edwards offers one map for helping our young open up those possibilities again…
LIFE ISN’T A WAITING ROOM: I want my daughters to be empowered to choose to be in a relationship because they want to, not because they feel they need to.
“Why does she put up with him?”
It’s a question we’ve all asked about our friends’ relationships when we’ve heard about or seen their partners behaving badly.
And if I’m being completely honest, looking back on some of my past boyfriends, I wish I’d asked myself the same question.
Some women are trapped in toxic relationships (which is a separate issue entirely), but other women stay in them because they believe that a bad relationship – or even an okay one – is better than being single.

From Cinderella to The Bachelor, girls can’t escape the message that being single is the equivalent of life’s waiting room.
This can encourage women to stay in relationships when they probably shouldn’t and tolerate or excuse behaviours that they definitely shouldn’t.
I want my daughters to be empowered to choose to be in a relationship because they want to, not because they think they need to.
I’m determined to instill in them that being single is an acceptable, and even desirable, option.
The following are some small, everyday approaches to getting girls to practice the habits of independence.
Learning to enjoy your own company is powerful because you can make better choices about how you spend your time and who you spend it with. We have ‘quiet time’ every day in our house where everybody spends an hour playing on their own.
Not only do I need the sanity break, but it’s also an opportunity for me to model the pleasure that can be found in solitude.
When my five-year-old tells me she’s bored I try to reframe it as an opportunity for her to learn to entertain herself.
I want my girls to know that even though their dad and I love to spend time together, there will always be a special place in my life for my female friends.
Before my girls start dating, I want them to understand the importance of female friendship and how it needs to be nurtured and protected.
No matter how besotted they become with a boy, they should always make time for their female friends.
Boys will come and go but it’s our female friends who will bring us ice-cream and tissues in the middle of the night.
And if a man ever tries to isolate them from their friends they should see it as a neon warning sign for unhealthy possessiveness and controlling behaviour.
Money is a pretty crappy reason to stay in a relationship. But it also exerts a strong pull.
I want my girls to understand that the best relationships are when both people are independent and choose to be together.
It’s important to teach them about budgeting and saving and sexually transmitted debt.
Young love can make people do stupid things with money, so before they’re love sick I want to caution them about the common pitfalls such as unwisely lending money to their lovers, prematurely buying joint assets, and buying overly extravagant gifts.
‘I wish daddy was here, he’d know what to do,’ was my daughter’s response when the batteries went flat on her favourite toy.
It was a wake-up call to me to start modeling how to fix stuff.
You don’t have to go out and restore a car or take apart a computer, but changing a light bulb is enough to demonstrate that women and men can take charge and fix problems.
The prevalence of ‘sluts’ and ‘whores’ in young adult literature and schoolyard banter is enough to make a feminist mother weep.
Our daughters learn early the same sexually oppressive messages that we learnt: that female sexuality is a prize to be given to (or taken by) a man.
I’ve seen it with friends who embarrass their young daughters by telling them to stop touching themselves because it’s ‘dirty down there’.
And so begins a pattern of lifelong shame.
Allowing girls to learn to self-satisfy empowers them to take charge of their own sexual desires – both with and without a partner.
* It’s possible my girls will choose female partners, but for the purpose of this article I’ll assume their future partners are male.
Kasey Edwards is a writer and bestselling author.

FW NOTE:  I hadn’t previously thought of “delaying adulthood” as an important part of the process of becoming an elder. Steinberg’s focus on the importance of extending “brain plasticity” (which occurs by delaying adulthood, for neurological reasons) gives me a radically different perspective on younger generations’ “laziness.” On a more personal note, you might also find this alteration of viewpoint helpful – especially if you have 20-somethings who’ve moved back home and “aren’t taking on their fair share of ‘adult’ responsibilities”…
ONE of the most notable demographic trends of the last two decades has been the delayed entry of young people into adulthood. According to a large-scale national study conducted since the late 1970s, it has taken longer for each successive generation to finish school, establish financial independence, marry and have children. Today’s 25-year-olds, compared with their parents’ generation at the same age, are twice as likely to still be students, only half as likely to be married and 50 percent more likely to be receiving financial assistance from their parents.
People tend to react to this trend in one of two ways, either castigating today’s young people for their idleness or acknowledging delayed adulthood as a rational, if regrettable, response to a variety of social changes, like poor job prospects. Either way, postponing the settled, responsible patterns of adulthood is seen as a bad thing.
This is too pessimistic. Prolonged adolescence, in the right circumstances, is actually a good thing, for it fosters novelty-seeking and the acquisition of new skills.
Studies reveal adolescence to be a period of heightened “plasticity” during which the brain is highly influenced by experience. As a result, adolescence is both a time of opportunity and vulnerability, a time when much is learned, especially about the social world, but when exposure to stressful events can be particularly devastating. As we leave adolescence, a series of neurochemical changes make the brain increasingly less plastic and less sensitive to environmental influences. Once we reach adulthood, existing brain circuits can be tweaked, but they can’t be overhauled.
You might assume that this is a strictly biological phenomenon. But whether the timing of the change from adolescence to adulthood is genetically preprogrammed from birth or set by experience (or some combination of the two) is not known. Many studies find a marked decline in novelty-seeking as we move through our 20s, which may be a cause of this neurochemical shift, not just a consequence. If this is true — that a decline in novelty-seeking helps cause the brain to harden — it raises intriguing questions about whether the window of adolescent brain plasticity can be kept open a little longer by deliberate exposure to stimulating experiences that signal the brain that it isn’t quite ready for the fixity of adulthood.
Evolution no doubt placed a biological upper limit on how long the brain can retain the malleability of adolescence. But people who can prolong adolescent brain plasticity for even a short time enjoy intellectual advantages over their more fixed counterparts. Studies have found that those with higher I.Q.s, for example, enjoy a longer stretch of time during which new synapses continue to proliferate and their intellectual development remains especially sensitive to experience. It’s important to be exposed to novelty and challenge when the brain is plastic not only because this is how we acquire and strengthen skills, but also because this is how the brain enhances its ability to profit from future enriching experiences.
With this in mind, the lengthy passage into adulthood that characterizes the early 20s for so many people today starts to look less regrettable. Indeed, those who can prolong adolescence actually have an advantage, as long as their environment gives them continued stimulation and increasing challenges.
What do I mean by stimulation and challenges? The most obvious example is higher education, which has been shown to stimulate brain development in ways that simply getting older does not. College attendance pays neural as well as economic dividends.
Naturally, it is possible for people to go to college without exposing themselves to challenge, or, conversely, to surround themselves with novel and intellectually demanding experiences in the workplace. But generally, this is more difficult to accomplish on the job than in school, especially in entry-level positions, which typically have a learning curve that hits a plateau early on.
Alas, something similar is true of marriage. For many, after its initial novelty has worn off, marriage fosters a lifestyle that is more routine and predictable than being single does. Husbands and wives both report a sharp drop in marital satisfaction during the first few years after their wedding, in part because life becomes repetitive. A longer period of dating, with all the unpredictability and change that come with a cast of new partners, may be better for your brain than marriage.
If brain plasticity is maintained by staying engaged in new, demanding and cognitively stimulating activity, and if entering into the repetitive and less exciting roles of worker and spouse helps close the window of plasticity, delaying adulthood is not only O.K.; it can be a boon.
Laurence Steinberg, a professor of psychology at Temple University, is the author of “Age of Opportunity: Lessons From the New Science of Adolescence.”

FW NOTE:  This is a perfect follow-up to the previous piece. It expands and deepens the complexities of flourishing in both childhood and adult years, especially given the virtual brainwashing created by our highly digitized lives. I’ve believed I was relatively protected from media conditioning, but almost all of Scott’s examples have impacted me. Also, this is a good lead-in to the last two articles, which are helping me recognize how pervasively distracting electronic gadgetry can be – even living at a retreat centre in the New Zealand bush.
Sometime this spring, during the first half of the final season of “Mad Men,” the popular pastime of watching the show — recapping episodes, tripping over spoilers, trading notes on the flawless production design, quibbling about historical details and debating big themes — segued into a parlor game of reading signs of its hero’s almost universally anticipated demise. Maybe the 5 o’clock shadow of mortality was on Don Draper from the start. Maybe the plummeting graphics of the opening titles implied a literal as well as a moral fall. Maybe the notable deaths in previous seasons (fictional characters like Miss Blankenship, Lane Pryce and Bert Cooper, as well as figures like Marilyn Monroe and Medgar Evers) were premonitions of Don’s own departure. In any case, fans and critics settled in for a vigil. It was not a matter of whether, but of how and when.
TV characters are among the allegorical figures of our age, giving individual human shape to our collective anxieties and aspirations. The meanings of “Mad Men” are not very mysterious: The title of the final half season, which airs next spring, will be “The End of an Era.” The most obvious thing about the series’s meticulous, revisionist, present-minded depiction of the past, and for many viewers the most pleasurable, is that it shows an old order collapsing under the weight of internal contradiction and external pressure. From the start, “Mad Men” has, in addition to cataloging bygone vices and fashion choices, traced the erosion, the gradual slide toward obsolescence, of a power structure built on and in service of the prerogatives of white men. The unthinking way Don, Pete, Roger and the rest of them enjoy their position, and the ease with which they abuse it, inspires what has become a familiar kind of ambivalence among cable viewers. Weren’t those guys awful, back then? But weren’t they also kind of cool? We are invited to have our outrage and eat our nostalgia too, to applaud the show’s right-thinking critique of what we love it for glamorizing.
The widespread hunch that “Mad Men” will end with its hero’s death is what you might call overdetermined. It does not arise only from the internal logic of the narrative itself, but is also a product of cultural expectations. Something profound has been happening in our television over the past decade, some end-stage reckoning. It is the era not just of mad men, but also of sad men and, above all, bad men. Don is at once the heir and precursor to Tony Soprano, that avatar of masculine entitlement who fended off threats to the alpha-dog status he had inherited and worked hard to maintain. Walter White, the protagonist of “Breaking Bad,” struggled, early on, with his own emasculation and then triumphantly (and sociopathically) reasserted the mastery that the world had contrived to deny him. The monstrousness of these men was inseparable from their charisma, and sometimes it was hard to tell if we were supposed to be rooting for them or recoiling in horror. We were invited to participate in their self-delusions and to see through them, to marvel at the mask of masculine competence even as we watched it slip or turn ugly. Their deaths were (and will be) a culmination and a conclusion: Tony, Walter and Don are the last of the patriarchs.
In suggesting that patriarchy is dead, I am not claiming that sexism is finished, that men are obsolete or that the triumph of feminism is at hand. I may be a middle-aged white man, but I’m not an idiot. In the world of politics, work and family, misogyny is a stubborn fact of life. But in the universe of thoughts and words, there is more conviction and intelligence in the critique of male privilege than in its defense, which tends to be panicky and halfhearted when it is not obtuse and obnoxious. The supremacy of men can no longer be taken as a reflection of natural order or settled custom.
This slow unwinding has been the work of generations. For the most part, it has been understood — rightly in my view, and this is not really an argument I want to have right now — as a narrative of progress. A society that was exclusive and repressive is now freer and more open. But there may be other less unequivocally happy consequences. It seems that, in doing away with patriarchal authority, we have also, perhaps unwittingly, killed off all the grown-ups.
A little over a week after the conclusion of the first half of the last “Mad Men” season, the journalist and critic Ruth Graham published a polemical essay in Slate lamenting the popularity of young-adult fiction among fully adult readers. Noting that nearly a third of Y.A. books were purchased by readers ages 30 to 44 (most of them presumably without teenage children of their own), Graham insisted that such grown-ups “should feel embarrassed about reading literature for children.” Instead, these readers were furious. The sentiment on Twitter could be summarized as “Don’t tell me what to do!” as if Graham were a bossy, uncomprehending parent warning the kids away from sugary snacks toward more nutritious, chewier stuff.
It was not an argument she was in a position to win, however persuasive her points. To oppose the juvenile pleasures of empowered cultural consumers is to assume, wittingly or not, the role of scold, snob or curmudgeon. Full disclosure: The shoe fits. I will admit to feeling a twinge of disapproval when I see one of my peers clutching a volume of “Harry Potter” or “The Hunger Games.” I’m not necessarily proud of this reaction. As cultural critique, it belongs in the same category as the sneer I can’t quite suppress when I see guys my age (pushing 50) riding skateboards or wearing shorts and flip-flops, or the reflexive arching of my eyebrows when I notice that a woman at the office has plastic butterfly barrettes in her hair.
God, listen to me! Or don’t. My point is not so much to defend such responses as to acknowledge how absurd, how impotent, how out of touch they will inevitably sound. In my main line of work as a film critic, I have watched over the past 15 years as the studios committed their vast financial and imaginative resources to the cultivation of franchises (some of them based on those same Y.A. novels) that advance an essentially juvenile vision of the world. Comic-book movies, family-friendly animated adventures, tales of adolescent heroism and comedies of arrested development do not only make up the commercial center of 21st-century Hollywood. They are its artistic heart.
Meanwhile, television has made it very clear that we are at a frontier. Not only have shows like “The Sopranos” and “Mad Men” heralded the end of male authority; we’ve also witnessed the erosion of traditional adulthood in any form, at least as it used to be portrayed in the formerly tried-and-true genres of the urban cop show, the living-room or workplace sitcom and the prime-time soap opera. Instead, we are now in the age of “Girls,” “Broad City,” “Masters of Sex” (a prehistory of the end of patriarchy), “Bob’s Burgers” (a loopy post-“Simpsons” family cartoon) and a flood of goofy, sweet, self-indulgent and obnoxious improv-based web videos.
What all of these shows grasp at, in one way or another, is that nobody knows how to be a grown-up anymore. Adulthood as we have known it has become conceptually untenable. It isn’t only that patriarchy in the strict, old-school Don Draper sense has fallen apart. It’s that it may never really have existed in the first place, at least in the way its avatars imagined. Which raises the question: Should we mourn the departed or dance on its grave?
Before we answer that, an inquest may be in order. Who or what killed adulthood? Was the death slow or sudden? Natural or violent? The work of one culprit or many? Justifiable homicide or coldblooded murder?
We Americans have never been all that comfortable with patriarchy in the strict sense of the word. The men who established our political independence — guys who, for the most part, would be considered late adolescents by today’s standards (including Benjamin Franklin, in some ways the most boyish of the bunch) — did so partly in revolt against the authority of King George III, a corrupt, unreasonable and abusive father figure. It was not until more than a century later that those rebellious sons became paternal symbols in their own right. They weren’t widely referred to as Founding Fathers until Warren Harding, then a senator, used the phrase around the time of World War I.

From the start, American culture was notably resistant to the claims of parental authority and the imperatives of adulthood. Surveying the canon of American literature in his magisterial “Love and Death in the American Novel,” Leslie A. Fiedler suggested, more than half a century before Ruth Graham, that “the great works of American fiction are notoriously at home in the children’s section of the library.” Musing on the legacy of Rip Van Winkle and Huckleberry Finn, he broadened this observation into a sweeping (and still very much relevant) diagnosis of the national personality: “The typical male protagonist of our fiction has been a man on the run, harried into the forest and out to sea, down the river or into combat — anywhere to avoid ‘civilization,’ which is to say the confrontation of a man and woman which leads to the fall to sex, marriage and responsibility. One of the factors that determine theme and form in our great books is this strategy of evasion, this retreat to nature and childhood which makes our literature (and life!) so charmingly and infuriatingly ‘boyish.’ ”
Huck Finn is for Fiedler the greatest archetype of this impulse, and he concludes “Love and Death” with a tour de force reading of Twain’s masterpiece. What Fiedler notes, and what most readers of “Huckleberry Finn” will recognize, is Twain’s continual juxtaposition of Huck’s innocence and instinctual decency with the corruption and hypocrisy of the adult world.
Huck’s “Pap” is a thorough travesty of paternal authority, a wretched, mean and dishonest drunk whose death is among the least mourned in literature. When Huck drifts south from Missouri, he finds a dysfunctional patriarchal order whose notions of honor and decorum mask the ultimate cruelty of slavery. Huck’s hometown represents “the world of belongingness and security, of school and home and church, presided over by the mothers.” But this matriarchal bosom is as stifling to Huck as the land of Southern fathers is alienating. He finds authenticity and freedom only on the river, in the company of Jim, the runaway slave, a friend who is by turns Huck’s protector and his ward.
The love between this pair repeats a pattern Fiedler discerned in the bonds between Ishmael and Queequeg in “Moby-Dick” and Natty Bumppo and Chingachgook in James Fenimore Cooper’s Leatherstocking novels (which Twain famously detested). What struck Fiedler about these apparently sexless but intensely homoerotic connections was their cross-cultural nature and their defiance of heterosexual expectation. At sea or in the wilderness, these friends managed to escape both from the institutions of patriarchy and from the intimate authority of women, the mothers and wives who represent a check on male freedom.
Fiedler saw American literature as sophomoric. He lamented the absence of books that tackled marriage and courtship — for him the great grown-up themes of the novel in its mature, canonical form. Instead, notwithstanding a few outliers like Henry James and Edith Wharton, we have a literature of boys’ adventures and female sentimentality. Or, to put it another way, all American fiction is young-adult fiction.
The elevation of the wild, uncivilized boy into a hero of the age remained a constant even as American society itself evolved, convulsed and transformed. While Fiedler was sitting at his desk in Missoula, Mont., writing his monomaniacal tome, a youthful rebellion was asserting itself in every corner of the culture. The bad boys of rock ‘n’ roll and the pouting screen rebels played by James Dean and Marlon Brando proved Fiedler’s point even as he was making it. So did Holden Caulfield, Dean Moriarty, Augie March and Rabbit Angstrom — a new crop of semi-antiheroes in flight from convention, propriety, authority and what Huck would call the whole “sivilized” world.

From there it is but a quick ride on the Pineapple Express to Apatow. The Updikean and Rothian heroes of the 1960s and 1970s chafed against the demands of marriage, career and bureaucratic conformity and played the games of seduction and abandonment, of adultery and divorce, for high existential stakes, only to return a generation later as the protagonists of bro comedies. We devolve from Lenny Bruce to Adam Sandler, from “Catch-22” to “The Hangover,” from “Goodbye, Columbus” to “The Forty-Year-Old Virgin.”
But the antics of the comic man-boys were not merely repetitive; in their couch-bound humor we can detect the glimmers of something new, something that helped speed adulthood to its terminal crisis. Unlike the antiheroes of eras past, whose rebellion still accepted the fact of adulthood as its premise, the man-boys simply refused to grow up, and did so proudly. Their importation of adolescent and preadolescent attitudes into the fields of adult endeavor (see “Billy Madison,” “Knocked Up,” “Step Brothers,” “Dodgeball”) delivered a bracing jolt of subversion, at least on first viewing. Why should they listen to uptight bosses, stuck-up rich guys and other readily available symbols of settled male authority?
That was only half the story, though. As before, the rebellious animus of the disaffected man-child was directed not just against male authority but also against women. In Sandler’s early, funny movies, and in many others released under Apatow’s imprimatur, women are confined to narrowly archetypal roles. Nice mommies and patient wives are idealized; it’s a relief to get away from them and a comfort to know that they’ll take care of you when you return. Mean mommies and controlling wives are ridiculed and humiliated. Sexually assertive women are in need of being shamed and tamed. True contentment is only found with your friends, who are into porn and “Star Wars” and weed and video games and all the stuff that girls and parents just don’t understand.
The bro comedy has been, at its worst, a cesspool of nervous homophobia and lazy racial stereotyping. Its postures of revolt tend to exemplify the reactionary habit of pretending that those with the most social power are really beleaguered and oppressed. But their refusal of maturity also invites some critical reflection about just what adulthood is supposed to mean. In the old, classic comedies of the studio era — the screwbally roller coasters of marriage and remarriage, with their dizzying verbiage and sly innuendo — adulthood was a fact. It was incontrovertible and burdensome but also full of opportunity. You could drink, smoke, flirt and spend money. The trick was to balance the fulfillment of your wants with the carrying out of your duties.
The desire of the modern comic protagonist, meanwhile, is to wallow in his own immaturity, plumbing its depths and reveling in its pleasures. Sometimes, as in the recent Seth Rogen movie “Neighbors,” he is able to do that within the context of marriage. At other, darker times, say in Adelle Waldman’s literary comedy of manners, “The Love Affairs of Nathaniel P.,” he will remain unattached and promiscuous, though somewhat more guiltily than in his Rothian heyday, with more of a sense of the obligation to be decent. It should be noted that the modern man-boy’s predecessors tended to be a lot meaner than he allows himself to be.
But they also, at least some of the time, had something to fight for, a moral or political impulse underlying their postures of revolt. The founding brothers in Philadelphia cut loose a king; Huck Finn exposed the dehumanizing lies of American slavery; Lenny Bruce battled censorship. When Marlon Brando’s Wild One was asked what he was rebelling against, his thrilling, nihilistic response was “Whaddaya got?” The modern equivalent would be “. . .”
Maybe nobody grows up anymore, but everyone gets older. What happens to the boy rebels when the dream of perpetual childhood fades and the traditional prerogatives of manhood are unavailable? There are two options: They become irrelevant or they turn into Louis C. K. Every white American male under the age of 50 is some version of the character he plays on “Louie,” a show almost entirely devoted to the absurdity of being a pale, doughy heterosexual man with children in a post-patriarchal age. Or, if you prefer, a loser.
The humor and pathos of “Louie” come not only from the occasional funny feelings that he has about his privileges — which include walking through the city in relative safety and the expectation of sleeping with women who are much better looking than he is — but also, more profoundly, from his knowledge that the conceptual and imaginative foundations of those privileges have crumbled beneath him. He is the center of attention, but he’s not entirely comfortable with that. He suspects that there might be other, more interesting stories around him, funnier jokes, more dramatic identity crises, and he knows that he can’t claim them as his own. He is above all aware of a force in his life, in his world, that by turns bedevils him and gives him hope, even though it isn’t really about him at all. It’s called feminism.
Who is the most visible self-avowed feminist in the world right now? If your answer is anyone other than Beyoncé, you might be trying a little too hard to be contrarian. Did you see her at the V.M.A.’s, in her bejeweled leotard, with the word “feminist” in enormous illuminated capital letters looming on the stage behind her? A lot of things were going on there, but irony was not one of them. The word was meant, with a perfectly Beyoncé-esque mixture of poise and provocation, to encompass every other aspect of her complicated and protean identity. It explains who she is as a pop star, a sex symbol, the mother of a daughter and a partner in the most prominent African-American power couple not currently resident in the White House.
And while Queen Bey may be the biggest, most self-contradicting, most multitude-containing force in popular music at the moment, she is hardly alone. Taylor Swift recently described how, under the influence of her friend Lena Dunham, she realized that “I’ve been taking a feminist stance without saying so,” which only confirmed what anyone who had been listening to her smart-girl power ballads already knew. And while there will continue to be hand-wringing about the ways female singers are sexualized — cue the pro and con think pieces about Nicki Minaj, Katy Perry, Miley Cyrus, Iggy Azalea, Lady Gaga, Kesha and, of course, Madonna, the mother of them all — it is hard to argue with their assertions of power and independence. Take note of the extent and diversity of that list and feel free to add names to it. The dominant voices in pop music now, with the possible exception of rock, which is dad music anyway, belong to women. The conversations rippling under the surfaces of their songs are as often as not with other women — friends and fans.
Similar conversations are taking place in the other arts: in literature, in stand-up comedy and even in film, which lags far behind the others in making room for the creativity of women. But television, the monument valley of the dying patriarchs, may be where the new cultural feminism is making its most decisive stand. There is now more and better television than there ever was before, so much so that “television,” with its connotations of living-room furniture and fixed viewing schedules, is hardly an adequate word for it anymore. When you look beyond the gloomy-man, angry-man, antihero dramas that too many critics reflexively identify as quality television — “House of Cards,” “Game of Thrones,” “True Detective,” “Boardwalk Empire,” “The Newsroom” — you find genre-twisting shows about women and girls in all kinds of places and circumstances, from Brooklyn to prison to the White House. The creative forces behind these programs are often women who have built up the muscle and the résumés to do what they want.
Many people forget that the era of the difficult TV men, of Tony and Don and Heisenberg, was also the age of the difficult TV mom, of shows like “Weeds,” “United States of Tara,” “The Big C” and “Nurse Jackie,” which did not inspire the same level of critical rapture partly because they could be tricky to classify. Most of them occupied the half-hour rather than the hourlong format, and they were happy to swerve between pathos and absurdity. Were they sitcoms or soap operas? This ambiguity, and the stubborn critical habit of refusing to take funny shows and family shows as seriously as cop and lawyer sagas, combined to keep them from getting the attention they deserved. But it also proved tremendously fertile.
The cable half-hour, which allows for both the concision of the network sitcom and the freedom to talk dirty and show skin, was also home to “Sex and the City,” in retrospect the most influential television series of the early 21st century. “Sex and the City” put female friendship — sisterhood, to give it an old political inflection — at the center of the action, making it the primary source of humor, feeling and narrative complication. “The Mary Tyler Moore Show” and its spinoffs did this in the 1970s. But Carrie and her girlfriends could be franker and freer than their precursors, and this made “Sex and the City” the immediate progenitor of “Girls” and “Broad City,” which follow a younger generation of women pursuing romance, money, solidarity and fun in the city.
Those series are, unambiguously, comedies, though “Broad City” works in a more improvisational and anarchic vein than “Girls.” Their more inhibited broadcast siblings include “The Mindy Project” and “New Girl.” The “can women be funny?” pseudo-debate of a few years ago, ridiculous at the time, has been settled so decisively it’s as if it never happened. Tina Fey, Amy Poehler, Amy Schumer, Aubrey Plaza, Sarah Silverman, Wanda Sykes: Case closed. The real issue, in any case, was never the ability of women to get a laugh but rather their right to be as honest as men.
And also to be as rebellious, as obnoxious and as childish. Why should boys be the only ones with the right to revolt? Not that the new girls are exactly Thelma and Louise. Just as the men passed through the stage of sincere rebellion to arrive at a stage of infantile refusal, so, too, have the women progressed by means of regression. After all, traditional adulthood was always the rawest deal for them.
Which is not to say that the newer styles of women’s humor are simple mirror images of what men have been doing. On the contrary. “Broad City,” with the irrepressible friendship of the characters played by Ilana Glazer and Abbi Jacobson at its center, functions simultaneously as an extension and a critique of the slacker-doofus bro-posse comedy refined (by which I mean exactly the opposite) by “Workaholics” or the long-running web-based mini-sitcom “Jake and Amir.” The freedom of Abbi and Ilana, as of Hannah, Marnie, Shoshanna and Jessa on “Girls” — a freedom to be idiotic, selfish and immature as well as sexually adventurous and emotionally reckless — is less an imitation of male rebellion than a rebellion against the roles it has prescribed. In Fiedler’s stunted American mythos, where fathers were tyrants or drunkards, the civilizing, disciplining work of being a grown-up fell to the women: good girls like Becky Thatcher, who kept Huck’s pal Tom Sawyer from going too far astray; smothering maternal figures like the kind but repressive Widow Douglas; paragons of sensible judgment like Mark Twain’s wife, Livy, of whom he said he would “quit wearing socks if she thought them immoral.”
Looking at those figures and their descendants in more recent times — and at the vulnerable patriarchs lumbering across the screens to die — we can see that to be an American adult has always been to be a symbolic figure in someone else’s coming-of-age story. And that’s no way to live. It is a kind of moral death in a culture that claims youthful self-invention as the greatest value. We can now avoid this fate. The elevation of every individual’s inarguable likes and dislikes over formal critical discourse, the unassailable ascendancy of the fan, has made children of us all. We have our favorite toys, books, movies, video games, songs, and we are as apt to turn to them for comfort as for challenge or enlightenment.
Y.A. fiction is the least of it. It is now possible to conceive of adulthood as the state of being forever young. Childhood, once a condition of limited autonomy and deferred pleasure (“wait until you’re older”), is now a zone of perpetual freedom and delight. Grown people feel no compulsion to put away childish things: We can live with our parents, go to summer camp, play dodge ball, collect dolls and action figures and watch cartoons to our hearts’ content. These symptoms of arrested development will also be signs that we are freer, more honest and happier than the uptight fools who let go of such pastimes.
I do feel the loss of something here, but bemoaning the general immaturity of contemporary culture would be as obtuse as declaring it the coolest thing ever. A crisis of authority is not for the faint of heart. It can be scary and weird and ambiguous. But it can be a lot of fun, too. The best and most authentic cultural products of our time manage to be all of those things. They imagine a world where no one is in charge and no one necessarily knows what’s going on, where identities are in perpetual flux. Mothers and fathers act like teenagers; little children are wise beyond their years. Girls light out for the territory and boys cloister themselves in secret gardens. We have more stories, pictures and arguments than we know what to do with, and each one of them presses on our attention with a claim of uniqueness, a demand to be recognized as special. The world is our playground, without a dad or a mom in sight.
I’m all for it. Now get off my lawn.
A. O. Scott is a chief film critic for The Times. He last wrote for the magazine about a crazy thing that happened on Twitter.

FW NOTE:  Chudakov’s opening sentence says just what he writes about, and I gained much insight from his nine specific ways in which we are being changed…
Our constant use of cameras, TVs, computers, and smart devices is affecting our thoughts and behavior to a degree we may not even realize…
Watching and being watched are no longer confined to how newborns bond with their mothers or apprentice chefs learn from sushi masters. Watching now changes how we identify ourselves and how others understand us. “Selfies” are not an anomaly; they are personal reflections of a wholesale adoption of the new culture of watching. We are watching so many—and so many are watching us in so many different places and ways—that watching and being watched fundamentally alter how we think and behave.
While 50% of our neural tissue is directly or indirectly related to vision, it is only in the last 100 years that image-delivery technologies (cameras, TVs, computers, smart devices) arrived. Here is a list of some ways all this watching is changing us.
Today the average person will have spent nine years of their life doing something that is not an essential human endeavor: watching other people, often people they don’t know. I’m talking, of course, about watching TV.
When asked to choose between watching TV and spending time with their fathers, 54% of 4- to 6-year-olds in the U.S. preferred television. The average American youth spends 900 hours a year in school and 1,200 hours a year watching TV.
In Korea today there are eating broadcasts, called muk-bang: online channels streaming live feeds of people eating large quantities of food while chatting with viewers who pay to watch them.
A survey of first-time plastic surgery patients found that 78% were influenced by reality television and 57% of all first-time patients were “high-intensity” viewers of cosmetic-surgery reality TV.
We watch housewives and Kardashians, TED talks and LOL cats. We watch people next to us (via the Android I-Am app) and people in 10-second “snaps” anywhere an IP address finds them (via Snapchat). The more we watch, the less we notice how much we’re watching. It seems it’s not only what we’re watching but the act of watching itself that beguiles us. The more devices and screens we watch, the more we rationalize our watching, give it precedence in our lives, tell ourselves it has meaning and purpose. We are redefining—and rewiring—ourselves in the process. This is the new (and very seductive) culture of watching.
In Japan’s Osaka train station—where an average of 413,000 passengers board trains every day—an independent research agency will soon deploy 90 cameras and 50 servers to watch and track faces as they move around the station. The purpose: to validate the safety of emergency exits in the event of a disaster. The technology can identify faces with a 99.99% accuracy rate.
We watch to learn. Evolutionary eons have taught us to watch to learn where we are, what is around us, what we need to pay attention to, where danger and excitement lurk. “Watching others is a favorite activity of young primates,” says Frans de Waal, one of the world’s leading primate behavior experts. This is how we build and transmit culture, he explains.
What are we learning from all this watching?
Thanks to wifi built into almost anything with a lens, we are learning to share what we watch. Jonah Berger, Wharton Associate Professor of Marketing at the University of Pennsylvania, looked at video sharing and created an “arousal index,” explaining that “physiological arousal is characterized by activation of the autonomic nervous system, and the mobilization caused by this excitory state may boost sharing.” Google Think Insights calls the YouTube generation Generation C for connection, community, creation, curation: 50% of Gen C talk to friends after watching a video, and 38% share videos on an additional social network after watching them on YouTube. As we watch emotionally charged content, our bodies—specifically, our autonomic nervous system—are compelled to share.
The experience of playing baseball, launching a missile attack, getting trapped in a mudslide, or chasing Maria Menounos is far different from watching those things. Yet now that we can watch almost anything—often while it happens—we must consider the neuroscience of “mirroring” that occurs when we watch others.
When our eyes are open, vision accounts for two-thirds of the electrical activity of the brain. But it is our mirror neurons—which V. S. Ramachandran, distinguished professor of neuroscience at the University of California, San Diego, calls “the basis of civilization”—that transport watching into the strange territory of being in an action where we’re not physically present.
As Le Anne Schreiber wrote in This Is Your Brain on Sports:
“[A]bout one-fifth of the neurons that fire in the premotor cortex when we perform an action (say, kicking a ball) also fire at the sight of somebody else performing that action. A smaller percentage fire even when we only hear a sound associated with an action (say, the crack of a bat). This subset of motor neurons that respond to others’ actions as if they were our own are called ‘mirror neurons,’ and they seem to encode a complete archive of all the muscle movements we learn to execute over the course of our lives, from the first smile and finger wag to a flawless triple toe loop.”
When we watch, we feel we’re there.
It appears that the idea of having some sense of a relationship with people who are not physically present, whom you do not know (in the conventional sense of having met them or being friends with them), arrived with the widespread adoption of television around 1950. Since then, these so-called parasocial relationships have become so common that we take them for granted. Television, virtual worlds, and gaming have created replacements for friends: people who occupy space in our media rooms and minds on an occasional basis.
Researchers now believe that loneliness motivates individuals to seek out these relationships, defying the obvious fact that the relationships are not real. The Real Housewives of Atlanta has 2,345,625 Facebook fans, who in some measure take real housewives into their own real lives.
People who watched a favorite TV show when they were feeling lonely reported feeling less lonely while watching. Further, while many of us experience lower self-esteem and a negative mood following a fight or social rejection, researchers found that those participants who experienced a relationship threat and then watched their favorite TV show were actually buffered against the blow to self-esteem, negative mood, and feelings of rejection.
It pays to have friends on TV.

From micro video security cameras (“less than one inch square”) to The Rich Kids of Beverly Hills, watching is now someone’s business plan. Eyeball-hungry producers especially want to blur the boundaries between the game of reality TV and the illusion of living real lives.
The result: Watch culture alters not only our sense of privacy in public; there is always someone in the vanity mirror looking back at us. (Author Jarod Kintz quipped: “A mirror is like my own personal reality TV show—where I’m both the star and only viewer. I’ve got to get my ratings up.”) As cameras obsessively follow other lives, our identity adjusts. Rather than acknowledge the artifice of lives deliberately programmed for storylines and conflicts—the lifeblood of so-called reality TV—we fuse our emotions and concerns with others’ professions, houses, cars, friends, husbands, and wives.
When watching assumes greater importance, the people we watch become personal replacements; they stand in our places and we in theirs. Models, stars, and athletes are the body doubles of watch culture. These doubles become our bodies: according to WebMD, reality television is contributing to eating disorders in girls. Since the boom of reality television in 2000, eating disorders in teenage girls (ages 13-19) have nearly tripled.
New technologies make us all paparazzi. 20 Day Stranger, an app developed by the MIT Media Lab Playful Systems research group and MIT’s Dalai Lama Center for Ethics and Transformative Values, makes it possible to swap lives with—and watch—a stranger for 20 days:
“As you and your distant partner get up and go to work or school or wherever else the world takes you, the app tracks your path, pulling related photos from Foursquare or Google Maps along the way. If you stop in a certain coffee shop, the app will find a picture someone took there, and send it to your partner.”
Ostensibly designed to “build empathy and awareness,” 20 Day Stranger delivers snackable images via smartphone, which strokes your inner voyeur while enabling yet another person to watch you and “slowly get an impression of [your] life.”
When Shain Gandee, star of MTV’s Buckwild, died, his vehicle stuck deep in a mud pit, Huffington Post’s Jesse Washington asked, “Was Gandee living for the cameras that night or for himself?”
This watcher-watched merger is growing uneasy. Many a real housewife—from Atlanta to Orange County—may begin to wonder: Whose life is it, anyway?
Professor Simon Louis Lajeunesse of the University of Montreal wanted to compare the behavior of men who viewed sexually explicit material with those who had never seen it. He had to drastically rethink his study after failing to find any male volunteers who had never watched porn.
The hallmark of watch culture is the remove. In the snug blind of the Internet or from the private places we take our devices, we are hidden, removed from interaction while watching action. Because we can now watch anonymously, we have opened a Pandora’s box of previously hidden urges. In such interactions, we are seeing a new kind of affinity: what researchers call “intimacy at a distance.”
In this faux intimacy, watching easily turns to spying. As our lenses take us to parts and pores we could have barely imagined only a generation ago, the urge to watch is so compelling that we adopt its logic—as we do with all our tools—and we easily move from watching what we can see to watching what we could see. With a camera in the baby’s room I could watch the nanny; with a camera on the third floor I could watch the clones in Accounting to see if they’re up to any funny business. Economic or security intentions ensure that this slope hardly feels slippery; we move down it easily, seamlessly slipping from watching to spying to invading and then to destroying what others thought were their personal moments—what many of us still consider privacy.
When we don’t know, we watch.
After the disappearance of Malaysia Airlines Flight 370, commentator Michael Smerconish and others argued that video should be fed in real time out of every airline cockpit to help investigators. Of course, pilots occupy a unique professional class. But today there are many businesses where security and confidentiality are paramount. How long before we apply the “learn by watching” logic to software engineers or doctors? We have already applied it to all our public and commercial spaces.
With the array of gadgetry available to us all, it is virtually impossible not to want to see everything. The new culture of watching overcomes time and space and takes precedence over moral and ethical boundaries.
Watching not only changes our narratives—what we say about the world; it changes what we know and how we know it. Pew recently reported that we get more of our information now from watching news (via TV and mobile devices) than from any other method. But “information” in this sense is now affected by—even mixed up with—the other watching we do. Writing on CNN Opinion, Carol Costello asked, “Why are we still debating climate change?” In 2013, 10,883 out of 10,885 scientific articles agreed: Global warming is happening, and humans are to blame. Citing lack of public confidence in these scientists, Costello wrote:
“Most Americans can’t even name a living scientist. I suspect the closest many Americans get to a living, breathing scientist is the fictional Dr. Sheldon Cooper from CBS’s sitcom The Big Bang Theory. Sheldon is brilliant, condescending, and narcissistic. Whose trust would he inspire?”
There is a logic here that is difficult to understand rationally but is operative nonetheless: What we know is not what we experience but what we watch.
We watch housewives and Kardashians, TED talks and LOL cats. We watch people next to us (via the Android I-Am app) and people in 10-second “snaps” anywhere an IP address finds them (via Snapchat). The more we watch, the less we notice how much we’re watching.
So it is not surprising that watching boomerangs—creating watchers who watch us back from hidden or out-of-sightline cameras. Watchers monitor our faces and bodies coming and going in convenience stores, gas stations, banks, department stores, and schools. Newly formed companies have created thriving businesses watching people “passing through doorways, passageways or in open areas” to count them, track them, and analyze what can be seen from an “unlimited number of cameras.”
Even driving to the store, you’re being watched via your license plate.
Ironically, the culture of watching will compel us—sooner or later—to keep watch: to be mindful of how much we watch and how much all this watching changes us. That may be the best way to detect and positively affect what is happening right before our eyes.
Barry Chudakov is a founder and principal of Sertain Research; author of Metalifestream and The Tool That Tells the Story; and a market researcher, brand, and media consultant.

FW NOTE:  As a teacher myself, I identify with Clay Shirky and his struggle with policing media use in class and at home. But what got my attention the most was this quote: “The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class.” His illumination of the consequences and costs of digitally enabled ‘multi-tasking’ is of importance to all ages…
I teach theory and practice of social media at NYU, and am an advocate and activist for the free culture movement, so I’m a pretty unlikely candidate for internet censor, but I have just asked the students in my fall seminar to refrain from using laptops, tablets, and phones in class.
I came late and reluctantly to this decision — I have been teaching classes about the internet since 1998, and I’ve generally had a laissez-faire attitude towards technology use in the classroom. This was partly because the subject of my classes made technology use feel organic, and when device use went well, it was great. Then there was the competitive aspect — it’s my job to be more interesting than the possible distractions, so a ban felt like cheating. And finally, there’s not wanting to infantilize my students, who are adults, even if young ones — time management is their job, not mine.
Despite these rationales, the practical effects of my decision to allow technology use in class grew worse over time. The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, taught to a group of students selected using roughly the same criteria every year. The change seemed to correlate more with the rising ubiquity and utility of the devices themselves, rather than any change in me, the students, or the rest of the classroom encounter.
Over the years, I’ve noticed that when I do have a specific reason to ask everyone to set aside their devices (‘Lids down’, in the parlance of my department), it’s as if someone has let fresh air into the room. The conversation brightens, and more recently, there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.
So this year, I moved from recommending setting aside laptops and phones to requiring it, adding this to the class rules: “Stay focused. (No devices in class, unless the assignment requires it.)” Here’s why I finally switched from ‘allowed unless by request’ to ‘banned unless required’.
We’ve known for some time that multi-tasking is bad for the quality of cognitive work, and is especially punishing of the kind of cognitive work we ask of college students.
This effect takes place over more than one time frame — even when multi-tasking doesn’t significantly degrade immediate performance, it can have negative long-term effects on “declarative memory”, the kind of focused recall that lets people characterize and use what they learned from earlier studying. (Multi-tasking thus makes the famous “learned it the day before the test, forgot it the day after” effect even more pernicious.)
People often start multi-tasking because they believe it will help them get more done. Those gains never materialize; instead, efficiency is degraded. However, it provides emotional gratification as a side-effect. (Multi-tasking moves the pleasure of procrastination inside the period of work.) This side-effect is enough to keep people committed to multi-tasking even as it worsens the very thing they set out to improve.
On top of this, multi-tasking doesn’t even exercise task-switching as a skill. A study from Stanford reports that heavy multi-taskers are worse at choosing which task to focus on. (“They are suckers for irrelevancy”, as Cliff Nass, one of the researchers put it.) Multi-taskers often think they are like gym rats, bulking up their ability to juggle tasks, when in fact they are like alcoholics, degrading their abilities through over-consumption.
This is all just the research on multi-tasking as a stable mental phenomenon. Laptops, tablets and phones — the devices on which the struggle between focus and distraction is played out daily — are making the problem progressively worse. Any designer of software as a service has an incentive to be as ingratiating as they can be, in order to compete with other such services. “Look what a good job I’m doing! Look how much value I’m delivering!”
This problem is especially acute with social media, because on top of the general incentive for any service to be verbose about its value, social information is immediately and emotionally engaging. Both the form and the content of a Facebook update are almost irresistibly distracting, especially compared with the hard slog of coursework. (“Your former lover tagged a photo you are in” vs. “The Crimean War was the first conflict significantly affected by use of the telegraph.” Spot the difference?)
Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)
The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while someone raps her knuckles with a ruler at unpredictable intervals.
Jonathan Haidt’s metaphor of the elephant and the rider is useful here. In Haidt’s telling, the mind is like an elephant (the emotions) with a rider (the intellect) on top. The rider can see and plan ahead, but the elephant is far more powerful. Sometimes the rider and the elephant work together (the ideal in classroom settings), but if they conflict, the elephant usually wins.
After reading Haidt, I’ve stopped thinking of students as people who simply make choices about whether to pay attention, and started thinking of them as people trying to pay attention but having to compete with various influences, the largest of which is their own propensity towards involuntary and emotional reaction. (This is even harder for young people, the elephant so strong, the rider still a novice.)
Regarding teaching as a shared struggle changes the nature of the classroom. It’s not me demanding that they focus — it’s me and them working together to help defend their precious focus against outside distractions. I have a classroom full of riders and elephants, but I’m trying to teach the riders.
And while I do, who is whispering to the elephants? Facebook, Wechat, Twitter, Instagram, Weibo, Snapchat, Tumblr, Pinterest, the list goes on, abetted by the designers of the Mac, iOS, Windows, and Android. In the classroom, it’s me against a brilliant and well-funded army (including, sharper than a serpent’s tooth, many of my former students). These designers and engineers have every incentive to capture as much of my students’ attention as they possibly can, without regard for any commitment those students may have made to me or to themselves about keeping on task.
It doesn’t have to be this way, of course. Even a passing familiarity with the literature on programming, a famously arduous cognitive task, will acquaint you with stories of people falling into code-flow so deep they lose track of time, forgetting to eat or sleep. Computers are not inherent sources of distraction — they can in fact be powerful engines of focus — but latter-day versions have been designed to be, because attention is the substance which makes the whole consumer internet go.
The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose.
The final realization — the one that firmly tipped me over into the “No devices in class” camp — was this: screens generate distraction in a manner akin to second-hand smoke. A paper with the blunt title Laptop Multitasking Hinders Classroom Learning for Both Users and Nearby Peers says it all:

We found that participants who multitasked on a laptop during a lecture scored lower on a test compared to those who did not multitask, and participants who were in direct view of a multitasking peer scored lower on a test compared to those who were not. The results demonstrate that multitasking on a laptop poses a significant distraction to both users and fellow students and can be detrimental to comprehension of lecture content.
I have known, for years, that the basic research on multi-tasking was adding up, and that for anyone trying to do hard thinking (our spécialité de la maison, here at college), device use in class tends to be a net negative. Even with that consensus, however, it was still possible to imagine that the best way to handle the question was to tell the students about the research, and let them make up their own minds.
The “Nearby Peers” effect, though, shreds that rationale. There is no laissez-faire attitude to take when the degradation of focus is social. Allowing laptop use in class is like allowing boombox use in class — it lets each person choose whether to degrade the experience of those around them.
Groups also have a rider-and-elephant problem, best described by Wilfred Bion in an oddly written but influential book, Experiences in Groups. In it, Bion, who practiced group therapy, observed how his patients would unconsciously coordinate their actions to defeat the purpose of therapy. In discussing the ramifications of this, Bion observed that effective groups often develop elaborate structures, designed to keep their sophisticated goals from being derailed by more primal group activities like gossiping about members and vilifying non-members.
The structure of a classroom, and especially a seminar room, exhibits the same tension. All present have an incentive for the class to be as engaging as possible; even though engagement often means waiting to speak while listening to other people wrestle with half-formed thoughts, that’s the process by which people get good at managing the clash of ideas. Against that long-term value, however, each member has an incentive to opt out, even if only momentarily. The smallest loss of focus can snowball, the impulse to check WeChat quickly and then put the phone away leading to just one message that needs a reply right now, and then, wait, what happened last night??? (To the people who say “Students have always passed notes in class”, I reply that old-model notes didn’t contain video and couldn’t arrive from anywhere in the world at 10 megabits a second.)
I have the good fortune to teach in cities richly provisioned with opportunities for distraction. Were I a 19-year-old planning an ideal day in Shanghai, I would not put “Listen to an old guy talk for an hour” at the top of my list. (Vanity prevents me from guessing where it would go.) And yet I can teach the students things they are interested in knowing, and despite all the literature on joyful learning, from Maria Montessori on down, some parts of making your brain do new things are just hard.
Indeed, college contains daily exercises in delayed gratification. “Discuss early modern European print culture” will never beat “Sing karaoke with friends” in a straight fight, but in the long run, having a passable Rihanna impression will be less useful than understanding how media revolutions unfold.
Anyone distracted in class doesn’t just lose out on the content of the discussion, they create a sense of permission that opting out is OK, and, worse, a haze of second-hand distraction for their peers. In an environment like this, students need support for the better angels of their nature (or at least the more intellectual angels), and they need defenses against the powerful short-term incentives to put off complex, frustrating tasks. That support and those defenses don’t just happen, and they are not limited to the individual’s choices. They are provided by social structure, and that structure is disproportionately provided by the professor, especially during the first weeks of class.
This is, for me, the biggest change — not a switch in rules, but a switch in how I see my role. Professors are at least as bad at estimating how interesting we are as the students are at estimating their ability to focus. Against oppositional models of teaching and learning, both negative—Concentrate, or lose out!—and positive—Let me attract your attention!—I’m coming to see student focus as a collaborative process. It’s me and them working to create a classroom where the students who want to focus have the best shot at it, in a world increasingly hostile to that goal.
Some of the students will still opt out, of course, which remains their prerogative and rightly so, but if I want to help the ones who do want to pay attention, I’ve decided it’s time to admit that I’ve brought whiteboard markers to a gun fight, and act accordingly.
     THE 1%’S OBSCENE & GROWING SHARE OF ALL U.S. WEALTH… – “The most important chart about the American economy you’ll see this year”
To subscribe email  with “Subscribe” in the Subject line. Thank you.
To unsubscribe email with “Unsubscribe” in the Subject line. Thank you.
© Copyright 2002-2014, The Center for Third Age Leadership, except where indicated otherwise. All rights reserved worldwide. Reprint only with permission from copyright holder(s). All trademarks are property of their respective owners. All contents provided as is. No express or implied income claims made herein. This newsletter is available by subscription only. We neither use nor endorse the use of spam.
Please feel free to use excerpts from this newsletter as long as you give credit with a link to our page: Thank you!