Idea for an idle neuroscientist

3 May

In my quarantine, I’ve been curious about a thought-process that I would love to see some brain scientist, psychologist, or philosopher of mind take on. It has to do with social networking, and it has been heightened by the quarantine: for the first time, setting aside the people we live with, we have an almost artificially equal choice about whom we relate to socially.

This is, in brief, a problem about sharing content. When we hear a joke, read an article or poem, or see a photo or film, we can often instantly think of the friend or friends who we know would most like it. I’d call it an “I-need-to-recommend-this-to” function. I’ve been noticing this as I scroll through Facebook or Instagram, but it’s not a trivial question. Instead, this seems like an amazingly complex instance of nearly infinite relational thinking. Your brain is extracting some quality or qualities from the text, then matching that against countless likes and preferences, personality traits, and predicted responses of anywhere from a few possible contacts to, for those of us who teach students or belong to large interest communities, hundreds of people.

To illustrate in its simplest and most binary form: you hear a joke, let’s say a political joke. Asked to choose which of two friends would most appreciate it, you’d likely choose the one whose politics the joke most closely mirrors. But now think about this: not only do you multiply that choice by the number of friends (including family) you have, but each text is really made up of multiple factors – not just the political views of the joke, but the style of the humor, the images, and who knows what other variables that make a joke or any verbal text appealing or aesthetic. And you are weighing an infinite number of unnamable traits among your pool of friends, not just simple political voting preferences. Yet we can do this, often without any doubt, and sometimes, which makes it even more complex, with a degree of uncertainty.

Try this experiment. To see what I mean, choose any photo or painting or novel or TV program – a photo on your phone, say. Then think as quickly as you can: “Of all the people I know, who is the one person this would most appeal to?” Something instant is going on in your brain: an incredible range of assessments relating the aesthetic or moral qualities of a text to your understanding of the receptive and cognitive qualities of all the individuals you interact closely with. You are thinking not only about their internal thought processes (likes, dislikes, personality) but also about the nature of your communicative and affective relationship with each of those people – a rapid, sometimes instantaneous intuitive thought-process of which we have no meta-awareness unless we are asked “Why?”

A photo of your cat you just took: you rifle through your mental rolodex and pull out the people you know who like cats, who are interested in your life and your cat, someone who will have the time or take the time to look, and someone whose emotional response you can predict will be the most favorable. (Maybe it will be more than one person, so we’re capable of mapping this kind of relationship onto multiple people at once.) Sharing on social media allows people to choose their own response, but direct messaging forces us through a very intentional yet usually unconscious analysis of imprecise and unspeakable data, of which we only become self-conscious “on second thought.”
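Purely as an illustration – emphatically not a cognitive model – the judgment described above can be caricatured as feature extraction plus preference matching. Every name, feature, and weight below is invented:

```python
# Toy sketch of the "who would most like this?" judgment: extract content
# features, score each friend's (hypothetical) preference profile against
# them, and pick the best match. All data here is invented for illustration.

content_features = {"cats": 0.9, "humor": 0.4, "politics": 0.0}  # the cat photo

friend_profiles = {
    "Ana":  {"cats": 0.8, "humor": 0.2, "politics": 0.5},
    "Ben":  {"cats": 0.1, "humor": 0.9, "politics": 0.7},
    "Cleo": {"cats": 0.9, "humor": 0.6, "politics": 0.1},
}

def affinity(content, profile):
    """Dot-product match between a text's features and one friend's tastes."""
    return sum(weight * profile.get(feature, 0.0)
               for feature, weight in content.items())

best_match = max(friend_profiles,
                 key=lambda f: affinity(content_features, friend_profiles[f]))
print(best_match)  # Cleo
```

The point of the post, of course, is that the real version runs over unnamable features and hundreds of people, instantly and without introspection; the toy only shows why even a crude version of the task is combinatorially rich.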

I am asking what is going on in our brains (and where) when we do this?


Notes on the 2028 Olympics

14 Sep

I’ve been in the midst of writing other things for publication and haven’t kept up with this or my other blog (although I have two draft posts I’ve never finished), and frankly like many people am so overwhelmed by the current cultural moment I don’t always have the mental clarity to sit down and say something useful.

But I wanted to write a short post about the awarding of the 2028 Olympics to Los Angeles the other day. I feel at times like I am the only person still interested in this, and I don’t even know why I am any longer (more on that another time), but indulge me in this – or you can stop here and go on to read something more useful. I just want to bookmark these ideas to come back to later.

The social and cultural implications of the Olympic Games are always deeper than we realize; they are a kind of microcosm of the world, which shouldn’t be too surprising but somehow always is.

There was a time, after the Montreal economic debacle, when no city wanted the Olympics – L.A. was the sole candidate in 1984, an Olympics at the beginning of what we now know as the neoliberal age. Those games, thanks to Peter Ueberroth (remember him?), made a profit, paving the way for the Olympics to become desirable again, at least for the corporate class and for cities that wanted to assert an up-and-coming world-status identity. Now, three decades later, well into a vastly changed capitalist order (not to mention the collapse of the communist bloc and its adherents), and after the financial disasters of the Athens and Rio de Janeiro games, the Olympics that cities and countries once competed for have again become radioactive. Even people with a positive view of the Olympic movement see them with a kind of NIMBYism; no one wants to host, except maybe cities like Istanbul that are problematic for other reasons. Three of the five final candidate cities for the 2024 Games pulled out after making it to the final round, usually bowing to popular, economic, and political pressure (in the case of Budapest, arguably, xenophobia was a factor, which works at cross purposes with the meaning of the Olympics in the first place). And Boston, which had the USOC’s designation as the American candidate city, also pulled out, leaving Los Angeles as the substitute. The reason the IOC smartly awarded both the 2024 and 2028 Games in one stroke yesterday is that it is entirely conceivable no city would have bid for the Games in 2028, and even the opposition in Los Angeles, now apparently quite small, would have had time to build.

The other thing of note is that even though it’s only 11 years away, for Americans 2028 seems inconceivably far in the future. What are this country and its civic institutions going to look like then? Not only will the impact of climate change be felt more strongly, but we have no idea how the changes to our civil and political culture now underway will play out. Nor, for that matter, can we predict how robust our economy, and specifically local, urban economies, will be, what other kinds of social needs will go unmet, and what revenue streams will look like given possible pending tax-law changes. Seven years may not be long enough for a city to build and prepare for the Olympics, especially if they require a lot of new construction; but eleven years, volatile years – who can say with confidence what our world will look like? Then again, there might not have been any candidate cities four years from now, and then what? Three Olympics have been cancelled, so there is precedent, but it took two World Wars.

We still have to ask: Who are the Games for? Many, like me, believe they still have a valuable role to play culturally – beyond economics – but Rio was such a disaster for the local people, and quite possibly for Brazil nationally, that we need to revisit that belief.


28 Feb

A lot has happened since I last posted here. I try to keep this blog focused on culture, and not on politics outside of cultural issues. As readers know, we’ve been rather hit over the head these past few months. How this has affected culture is beginning to show, like cracks in a foundation.

Today I was listening to the normally great WNYC radio host Brian Lehrer, whose local show in New York covers all sorts of issues and local politics. He did an interview segment with writer and psychologist Andrew Solomon about how parents should or can introduce their children to political issues in these troubled times. Lehrer is also one of the hosts of National Public Radio’s new call-in program, Indivisible, which promises to get people talking outside of their bubbles and, in the words of WNYC President Laura Walker, to “find common ground.” The way the show is run, and indeed the ultimate direction of today’s interview, underscored for me the failure of American liberalism, and in particular of the press, to address what has happened and is happening in this country and the world.

Lehrer asked about striking a balance between educating children on the significance of the current moment and letting them make up their own minds as independent thinkers, without telling them what to think. I was taken aback by the implicit assumption that parents are not allowed to educate their children in ways of thinking critically, or that taking them to a demonstration might infringe on their right to think for themselves (full disclosure: one of the best things my grandmother ever did for me was take me around her apartment building to get signatures for her anti-Vietnam War petition to her senators). But Solomon made the good point that parents can reframe the discussion away from telling kids whom to vote for and towards the implicit moral distinctions between love and hate, respect and bigotry, inclusion and exclusion, racism, and so on, assuming we choose to live more moral lives. Like a good liberal, you don’t tell people, not even your children, what to think; you let them make up their own minds, but you try to instill in them a sense of your own morality or, as Lehrer interpreted it, the difference between good and evil (even as we’ve seen a wedge driven between different takes on “good and evil” in the new national discourse).

Some parents called in and made some good points and asked good questions, including one woman whose 11-year-old son turned to her and said, “They’re all crooks, Mom, even Bernie Sanders.”  (The whole 17-minute interview segment is worth listening to.)  And this to me is why liberalism is losing.  First, on this level, when people get disgusted with the entire political process, and turn away from all candidates as equally bad, we know that benefits autocrats and harms participatory democracy, by definition, really. So low voter turnouts are not politically neutral; they benefit right-wing candidates.  Turning people against politicians – a big part of the current rhetoric of the last political campaign – and encouraging them to sit home does not have symmetrical results for both ends of the political spectrum.  It disproportionately harms those who run on more transparent, egalitarian, inclusive platforms that emphasize sharing of resources, citizen and informed participation, and global sustainability.

But at no point during this interview segment did anyone mention that parents, like the press, have a responsibility to teach listeners, whether children or adults, how to distinguish fact from fiction, truth from lies. There is much in political culture that is based on “opinion” (whatever that means, which is a topic for another day). But reasonable actions, regardless of one’s morality, cannot be taken on the basis of misinformation, whether deliberate or not, or on a lack of information. And parents, as well as the press and teachers, have the solemn responsibility to teach the young how (and why!) to be better informed and to recognize when something is an absolute falsehood. Truth. Truth matters. The American press, for the most part, did this too little and too late. The New York Times and Washington Post seem to have woken up to this now, and some of the Times’s recent editorials, such as this one on immigration, are model summaries of critical thinking and the application of facts to analyze and undermine lies, deliberate lies by our leaders. You can still be neutral while denouncing lies and misinformation. There may not be one absolute truth, and facts (and their ramifications) may be debatable, but we can’t allow them to be tossed aside as if they don’t matter, and just believe whichever fantasies confirm our prejudices. We may not be able to determine with certainty how facts relate to causes and consequences. But the search for that connection is vital to our survival.

Unfortunately, whenever I try to listen to Indivisible (which I feel like calling Unlistenable or Insufferable), it feels as though there were a directive on high from NPR management never, ever to correct callers’ statements, no matter how blatantly false or misinformed. Invariably, within the first couple of calls, a listener repeats some idea that is demonstrably, empirically false. And the hosts – seasoned NPR journalists – let these falsehoods not only sit there unchallenged but even gain credibility as they are further distributed over the airwaves. I’m not saying the callers are unintelligent or uneducated, or that my opinion is more valid than theirs. We can disagree when we are all speaking from a position of being informed. But there are times when they express beliefs about social conditions and historical events that are flat-out wrong. It’s not politically correct to say that, and the shorthand way of dismissing it is to call it “elitist.” Yet somehow it would be hard to imagine a patient opining about how to conduct surgery and the doctor having to follow the patient’s instructions because all opinions are equally valid. For example, when people base their opinions about immigration on the belief that immigrants are “streaming” across the Mexico-U.S. border, that crime rates are higher because of immigrants from Mexico or the seven banned countries, or simply that crime is at a 50-year high rather than a low, or that Obama increased the debt more than any other President – all measurably false, to remove any doubt – they are drawing conclusions and promoting remedies based on information and ideas that are completely erroneous. Aside from the moral dimension, it’s pointless to discuss whether building a wall is the best response if the so-called need for one can’t even be demonstrated in reality. If the press isn’t there to report the truth, and to call out misinformation in an adversarial way, who is?

But there remains this need for liberals – and dare I say it, white liberals – to “find common ground” and be reassured.  One problem is, it’s really hard to find common ground when you understand the policies of those people who disagree with you are actually going to cause you harm, if not kill you. This is a loud and clear message coming from Black America right now, whether in the form of two essential and devastating documentaries this season, I Am Not Your Negro and 13th, in the need for discussions of reparations as voiced by Ta-Nehisi Coates, or in the critique of broadcasters like Tavis Smiley, or on a less famous or public level, the lived experience of my students. It’s an uncomfortable truth that the ideal of “common ground” can’t fully be realized while “systemic racism” is a dominant cultural order, let alone one on the rise.

So when Andrew Solomon ends his interview by telling parents that it’s important to avoid heightening their kids’ anxiety – acknowledging that things may get worse for the world under the current presidential administration, but assuring them that “we’re going to be ok” – that seems to me less a prescription for lessening anxiety than a recommendation to teach your children how to practice denial. That may be not only how we got into this mess, but what will keep us from getting beyond it. It may make for nice parenting, but it is neither good journalism nor sound advice for the future of the planet. If you really believe you’re going to be ok in four years, you’re in pretty good shape – comfortable, sociologically speaking. We have to start by admitting that there’s a good chance most of us are not going to be ok – if you know anything about climate change and its consequences, which is to say, science – and that catastrophes like nuclear holocaust, genocide, widespread gun violence, and ruptured oil pipelines that can contaminate the water supply for millions and wipe out entire indigenous communities are preventable. But that’s only the case if we come together to start naming the truth, or short of that, seeking it out, and stop ignoring facts while our press looks the other way rather than confront dominant falsehoods, as is its job.

Our Flawed Political Leaders

6 Nov

I wrote this yesterday – just some idle thoughts on this political campaign season, a throwaway – and posted it on Facebook. Already at least three of my friends have copied and shared it. So I’m putting it here as a record that it’s originally mine (easier to archive than Facebook).

(Meanwhile I’ve been carrying two, maybe three more in-depth posts around in my mind since August, looking for the time to develop them further here.)

As I prepare to vote on Tuesday, I find myself thinking and talking more and more about Lyndon Johnson (whom my students have never heard of, incidentally). I was not old enough to remember him and the obscenity of our war in Vietnam (my first actual memories begin under Nixon), policies for which history has decided he was rightly despised. There is no apologizing for this; it is what brought him down and overshadowed his presidency and his legacy. He was attacked first by the progressive forces of Eugene McCarthy and Robert F. Kennedy, then undermined and succeeded by the likes of Nixon, with the assistance of Cold War Democratic hawks and Dixiecrats.

And yet, here’s a list of what this often duplicitous and untrustworthy wheeling-and-dealing politician oversaw in just five years of his Presidency: The Civil Rights Act (actually two of them, the second including the Fair Housing Act), the Voting Rights Act, the Immigration Law of 1965 that ended quotas and preference for white Europeans, the first Endangered Species Act, and the acts that created Medicare, Medicaid, the Equal Employment Opportunity Commission, Head Start, the Corporation for Public Broadcasting (NPR and PBS), the National Endowment for the Arts, and the National Endowment for the Humanities, and the act that made food stamps a permanent program. (Not to mention appointing Thurgood Marshall to the Supreme Court.) None of these, which have altered and improved the lives of all Americans, would have existed or passed without him – flawed, dishonest, and hawkish as he was. Fifty years on it’s hard to imagine American life without these, even though arguably whatever social justice they helped to bring about at home stopped at the border of our empire. It’s also hard for us to conceive of the kind of political imagination that could envision these dramatic improvements that reshaped American life in just five years. (Even as there were more progressive voices inside and outside of government, some of whom formed useful alliances and some of whom remained in opposition.)

Little of this happened without considerable pressure from the people – progressives who had been mobilized for years by violent and structural injustices in Southeast Asia and the U.S. South. As I’ve said before, democracy doesn’t end on Election Day; it begins then. No one we elect on Tuesday can or will be a savior. Not only is there no perfection, but it may take generations before we significantly change the course of our foreign policy away from its history of massive spending on war, violence, and weaponry, or address the damage we unleashed during “shock and awe” in 2003. Are we going to address economic inequality, racial injustice, or the steps needed to stop catastrophic climate change in the next four years? That’s up to us, but it will help not to have someone in office who would like to forcibly turn the clock back to a time before any of these social advances were part of the fabric of modern America. Nor will it help to have someone in office too flagrantly disrespectful of science, sociological evidence, public policy, history, tolerance, and gender equity to recognize the relationship between the folly of willful ignorance, nationalism, hatred, and catastrophe.

Held together by a “Skeleton Crew”

15 Jun

When I had menial summer jobs during college, seeing a good film the night before could upend the tedium of the entire shift the next day, as thoughts, impressions, and analysis of the film and its elements swirled in my mind. One of my strongest memories of this is of working in a library, where I was doomed to change the labels on the fronts of card-catalogue drawers all day – unscrew, remove old label and mylar covering, insert new label and mylar covering, add paper backing for thickness, rescrew the assembly onto the front of the drawer – for minimum wage (then $3.35/hour), when I chanced to see Paul Schrader’s brilliantly written film Blue Collar, already in revival, with Richard Pryor in perhaps his greatest dramatic performance. That film, about auto workers in Detroit, gave me plenty to think about through all the hours of the very next day, and I still remember how replaying it in my mind not only made the time pass but also let me bear down on the themes the film raised and think about the unavoidable conflicts, class-based as well as racial, inherent in industrial capitalism.

Last night I had the chance to see another scripted Detroit auto-worker story, this time the new play Skeleton Crew by the young playwright Dominique Morisseau, whose work I had not known until now; I’ll be keeping an eye out for it from now on. This is a play that, in capturing the precarious existence of even skilled, union workers in the contemporary American economy, gives me hope that our theatre can still take on significant economic and social issues with sophistication and empathy – that theatre can do so much more than entertain, by showing us the fragile humanity caught up in our crumbling economy. Our safety net has been ripped to tatters, even among the most strongly protected union jobs. Far from the labor optimism of Clifford Odets, we now feel as if we are watching the sun set on union protection, as individual self-preservation is pitted every day against collective solidarity, because advancement comes at a moral cost. In this sense, Morisseau’s play evokes Arthur Miller’s tragedies of psycho-economic conflict (Death of a Salesman most famously, but even more strongly both The Price and All My Sons). The dialogue is natural and naturalistic, yet at times it has a tone as precise and ringing as that of The Crucible.

I’ll leave it to Ben Brantley in his rave review to provide more background on the plot. In brief, each of the four thoroughly drawn characters occupies a tenuous position in the work hierarchy of a Detroit auto plant in danger of shutting down: the union rep, two others who work with her on the assembly line, and the supervisor, now management, who has risen from the union ranks to a teetering position in the middle class. We learn early on – though not all the characters know – that the plant will close, and it is up to the supervisor to make recommendations about who will be fired in advance or laid off, who will be transferred to other plants, and who gets a good severance package, while the union has to scrape and scramble to protect its dwindling and vulnerable workers. One of the workers has just bought a house, one is a year away from full retirement benefits, one is saving up to start a small business, and one is about to go out on maternity leave as a single mom.

The play is not just an indictment of our economic system – our collapse as a country when it comes to providing a decent standard of living to increasing numbers of people (a collapse faster than Europe’s) – but also an inquiry into what happens to people morally when they get close to the line that separates management from workers, and those who think they can become secure from those who see themselves sliding into peril. We all have enough personal flaws and financial soft spots (cancer, for example) to bring us down. But the question remains whether the moral response is enough to offset the effects of an amoral economic system. Still, nothing in the play is contrived; there are no devices to move the plot forward, no sudden second-act revelation of secrets that forever changes the characters and the way we understand the play. Life plays itself out without, as Brantley observes, melodrama. All of us who have worked in an office setting know the complicated ways that office mates get to know one another with a special kind of intimacy, as friends and sometimes not as friends, even though we can spend as much waking time with them as we do with family. The nature of the work relationship is different from worker and class solidarity – it is more complex, even in union shops (which I now know, working for the first time in my career in a unionized position). Friendship, comradeship, power plays, conflicts are all there, and we come to care about one another because of our frequent and purposeful contact.

This is highly engaged and perceptive theatre. What it offers over film is the intimacy of getting to know four complex and multidimensional characters by being physically close enough to touch them. And in so doing, and in seeing them in the flesh, as opposed to a two-dimensional screen, we can identify with their pain and anxiety, as (if) we come to know them. The actors have to become the people such that not one sentence can sound written.  The repartee, the comebacks, the conflicts must remain spontaneous.

Yet at the same time, there is the paradox – external to the play itself – that people who share the background and social status of the characters could not afford to see this production, even at off-Broadway prices. For that reason (among the demands of real life), I have not been able to see as much contemporary theatre as I would like, so I cannot say categorically that this kind of new social realism is rare, but I suspect it is. I hope it’s part of a new wave.

The reason we remember Miller, Odets, and Lorraine Hansberry is that they expose something real yet complex about the relationship of individuals and families to the economic matrix. Perhaps this is what it means to be American in the post-manufacturing age. And even though race hovers over this play and the deep vulnerability of its characters, the racial positioning of the characters themselves is far more ambiguous and complicated than what Hansberry’s Younger family had to deal with: both moving into the middle class and remaining in the working class are fraught with dangers of different kinds.

All of these tensions become that much more heightened as – in every industry, whether manufacturing or healthcare or higher education – fewer and fewer full-time workers relative to the growing need are being asked to do more, work more, give more. We are all becoming the very skeleton crews keeping this nation’s professional engines generating, whether products, service, care, or knowledge, while our brother and sister workers, and dads and moms, are severed, cast off, demoted to precarious, contingent positions or, as the play points out, moving from skilled labor in auto plants to jobs with no human impact in copy centers. Unemployment may technically be low by quantitative measures, but there remains just a skeleton crew doing meaningful work, in both the middle-class and the working-class, leaving bare our open wounds of aspiration.

The Privilege of ‘failure’ in the precarious economy

1 May

I sat tensely with the search committee in a conference room for the final interview for an academic tenure-track position.  I really needed the job, having been on the job market for three years and having only been able to find part-time, contract, or adjunct positions for the past two.  This was the first interview I had gotten as a finalist in all that time. One of the committee members turned to me and asked, “We noticed that ten years ago, you left your position [as executive at a non-profit] after less than one year.  Can you explain to us why you left that job after such a short time?”

I was prepared for the question, so I gave it my best spin. I had had a conflict with the board president; I had made all these innovations and had measurable successes in that position, successes recognized within the larger community; but the board redefined the position, demoting it from executive to office manager, because they decided to retake control of the organization’s day-to-day operations. Yet any way I spun it, without independent corroboration, the story left open the possibility that I was difficult to work with, uncompromising, a poor communicator, or even incompetent. And for all this to emerge in less than a year on the job suggests either a disastrously bad tenure or an unforgiving board with no patience for disagreement.

Though one can never know if there is one definitive reason, needless to say I didn’t get the position for which I was interviewing.

I have never used this blog for trolling, settling scores, proving my political correctness, or sour grapes.  But I think it incumbent to point out that I’m willing to bet Johannes Haushofer has never faced such a question in any of his job interviews.

The reason I’m posting this entry is that no fewer than four people I greatly respect have re-posted Princeton University Assistant Professor Haushofer’s so-called “CV of Failures,” or articles about it, on Facebook, where it garners the predictable number of ‘likes’ from students and other academics. Even NPR, as it is wont to do, made a lighthearted report on the topic on Morning Edition. One reporter wrote that the takeaway lesson from this is that “The real tragedy isn’t these failures — it’s when these failures convince people to stop trying.”

Even Melanie Stefan, the professor who wrote the original article on which the idea was based, drew two conclusions: this same one (“we construct a narrative of success that renders our setbacks invisible both to ourselves and to others. Often, other scientists’ careers seem to be a constant, streamlined series of triumphs. Therefore, whenever we experience an individual failure, we feel alone and dejected”) and what I think is a more valuable one: that even the most successful scientists and academics face a ratio of about six failures to every major success. That latter point is a valuable, and encouraging, insight.

But from their lofty positions at Princeton, Harvard, Oxford, Cal Tech, and Edinburgh, both Haushofer and Stefan miss the economic context. And that’s what makes this approach, to me, so infuriating. Both are writing from positions that assume the inevitability of ultimate success and security – the uppermost echelons of academia, especially for such young, promising scholars who landed top positions right out of grad school. But the adjunctification of higher ed (not to mention global poverty and the precarious economy) guarantees no such narrative of success for most of the people taking part, even those from top-tier graduate programs. If you fail six out of seven times but still end up with a position at a university in the ranks of Harvard or Princeton, then yes, by all means, teach younger people not to give up or get discouraged. But be careful to avoid an error in logic. There’s a big difference between the lesson “you’ll never get a position (or anything you want) if you give up” – which is logically true – and the lesson that “if you never give up, eventually you will get a position (or whatever you seek).” The latter is a logical fallacy. There is no demonstrable guarantee that refusing to give up will lead to success. Or, put another way:

Giving up → No success

but the inverse is not true:

Not giving up ↛ Success (the slashed arrow meaning “does not lead to”)
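This is the classical fallacy of the inverse (denying the antecedent).  For readers who like it spelled out, here is a minimal formalization – the symbols are my own shorthand, not part of the original argument:

```latex
% Let G stand for "giving up" and S for "success".
G \implies \neg S   % the defensible claim: give up and you won't succeed
\neg G \implies S   % the fallacious inverse: this does NOT follow
S \implies \neg G   % what DOES follow: the contrapositive --
                    % success requires not giving up
```

In other words, not giving up is a necessary condition for success, never a sufficient one – which is exactly the gap the pep talks paper over.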

The other moral and practical part of the lesson that they leave out is that it’s not enough simply not to give up; learning from one’s failures is a central ingredient in overcoming them.  Picture the analogy of the fly trying to get through the glass window pane, or better yet, the sperm trying to fertilize the egg, because in this day and age there aren’t enough eggs to go around.  You not only have to figure out a way through or around the glass you can’t fully comprehend, but you have to do so in competition with dozens if not hundreds of others.

The narrative of inevitable success is also based on a fallacy of logic.  The 6:1 failure-to-success ratio of the most successful scientists may be true, but in an age of declining positions for full-time academics (not to mention other industries) and economic precariousness, that ratio is much, much higher even for those like me who nonetheless have ended up with great tenure-track academic positions.  And for many, the ratio is effectively infinite, since they may never get the full-time position they seek.  Two successful tenured faculty members told me during my job search, “Oh, you’ll be just like [so-and-so], who did great academic work but never had a permanent position.”  I could take that to the bank as I was fighting to make a mortgage payment (and went three years in middle age without health insurance).  For those who are fighting to find such a position, competing against literally hundreds of other applicants, being able to release a “CV of Failures” is unrealistic – they’re too busy trying to conceal their failures from the search committees.  The failure may not be a rejected fellowship or grant proposal, but rather not having any successful ones, or having been fired, or poor evaluations, or gaps in employment.

Having been on both sides of this, I can say it’s a lot easier to stomach rejected grant proposals when you have a reliable batting average such that you “know” you will get some.  Many people are not in a position to get any grants.  Or, for example, the last NEA proposal I ever wrote – and, in my opinion, the best – was never considered by the panel because I was fired from my job before I could send in the required artistic samples to complete the application.

In a market economy with a high level of precariousness and underemployment, in which at this point a small minority of qualified people will be getting the academic positions for which they have trained (unlike, say, the medical or legal professions), any real failure, any real negative mark on your CV, is going to be enough to disqualify you permanently – or you may at least have the very real fear that it is so, even if you never give up.  That’s why only those who have reached a certain level of the highest academic success will dare to display their so-called “failures,” because they are in reality failures without consequence. And being able to have failures without consequence is a great privilege.

Irony of ironies, one of Professor Haushofer’s research areas is the psychology of poverty. And it is here that he shows himself to be tone-deaf.  In the abstract to one of his articles from 2013, Haushofer notes, “low incomes predict lower intrinsic motivation and trust, less prosocial attitudes, and more feelings of meaninglessness…Income inequality is an additional predictor of psychological outcomes across countries, and is associated with loneliness, short-sightedness, risk-taking, and low trust.” Presumably, adjunct professors would fall into this category of low-income workers. So wouldn’t people with that psychological profile be more vulnerable to failure and frustration? In another study, co-authored with his thesis advisor, he demonstrates that poverty leads to stress and other negative psychological states, and to feedback loops that reinforce behaviors that maintain poverty. The causality is in fact well-established.  Interestingly, in one study they cite, farmers worried about their economic status performed worse on cognitive tests when reminded of their financial woes.  If that finding holds true across professions, it could easily feed the loop in which those in low-paying academic positions face greater stress about success and are more likely to underperform throughout all stages of the job search process. So while not giving up would be good advice even for them, he should know from his own research that such a pep talk (let alone listing one’s failures publicly) is not going to be enough to overcome the psychological effects of years of low income in underpaid, overworked adjunct teaching positions, in which one also fights against feelings of “meaninglessness,” especially in terms of unsupported or self-supported research.

As for me, I can encourage my students and friends not to give up and to tell them privately that I faced many setbacks in my job searches, but that I always tried to learn from each setback.  As for listing my failures publicly, I’ll wait until I have earned the privilege of tenure, a level of protection and job security that also happens to be under threat from an ever more precarious economy.

Addendum (24 June 2016): Here is another example of the parade of failure as a status symbol by the highly successful and privileged.  The idea that “a stressful and potentially embarrassing experience [can be] spun into an opportunity,” or that, as “Any TED talker could tell you, failure is so hot right now,” applies only to the most securely successful and well-paid in our workforce. For most people in this precarious economy who lose their jobs – jobs that pay considerably less than the $300K a year on which, with any savings, he need never really work again – losing one’s job is a disaster, financial as well as psychological (if you’re not so cocky as to know someone else in that same social class will snap you up).  For most people, even most professionals, who lose their jobs, this is no laughing matter.

The Feast Day of Óscar Romero

24 Mar

I have to start by saying I am not a Catholic, so everything that follows here comes from someone outside the tradition.  And yet, every year when March 24th comes around, I find myself meditating on the life of Archbishop Óscar Romero, who was assassinated on this date in 1980 – just before, full disclosure again, I became involved in movements to keep the U.S. military and government from intervening in the wars in Central America.

Then I come across this Vatican Radio article (on Facebook no less) about the first occurrence of Romero’s Feast Day since his beatification by Pope Francis.  I don’t know how such things work, but I find myself moved that his first Feast Day can’t be celebrated as such because this year it falls on Holy Thursday.  I’m deeply moved, too, that there is also an effort to beatify Fr. Rutilio Grande, who is known to Pope Francis himself.

Why does this matter, even to nonbelievers?  I mean, even if the idea of a Feast Day or a beatification has no meaning for you, what is it that makes this stand out as so important?  From my perspective, there are two reasons – two very powerful, even emotional, reasons.

First, especially in light of the illegal and inhuman EU-Turkey refugee deal last week, when all the governments of Europe and Turkey conspired to deny the most vulnerable populations of refugees their human and legal rights, it is clear that we live in a moment of such widespread mediocrity in our leadership worldwide (with the exception of the current Pope – but any others?) that not one of these European leaders will go down in history leaving behind a single memorable legacy or achievement.  Every one of them is ultimately forgettable, because they stand for nothing when it comes to culture, morality, building a better and sustainable future, equity, vision, community, or humanity.  And at the worst possible moment: the decisions and actions we take over the next ten to twenty years in response to climate change will determine the fate of human civilization on this planet.  We can’t wait a couple of generations for better and more moral leadership to come along.  Time is running out, and the leaders we have selected are bureaucrats more interested in national security than human security, and in profit and privatization instead of mutually supportive economic practices and goals.

Second, especially in the United States, the idea of morality has been almost completely hijacked by the right, in ways sanctimonious in principle and hypocritical in practice.  Moral leadership has become aligned with conservatism (even as the conservatives running for President sink to new lows in rhetoric and the morality they express from the campaign podium).  To read about the moral courage of someone like Romero amidst the background noise of the gutter and trivia of American politics is to throw our true bankruptcy into full relief.  And again, at a moment in geological time when we have to make life-or-death decisions determining the fate of life on the planet, our front rank of leaders (many of them elected) exhibit such moral weakness in the face of multinational corporations and the seductions of quick profits that we have little hope of ever finding our better selves, let alone putting them effectively into action.

It’s not a question of waiting for leaders as if they are going to be anointed and appointed from party apparatuses above to look after us.  It is instead a question of how exceptional and moral leadership bubbles up from the bottom but inserts itself by shifting the channels of power like small rivers that set their own course as they stream.  Think of not only Rutilio Grande, a priest rather than a member of the hierarchy, but of young leaders like John Lewis and Cesar Chavez and Berta Cáceres.  I’m sure I’m not the first person to pursue that mystery of our culture: Why is moral courage despised, to such an extent that we reward and let our world be run by those who instead value expedience, profit, and self-aggrandizement?  How did we develop a social system that from an evolutionary perspective works against our own species’ long-term interests?

“Deceit is in the hearts of those who plot evil,
    but those who promote peace have joy.” – Proverbs 12:20


The decimation of democracy’s critical class

13 Mar

There are two social institutions whose independence and viability are essential for the functioning of a democracy, and which are vulnerable to structural dismantling in a way that takes at least a generation to repair.  They are vital for a democracy precisely because they muster the ability to criticize, question, and push back against Power – against the walls behind which government, business, military, or religious institutions exercise control.  These sectors are the press and higher education.  They operate, outside the walls, not as isolated voices but as collaborations of research and revelation: networks of thousands of individual voices operating as a chorus, with a shared commitment to uncovering and approaching the truth and then disseminating their findings to readers, students, and other colleagues.  (And I’m under no illusions about any kind of uniformly enlightened academia or journalism – there can always be reactionaries and hacks in any large tent.  But then again, the complexity of those sectors makes such a spectrum possible.)

It is generally accepted worldwide that a free press is absolutely essential for that reason; the same is not as widely accepted of universities in that quasi-constitutional sense, because there are those who believe universities exist only for the teaching and mastery of job skills, not for the independent voice of social critique.  After all, freedom of the press is enshrined in our Constitution, though academic freedom is not.

The press does not exist merely to record and transmit the official story.  Universities do not exist merely to provide job training for future workers who will serve government or business without being called on to make decisions.  The basis of living in a democracy is the right to participate in decisions about the community’s future, and the basis of being a moral and effective worker is the ability to have a say in decisions that affect the corporation as well as the surrounding environment (natural as well as human).  The essence of good decision-making is not just critical thinking but also a well-developed body of knowledge about the issues before us – knowledge that can be complex, but that includes valid evidence and perspectives rather than ignoring them.

In this context, it is frightening to read, in a very moving investigative article by Dale Maharidge, that the number of full-time reporters for daily print newspapers in the U.S. has dropped 40% in the past nine years, and that the rate may accelerate. (This article is devastating and really worth reading, and was the inspiration for this post in the first place.)  As Maharidge makes clear, it’s not just that daily print newspapers are being replaced with Internet journalism.  First, older reporters with long and local historical knowledge are being let go while inexperienced younger reporters step in.  Second, web-based news is more likely to be national or global rather than local and, even worse, as Maharidge contends, more likely to be centered on celebrity and what is entertaining rather than on what has implications in people’s lives.  But perhaps most devastating is that this new generation of freelance journalists is being asked to work or write for little or no pay, or at best is paid only for the stories it can sell.  Certainly only in the rarest cases are Internet reporters well-compensated and receiving benefits, although recent unionization at Gawker and other news websites is an encouraging start.  At the same time, the type of stories being covered is changing, from local and hard news that require interviewing and digging to pieces that are either unquestioning repeats of political declarations by our leaders or that are entertaining (including fear-mongering as a form of horror-show entertainment).

A 40% cut in practicing, full-time personnel would be devastating to any industry.  Not just for the lives and families affected, but for the loss of output, historical knowledge and knowledge of the craft by the elders in the field, and for the inevitable rush to the center among the survivors.  Picture a fishing vessel facing waves crashing over the sides and sweeping the crew overboard.  Those who want to survive will run towards the safer center and cower, rather than ever risk standing near the edge or exposing themselves to risk of any kind.

That kind of sizable cut would also imply that, even assuming the nature of news stories were to stay the same, there would be that many fewer stories exposed by the press, because there would be more topics than the remaining writers could accommodate.  Imagine a 40% cut in the number of stories about climate change, for example, or remaining reporters now having to cover, say, the environment as well as another beat.  They won’t be able to produce as much or investigate as deeply or broadly, and will also have to master multiple fields with professional sophistication in order to interpret what they are being told. (I gnash my teeth sometimes when I hear even NPR reporters who can’t get the details right in immigration law reporting. And we are all still waiting for just one reporter with evidence to confront Ted Cruz on his oft-repeated claim that Obamacare has cost thousands of jobs.)  Put another way, instead of 50 reporters on the ground covering a war, now there would be 30; or there might be 50, but they have to cover more countries and more conflicts, and obviously can’t be in two places at once.  Stringers are constrained by having to write what will sell, rather than having the financial support of a newspaper to pay for their livelihood while they dig.  In every case, depth as well as the inductive and experienced knowledge from being on the ground are sacrificed, and can’t be easily recovered.

Once the business plan of daily newspapers and the field of journalism in general shifts to such an extent that so high a percentage of practitioners is lost, it’s hard to imagine an equal and opposite reaction on the other end.  In other words, the proverbial pendulum may not in fact exist, and there won’t be a time when suddenly there’s 40% growth in jobs in declining industries like print media.  Newspapers are shutting down much faster than they are starting up.  After all, even if there is a massive rehiring, it will take at least a decade for all the new hires to begin to acquire the kind of experience that presumably makes specialists wiser and more able to develop a network of sources.  (Personal pet peeve: there is nothing I hate more than random “person-on-the-street” sound bites – airing the impressions of totally uninformed or prejudiced people, and usually just one at that – especially in lieu of interviews with informed parties on multiple sides of an issue.  But I will return to this in another post.)

The same goes for universities, especially researchers and writers.  Much more has been written on the shift over the past twenty years from full-time faculty, engaged in research and writing as well as teaching, to adjuncts hired to teach only, and at such low wages that they are forced to take on extraordinary teaching loads to make ends meet.

Universities are famous worldwide as crucibles of dissent and of research and science (no contradiction there).  And while teaching the young – not just teaching material but teaching the right to question – can be an exercise in freedom, the time and resources to conduct research are at least equally important.  It’s the R&D division of democracies, if you will – and what company can innovate and respond without investment in R&D?  Wipe this sector out and you wipe out an entire intellectual class (like it or not, for millennia every complex society has had its scholar class).  If governments and church denominations can control universities – especially the time and liberty to conduct research, as well as what is taught and what is disseminated to the public – then the critical potential of universities can be circumscribed.  In its most extreme form, this state or military control can lead to the assassination of university leaders, faculty, and students (for example, the murder of the Salvadoran Jesuits at the Universidad Centroamericana in 1989).  But there are more subtle and systemic ways as well: tying research funding to military and business ends, cutting government funding, and most recently, filling boards with figures from business, not academia.  As many have pointed out, this leads to restructuring the faculty so that the majority of classes are taught by underpaid, contingent workers with neither job security nor research portfolios, rather than comfortably-paid professors with lifetime appointments, institutional memory, and the ability to work with students on social and political issues without fear of losing their jobs.  I’m not saying anything new here that hasn’t been said and documented in more detail by others – both the “adjunctification” of universities and the retreat from enlightenment, if you will, described by Jane Jacobs as well as, most recently, Marilynne Robinson, among many others.

In about 25 years, the percentage of college courses in America taught by full-timers has dropped from about two-thirds to 30%.  The number of full-time faculty has not expanded with the increase in the population attending college, meaning that student-faculty ratios have increased, as have faculty teaching loads.  The emphasis is less on the productive work of professional intellectuals as scholars and more on providing credits for students to obtain their degrees – and in fields in which they are more likely to be able to pay off their debts, because tuition has outpaced inflation and college is actually harder to afford now.

As I said, others have written about this more than I have, and even I have written here about some of it.  But here’s the significant point: in one generation, American universities have changed to a business model that favors training, employment, and paying off debt (for alumni), and part-time, contingent work over lifetime investment in faculty to do work including research, writing, and occupying a critical role in our society.  Adjuncts can be outstanding teachers, but their job function does not permit them the time or resources to be researchers or voices of conscience.  And then, will it even be possible for current graduate students and undergraduates to find full-time careers as scholars and professors?  Some will, but how many – and who – will be sacrificed in the name of competition?  (A little bit like the journalists who are getting laid off.)  My heart broke for the young poli. sci. major from Florida who told Hillary Clinton in the Miami debate that she wanted to go on to get a Ph.D.  Sure, we need people like that, but will there be enough chairs in the market for her?  Or will she invest 5-10 years of her life only to get jobs that pay, total, $25,000 a year with no health insurance?

As for the research itself, why wouldn’t you want to be creating positions for more medical researchers, more sociological researchers, more science researchers, to address the most pressing problems of our time?  After all, if you want to find a cure for, say, colon cancer or dementia, why wouldn’t you want more researchers working on it, involving more young people in the research and showing them the ropes?  It’s simple common sense that 200 scientists working on a problem or treatment are more likely to come up with useful results than 120 are.

It’s going to take a lot more national imagination to figure out a way to restore that intellectual class, including a restructuring of education funding so that tuition doesn’t become the main economic lifeblood of every college and university.  That not only makes students feel they are “consumers,” it also means there is less money to invest in projects that may or may not produce significant short-term results.  Such a renaissance of what universities can achieve for democracy and humanity is years away.  The same goes for rebuilding the journalism industry: it’s not just local print dailies, but the kinds of stories and reporting – and, as a by-product, the civic involvement – they were able to support.  That means getting readers interested in learning what is going on around them, not just parroting and reinforcing their prejudices or following their favorite celebrities (including news personalities) as news.  Yes, the next generation could take this on, with the help of current (tenured) academics and experienced reporters – if they can find the money to support such work.

Alarmingly, we’re at a historical period when we really don’t have time.  The press and universities cannot be absent at what all evidence suggests is a crossroads in our decisions about how to handle climate change and whether or not to continue extracting fossil fuels.  Unlike past generations, this generation has the unique timing to come along when the decisions we make will affect habitability for the next few centuries, if not the fate of humanity itself.  We don’t have twenty years for universities and the press to come up with a critical agenda of questions and answers to allow us to find solutions and grill our elected leaders to do the same.  The disappearance of universities and reporters as significant critical voices is coming at the worst possible time, and we haven’t even found a way – or the political will – to begin to reverse the trend.


Chris Rock: Risky Insights or Flat Notes?

29 Feb

Even before last night’s Academy Award ceremony was over, online columnists were congratulating Chris Rock for his “thorny, meaty, and hilarious” and “brilliant and brave” opening monologue. While he certainly made points that took the Oscars in a better direction – and spared us the nearly revanchist embarrassment of Neil Patrick Harris as host – perhaps I am alone in finding it flat and oddly reassuring when it could have been risky and provocative.

I applaud the way Rock explained and exposed “sorority racism,” but it was also an opportunity to introduce America, using biting satire, to structural racism more broadly. Let’s recognize that Rock is now of a stature where he has little to lose.  So to say that the issues people are facing today are less serious or significant than those of fifty years ago may be, on some level, empirically true, but structural racism, economic and educational inequality, mass incarceration, and police violence are still pretty significant manifestations of oppression that continue today.  It’s not just a question of “opportunity.”  It is a question of inequity when it comes to who produces pictures, who is hired in positions of power and decision-making, and who is actually purchasing most of the movie tickets in this country and this world.  Structural racism is perpetuated by those kinds of disproportionate imbalances between the producers, the artists getting work, and the people buying the tickets and buying into the dream.

So no, I don’t think his speech would qualify as “meaty” or as “brave.”  The past couple of weeks, I have been showing my students films about the Black Panthers, Nina Simone, and Tommie Smith and John Carlos – people who risked and sometimes lost everything, and yet who are unknown to college students (even adult ones) today.  Compare them to Beyoncé, for example.  I found Rock last night to be not his usual edgy self but safe, even at times reassuring that we will be able to get past today’s issues.  To me his message included a kind of subtext that said: Hollywood, we know that once you provide more opportunity, things will get better – without really digging into why it’s more than just a Hollywood problem and more than just an opportunity deficit.  He had the stage and could have had a moment where the satire was as sharp as it has been in the past, but to me his message was blunted.

But one line really bothered me.  The joke, quoted as “When your grandmother’s swinging from a tree it’s really hard to care about best documentary foreign short,” hit a sour note for me for several reasons.  First, lynching is one of those rare topics that to me doesn’t belong in any joke or any line that’s going to end with a laugh.  It’s beyond the pale, even if the satire is in the service of a larger, just point.  I know he wasn’t making light of it in any way, but even as a throwaway in an intro, the image is too horrible to turn around and laugh at, no matter the gallows nature of this particular humor.

But second, and more subtle, is that “best documentary foreign short” actually is very important, for all of the same reasons we struggle for diversity.  Those other categories at the Oscars, especially the documentaries but also foreign films from time to time, are exactly where the issues of racism, violence, injustice, and so forth have been openly discussed when Hollywood and mainstream cinema have been way too timid to take risks.  People need to see those documentaries, precisely because they are not trivial, because they do cover lynching. In fact, this very year’s winner for Short Documentary is about honor killings in Pakistan, which are indeed lynchings – some irony there, no?  (Shout out to director Sharmeen Obaid, who generously agreed to meet with me and my students at last year’s Tribeca Film Festival showing of her radiant documentary Song of Lahore.)  Without documentaries like this one, how would we know or learn about killings like this around the world?  While I haven’t seen it yet, I am also eagerly awaiting 3 1/2 Minutes – 10 Bullets, which was short-listed this year though not nominated.

To be brave you have to risk something.  To be meaty, brilliant, and thorny you have to provide insights that don’t just voice what most of the people in the room would like to say, but that take them to a different level of understanding or provoke them to investigate further.  With great respect for Chris Rock’s career, I don’t think last night he achieved either.  Then again, I grew up in an era of Oscar telecasts with acceptance speeches that included congratulations from the Viet Cong; condemnations of fascism, McCarthyism, and anti-Semitism; parallels between U.S. intervention in Central America and Vietnam; the role of American corporations in the nuclear weapons industry and pollution; and more recently speeches by Michael Moore and Errol Morris two years in a row.  None of these Oscar speeches was so celebrated, and in fact most were derided as inappropriate.  Sad to say, but the Salon and Mother Jones commentators may be too young to remember when political protest wasn’t so safe and watered down as it is today.

*   *    *

On a brighter note, lost in the discussion of the absence of people of color in the acting categories, was the fact that of the award-winning filmmakers themselves, the winners’ circle was actually quite diverse.  The Best Director award went to Mexican director Alejandro Iñárritu for the second year in a row – and in fact the third consecutive year for a Mexican director.  As mentioned above, the winner for short documentary was Pakistani Sharmeen Obaid, winning her second Academy Award as well.  (She may be only the second woman to win two directing awards, after Barbara Kopple – I’ll have to check.)  The animated short film was directed and produced in Chile, and the director of the feature documentary, Amy, is British of South Asian descent.  Also, the producer of the full-length animated film is a U.S. American of Latino background.

While much can still be written about American cultural and cinematic hegemony – after all, there are thriving and major popular film industries coming out of India, Hong Kong, Mexico, and many other countries, yet only the American Oscars are seen worldwide, and American films are exported with more force behind them than other countries’ films – the Oscars are changing, with more foreign films and directors getting some recognition, or even work.  Who would have thought that the directors of Amores Perros or Y Tu Mamá También would come to Hollywood and win Oscars?  There is much here in the hidden diversity of Hollywood to be written about later – such as why, for example, no one ever seems to acknowledge that since 1980, there have been 33 women directors who have won Oscars for documentary films, including Barbara Kopple and Laura Poitras.  That doesn’t excuse structural sexism – why women get to direct documentaries, and usually shorts, but not feature films – but it does complicate it.  But more on this another day.

*   *   *

Finally, very happy for Mark Rylance.  (Again, time was when the Academy would have given it to Stallone for sentimental and commercial reasons.)  Though I haven’t yet seen Bridge of Spies, he is one of the great actors of our time.  I’ve been privileged to see him onstage three times: in Cymbeline in New York, where I first took notice of him (especially when he played multiple characters in the same scene) and when, as he told Leonard Lopate this week, he was a complete unknown; then in one of his Tony-winning performances in Jerusalem, also a great play; and in London, as Richard III at the Globe.  I missed two or three of his great performances in New York, but the last few years have been tough for me to see a lot of theater.  It’s great that more of his work is being recorded for the big and small screen and that he’ll be recognized by a larger public.  I hope he gets some leading roles in film now.

*   *   *

Addendum (six days later): On the night of the Oscars, when I wrote this, I missed the middle third of the program while driving home.  For that reason, I didn’t get to see the appearance of the three young “accountants” – an Asian joke by Chris Rock that also included a Jewish stereotype.  Had I seen that, I would have had lots to say about it too, another example of how Oscar broadcast writers never get it right, even when they have an opportunity for redemption.  I don’t know if Rock wrote that joke himself, or if it came from writers he hired, or whether he had the veto power not to deliver it – only that apparently the kids’ parents were not in on how their children would be used as the butts of the joke.  I also don’t know why the Oscar telecasts have lately been so badly written (not to mention inappropriate, unfunny ad libs like Sean Penn’s gratuitous “who gave this guy a green card” before announcing a winner last year).  Also glad to see that much of the press since last week has been more critical of Rock than the first reports I wrote about, especially this response in Colorlines.  Strangely enough, back in the day when the choices were much more artistically conservative, I don’t remember this kind of controversy ever arising from the monologues of Johnny Carson or Billy Crystal.  So given that we like to think we have made so much progress when it comes to racial justice and equality, why have we become more insecure and more threatened about the topic of race, and why do we seem unable to find humor that doesn’t reinforce old forms of domination?

bell hooks to the rescue

1 Feb

I know that doubt can be one of the hallmarks of good teaching.  We want students to feel encouraged to challenge and reconsider their beliefs, especially those prejudices they have adopted without much thought and certainly, by definition, without considering the evidence.  But there’s another kind of existential doubt, when we’re so hammered by all the problems facing us in the world that we end up questioning the centrality of aspects of human life that we enjoy.  Can we make art, let alone study it, at a time when it is becoming clearer that without concerted action, climate change could kill us all?  And, given that citizens (and voters) are making choices about future leadership at a time when they are woefully uninformed about politics, current events, and science, what is the importance of studying the arts?

I know.  I’m not so doctrinaire as to believe we can have a society without art, or education without art.  Actually the opposite: I have always had a knee-jerk sense that arts and music and literature education have benefits that go beyond critical thinking and the wonderful list devised by Elliot Eisner.  But one place where I have gotten stuck is on the politics of the arts and arts education.

Doing my class reading for this week, I came across the following in bell hooks’s book of essays, Art on My Mind: Visual Politics:

“There must be a revolution in the way we see, the way we look.  Such a revolution would necessarily begin with diverse programs of critical education that would stimulate collective awareness that the creation and public sharing of art is essential to any practice of freedom.  If black folks are collectively to affirm our subjectivity in resistance, as we struggle against forces of domination and move toward the invention of the decolonized self, we must set our imaginations free.  Acknowledging that we have been and are colonized both in our minds and in our imaginations, we begin to understand the need for promoting and celebrating creative expression.”  (p. 4)

And this:

“Recently, at the end of a lecture on art and aesthetics at the Institute for American Indian Arts in Santa Fe, I was asked whether I thought art mattered, if it really made a difference in our lives.  From my own experience, I could testify to the transformative power of art.  I asked my audience to consider why in so many instances of global imperialist conquest by the West, art has been other [sic?] appropriated or destroyed… It occurred to me then that if one could make a people lose touch with their capacity to create, lose sight of their will and their power to make art, then the work of subjugation, of colonization, is complete.” (p. xv)

With these words, I know I am on the right course, and that this conversation is vital, even/especially in the context of sociology.

Believe me, it is so easy not to practice, even when there is such urgency to practice, to create, to push ourselves to make work that transforms, or even just questions, the status quo.  Every school district decision, every budget cut that reduces arts and music in schools performs that work of subjugation.  And if you can’t imagine, you can’t be free; you can’t envision anything better or different, and you become simply a prisoner of what the state wants and needs you to be.
