Language Isn’t What You Think


     When you think about it, the evolution of language is a compelling topic for literary folks, and ought to be required study for literary critics. People have an innate capacity for language. The neurological center—what we might by analogy call the cellular “processor”—lies in an organized nucleus of cells in that part of the brain right behind your left ear. Language is not a man-made capacity: it is hard-wired in, as are the other senses, such as eyesight. Our vision has evolved to detect a useful, finite spectrum of electromagnetic radiation emanating from the outside world. Using eyesight, we can detect important things out there: I can see the prey I want to kill and eat, notice the vegetable world from which to select edibles, ogle the other members of my species with whom I long to mate.

     There are those of us human beings, of course, who have preferred to mate with other species than our own. The example of shepherds lying with their sheep is Biblical in scope, and I myself have known a particular farmer who would have sex with one of his cows. The give-away was the animal hair and fecal matter spread all down the front of his overalls. And as I recall, Governor Bradford in the Plymouth Colony hanged one of the original pilgrims for having sex with a turkey. They also hanged the turkey, which is sadly, grimly humorous. Those first pilgrims meant business.

     With all this acknowledged, no one would say that the interspecies sex was a consequence of poor eyesight. They could see what they were doing, make selective choices among alternative beings in the world—because their capacity for vision referred to a material world existing outside of their mental activity. 

     You can maybe imagine language acting in a similar way. Spear in hand, you and your companion are out hunting for a woolly mammoth to kill, when the guy beside you abruptly yells “Run!” or something similar. In this way language might be immediately useful, multiplying the scope of the other senses, which have also evolved to respond to environmental events. The immediate assumption might be that the language has expressed the need for intelligent, discriminant behavior, quickly executed in the material world, in response to changing material conditions. It wouldn’t do, for instance, to run toward the source of threat—and in fact, if your companion took the necessary time to do the thing rightly, he might yell “Run from the charging mammoth directly to our right.”

     The immediate assumption might be that the eyesight detected something in the environment to which the imperative linguistic product referred—and referred as well to the speed of the approach, to the direction from which it was advancing, and perhaps even to the intended mayhem that the advance suggested.

     Those philosophically minded hunters for whom language did not refer to any referent, for whom no real ‘signified’ existed behind the ‘sign,’ might prefer to deconstruct the etymology of the verb ‘run,’ to quibble with the definition of ‘mammoth,’ or to be concerned about the inaccuracy of the word ‘right.’ However, that misconception of language would carry its own sad correction, and our brainy hunter would not live to reproduce either with his own species or with any other preferred choice.

     These days, those philosophically minded hunters roam through many university literature departments—where they are also about to become extinct, I fear. But that is the subject of another conversation.


    “If I now tell you that my old dog, with his few sad last grey hairs, is sleeping by my woodstove, I trust you would not come into my house expecting to find an elk, and that if you did, I would be justified in believing you were fucking weird and never letting you near my dog again.”

      I have already asked you to imagine yourself as a neolithic hunter roaming around, spear in hand, and using language to negotiate dangers originating in the natural, unconstructed world. This time I’d like to imagine something a bit more probable: that we are contemporary neuroscientists. As such, we can acknowledge our incredulity at post-modern language theory—because we are starting with a different concept of evidence, and indeed with a different conceptual pedigree entirely. As scientists we are looking at the neurological bases of behavior, the source of which is an organ—the brain—that has evolved over immensities of time, in response to uncountable numbers of environmental interactions, so that its capacities are determined according to its fit in its material niche. There are other niches, but we do not fit in them: for instance, we cannot breathe under water, we cannot eat bamboo for any length of time and survive, we cannot in arid places go for months without water. It is up to other animals to fill those niches.

     We inhabit the niche we are designed to inhabit—which makes good tautological sense. I have more to say about this topic, but because I am at heart a shy and modest person, and so do not want to flash my naked, unseemly nerdism, I have provided links to brief lectures: one regarding the neurological areas in the brain responsible for language; the other regarding a neuroanatomical area that coordinates our mental and physiological rhythms—called circadian rhythms—with the cyclical presence and absence of sunlight.

     What these links will do is provide some evidence—as well as further links to the world of other related evidence—that I am not just making this shit up. The entire worldwide community of neuroscientists believes from vast experimental evidence that, down to the most intimate neurophysiological degree—down into our very cells—we are tied to events in the natural world around us. And language, as a neurologically wired capacity, is a feature of that linkage. As thinking, speaking human beings, we are as totally synched to the events in the natural world as our iPhones, Droids and iPads are synched to our computers.

     Post Modernism has a briefer pedigree: perhaps if we stretch things we can extend it back to Kant and his belief that the noumenon cannot be understood, but we might all feel more confident with a less ambitious lineage extending from Nietzsche through Husserl and Heidegger into Levinas, Barthes and Derrida, then forward to the current intellectual heirs. This is a Continental heritage, and works most persuasively with Continental languages. The way in which Kanji, for instance, purports to refer to its signified clearly works on principles that are not well-characterized by Western examples.

     But even with the continental tongues, the referent to which a sign points is not commonly in question. If, for instance, I now tell you that my old dog, with his few sad last grey hairs, is sleeping by my woodstove, I trust you would not come into my house expecting to find an elk, and that if you did, I would be justified in believing you were fucking weird and never letting you near my dog again. Some of you might even catch the intentional allusion to Keats. Further, if we were honest among ourselves, we would recognize that the books and articles Derrida wrote were published with the particular intent to communicate his ideas, which he worked with discernible effort to convey accurately. If you happened to attend one of his lectures at the University of California, Irvine, where he taught in his later years (and from which, I blush to confess, I graduated), you could have enjoyed his personal, extended, elegant use of language as it was classically conceived—and even ask, in interrogative sentences, what he meant by the ‘trace’ that language unearths.

Part III

     “In which Postmodern Despair is Vanquished, and We Can Return to our Universities and Teach Poetry.”

     My point in making my previous observations is this: there is a disconnection between language as it is now philosophically conceived in postmodern discourse, and language as it is commonly used—even among the philosophers themselves. When Derrida and the murmuration of his followers reduce meaning solely to the relationship inhering between the sign and the signified—the noun and its referent—they are omitting the vast majority of linguistic functions. Accordingly, they have imported a reductivist platform that is being made to stand for the whole, immense range of expressive uses. Just to pick one immediate literary example, when Mark Antony at Caesar’s funeral keeps repeating his observation, “Brutus is an honourable man,” the meaning of that phrase—understood by all who hear it—has nothing to do with the literal referent.

     The unpublished intention, implicit in the sign/signified postulation, is to introduce an unacknowledged axiom: that the true purpose of language is to reveal the ontologically real. The postmodern formulation tacitly asserts that language is not conceived for quotidian uses (“While you’re out, will you bring me home a portobello sandwich from the Black Sheep deli?”) or for poetical, non-referential pleasures (“The earth is blue like an orange.”). The essential, defining purpose of language is as a tool for the contemplative mind to extract the unknowable “Ding an sich”—in the performance of which, as we are told over and over, language fails.

     Well, now, that purported failure logically follows only if we accept the reductive proposition that, first, language is merely a matter of nouns and referents, and that, second, its essential purpose lies in its philosophical discourse. However, we are not constrained, either by logic or by common usage, to accept either proposition. Shakespeare (see Mark Antony above) along with just about everybody else in the world has already discovered and published other useful propositions for language. Here, for instance, is one such provocative idea:

From 1993 until sometime in 2000, this image/symbol was the name of the rock star ‘formerly known as Prince’. As such, it seems to me to turn postmodernism inside out, in that we have a sign connected to its signified without the medium of language at all.

     To choose another instance, here is one of Charlie Chaplin’s famous opinions on the matter:

For Chaplin, language appears to be an expressive act—extended sequentially through time—that necessarily involves gesture, facial expression and tone of voice—all of which transcend the literal vocabulary, which in this particular instance consists of faux Italian*.

    Of course, the ambiguity of language might in fact not be a function of all languages, but merely a feature of the Continental ones. For example, here is just a little of the mathematical language describing the physical reality of the twenty-six dimensional flat spacetime: 

I admit that this is not a language that I find especially pertinent to how I live, but I do believe that this is the best language to be used by those men and women—those physicists—who are truly, successfully capturing the nature of the noumenon: the absolute physics of the universe.

     If we do not commonly find among physicists the despair so often present in postmodernism, we also fail to locate individual differences in their mathematical language that would allow particular people to identify themselves. Math is a universal language. It is better able to control its meanings, but at the expense of human definition, for which French, German, English—indeed virtually every other language—is far better suited, even though that individuation necessarily introduces ambiguities. What I mean when I articulate a thought is not always reliably grasped in its full import by my partner in conversation. My differences introduce ambiguity into expression. I am other than you are, and what I mean—the shades of purpose I convey, the tenor of my voice, the pacing I choose—is individually mine.

     It is exactly this individuality against which philosophy has protested. And it is this protest that I, in my turn, would want to revalue. I am far from equating linguistic ambiguity with the despair of failed significance that we find everywhere lamented in postmodernism. I would argue instead that ambiguity—precisely because it prevents material control and the successful exercise of power—is a joyous escape from convention, the delight in play, the opportunity for humor, the wonder of the unexpected, the nature of hope.

Know what I mean?

*Here is the text of Chaplin’s Song:

 Se bella giu satore
Je notre so cafore
Je notre si cavore
Je la tu la ti la twah

La spinash o la bouchon
Cigaretto Portabello
Si rakish spaghaletto
Ti la tu la ti la twah

Senora pilasina
Voulez-vous le taximeter?
Le zionta su la seata
Tu la tu la tu la wa

Sa montia si n’amora
La sontia so gravora
La zontcha con sora
Je la possa ti la twah

Je notre so lamina
Je notre so cosina
Je le se tro savita
Je la tossa vi la twah

Se motra so la sonta
Chi vossa l’otra volta
Li zoscha si catonta
Tra la la la la la la

In The Wild


Part I

The speed with which medical conventions can domesticate the most outlandish requests, or re-frame even grotesquely violent behaviors, is an under-appreciated marvel of modern social life. In virtually every other setting—for instance, at your workplace—it remains inappropriate for me to approach you, hand you a Dixie cup, and request that you fill it with your excrement. Work conventions rightly disallow this behavior, which would appear out of place, out of bounds, and downright weird.

However, if I approached you in my hospital office, handed you a paper cup with a wooden spatula, and there asked you to fill it with a stool sample, the request would be rendered perfectly normal because the conventions in that setting make it okay to require unacceptable things of you. In the right context, one of my colleagues might insert his finger into your anus, peer wisely into your vagina, thread a camera up into your colon to take pictures–or even cut you open from your collarbone to your pubic mound, slice out your heart, and put it in an ice chest. Only temporarily, mind you, because the promise is that someone will put it back when the time comes.

Although hospitals do their best to disguise the fact by building routinized, institutional facades (look at the architecture of the hospital pictured with this article), they are nonetheless the wildest places I can think of—far crazier than prisons, science laboratories, or military compounds, though they may share aspects of each of these establishments. Hospitals are the licensed institutions in which we hide the uncanny things of the world, chiefly by erecting conventions that suspend our incredulity. Hospitals assert the commonplace, affirm routines, profess the customary, declare humane incentives: just mundane practice going on in here, another ordinary day of saving lives, move along, nothing to look at. And in fact, you will not be allowed to look behind the closed doors.

But let me tell you, if at some point in your life, you discover that you need to be frightened–that you want to challenge your complacencies in ways you do not control beforehand–then hospitals are where you want to be. Nowhere else have I routinely touched people, circumstances, fates that otherwise I never would have imagined.

I never would have thought, for example, if left on my own—never thought to step behind the Senator showing me and my friends around the White House, and try to give him a bear hug. I am just too inhibited that way, and I don’t like Republicans. But Andrew, an adolescent high school student from one of the blue New England states, was comfortable with open displays of good feeling, and felt obliged to make a public declaration of his patriotism by embracing the natty senator addressing the New England Debate Club.

Naturally, Andrew called down upon himself the hordes of Secret Servicemen positioned throughout the building. The event I am writing about happened years ago at this point, well before 9/11, but even then the Secret Servicemen didn’t take chances–and given the initial urgency, it is to their credit that they pretty quickly recognized this was not a criminal assault, there were no bombs involved, and that everybody was safe, after a fashion. They had no idea what was really going on, but their expertise was with threats and violence, which they were trained to recognize when they saw them, and Andrew’s behaviors fit neither category.

For one thing, he was pretty disorganized in his attempt–not hesitant so much as uncoordinated in a weird way. He was also spouting a sort of ‘word salad’ that might have been mistaken at first for an unknown foreign language–except that recognizable, though misused, English words were mixed in. He wasn’t hard to deter from his intended purpose; the teacher-chaperones intervened with the government men, and eventually one of them took him back to his hotel room.

By report, he seemed to improve over the course of the afternoon, though his parents were called nonetheless, and arrangements were made to whisk him back home–in the course of which, however, he suffered another, more severe and persisting disturbance to his language production and his mental organization. He never made it home per se, but was taken directly to my Medical Center and admitted through the ER, where he was taken for a CAT scan. He had an aneurysm in his left middle cerebral artery, which had begun to leak, causing disturbances in his language and in motor control over parts of his right side.

If he had been 75 years old, I think the symptoms would have been recognized more quickly than they were, because mid-adolescence is not typically an age at which to develop strokes. Though no one said anything, my sense at the time was that the adults around him were all assuming he had been taking some recreational drug, and his goofiness was the result. In truth, he would have been better off if he had simply been wasted on something fun, the effects of which were temporary. But instead of merely needing de-tox, he was awaiting brain surgery to clip the artery.

I was enlisted to evaluate him to establish a portrait of his current level of cognitive performance, which would provide a baseline for his post-surgical therapies. Accordingly, I visited over two afternoons as arrangements were made for his surgery. There were the formal portions of the evaluation, which mapped out memory, attention span, visual-spatial organization, executive planning, and of course his linguistic functioning, which looked pretty decent even with the aneurysm. There were also the informal parts of our interaction that let him tell me, off and on, that he liked political science, that he was an only child, and that his favorite band was Tool or Nirvana, depending. I preferred Alice in Chains to either one of them–which no way could he believe, man, because Cobain was such a great guitarist, and the best writer. And besides, Courtney Love was hot.

This could be a fun job. For two days we definitely had the best music going on the floor.

Part II

Other days, on other units, were less musical. On Thursdays we had neuropathology rounds at 7 a.m. in a group of inter-connecting rooms in the basement sub-floor near the morgue. There the neurosurgeons, the radiologists and the neuropsychologists (i.e. me and two others) would assemble among the pathologists to engage in Brain Cuttings: an instructive event during which the brains of persons who had died would be sliced in coronal sections to allow the group of us to examine the gross pathologies, before the sections would be given to other pathologists in other rooms to stain and photograph. Often there would be two or sometimes three brains trussed up in a vat of formalin, where they had been immersed in order to solidify and preserve them for the cutting.

In themselves, brains are remarkably fragile–which is the reason they each float in the cerebrospinal fluid inside each of our respective skulls. Suspended in that salty fluid, they weigh about one fifty-sixth of what they would on land, so to speak. The brain’s own weight would be lethal otherwise; it would collapse fatally on itself simply by the pull of gravity, squashing the life out. So it is a delicate thing to remove the brain from the braincase, and slide it into the fixative that will solidify it enough to allow manipulation. It takes a sensitive and dexterous hand–governed by the attentive mind of a sociopath. Here we’d be standing by this large butcher’s block, on which the pathologist would set the brain he just fished out of the formalin tank, and in the adjoining room, separated by a kind of shower curtain, we could hear the whirr of the diamond saw as the other pathologist was cutting off the top of a recently-deceased-person’s head. It took some getting used to. Hannibal Lecter might have trained in a place like this.

There were two pathologists: a male and female team. The man–call him Dr. Taft–always chose to cut the brains; the woman–call her Dr. Adler–had the knack of extricating the slippery cerebral mass from its protective layers of skin, bone and meninges without brutalizing it, and getting it into the formalin with minimal damage. I don’t know if she ever knew whose forehead it was into which she pressed that whirling saw blade, but the rest of us had to know. Otherwise we could not relate whatever pathologies we saw in the fixed brains to the particular medical histories that proved so fatal to our patients. That meant that each brain, which was labeled by the kind of tag you might find dangling on an appliance in Sears, could be connected to a clinical history, which one of us would read to the group while Dr. Taft prepared to start cutting slices.

You cannot believe some of the stories. One brain I remember looked as if it had been shot with a spray of ice. This had been a young pregnant woman who, some time during her third trimester, had been engaged in love-making with her husband. Given the size of her huge gravid womb, the couple had chosen to have oral sex. Unbeknown to either of them, as the poor guy was going down on his wife, his excited heavy breathing was introducing air into her vagina, up the birth canal, through the dilating cervix, and into the placenta, where it was absorbed through the immense plexus of blood vessels there. Their sexual excitement, in other words, fed multiple air bubbles into her blood stream, which abruptly killed her when her pumping heart shot all those emboli into her brain. She never knew what hit her.

You can imagine what it must have been like. For one second or so he thought he had brought his wife to climax, only to realize that, no, something was stunningly wrong. By the time he had called 911, she was already dead, and by the time the EMTs arrived, they had lost the baby too. The group of us stood there in numbed silence, not making eye contact, and waiting just to get the session over with.

I never went to neuropathology rounds without preparing for them. The fundamental premise was, of course, that someone had died–which, naturally enough, occurs with some frequency in hospitals. But it nonetheless required a sort of steeliness, a resolve to take it all on, to walk into that room and slice up someone’s brain. It wasn’t for everyone. We strolled down unadorned corridors toward restricted rooms where, like it or not, our Thursday exercises assumed religious proportions. After all, our rituals required human sacrifice. With the permission of the deceased, whose organs we were using, we called up the gods of science, and retrieved truths as we found them in the literal world of the dead. We had mortality itself on the cutting block, and took the opportunity to dissect the accidents of disease and infirmity, tease out vital membranes, and prepare as best we could against the onslaughts waiting for every last one of us.

These were perilous ceremonies, requiring perhaps a sort of ancient Mayan sensibility. Mercy wasn’t in it. Dr. Taft accidentally inflicted a vicious wound in his hand with the knife he was using to transect one of those brains, and though he survived the resulting systemic infection, it was a very near thing. He was ill for a prolonged time, and he wound up losing some of the function in that hand.

My own changes came some months before Taft’s injury, and had a different provenance altogether. As we gathered around the ceremonial block that Thursday, and as Taft prepared to make his first cut at the frontal pole of a male brain, I heard someone–one of the interns, probably–begin reading the clinical history of the specimen we were about to study: The patient was a 17-year-old adolescent male named Andrew, the voice intoned, a neurosurgery patient with an aneurysm, who post-surgically bled out in the recovery room. Oh man. Oh man. I could see the extensive, black irregular pool of blood that had guttered into the left parietal lobe, the left anterior temporal lobe. I didn’t want to see the rest of the ugly stuff in the lateral ventricles, once Taft cut back to them.

I was looking instead for the place where Andrew kept Kurt Cobain’s music, the place where he knew the words to Lithium, Aneurysm, Heart-Shaped Box–the secret area where he stored his version of Nirvana. Everyone would have, if they had known him.

What Did You Think?

Recent events have vividly publicized the persistence of human violence against other human beings, which in turn has raised conversations everywhere around me in my particular circle of friends regarding issues of social justice and free will. You might be engaging in them yourself in response to Trump’s latest transgression, or the last atrocity in Orlando, or in remembrance of the massacre at the Emanuel African Methodist Episcopal Church, or the most recent instance of a collegiate man drugging a woman to rape her in peace. Take your pick: there are ample occasions to goad you toward thoughtful formulations of behavior.

Among my friendly conversations, the question arises whether these aggressions are necessary. Do people who want to hurt other people have any real choice in the matter, or are they compelled to aggress against others by deterministic forces–by environmental conditions and pressures, let’s say, or by genetic, hard-wired proclivities? Social violence is clearly both frequent and widespread enough to prompt outcries for just legal responses and political solutions. However, behavioral formulations are compelling not merely to political consciousness, but also to science itself with its outright investment in material causation. The aim in science, of whatever category, is to think about what determinism means: how one material entity effects another created thing: how atoms conglomerate to cause material substances, for instance, how gravity sticks us to our places, and so on. If any given person is determined to behave in a certain way–perhaps especially, though not exclusively, if the behavior is violent–then we may want to understand the mechanisms responsible for the outburst.

Recently there have been a couple of articles in the scientific news that have introduced uncertainty into the accepted model of causative instrumentation. Historically the primary axiom has been that no effect can be created without a cause—which, on the face of it, seems a reasonable axiom to believe in. A material effect must be determined by a material cause. If something can be created without a cause, then we are in realms other than science–religion, for instance, or the paranormal, in which scientists would be admitting to the efficacy of ghosts haunting the world, miracles arising out of nowhere, and magic affecting the substrate of reality. Gandalf could be real, Harry Potter is roaming somewhere in London, and the Loch Ness Monster is still eating Scotsmen.

The trouble is, at no level has causality been determined, even as the degree of analysis sharpens, and so thus far there are no solutions to be had. Science keeps telling us that it really should be simple: every action has an equal and opposite reaction, that sort of thing. It seems an easy matter to conclude that I throw a brick because the mechanical action of my arm is caused by thoughts, ideas and emotions directing me to throw it. The arm must be caused to throw, or it won’t move. Or, to be more provocative, if someone goes into a family planning clinic in Colorado to shoot the doctors, he must have motivations and ideas that caused his behavior. He is not a computer that can be taken over and controlled from without–like aliens directing his actions using bluetooth devices from their starship. (Of course, he might be psychotic, but even then science would still say that the lack of reason is caused by material malfunctions among the dopaminergic neurons—but that is another conversation.)

Hence brain functioning is itself laid out for exploration. The presumption is that brains are biological mechanisms that operate along recognized mechanistic principles. Thoughts arise within our minds according to functions that can be assayed: vision is intact, recognition of consensual reality is intact, and the aims of the behavior are programmed by cultural heritage, using language as its primary means of indoctrination. The programming must act upon the material substrate–we learn things–in the same way that programming a computer must act on the electrical components inside it. In examining the brain, we analyze its functions into its causative units. The mechanistic features of brain function derive from the neuronal components: a neuron releases a chemical neurotransmitter, which crosses a synaptic gap to bathe receptors on another neuron, which absorbs the flow of ions that—according to the electrical charge of the ions—then potentiates either an excited neuronal discharge, or an inhibition of excitation. The neuron is turned on or off.
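That on/off account of the neuron can be sketched as a simple threshold unit: weighted inputs are summed, and the cell fires only if the net effect crosses a threshold. This is a toy illustration in the spirit of a McCulloch-Pitts neuron, not a biophysical model; the particular inputs, weights, and threshold below are invented for the example.

```python
# A minimal sketch of the on/off logic described above: a neuron sums
# weighted inputs (excitatory weights positive, inhibitory negative)
# and "fires" only if the net charge reaches a threshold.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the summed, weighted input reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return net >= threshold

# Two excitatory inputs alone push the cell over threshold:
print(neuron_fires([1, 1, 0], weights=[0.6, 0.6, -0.8]))  # True: fires
# Adding the inhibitory input suppresses the discharge:
print(neuron_fires([1, 1, 1], weights=[0.6, 0.6, -0.8]))  # False: silent
```

The point of the sketch is only the mechanistic picture the paragraph describes: the same cell is switched on or off purely by the arithmetic of its excitatory and inhibitory inputs.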

Pursuing the analysis further, we have to descend another level into the neuron itself, because its functional mechanisms must be characterized too. If we want to influence brain activity, we have to know how to influence the neurons that purportedly create it. This is the basis, for instance, of the proliferation of psychoactive medications: SSRIs to alter mood, neuroleptics to alter psychotic disturbances, anti-seizure meds to decrease kindling, hormonal agents to influence sex and appetite, mood stabilizers, major sedatives, and so on.

So how do neurons manufacture their neurochemicals? How transport them? How create energy to fuel their activities? This is the level of scientific inquiry into cause and effect addressed by those articles to which I alluded above, the links to which I give here:



In them we have research—pure, hard, materialistic research published within refereed journals—that is concluding there is no simplistic determinism of the kind that science presumes to be seeking.

This new research is revealing the hidden, underlying premise of all previous brain research, which was taking as its axiomatic model the theories of classical chemistry and physics. But suddenly those deterministic models are surmounted by the non-deterministic activities of quantum mechanics. The revelation of those articles is that the classical models no longer explain neuronal function—upon which brain activity depends. This is the same theoretic disagreement to be found between Einstein (who famously stated that ‘God does not play dice with the universe’) and Niels Bohr, whose work in quantum theory discovered the indeterminacy at the essence of atomic activity. Chance is at the heart of the universe, not material control.

It appears, in other words, that determinism may be a matter of the level of inquiry. I can build a car engine that–when an air/fuel mixture is introduced into the cylinder at the correctly determined time–will cause an explosion, which in turn will cause a piston to move that is connected to a drive shaft that, in its turn, is connected to a wheel. All those material things and activities derive from a use of materials at the daily level of human visibility. We can’t ask too many questions about the actual nature of those events and materials, however. We just use them. If you do keep asking, causality fades away in the process of continued analysis. It’s the difference between mechanical engineering and theoretical physics.

If brains cannot be constrained by classically deterministic models, then we at least approach another conversation about freedom of thought, and the relation between brain activity and cultural education. We need new models of brain function and its relationship to behavior; these two articles, and the world of research they imply, are opening up a theoretical space in which such models can be imagined and researched. In particular, the possibility arises for a model that escapes the simple dualism between material being and spiritual ectoplasm–between science and religion, normal and paranormal, matter and mind.

There is some urgency for such a defining conversation, given the brutalities arising daily within an interconnected world of faults and problems. There is no persuasive materialist explanation that proves, for instance, that a person is physically determined to take a weapon and preferentially train it on other people—let's say (because I think the stakes are very high here) people of color worshipping in a church, people who also happened to befriend the appalling man in their Bible study class before he pulled out his handgun. Dylann Roof was not constrained to act as he did. There was no locus of control originating outside of his own volition: no chemical imbalance, no cultural programming, no poverty, no ignorance, no aliens directing him.

Nevertheless, he did shoot those people anyway, of his own choice and planning. He was personally accountable, and individual freedom and responsibility are perhaps not terrible concepts to reconsider in general within our culture of proliferating virtual realities and fantasy escapes. People–and the governments they populate–can do better.



Planets Dressed as Girls, Running Home



I. Preliminary Matters We Need to Address Before We Bask in the Enduring Radiance of Aracelis Girmay’s Poetry

As practical, contemporary Americans, we have gotten used to science in our lives, with all its quite remarkable ambitions. Just think about it. Everywhere in the media, we have men and women attempting to predict the future climate of the whole planet. In other laboratories, physicists are working to define the nature of the particles that make the particles that make atoms—which make every physical thing in the universe. Astronomers peer backward toward the very beginning of the creation, and calculate the temperatures that existed at something like 10 to the -43rd of a second after the very instant of the Big Bang.

My Girl Wears Long Skirts

A long time ago, now, in 1991, I was on rotation in an adult psychiatric inpatient unit in a faraway medical center, where I met a young woman—Wanda—who had several monstrous knife wounds to her throat and face. She was stitched up with black surgical thread, and resembled one of the female golems in a Frankenstein movie. Part of the fascination in seeing her, and noting the wounds, was to imagine how on earth she had managed to survive. She had tried to cut her own throat with one of her mother's butcher knives, but it wasn't sharp enough, so she started stabbing herself instead, and almost succeeded in hitting her right carotid artery before someone nearby—maybe her mother or another family member—was able to wrench the knife away from her, and call 911.

When I met her, she had just been admitted onto the unit, and was being watched 24/7 by an aide hovering nearby, but she nevertheless one afternoon, with no warning whatsoever, leapt out of her chair, sprinted as fast as she could run, and threw herself head-first through the window at the end of the second-floor hallway. However, because the window was fitted with unbreakable plexiglass, she didn’t go through it to plunge to her death, though she did break her nose and further bruise her poor face. Hospital staff got to be pretty twitchy around her, until the shit-load of Haldol she was given began to have an effect, and she was sedated enough to allow less impulsive people to keep up with her.

She was serious about her self-harm. From her own point of view, she felt she had reason to be so deadly in her attempted suicide: she had caused the Second World War, and she was simply unable to bear the excruciating pain of her grief and guilt. Those terrible years of wartime anguish and physical suffering were entirely her fault: 60 million people killed, 6 million Jewish people—and uncounted Roma—sent to the death camps, Japan bombed with atomic weapons. Most of the northern hemisphere ran thick with blood, and fried in a radioactive fire because of her.

Her diagnosis was schizoaffective disorder, and she was heart-breaking to be with, when she didn't terrify you with her potential for another lethal gesture, so staff bound up their anxieties as best they could and tried to alleviate, somehow, her mortal sorrow. In 1991 she was 22 years old, having been born in 1969, which—if you do the math—was 24 years after the conclusion of the war in 1945, and 30 years after it began in 1939. The favorite strategy among us, therefore, was to choose a time when she seemed receptive, and point out in a kindly way that, since she wasn't yet alive when the war began, she could not have been the origin of it. This seemed to be an unanswerable refutation to the basis of her emotional torment. She could not possibly have caused the war, so be of good cheer.

Although I don’t know that any of us actually thought about it this way, we all in fact shared a two-fold assumption common to well-meaning people the world over: a. that emotional turmoil was refutable (i.e. that we could talk Wanda out of it); and that, b. common sense would be the source by which the refutation prevailed. Neither assumption proved to be true. Despite her long stay on the inpatient unit, Wanda never was reconciled to happiness, but simply became less violent in trying to relieve herself of her psychic pain. She became more stoical, or at least became used to living with her sense of personal responsibility, and inured to everyone else trying to cheer her up.

Our argument was never persuasive either. I was, of course, one of those compulsive explainers trying to set her right. I was fully committed, I admit it: her passion was supported by no more than the merest bubble, it seemed to me, just the frailest sense drifting away from humane connection, and so I talked with her. I listened as hard as I could when she tried to convey her deep conviction of her causal guilt. She had started walking, she told me, on Church Street when she saw the foil wrapper of a piece of gum, and she knew that the foil was reflecting God's thoughts to her about cast-off people because the wrapper was in the gutter near a mailbox with the American eagle on it which meant war, and war was WWII, and so she caused it because there she was.

Actually she said a considerable amount more than this, but I couldn’t follow it all. I was in truth able to follow hardly any of it. What I have represented here are only the few details I could remember long enough to write them down, and I remembered them that long chiefly because, out of the welter of things she said, these items were those that I could make sense of for her. That is, because I thought I saw coherence in the sea of verbal production, I was able to remember these details, but not any of the others.

In its actual presentation, none of her thinking was coherent. She was far less organized than what I have represented in her tangential logic because I cannot disorganize my mind enough to reconstruct the brute chaos of hers. She could not sequence her thoughts, could not put them in an order that made sense, even to her. She was overwhelmed, prostrate before fragments and huge ungated emotions, and therefore she was terrified: there were no conventions of meaning, all mental structure was sundered, she had no fundamental recognition of cause and effect, or even temporal series. She couldn't tie her shoes, couldn't manage her hygiene, would eat only sometimes if food happened to be placed in front of her. Like I said, she was heart-breaking.

Lately among scientists studying the human mind, there is a temptation to see in this mental place of Wanda's a freedom from conventional restraints of thought that is promoted as a model of the creative process. Dr. Nancy Andreasen's article in the current July/August Atlantic Monthly is a case in point. Science is always interested in isolating the important variables (and as a corollary, ignoring those that it construes as unimportant to its theory), and in studies of creativity, the favored candidate is a concept of freedom, an escape from conventional thought, the ability to avoid the usual grind of daily associations, to make something new, and thereby elude the standard conclusions of an "average mind—with one thought less each year," as Ezra Pound imagined it. From this point of view, Wanda represents a class of people who are about as free from conventional standards of thought as anyone science could hope for. If you want a group of people whose thought lacks customary, middle-class organization, then schizophrenia will do.

Accordingly, once exemplars of freedom have thus been identified, the interest then is to discover whether creative people have a higher incidence of mental illness among their families than the average banker has occurring in his. The general logic is that creative people find their creativity because they are just a little mentally ill–not entirely whacked out like Wanda, but just weird, atypical, eccentric, flaky but not funky. They "think different", like Steve Jobs. They would rather spend their time playing with words, or drawing pictures, or like Georgia O'Keeffe painting large images of flowers that look like female genitalia—all instead of making money the way rational, non-weird people prefer to do. This theory works best if you do not really know the details of psychotic life–which science gets around nicely by preferring to ignore individuals, and instead to look at the class as a whole to isolate apparent similarities among them as a group. It is easier to construe mental illness as a model for imaginative freedom if you do not actually listen to individuals like Wanda, do not spend time with them to uncover how their minds are actually working, or failing to work–and do not bother much with empathy.

It is also easier to generalize across individuals to reach group commonalities if you do not have a concept for creativity itself. Why bother to define creativity, Dr. Andreasen reasons, if you can just look instead at people who are supposed to embody creativity, whatever that is? As she points out, “If it walks like a duck and quacks like a duck, it must be a duck.” And besides, if you accept analogies about ducks in place of logical argument, then her problem is now much easier: she just needs to identify who the creative people are among us. Fortunately for Dr. Andreasen, there are various literary awards that do this work for her. Creative people, she tells us, are those whom academic professors have given prizes to.

I am going to assume that the naiveté of this reasoning is self-evident to everyone who is not a tone-deaf neuroscientist–though I will permit myself to point out, irritably, how this ceremonial process has somehow failed to recognize the likes of Emily Dickinson, Marcel Proust, Virginia Woolf, James Joyce, and virtually every person of color writing before 1993–none of whom were awarded anything, certainly nothing like what Pearl S. Buck received (Nobel Prize), or even Antonio Egas Moniz (another Nobel laureate), who ‘invented’ lobotomy, which was understood to cure mental illness. Dickinson, of course, couldn’t even manage to get published in her lifetime.

And though I am sorely tempted, I do not want to spend too much time here with the weaseling details of Dr. Andreasen's peculiar argument. I am after much bigger game: the impulse itself to romanticize mental illness as a prodromal feature of creativity–which is at once an unfeeling injustice to people suffering with psychotic spectrum illness, and a cynical misunderstanding of the nature of creative activity. Wanda's mental flights out of organized thinking were not redeemed by imaginative insight into the nature of war, nor into the trauma of violent death, nor the social tragedy of senseless slaughter. Nor was there insight into her own mental processes, no exposure of underlying psychic conflicts, no understanding regarding the nature of her sense of guilt and responsibility. Nor did she produce anything that an outside audience might find edifying. She was, in short, not creative. She was psychotic. Big difference.

I suppose it could be argued that we should not be too hard on Wanda for failing to create anything memorable since she was not an artist to begin with. But I want to mention an occasion I had to study the transcript of a Rorschach that had been given to a famous writer (whom I shall not name) when The Famous Writer (TFW) had been hospitalized psychiatrically in an institution that I shall also avoid naming. (I do not want to publish protected medical information.) Unlike Wanda, who was not and never will be considered a creative person by Dr. Andreasen's standards, TFW would fit her definition. So I read the Rorschach transcript with fascination. The Rorschach is a test designed something like Georgia O'Keeffe's paintings: the ambiguous image encourages the subject to project whatever structures are inside his or her mind to organize that visual ambiguity into a coherent percept. Some things may be more common than others to see.


There are ten cards, and to my surprise at the time, there was nothing in any of TFW's responses that was artistic: no startling imagery, nothing of intelligent word magic, little that was coherent, and even less that was memorable–apart from the one line that I happen to recall, with which I have titled this article: My girl wears long skirts. And in its context, that one sentence wasn't even a pertinent response, but represented a moment of tangential thinking–the mind drifting away from its subject. Upon recovery, TFW also said the same thing that I am saying: there was nothing redemptive about the episodes of mental disorganization that he suffered, no depths plumbed that were useful to subsequent creative projects, no lines resurrected from the chaos that were ever used in subsequent written products. The periods of illness were life wasted. What TFW created when he wrote was of an order entirely different from what his mind did when it disintegrated. There was no lesson to be learned from his own mental illness, certainly no general theme to be promoted about creativity at large.

The creative act does not arise out of a freedom from restraint, or an escape per se from conventions. As my instance of Wanda can attest, psychotic escape both throws away conventional meaning, and in fact destroys the principles of meaning itself. Creativity, on the other hand, is better understood as a freedom to perform among a multiplicity of choices. It is the mental compass to digest the whole range of worldly features, the discretion to select among the play of cultural, social, and natural fragments, the comfort with randomness within shapeless substances. Creativity invents out of chaos, not out of the void–as Mary Shelley noticed in her Introduction to Frankenstein: "Invention," she writes, "does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded: it can give form to dark, shapeless substances, but cannot bring into being the substance itself."

Creativity is no more associated with psychotic illness than it is with alcoholism (which also disinhibits the mind), or personality disorders (which filter social data through a non-standard perspective) or lung cancer (because creative people smoke) or congestive heart failure (because writers, at least, sit at their desks all day and get fat). There may be correlations among all these things, but of course correlations are cheap. Did you know, for example, that when I was born, there were earthquakes in Chile and 40 people died in China? Sad to say, these events do not attest to the worldwide influence of my birth. Three things (my birth, earthquakes, the death of 40 people) just happened somewhere at the same time. But there is nothing causative here.

The impulse to pathologize creativity is part of the present cultural drive to find pathology everywhere within our daily, usual, precious mental life–so that sorrow is now depression, physical energy is now hyperactivity, mental energy is now mania, flights of imagination are now psychoses, shyness is now social anxiety, individual differences are now personality disorders. The collusion between science, medicine, and money makes a compelling horror story, but that narrative is beyond the scope of my work here. Let me suggest that, if you want to be frightened, don’t watch Shark Week, but instead read Blaming the Brain, by Elliot Valenstein, or Anatomy of an Epidemic, by Robert Whitaker. Both will disturb your sleep.

Gender Differences

[photo 1-2: gerbil brain sections, medial sexually dimorphic area, male (left) and female (right)]

[photo 2-2: gerbil brain sections, SDA pars compacta, male (left) and female (right)]


I’d like to be unorthodox, and propose a choice: you can bear with me for a minute and let me explain what these pictures are about, or you can skip to the George Carlin quote in paragraph 7, whereupon this article might appear more overtly crafty. Or at least more conventionally organized.

But if you do that, you'll want to come back to these photographs anyway: they display a region in the brain called the medial pre-optic area, which is a locality involved in, among other things, the expression and regulation of various important hormones— which in turn regulate various important behaviors you'll probably want to know about. The brain sections shown here are from a gerbil–actually, two gerbils: a male and female. I took these pictures during my years in training in a neuroscience laboratory, where my lab mates and I were pursuing neuroanatomical differences between the two genders. The top picture depicts the medial Sexually Dimorphic Area (mSDA) of a male (on the left) and female (on the right) brain. The bottom picture depicts the SDA pars compacta (SDApc) of a male (on the left) and a female (on the right) brain. At the time they appeared some 22 years ago in The Journal of Comparative Neurology, these pictures and others like them caused discreet but obvious excitement among neuroscientists–who as a group are turned on by the most unlikely things.

The source of the excitement was the differences visible in the neuroanatomy between the two brains–the male and the female. Anyone can see them, which is the point of these pictures. The black irregular dots, lines and smudges densely evident in the male brain are neurons and interconnecting tracts full of vasopressin. Males have a considerable number of these neurons in these brain areas, and females have comparatively few. Such differences were first discovered by my lab director in a set of studies in 1982–the first evidence that there were structural differences between male and female neurology.

This was a big deal. Given the materialist way that scientists think, to locate a difference in the brain meant that they could locate the source of differences in behavior. This is of course an axiomatic belief in science and medicine. The great longing is to connect an underlying neurological structure (basal ganglia, let’s say) and its resulting behavior (Parkinson’s disease, for example). Once a correlation can be established, then the next step is to create a technology that can repair the brain, and thereby change, in this case, the deteriorations of the Parkinson symptoms. We could save Michael J. Fox (among all the other people suffering with the disease). There are many diseases we are all familiar with–Alzheimer’s disease, Multiple Sclerosis, Parkinson’s, Schizophrenia–that are studied under this primary conceptual model of brain/behavior research.

Now with that in mind, the pictures above show a brain area that is involved in other pretty interesting behaviors, among them aggression, stress responses, parental behavior, and the really big one: sexual behaviors. And not too long after the work in our lab was published, a gentleman in a lab in San Diego claimed to have found an analogous structure in human brains, which he pronounced as the source for homosexual behaviors. You can maybe grasp the initial excitement of that discovery. All sorts of godly people were clamoring for follow-up research that would allow surgical interventions in the brains of gay men to relieve them of their un-Christian urges–techniques that would allow that nucleus to be ablated without actually killing the poor sinner whose brain they wanted to tweak so they could control the nature of his desire.

In the nick of time, it was discovered that the brain area found in the human beings studied in that one lab could not be found in other human brains by other scientists in other labs, which suggested that this purported homosexual nucleus was merely an artifact of the immunocytochemistry used to stain for it in San Diego. Much of the Republican world wept in consternation. About this same time, other scientists announced related findings that somehow had, until that time, escaped their detection: human sexual behavior is really, really complicated. Naturally, other corollary discoveries soon followed: there is no end to the number of brain areas involved in sexual behavior, including those also involved in violence and aggression, and there certainly is not one tiny nucleus hidden somewhere that governs everything.

There was at that time at least one young scientist (i.e. me) who thought that this hope for a solitary place in the brain that governed everything should have passed out of common belief after Descartes picked the pineal gland as the resident palace of the soul. That was in the 17th Century, after all, and I had thought the whole project would have been abandoned after three futile centuries. But, no, there remains an interest in material explanations to account for the differences between genders–an abiding fevered energy pursuing why, as George Carlin observed, "Women are crazy, and men are stupid."

Carlin has, to my way of thinking, an especially poignant way of articulating the observed differences, and he is equally memorable regarding the conclusions he reached about the source of those differences: "Women are crazy because men are stupid." Well, yeah, he was on to something, though I suspect that he derived his hypothesis from sources of evidence other than the brain cells of gerbils.

There are plenty of them–other sources, I mean. And in the spirit of George Carlin, let’s look at, oh, maybe one random example: the recent movie Her. In this film, for those of you who may not have seen it, we follow a sensitive male in the person of Theodore Twombly, who makes his living writing love letters for other less articulate males–those who are tongue-tied, who are less in touch with their sensitivities, and need help. Theo is a contemporary Cyrano de Bergerac, eloquently seducing women for the pleasure of dumb, under-socialized, but physically attractive men. He himself has had his own successes with at least one woman, which regrettably proved temporary: his marriage to her failed. And therefore, with the logic of a precocious fifteen-year-old, he decides to have telephone relationships with other women, with whom he does not actually need to talk, except insofar as they try to bring each other to sexual climax by referring to their dead cat fetishes.

In the end, that doesn’t work for him any better than his marriage did, though the cause of the failure did not turn out to be what I expected. His problem is not that the whole relationship is a fantasy conducted over a telephone, but rather that it is still engaging, however weirdly, with an actual person. Even on the telephone Theo is constrained to interact with an individual different than he is, with likes and sources of satisfaction other than those he would prefer.

Therefore, with movie wisdom he discovers his true love in Samantha, who is a virtual intelligence that sounds like Scarlett Johansson (and not like Phyllis Diller, which is lucky), and is constructed to fulfill his every wish. She is smart, entertaining, subservient to the nature of his interests, available at all hours of the day and night, adjusts to his sleep and work schedules, and admires the way he thinks. She even concocts a considerate plan for sex, using a physical woman hooked up through ingenious Bluetooth devices to the computer, so that Theo might have an actual consummation with an actual physical being, who in turn is electrically connected to the cyberlife of Samantha. What could go wrong?

Well, let me tell you I have yet to find a single woman I know who has been remotely seduced by the premises of this ideal. Of course, from the point of view of science, my female sample is merely anecdotal evidence, but the logic of their friendly complaint seems to me to bear the weight of generalization. The guy in the movie wants his own romantic illusion, which is too much of a mirage even to allow the electronic succubus as a possible erotic evolution to his connection to Samantha. He only wants exactly what he wants, and only by his own terms. No blemishes, no human smells or fluids, no hair, no independence. Eventually, even the artificial intelligence is smart enough to leave him before things go too far, and he imagines some coercive means to compel her to fit his ideas of a soul mate. Fidelity appears to be one of them, since he is crushed by the disclosure that Samantha has been 'mating' with 600-and-something other men. We can almost hear Othello lamenting in the background "that we can call these delicate creatures ours, and not their appetites."

Theo as Everyman prefers to keep his ideals remote, and unhampered by materials he cannot control. Which is a problem in real life, where most of us have to reside, since any attempt to import those ideals into the daily round of experience is bound to be frustrating, disappointing, aggravating. They have no relationship to reality, which simply refuses to behave the way men have wanted it to act. Naturally, a certain percentage of males will want to do something about that–say, for example, drive through parts of the UC Santa Barbara campus and shoot the blonde women who wouldn't have sex with them–women they didn't actually know, never actually tried to relate to, who only stood in for their ideal. The male habit has always been to prefer the idea, and to compel the material world to accommodate it–and the social world, the political world, the religious world, and every other world that males have populated. Hence the truth value of Carlin's category, "Men are stupid."

I won't presume to indicate what crazy thing women might prefer, except to note that thus far it has not included shooting all the blonde men they know, or taking assault weapons into elementary schools to massacre all the children, or even making bombs out of horse manure and using them to blow up Oklahoma day care centers. Those decisions remain the province of masculine choice and action, guided by a masculine version of idealism. As our generals told us during the Vietnam War, we have to destroy the village to save it. These separations in behavior between men and women are startling, or at least I find them startling. Here we have a set of behaviors—or perhaps a proclivity for a set of behaviors—that distinguishes the members of the stupid group from those in the crazy group.

Wouldn't it be interesting to identify the basis for those differences? Doesn't it seem important? That's what led me into the laboratory in the first place, where I took those photographs. Just like any other male, I was inclined to pursue an ideal to its logical conclusion—though in my case I neither had an interest in virtual women like Theo, nor a desire to change someone's brain surgically to alter his sexual orientation or his concept of beauty. Instead, I had my own proclivities—let's call it my own inclination to look into the male brain, identify the region of violent sociopathy, and remove it. It sounds like such a good idea on paper, at least from the point of view of a masculine call to direct intervention. Ideas like this always look good in theory to somebody hunting after the definitive remedy, the perfect fix, the final solution.

I probably at this point should confess that I am still male, and still can be fooled by my logical extremities. So you probably shouldn’t take everything I say to heart. Besides, I am not the materialist I used to be. I’ve given up the idea that we can fix evil by materially removing the source of it from the brain. Some new pill won’t work either, though that isn’t stopping the pharmaceutical industry from imagining further expensive medicines to try on our children. It’s not going to go away.

Evil isn’t, I mean, evil won’t be going away any time soon–though maybe our unarmed, idealistic women have counsels to offer, or proposals to counter male ideas about material domination. It’s possible. Let’s go ask them.