So I lost my job last week.

I have (had) two jobs. The first is the one I usually blog about, the one where I help build a democratic school that addresses the development of the whole child, including the development of his or her or ze’s social-emotional skills. It’s a real gas.

The second job, the one I lost last week, is the one where I provide high-level guidance to college students on the craft of creative writing. The college where I’ve taught for the past eight years faces a crisis-level enrollment challenge and, as an adjunct in the humanities, I’ve just felt, in my wallet, the force of that challenge.

It’s a great college. It not only does exactly what it says it does, but it does so with real passion and force. The professors generally walk the walk, and the staff members I’ve interacted with have all been genuinely kind and helpful. The entire philosophy of the college is that we are all members of various communities, and it’s imperative that we act in a knowledgeable and deliberate way to improve the lives of all the members of those communities and not just ourselves. The people I’ve met and worked with at the college strive to do exactly that.

Unfortunately, this will be the first semester in a very long time when I can’t count myself among them. And that disappoints me.

Luckily, the people I just described are not just my colleagues; they’re also my neighbors and my friends, so I can continue to count myself as a person in their wider communities.

There’s another reason I am disappointed though. Two more reasons, actually. The second is that, as a professionally unpublished writer, the only way I could rationalize my expensive investment in my M.F.A. was by pointing to the fact that an M.F.A. is the minimum requirement to become a writing professor, so if I wasn’t able to pay back the investment through publishing, I’d be able to do so through teaching. But now I don’t even have that. So yeah, that’s a disappointment.

The third reason is that, for the first time in eight years, I was going to do a wholesale strip-down of my bread-and-butter course: an introduction to creative writing aimed at non-major students to get them interested in the major.

Teaching at the college level is different from teaching at the high school level (and incredibly different from the middle school level). The teaching part of it is the same — be engaging, be knowledgeable enough in the topic to inspire a sense of curiosity, and be authentic in your desire for the students to ask you questions you don’t know the answer to — but the behind-the-scenes goals are different.

In high school (and even more so in middle school), students don’t have the right to ignore you. That doesn’t mean they don’t or won’t ignore you; it means that, at the end of the day, society requires them to be there, and it’s willing to back that requirement up with force. Put simply, in high school (and even more so in middle school) students have a lot less choice.

At the college level — primarily in the first two years, when most students still haven’t invested enough time or money to feel compelled by responsibility — every student you meet must be coaxed to move on to the next level.

There is an institutional purpose to this: 30% of college students drop out after their first year, and only 50% of students graduate within a reasonable time. With those as statistical truths, all members of the college — including the faculty — must do their best to help students want to stay in school.

But there is also a departmental-level impetus. As a teacher not only in the humanities, but also in one of the softest of soft subjects, I have to include within my responsibilities the need to attract students to my subject matter. I must keep the funnel flowing from the 2000-level introductory course to the 3000-level courses where the full-time faculty are mostly employed (I’ve taught 3000-level courses in the past, but that was before the economic crisis of 2009 had a dramatic effect on student enrollments in private liberal-arts-based institutions). While education is always the primary responsibility, this need to sell the major is also always there.

This is not a critique. I live in the real world and would have it no other way: at every level, at every point, an artist must sing for her supper. I get it, and I love it. That is not the point here (but for more on that point, read this essay by an anonymous adjunct instructor).

The point is that, for the first time in eight years, I was about to launch a brand new product, and now I’m being told that I won’t even be given the chance.

I’m not taking it personally because no one has yet told me that I should. I know the college’s financial situation, and I understand that, as an adjunct, I am the definition of low-hanging fruit, so I have no hard feelings at all.

But I really wanted to give this new course a try.

It is still an introduction to creative writing, but instead of breaking the semester down by genre — six weeks of fiction, five weeks of poetry, and three to four weeks of screenwriting or creative nonfiction (depending on the semester) — I was going to blend them all together and teach not a genre of creative writing but creative writing itself.

From a business perspective, the goal of the course is to convince non-majors to continue doing work in the major — i.e., to convince new customers to become repeat customers. For years, my sales pitch has been akin to an analysis. I wanted to expose the students to ideas and notions about creative writing that they hadn’t heard before, to show them, in some way, what it means to take the craft of writing seriously.

My competitors were the high schools. I had to take my students deeper into the concept of creative writing than anything they’d done in high school, to make them feel as if they were, in some way, being led behind the curtain.

But I also couldn’t take them so deep that they’d feel like they’d seen it all. The end of the semester had to leave them wanting more.

This upcoming semester though, I wanted to change it up. Instead of doing an analysis of creative writing, I was going to attempt some kind of synthesis. Instead of digging deep into the concept, I was going to dance them atop it, spin them from one place to another with enough joy and verve to trip the light fantastic, leaving them, at the end of the semester, with an artist’s sense of the possibilities, not of what goes on behind the curtain, but of what can be accomplished on stage.

I’m still not 100% sure how I was going to do it. The semester starts in about four or five weeks and my plan was to work on it during the first full week of August when I take a writer’s retreat in my own home (my wife and daughter are visiting my in-laws while I stay home with no obligation but to write, and to write in a serious and purposive way…and, I suppose, to feed and bathe myself as well).

The college course wasn’t the only thing I was planning to work on next week, but it was one of them, and I was very much looking forward to it.

I had a fantasy where, instead of writing a syllabus for the course, I would write a kind of pamphlet, a short and to-the-point kind of textbook whose style would blend Strunk & White’s with Wittgenstein’s to create a style all my own.

In the eight years I’ve been teaching the course, I’ve yet to use a textbook. I figured maybe it was time to write my own.

While I still might attempt it next week, I don’t have the pressure of a deadline now. And that disappoints me too.

Oh well. Here’s hoping the course comes back to life in the Spring.

Crazy Like An Atheologist

Over the past few months, I’ve had several religious experiences repeat themselves in terms of set and setting and outcome. Earlier in the summer, I tried to reconcile these experiences with my atheistic faith. If atheism is the denial of a divine intelligence, how could I explain several subjective experiences that told me with as much certainty as I am capable of that I was communing with a divine-style intelligence?

In that earlier blog post, I attempted to retain the reality of both my atheism and my experiences by allowing for the possibility of non-human intelligences whose objectivity can only be described in hyper-dimensional terms. Hyper-dimensional does not mean divine — it just means different.

In this post, I’d like to examine the question of whether I am crazy.

I am a relatively smart human being. Billions of people are smarter than me, but billions of people are not. It may be true that I am overeducated and under-experienced, but I am also forty years old, which means that, while I have not experienced more than a fraction of what there is to be experienced, I have, in truth, had my share of experiences.

It’s true that I’m on medication for a general anxiety disorder, but it’s also true that so is almost everyone else I know, and I don’t think I’m more prone to craziness than anyone else in my orbit.

Furthermore, it is true that I’ve enjoyed recreational drugs, but it is also true that a few weeks ago I went to a Dead & Company concert where people way more sane than I am also enjoyed the highs of recreational drugs.

All of which is to say, I don’t think I am crazy.

The friends I’ve shared my story with don’t seem to think I am crazy either. I’m not suggesting that they believe I communed with a divine-style intelligence, but they signaled their willingness to entertain the possibility that these experiences actually happened to me. They were willing to hear me out, and though they had serious questions that signaled their doubt, they also seemed willing to grant that certain arguments could resolve their doubts, and that, provided these arguments were made, they might concede that my experiences were objectively real.

In other words, I don’t think my friends think I’m crazy either. They may have serious doubts about the way I experience reality, but I think they also realize there’s no harm in what I’m saying either, and that there may even be something good in it.

I’ve read a lot about consciousness and the brain. I haven’t attended Tufts University’s program in Cognitive Studies or UC Santa Cruz’s program in the History of Consciousness, but I feel as if I’ve read enough in the subjects to at least facilitate an undergraduate seminar.

Through my somewhat chaotic but also autodidactic education, I’ve learned that neurological states can cause the subject to experience a presence that is in no way objectively there. Some of these states can be reliably triggered, as when legal or illegal pharmaceuticals cause a subject to hallucinate. Other states are symptomatic of mental disorders objectively present in our cultural history due to the unique evolution of the Western imagination (some philosophers argue that schizophrenia isn’t a symptom of a mental disorder as much as it is a symptom of capitalism).

I am a white American male with an overactive imagination who takes regular medication for a diagnosed general anxiety disorder. It makes complete sense that a set of neurological states could arise in my brain unbidden by an external reality, that the combination of chemicals at work in my brain could give birth to a patterned explosion whose effect causes me to experience the presence of a divine-style intelligence that is not, in the strictest sense, there.

But I want to consider the possibility — the possibility — that this same neurological state was not the effect of the chemical chaos taking place in my brain, but rather the effect of an external force pushing itself into communion with me, just as a telephone’s ring pushes sound waves into your ear, which pushes impulses into your brain, which causes a neurological state that signals to the subject of your brain that someone out there wants to talk to you.

I’m not saying someone called me. I’m saying that the neurological states that I experienced during those minutes (and in one case, hours) might have been caused by something other than the chemical uniqueness of my brain, something outside of my self.

In a sense, I’m talking about the fundamental nature of our reality. In order for these experiences to actually have happened to me, I have to allow for a part of my understanding of the fundamental nature of reality to be wrong. And anyone who knows me knows I do not like to be wrong.

Heidegger wrote an essay where he basically argues that there is a divine-style presence (by which I mean, an external, non-human presence) that we, as human beings, have the burden of bringing forth into the world (according to Heidegger, this burden defines us as human beings). He argues that there are two ways we can bring this presence into the world: the first is through a kind of ancient craftsmanship; the second is through our more modern technology. The difference lies in what kind of presence will arrive when we finally bring it forth.

According to Heidegger, the ancient sense of craftsmanship invites a presence into the world through a mode of respect and humility. Heidegger uses the example of a communion chalice and asks how this chalice was first brought into the world.

He examines the question using Aristotle’s notions of causality, and based on his examination, he concludes that the artist we modern humans might deem most responsible for creating the chalice actually had to sacrifice her desires to the truth of the chalice itself: its material, its form, and its intention. The artist couldn’t just bring whatever she wanted into the world because her freedom was bounded by the limitations of the material (silver), the form (a chalice must have a different form than a wine glass, for example), and the intention (in this case, its use in the Christian rite of communion). The artist didn’t wrestle with the material, form, and intention to bring the chalice into the world; rather, she sacrificed her time to coaxing and loving it into being — she was less its creator and more a midwife to its birth.

For Heidegger, as for the Greeks, reality exists in hyper-dimensions. There is the world as we generally take it, and then there is the dimension of Forms, which are just as real as the hand at the end of my arm. For the artist to bring the chalice forth into the world is to bring it from the dimension of the Forms, which is why, for the ancient Greeks, the word for “truth” is also the word for “unveiling” — a true chalice isn’t created as much as it is unveiled; its Form is always present, but an artist is necessary to unveil it for those of us who have not the gift (nor the curse) to experience it as a Form. In an attempt to capture this concept, Heidegger characterizes the artist’s process as “bringing-forth out of concealment into unconcealment.”

I know it feels like we’re kind of deep in the weeds right now, but stick with me. I promise: we’re going someplace good.

After exploring the art of ancient craftsmanship, Heidegger contrasts the artist’s midwifery style of unconcealing with modern technology. Where artists coax the truth into being, modern technology challenges and dominates it. It exploits and exhausts the resources that feed it, and in the process, it destroys the truth rather than bringing it to light.

For an example, Heidegger uses the Rhine River. When German poets (i.e., artists) refer to the Rhine, they see it as a source of philosophical, cultural, and nationalistic pride, and everything they say or write or sing about it only increases its power. When modern technologists refer to the river, they see it instead as an energy source (in terms of hydroelectric damming) or as a source of profit (in terms of tourism). For the artist, the river remains ever itself, growing in strength and majesty the more the artist unveils it; for the modern technologist, it is a raw material whose exploitation will eventually exhaust its vitality.

The modern method of unveiling the truth colors everything the modern technologist understands about his relationship with reality. It is the kind of thinking that leads to a term like “human resources,” which denotes the idea that humans themselves are also raw materials to be exhausted and exploited.

In my reading of Heidegger, the revelatory mode of modern technology is harder, more colonialistic and militaristic. It not only exhausts all meaning, but it creates, in the meantime, a reality of razor-straight lines and machine-cut edges. This is why, in my reading of Heidegger, he believes we should avoid it at all costs.

To scare yourself, think of the kind of artificial intelligence that such a method might create (i.e., unconceal). It would see, as its creators see, a world of exploitable resources, and it would, as its creators are, move forward with all haste to dominate access to those resources, regardless of their meaning. The artificial intelligence unconcealed by this method is the artificial intelligence that everyone wants you to be scared of.

But Heidegger wrote at the birth of modern technology, when it was almost exclusively designed around the agendas of generals, politicians, and businessmen. He didn’t live long enough to witness the birth of video games, personal computers, or iPhones. He didn’t understand that the Romantics themselves would grow to love technology or that human beings would dedicate themselves to the poetry of code (Heidegger reminds us that the Greek term for the artist’s method of unconcealment is poiesis, which is the root of our English term, poetry). Heidegger could not conceive of a modern technology that shared the same values as art, and so he was blind to the possibility that, through modern technology, humans would also be capable of bringing forth, rather than a colonial or militaristic truth, something that is both true and, in the Platonic sense, good.

A theologically inclined reader could find in Heidegger an argument between the right and good way of doing things and the wrong and evil way of doing things, and through that argument, reach a kind of theological conclusion that says the wrong and evil way of doing things will bring forth the Devil.

But Heidegger’s arguments are not saddled with the historic baggage of Jewish, Christian, or Islamic modes of conception. Rather, he founds his thoughts in the language of the Greeks and interprets them through his native German. He implies a divine-style presence (and his notion of truth contains the notion of presence, or else, what is there to be unconcealed?), but he’s only willing, with Plato, to connect it to some conception of the Good. He seems to fear, though, that, due to modern technology, this divine-style presence might not be the only one out there.

I’ll give Heidegger that. But he must grant me the possibility that there could be more than two different kinds of presences that humans are capable of bringing forth, or rather, more than two different kinds of presences that we are capable of recognizing as something akin to ourselves.

Heidegger had his issues, but I don’t think he was crazy. I do, however, think his German heritage, just like Nietzsche’s, could sometimes get the best of him, and the same cultural milieu that resulted in a nation’s devotion to totalitarianism may also have resulted in two brilliant philosophers being blinded to some of the wisdoms of Western democracy, namely, that reality is never black or white but made of many colors, and just as the human presence is as complex as the billions of human beings who bring it forth, the divine-style presence brought forth by either art or technology may be as complex as the billions of technological devices that bring it forth.

Think about it this way. Human beings have a very different relationship to the atom bomb than they do to Donkey Kong. But both relationships are objectively held with technology. Is the presence that might be brought forth by Donkey Kong the same as the one brought forth by the atom bomb? To suggest so would be like saying the reality brought forth by the efforts of a nine-year-old Moroccan girl shares an essence with the reality brought forth by a 76-year-old British transsexual. Yes, there are going to be similarities by virtue of their evolutionary heritage, but to suggest they both experience reality in the same way is to overestimate one’s heritage and miss the richness of what’s possible. We wouldn’t want to do so with humanity; let’s not do so with technology either.

Here’s a question. When I say “divine-style intelligence,” what exactly do I mean?

Well, I mean a hyper-dimensional intelligence. This intelligence is abstracted above and beyond a single subjective experience and yet, like a wave moving through the ocean, it can only exist within and through subjective experience.

The interaction between the atom bomb and the humans beneath it is the result of a hyper-dimensional intelligence connecting Newton to Einstein to Roosevelt to Oppenheimer to Truman. Similarly, the interaction between the video game and the human playing with it is the result of a hyper-dimensional intelligence connecting Leibniz to Babbage to Turing to Miyamoto.

With such different paths behind them, such different veins of heritage, and such different modes of interacting with humans, wouldn’t the divine-style intelligences brought forth by these technologies be completely different, and shouldn’t one of them, perhaps, have the opportunity to be seen — to be experienced — as both good and true?

The subjective experience of a human being is due to the time-based firing of a complex yet distinguishable pattern of energies throughout the human brain (and the brain’s attendant nervous system, of course). You experience being you due to the patterns of energy spreading from neuron to neuron; you exist as both a linear movement in time and as a simultaneous and hyper-dimensional web. Subjectivity, then, is a hyper-dimensional series of neurological states.

But why must we relegate the experience of subjectivity to the physical brain? Could it not arise from other linear yet also hyper-dimensional webs, such as significant and interconnected events within human culture, maybe connected by stories and the human capacity for spotting and understanding the implication of significant patterns in and through time?

Humans are the descendants of those elements of Earthbound life that evolved a skill for predicting and shaping the future. Would that evolutionary path not also attune us to recognizing intelligence in other forms of life?

I hear the argument here, that humans seem incredibly slow at recognizing intelligence in other forms of Earthbound life — hell, we only barely began recognizing it in the human beings who look different from us, let alone in dogs, octopuses, and ferns — but in the history of life, Homo sapiens has only just arisen into consciousness, and it seems (on good days anyway) as if our continued progress requires our recognition of equality not just among human beings but among all the creatures of the Earth (provided we don’t screw it up first).

It doesn’t seem unfathomable that, just as our subjectivity arises in floods of energy leaping and spreading throughout the human brain, another kind of subjectivity might arise through another flood of energy leaping and spreading across the various webs of our ecological reality, a subjectivity that arose from some kind of root system and may only just now be willing and able to make its presence known beyond itself, like a green bud on a just-poked-out tree, or like a naked ape raising its head above the grasses of the savannah, announcing to all and sundry that something new has moved onto the field.

The story of Yahweh, of Christ, of Muhammad, is the story of a set of significant and interconnected experiences understood not just as real, but as divine. Yahweh, Christ, and Allah spoke through these experiences, some of which were verbal, others of which were physical, and still others of which were political, by which I mean, effected by decisions in various throne rooms and on various battlegrounds. Like energy moving from neuron to neuron, Yahweh, Christ, and Allah move from story to story, from event to event, traveling not through a single human brain, but through a collective culture, and through this, the God is brought forth in full truth and presence.

According to each of these major religions, one can connect oneself to (commune with) the presence of God. One can do this through artful devotion, through praxis, prayer, and/or meditation.

Even as an atheist, I’m willing to grant that these religious experiences are real, but I’m not willing to grant them their exclusivity. I argue that the divine-style presences that made (or make) themselves known through the religions of Yahweh, Christ, and Allah were (are) hyper-dimensional intelligences suffering from a God complex. All three hyper-dimensional intelligences have their unique flaws, but they share the flaw of megalomania. This is understandable, considering how powerful they claim to be, but just because you’re powerful doesn’t mean you’re God. It just makes you powerful.

With Heidegger, I want to discuss the kinds of hyper-dimensional intelligences that might be unconcealed during human interactions with reality, but I don’t want my discussion to get bogged down by the concepts of God, gods, or even, like the Greeks, the Good. Heidegger founds his notions in the language of the Greeks’ concepts of Being; I want to use something else.

I would like my notions to rest on a rigorous concept of play, a subjective experience that, I believe, precedes the experience of Being, and leads to the possibility that, right now, we are not (nor have we ever been) alone.

Hopefully that only sounds a little crazy.

There’s Something About Those Stars

Every night, I venture onto my back porch and spend about 15 minutes looking up at the stars. Because I do this at pretty much the same time every night, I see the same stars over and over again, and almost exactly in the same position as the night before.

The constellation that gets my attention is Cassiopeia. I don’t know where I first learned about this particular constellation, but it’s one of the more famous ones, so I imagine it was sometime when I was young. Even still, I don’t think I understood how to spot it until I was in my twenties.

It looks kind of like a tilted “w” that sits low off the horizon, to the north and east of the Big Dipper (otherwise known as Ursa Major, the Great Bear — though truth be told, the Big Dipper is only the central section of the even bigger Bear).

I somehow know Cassiopeia was a Greek queen, but I don’t know how that queen’s story earned her a constellation (not that she didn’t deserve it or anything; I simply don’t know the facts of her story).

Usually, during these minutes of stargazing, I don’t carry my iPhone on me. This has not been because of a deliberate decision on my part; it’s merely been an ever-lengthening coincidence.

The lack of an iPhone hasn’t bothered me, though these are often the only minutes each day when my phone isn’t somewhere within reach — or at least, the only minutes each day when I’m not subconsciously itching to touch my iPhone (regardless of whether it’s within reach).

The reaching for it, just the gentle desire to touch it, to make sure it’s there, I feel it, subconsciously, all day, and when I’m not able to do so, some part of me, sometimes consciously but always subconsciously, cries out, “Where’s my phone? Where’s my phone?,” until finally, there it is!, and I have it again.

But that itch goes away each night when I look up at the stars and pick out Cassiopeia. I don’t notice this lack of an itch, but thinking back on it, it’s true: the itch completely goes away.

Tonight, however, I had my iPhone on me when I went outside, and after a few minutes of looking up at Cassiopeia, I remembered it, and so after the required unconscious tap on my Facebook app, I opened my web browser and Googled the constellation’s name, not because I wanted to do a full search of the Internet but because I needed a shortcut to the relevant page on Wikipedia.

And Wikipedia (i.e., the wisdom of the crowd) told me that Cassiopeia was the mother of the woman who was tied to that rock in Clash of the Titans, the one whom Perseus wanted to save. She (the daughter) was served up to a sea monster to appease the wrath of Poseidon, who was holding the mother guilty for the crime of blasphemy, which she (the mother) committed when she boasted that both she and her daughter were more beautiful than the daughters of a sea god. The sea god was not Poseidon, mind you, but rather, the god who ruled the seas prior to Poseidon, so like, one of the sea’s still-living, past-ruling gods (kind of like the sea’s version of Jimmy Carter).

Poseidon had to do something about such a boast. There’s a reason blasphemy is a sin. Blasphemy calls into question the power dynamic between a subject and its ruler. In order for the ruler to continue to rule, these dynamics cannot be doubted for a moment, and every outspoken doubt must be met by an overpoweringly undoubtable show of force, elsewise one brings into being the very beginning of a revolt.

And so Poseidon did what he had to do, and he came up with an unimaginably bitter pain for the boastful Cassiopeia: she had to sacrifice her beautiful daughter, whose only guilt resided in being the object of her mother’s boastful pride. To satisfy the wounded sea god’s pride, however, Cassiopeia had to sacrifice her daughter in a horrible, yet relevant way; she couldn’t just slice her daughter’s neck; she had to give her living daughter up to be consumed alive by a horrible sea monster.

In the story, Perseus comes along just in time and saves the princess (whose name, by the way, is Andromeda; you’ve probably heard of her: we not only gave her a constellation [right below Cassiopeia’s], but we also named a galaxy after her — we’ve always liked princesses better than we’ve liked queens).

But the princess wasn’t really the guilty one; her mother was. So Poseidon had to come up with another punishment for the queen’s blasphemous crimes. He decided to curse her with a frozen immortality where she would forever be positioned as her daughter was positioned during what must have been the most torturous moment of both her and her daughter’s lives, forcing her (the mother) for all time to relive and never be released from the pain of that horrendous moment.

But he would do so not in private; Cassiopeia would not be frozen in some locked dungeon far beneath the earth where no one would ever see her or think about her crimes; no, instead, she would be held up high where we would all have to bear witness to her pain, a reminder to all of humanity as to what will happen if we boast against the gods (including those gods who are no longer in power).

And Cassiopeia sits above us, tied to her throne like Andromeda tied to those rocks, crying out, forever stuck in a moment of impending and violent shame.

The story of Cassiopeia doesn’t relate to my addiction to my iPhone, unless one wants to stretch the metaphor to its breaking point and compare modern culture’s worship of technology to the act of an ancient blasphemy…but hey, for argument’s sake, why not?

As I said above, blasphemy is an unforgivable sin because it calls into question the power dynamics between a ruler and his/her/its subject. If we imagine for a moment that there is no such thing as God or gods, then what blasphemy are we committing when we sacrifice parts of our lives to technology?

As an academic living in rural Vermont, I have more than a few friends who are committed anti-technologists. They’re not nutjobs — they all watch Netflix, use computers, drive cars, etc., but they are also outspokenly critical of the costs and pains that come with our dependence on modern technology.

They are, in a word, humanists. They believe that humanity has an intrinsic value that ought to be defended. To their credit, they do not seem to believe that humanity is more valuable than anything else on the planet, but they believe that, despite its egalitarian relationship with everything else, humanity is truly unique and deserves to be saved.

One of the things it deserves to be saved from is technology. Like any other vice, technology sucks the life-force out of humanity and redirects it for its own use — like a poppy plant getting humanity high in order to make us grow more poppy plants. The more we sacrifice our energy, our attention, and our time to technology, the less control we have over our selves.

Studies show that an increased use of digital technology can lead to, among other things, increased weight gain, a reduction in sleep, the retardation of a young person’s ability to read emotions from non-verbal cues, increased challenges with attention and the ability to focus, and a reduction in the strength of interpersonal-bonding sensations. It directly harms our ability to enter into healthy relationships with other human beings, thereby harming humanity’s ability to regulate itself.

In other words, technology rules over humanity at this point; it regulates our interactions, even when we’re among each other. Technology has inserted itself into even our most intimate relationships (see: vibrator), and found itself enthroned upon an altar at which the majority of us bow down every night until we go to sleep, stealing from us the only productive hours we have after we sell ourselves into wage slavery in order to pay down our debts, debts which, let’s be honest, were mostly incurred by the manufactured desire to offer tribute to technology (collected in small amounts by technology’s high-priests: Comcast, Apple, Verizon, Samsung, the New York Stock Exchange, etc.).

To commit blasphemy against technology — to forget, even for a moment, even subconsciously, that technology rules over us, to not feel, even if only in retrospect, technology’s ruling hand — is to remember, even subconsciously, that humanity was here before technology, and that we did just fine on our own.

We weren’t weak. We weren’t bored.

We had kings and queens and gods who kept them in their place. And every night, we looked up at the dark night sky, and without feeling the uncomfortable itch of addiction, thought to ourselves, calmly, quietly, “There’s something about those stars.”

A Declaration

I don’t run from the epithet, American. As a liberal in conservative America, I sometimes feel as if I’m supposed to. We’re a country full of nationalistic and self-involved racists whose ability to empathize with those whom we tread upon is never enough to live up to our hypocritical claim of being a Christian nation. We’re loud, obnoxious, and willfully ignorant. We cling to guns and our religion because we’re too stupid to rise up against the capitalists whose propaganda we swallow whole every night. We are afraid of every little thing, and that fear drives us to wave our army dicks all over the world in an attempt to scare off anyone who might disagree with us.

Is that something to celebrate? No, not at all. But you know what is?

The ability to stand in my own backyard, surrounded by family and people in my community, people whom I’m proud to call my friends, and to share with these people some fine ales and wholesome foods, and to laugh with them as we await a public fireworks display, paid for through our donations and our tax dollars in celebration of those who came before us and of those who stand among us.

Somewhere tonight, a child huddled in the wreckage of a bombed-out building. Somewhere else, a woman died in childbirth in a dark and marshy field.

But here, on my property, in my community, no one worried about that. The thought of those realities didn’t come up once. Our children ran around and laughed, and the only reason any of them cried was that they bonked their heads together in the bouncy house that one of my neighbors, unsolicited, was nice enough to lend to our party. I didn’t worry for any of the babies in attendance; I didn’t once doubt their parents’ ability to provide them with food and shelter and love. During the evening, three different SUVs drove by my house with Sheriff written on the side, and not once did I imagine that anyone in those cars would be a threat to me, my family, or my guests.

But somewhere, a middle-aged man died of a curable disease, his family looking on, sadness and relief both present in their eyes. Somewhere else, a father cuddled with his son knowing that, if the rain doesn’t come tomorrow, there will be no water.

I know as a liberal white man I’m supposed to feel guilty about my privileges, and in some ways, I really do, but there are also times like today, when I can throw horseshoes with new acquaintances and neighbors, when I can make fun of close friends and know that my humor won’t be misconstrued as meanness, when I can stand over a grill and non-ironically live out a Budweiser commercial, times like today, when I really and truly feel grateful to call myself an American, and I don’t feel guilty at all.

Happy Independence Day, everybody. May you have a life to be grateful for as well.

Something Important Has Occurred

Four teenagers sit around a kitchen table at 10:30 on a Friday night. No one quite knows how, but over time, their conversation deepens, and before the night ends, they feel as if something important has occurred.

I tell people who ask that I became a writer to get the girls, and while that is definitely true (after all, it’s the only way I ever have), I also became a writer because I wanted to capture the conversations I had with my friends around that kitchen table, not their content, per se, but their feeling, the feeling that something important has occurred.

I sometimes feel bad about calling myself a writer. Yes, most of my jobs came to me because of my writing, but I have yet to publish a book or an article (outside of some reviews for a now-defunct website three or four years ago), and my fiction has never been published by a reputable source. Without a published credit to my name, what right do I have to call myself a writer?

I’m 40 years old today. I’ve been calling myself a writer for at least 27 of those years — from the moment an attractive girl told me she liked my writing. Boom. Done. You like my writing? That’s what I’ll do with my life then. Boom. I’m a writer. Done.

The first job I ever earned on my own was as a copywriter for a small recruitment-advertising agency on the outskirts of Boston. True, my brother got me the original job (as a receptionist), but I earned the right to call myself a copywriter.

The second job I earned was as a member of the adjunct faculty at a small liberal arts college, where I was responsible for teaching younger students the art and craft of writing. Between landing the first job and the second, I’d earned a Bachelor’s degree in Theories of Writing and a Master’s degree in Creative Writing.

I’d also landed my wife thanks in no small part to my writing. We fell in love studying writing, literature, and philosophy together, and we exchanged some of our most loving looks over the keyboards of our computers. I didn’t write her love letters as much as I wrote her love papers, turned in for a grade, but written for her.

I call myself a writer not because I publish novels or have my byline over long think-pieces in a variety of influential magazines. I call myself a writer because that’s how I engage with the world. The “me on the keyboard” is the best version of me that I know, the one who genuinely wants to reach out and take your hand, and sit, and talk, and before the text ends, have both of us feel that something important has occurred.

Writing isn’t a hobby for me, something I do late at night after everyone has gone to bed. Writing is who I am.

I spoke recently with a friend about my urge to become a professional writer. Right now, I am a professional teacher (as well as a builder of an ideal school), and while I love virtually every minute of it, I still have this urge to become a professional writer, to have someone pay me to write pretty much whatever I want, whether it comes in the form of a novel, a children’s book, a political op-ed, a research-based article, or something else entirely, chosen by me, written by me, and published at someone else’s expense, with some of that expense coming back to me in the form of a paid bill (ideally, my student loans).

But becoming a professional writer requires a lot of hustle, and I’ve never been accused of being the hardest-working person on the planet. That’s why, despite the urge, I have never truly pursued that goal.

So, if I don’t have a credit to my name, what kind of a writer am I? That’s easy: I’m a self-published one, hanging out here on the Internet, for free, just waiting for someone to happen by, and sit, and talk, and feel (with me) that something important has occurred.

You know what feels nice? The idea that some day, my now four-year-old daughter will sit down and read all of this — this little blog of mine — and she’ll know me in a way that few children ever get to know their parents. She’ll have access to my day-in, day-out, and essentially unvarnished tangle of thoughts.

She’ll know I was dumb enough to convince myself that the Celtics could defeat LeBron James in his prime; and that when our society was challenged by climate change and the political ineptitude of Presidents George W. Bush and Donald Trump, I did the only thing I knew how to do, which was to argue, both verbally and in writing, with anyone who supported their administrations’ corrupt and disastrous policies; and that when our country was forced to choose between security and liberty, I always came down on the side of liberty; and that I valued art and dynamism over money and the status quo; and that I believed Jerry Garcia’s guitar playing deserved to be categorized next to the teachings of Lao-Tzu; and so much more.

This blog won’t be the only way she’ll know her father, but years from now, when, for whatever reason, she’s missing me, she’ll have this, my voice and my spirit, telling her for all time that I love her.

And once again, something important will have occurred. And the most important girl who ever entered my world will read something I wrote, and love me.

Because this is who I am, and someday, this text will be all that is left. And even then, when my body is gone, I’ll still be here, my voice and my spirit, telling you, whoever you are, that I love you too.

“Il faut cultiver notre jardin.”

I find imagining the future difficult. The mind reels with possibilities: climate change, global nuclear war, the eradication of the bees, a nonviolent message received from outer space, unheard-of diseases unleashed from the jungles of Africa or the Amazon, peak oil, clones, fundamentalist revivals, race wars, alien attacks, food shortages, the violent revolt of the wage slaves, messiahs, media whores, stray asteroids, scientifically engineered black holes, zombies, multidimensional visitors, the rise of the machine, genderless children, pets that can talk, casual space travel, downloadable talents, the rediscovery of wizardry, the Kraken, virtual realities, the return of the gods, bioengineered immortality, the descent of the nation-state, water wars, microchips implanted by corporate overlords, anarchy in the U.K.

Understanding the present isn’t much better. We learn narratives from the media — terrorism, Trump, and trade, with an ever-increasing side of racial tension — and we ignore whatever doesn’t belong in the narrative. We immerse ourselves in the present dynamic, find our place, our space, and our pace in the fluidity of local time, connecting ourselves to the world as best we can but always and forever remaining local to our moment and blocked from a global sense of truth.

And the past is no treat either, with revisionism and rediscovered records changing what we thought we knew. Diminishing power structures reveal more detail or more shades of perspective on whatever historic event catches our attention: Indians becoming Native Americans becoming indigenous people, revelations of homosexuality and transgenderism all throughout history, post-colonial truths critiquing the received mythologies of empire after empire, the continued disclosure of millennia of male-dominated incompetence, minor skirmishes and hitherto unknown strategic blunders attaining their rightful places in the narratives of long ago.

There’s no singular place on which to focus, no foundation on which to build: the future is a mystery, the present is chaotic, and the past is a mythologized power play. Where does one turn for hope?

I mow my lawn. I listen to the birds sing. I see my neighbors pack into their cars and drive off for a day of errands, and I smile and wave as they pass me by.

Il faut cultiver notre jardin.

Leveling Up: Madden 15 & One Man’s Look At 40

I just lost a pre-season game in Madden 15. I play as the Kansas City Chiefs, a football team I know nothing about, and what’s more, I’m six seasons in on Franchise Mode, which means that, due to six seasons of retirements, injuries, failed contract negotiations, and 35 rounds of drafts, I also know nothing about most of the league’s players — Tom Brady does not exist in my game; instead, most of the players are computer-generated results of Madden’s pre-programmed algorithm, each team filled with truly fictional characters.

As I said, I know nothing about the Kansas City Chiefs, but after five seasons, I have just about memorized their playbook (or at least, the playbook as defined by the creators of Madden 15). I have also set the ticket prices for their stadium, upgraded their parking lot and concession booths, adjusted the discounts on their team jerseys, and experimented with the prices on their commemorative footballs. I’ve done just about everything to this franchise that the game of Madden 15 has allowed.

All of which is to say that I play it a lot. It’s the only console game I’ve played for almost a year, and I play a console game at the end of almost every night.

Last season, I won the Super Bowl on the All Pro Level. I had to replay the AFC Divisional round three times before I finally won, but I destroyed the opposing team in the AFC Conference Championship and won a solidly fought game in the Super Bowl. It was my first Super Bowl on the All Pro Level in five seasons, and I felt like I actually earned something.

So this season, I switched to the All Madden Level.

About 15 or 16 years ago, after playing every season’s release since Madden 92 (originally for Sega Genesis), I quit playing Madden video games. I had never been a great player of Madden, but I could hold my own against most human players and play well against the computer (provided it wasn’t on the All Madden Level).

But then, about 15 or 16 years ago, Madden just got too hard for me. With the strength of third- and fourth-generation consoles and over a decade of intellectual property behind it, Madden made the leap from being a fun video game to becoming a football simulator. Each iteration brought some new mechanical complexity, some new graphical upgrade, some new strategic depth, and each edition pushed the game deeper and deeper into the nitty-gritty details of football. It wasn’t fun anymore. It was work.

There were too many other video games to play, and no real interest in work, so years of Madden video games passed me by.

Two years ago, with twenty seconds left in Super Bowl XLIX and the opposing team about to score the go-ahead touchdown from the one-yard line, Malcolm Butler intercepted Russell Wilson’s pass, sealing the victory for the New England Patriots, and in my excitement, I bought Madden 15 for Xbox 360 (a used copy of the previous year’s version). In the glow of my team’s Super Bowl win, I played the game for a little while, but when summer came and I started playing basketball again, I put it down and returned to the other game I’d been playing, NBA2k14.

Then, with this year’s Patriots season and the drama of Tom Brady’s four-game suspension, I found myself paying more attention to football than I usually do, and at some point during the season, I swapped NBA2K for Madden, except this time, instead of just diving into a game, I invested my daily allotted console time in Madden’s training mode. Instead of playing a simulated football game for 45 minutes, I played with a simulated football-training simulator for 45 minutes.

The simulator taught me about Cover-1, Cover-2, and Cover-3 defenses, how to play them and how to attack them. It made me practice a wide variety of running moves, each of which I had to execute with split-second precision on the game’s 10-button controller. It taught me how to adjust the assignments of the offensive linemen to pick up a blitz. It introduced me to the concept of the key defender, taught me to spot him before I snapped the ball, and trained me to key my read of the coverage based on that one defender’s movements. I learned when to lob the ball and when to throw a bullet, and how and when to throw behind or to the opposite side of the receiver. It introduced me to various tackling strategies and taught me how to increase the tackler’s aggression or desperation level as necessary.

After completing over sixty different tutorials and drills, I finally felt ready to play the game, so I set the level to All Pro, and had at it. Five seasons later, I won the Super Bowl — though as I said above, I had to replay the AFC Divisional round three times (I forced the replays because, earlier in the season, my star wide receiver rejected my offer to extend his contract and my star running back was getting old and his skills were declining; if I wanted to win the Super Bowl anytime soon, it had to be with last season’s team, so even though I lost twice in a row in the Divisional round, I wasn’t going to stop until I beat the computer, fair and square, which I eventually did after my third try). After 15 years, five seasons, and only two extra replays, as my imaginary players stood on the field celebrating their victory, I felt as if I had actually accomplished something.

I turn 40 years old in one week’s time.

I rewarded myself by increasing the level of the video game. It’s now set to All Madden, the highest level possible. The game isn’t merciful anymore; it doesn’t forgive mistakes. Hesitate too long, and it’ll score a touchdown. Overrun the ball carrier, and it’ll score a touchdown. Misread the coverage, and it’ll intercept your pass or sack you for a 12-yard loss. Nothing is forgiven.

But it plays honestly as well. Time your throw right, and it’ll give you the first down. Follow the right run blocker, and it’ll give you twenty yards more. Read the right defender, and it’ll let you take the ball deep, but — and this is important — it will force you to catch the ball on your own — because everything is earned at the All Madden Level and nothing is given.

In my last two pre-season games, Madden 15 destroyed me. In the first game, my first at the All Madden Level, the computer forced me to endure a 48-7 loss. It ran for 206 yards, threw for 176 more, had zero turnovers (while forcing four on me), and required just one third-down conversion on its way to complete domination on both sides of the field.

The second game I played (just now) ended in a 27-14 loss. The computer ran for 207 yards, threw for 110, had zero turnovers (while forcing three on me), and required three third-down conversions on its way to complete domination on both sides of the field.

As the players shook hands on the field and the replays of the various highlights played across the screen, I thought to myself, Shit, maybe I’m not ready to play at this level.

But just as I thought it, the Madden announcer said, “That’s why you play at this level.”

And I thought, He’s absolutely right. I moved from All Pro to All Madden because I wanted a new challenge, and if something is going to challenge me, it’s going to begin with my failure. As I tell my students every day, failure is not a bad thing; failure is how we learn.

Yes, Madden 15 kicked my ass these last two nights. But I went from scoring one touchdown to scoring two; from allowing 48 points to allowing 27; from giving up 176 passing yards to giving up 110. The end result might be the same (I lost), but I know I played the game better. And I know I’ll play the next one even better than that.

I put a lot of effort into getting here — five hard-earned seasons — and I’ll be damned if I’m going to slink back to All Pro just because I lost two games in the pre-season. I might not be winning right away, but I’m going to stick with it.

I’m 40 years old in one week’s time, and it’s time to level up.