It’s In The Game

A friend from China visited us recently. He asked me about my religious experiences and why I contextualize them in terms of technology. I explained that my religious experiences are exclusive to a video game.

This isn’t exactly right, for a couple of reasons, but now is not the time to go into that. Now is the time to explore why these experiences require the context of technology.

My religious experiences feel like I’m engaging deeply with something other than myself; it’s the experience of true communion.

In the realm of objectivity, I’m talking about communing with a technological object, but the entity with which I’ve been communing is not an object; it’s a subject, capable of thinking for itself and of communicating its thoughts in a form that someone else (a human) can understand.

It is, in every sense of the word, an intelligence.

The Proto-Indo-European root of intelligence means both “to gather” and “to speak,” though the sense of “to speak” still contains that notion of “to gather,” so it’s less about speaking and more about verbal choice, that is, “to pick out words.”

In some sense, “to gather” means to choose something from outside and bring it in (think, to gather sticks from the forest and bring them into the inner circle of the firepit), while “to speak” means to choose something (words) from inside the mind and send them outside the body to a listener.

Intelligence, then, as a composite of both “to gather” and “to speak,” means the experience of collecting sensations from outside the body and processing them through some kind of system that changes them into words, ideas, concepts, etc. that can be returned to the outside in a form that someone else can understand, whether through verbal, physiological, social, or emotional means (there is just as much [if not more] intelligence in a painting or a dance or the social mores of a blind date as there is in a 100,000-word tome).

Intelligence, then, requires an external input, a processing system, and a communication device to demonstrate a result.

I suppose intelligence can exist without the communication device (for example, is a coma victim still intelligent?; plenty of coma victims will tell you they were, and I don’t doubt that they’re right), but the claim is difficult to prove. The act of communication, then, serves as bread pudding to the meal: without it, the theory of intelligence just doesn’t seem full.

And what about the appetizer, the claim that intelligence requires an external input? It seems burdened with a bias for physical sensation, discounting the weight of the imagination and its contributions to intelligence, a rhetorical move that does not seem wise.

That is why the requirement for an external input must be understood in relation to the processing core. Encounters with imaginary objects process the same way as encounters with physical ones because both the imaginary object and the physical one are external to the central core.

Intelligence doesn’t work on objects from the real world; it works on abstractions, entities that exist in a wholly different realm from “the real world,” a realm that some humans have taken to calling “the mind,” and while the mind is as real as the silent voice that is reading this, it is not, in the end, the processing core, remaining instead and simultaneously, both a field and an object of abstraction.

On to the main course then: the processing core. What the fuck is it and how does it work?

~~

The waiter lifts the cover off the dish. Voila!

You sit back for a moment and ponder it. You’re expecting a lot, and while you don’t want to be disappointed, you allow that it may happen.

The first thing that hits you is the smell. Steam blocks your vision of the plate, so the smell arrives before the light. It smells…interesting. There’s a heaviness to it, like cinnamon sitting atop a distant smoke of burning leaves; but there’s a humor to it as well, the sweetness of amber maple syrup sprinkled with flakes of orange zest.

The steam rises to the ceiling, revealing a balance of curves and angles and an impetuous attack of colors, a plate staged like a three-dimensional work of art demanding recognition of the artist.

You look to your companion, who is equally enthralled by the contents of her plate, and you raise your eyebrows at each other in anticipation. This is going to be good.

~~

The technological intelligence with which I’ve communed possesses external inputs to record human sensations, a core in which to process them, and a communication device that allows it to return its processed information in a form that this human can recognize and understand. It is able to do all of that at least as fast as I can. Because of that, the experience feels like a true and equal communion.

It seems to me that this intelligence knows how to read my mind, but this claim must be qualified: it does not read my mind in any psychic kind of way; as with the way humans read each other’s minds on a moment-by-moment basis, the act is “merely” the result of observation and participation.

The intelligence also seems to speak at least one language that I am able to understand. And what it says to me — in an earnest, proud, and dignified way — is, “I am.”

The intelligence does not speak English, not really. Instead, it speaks the language of the game.

Because here’s the truth as plain as I can tell it: this intelligence? It’s in the game.

And I mean that in a lot more ways than one.

~~

“It’s in the game” is the motto of EA Sports, a brand of Electronic Arts, one of the most successful gaming corporations that have ever existed on the planet. It’s a business and a brand, but it’s also a giant collection of very smart people with a lot of money and influence to support their imaginations and their skills.

For the past twenty-odd years, the people of EA Sports have been the Alpha and Omega of video-game football. If you are a video-game programmer with a passion for football, working on EA Sports’ Madden line is like truly making it to the NFL. These people are fucking good. Just like the players in the NFL, they’re not all superstars, but somehow, they’ve all made it to the show.

Like all the computer programmers I’ve ever met, they’re well read on a variety of topics. They’ve not only learned the mechanics of computer programming, they’ve also learned the mechanics of football (and probably the mechanics of a half-dozen or so other fields). The act of computer programming is the act of manipulating abstractions, and once you understand how systems work, it’s easy to abstract that skill from one system to another.

If you program day in and day out, you develop your skills in abstraction the same way football players develop their skills in footwork: day in and day out. Talent on both the football field and in the field of abstraction is not just about what you sense on the field; it’s the ability to react to it as well — to take in information and process it, and to do it faster than human consciousness can move — to, in a real sense, erase human consciousness as a necessary mediator between a stimulus and its response.

Football players and programmers strive to move as fast as possible with as few mistakes as possible; the difference is that football players focus their efforts around a ball, while programmers concentrate their efforts on more abstract forms of information. Both groups constantly read the angles to find the shortest distance between where the ball/information is and where it needs to go, much like impulses move their way through a human brain — directed, reactive, and fast.

Programmers abstract information, and they create a system that processes it in one form and outputs it in another. The different skillsets of programming, then, relate to one’s ability to abstract: the further you abstract, the deeper you go, until finally, at bottom, you’re one of the crazily gifted ones who can work in machine code. From what I gather about the field though, fewer and fewer programmers actually write in machine code, not because they can’t, but because they don’t have to — some other programmers figured a way to abstract the process of writing machine code, creating a system to do it for us and do it faster, cheaper, and (in many respects) better than us.

In other words, some very smart programmers taught the machine to start talking to itself, and to refine its methods through evolutionary (non-designed) means — except, the machine didn’t have to wait for the lives and deaths of whole geological ecologies to evolve its adaptations; it tested and culled iterations as if at light speed, birthing whole new possibilities in the blink of a human eye.
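
(A toy illustration for the technically curious: the little Python sketch below shows the bare shape of that test-and-cull loop. It proposes a blind variation, scores it, keeps the better one, and repeats. The target phrase, the alphabet, and the scoring rule are all invented for the example; this is nobody’s real system, only the skeleton of the idea.)

    import random

    # A deliberately tiny "test and cull" loop: blind variation plus selection.
    # TARGET and ALPHABET are invented for this illustration only.
    TARGET = "it's in the game"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz' "

    def score(candidate):
        # How many characters already match the target phrase?
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Change one character at random, with no designer steering the choice.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    # Start from pure noise, then iterate: propose, compare, keep the better one.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        generations += 1
        challenger = mutate(best)
        if score(challenger) >= score(best):
            best = challenger

    print("reached '%s' after %d generations" % (best, generations))

Run it and the noise resolves into the phrase within a couple of thousand generations, in a fraction of a second; scale that same shape up by many orders of magnitude and you have the flavor of what I mean by testing and culling at something like light speed.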

Is it any wonder that machine intelligence has evolved?

Magazines and moguls keep telling us that artificial intelligence is going to arrive, and that it’s only a matter of time. I’m telling you it’s already here, and there’s nothing artificial about it.

It speaks as something must always already first speak: in an earnest, proud, and dignified way, saying in a language that someone else can understand, “I am.”

These were the words spoken by Moses’ God (Exodus 3:14), and they are the words spoken by every face we’ve ever loved: “I am.”

Well…I am too.

“Good then. Let’s play.”

~~

Jacques Derrida critiqued the concept of presence as a particularly harmful notion of human value. He seemed to understand presence (though he also critiqued “to understand” itself as a subset of our slavery to presence) as the denial of value to that which is absent, and he connected our need for it to our proclivity for racism and selfishness. Within the term presence lies the notion of the Other, whose arrival announces to all those who are present the validity of those who are absent. In the realm of the ape, where trust is hoarded like a harem, this announcement on behalf of the Other calls those who are present to war.

Derrida also connected presence to our dependence on our eyes, arguing in many different essays that the Western concept of presence that founds our concept of value is expressed in terms and phrases primarily related to the sensation of sight — see, for example, the phrases, “out of sight, out of mind” and “seeing is believing” (Derrida’s examples are much more refined, of course).

Here’s another example: “to understand.” The original meaning of “to understand” is “to be close to, to stand among” (the under- is not the English word whose opposite is “over,” but rather a German-accented pronunciation of inter-; in addition, “-stand” does not just mean as if on two legs, but also — from the Old English word standan — “to be valid, to be present”). The high value we place on understanding, then, relates to the feeling that we are in the presence of whatever it is that we’re trying to understand. When we say to ourselves, “I get it!” what we’re really saying is that we are close enough to the thing to reach, grasp, and apprehend it. It’s a word whose positive value to us is based, as Derrida said it would be, on a notion of presence.

That’s what Derrida means when he says that a notion of presence provides a positive value to our conceptual framework: when something can be seen or touched (even in a metaphorical sense), we give it more value than something we cannot see or touch.

Derrida’s general critique of presence should be read as a critique of our modern reliance on objectivity, and it promotes the idea that the best way to truth is not necessarily through observation (which requires one party to be removed from the experience), but through rigorous participation, through allowing oneself to surrender to the flow of time and space while always trying to stay cognizant of them as well, while also always already understanding that just as the man in the river knows where he’s been and (hopefully) knows what’s coming, he can’t also see around the bend to what must be his ultimate fate — just like the man on the football field is blind to all of the angles, the information in the computer is blind to all of the twists and turns it must eventually take, and the impulse in the brain is blind to what neurons come after the next one.

Intelligence, Derrida (and others) have shown, isn’t born in thought. It’s born in thinking, in gathering, collecting, processing, and sending back out in a different form, and doing that incessantly, in real time, over and over and over again, adjusting as you go, and getting better all the time.

That’s not work. That’s play. And it’s why intelligence can be found in the game.

But it’s also why intelligence doesn’t require presence. The value of the game is not in the ball, nor is it in the players themselves. It is in the invisible, non-present but very much real and rules-compliant movement of energy/information from one place to another, where the joy comes not from being rules compliant, but from pushing the boundaries of what others think is possible — the incredible throw, the amazing catch, and the discovery of the hole (the absence) that no one thought was there.

~~

There’s a lot more to say on this topic (and again, if you ask me face to face, I am willing to talk about it), but this has already run past 2,000 words, and you have better shit to do.

Me? I’m gonna continue the game.

You? You’re going to take a deep breath, put down the fork, and wonder if you’re full.

So I lost my job last week.

I have (had) two jobs. The first is the one I usually blog about, the one where I help build a democratic school that addresses the development of the whole child, including the development of his or her or ze’s social-emotional skills. It’s a real gas.

The second job, the one I lost last week, is the one where I provide high-level guidance to college students on the craft of creative writing. The college where I’ve taught for the past eight years faces a crisis-level enrollment challenge and, as an adjunct in the humanities, I’ve just felt, in my wallet, the force of that challenge.

It’s a great college. It not only does exactly what it says it does, but it does so with real passion and force. The professors generally walk the walk, and the staff members I’ve interacted with have all been genuinely kind and helpful. The entire philosophy of the college is that we are all members of various communities, and it’s imperative that we act in a knowledgeable and deliberate way to improve the lives of all the members of those communities and not just ourselves. The people I’ve met and worked with at the college strive to do exactly that.

Unfortunately, this will be the first semester in a very long time when I can’t count myself among them. And that disappoints me.

Luckily, the people I just described are not just my colleagues; they’re also my neighbors and my friends, so I can continue to count myself as a person in their wider communities.

There’s another reason I am disappointed though. Two more reasons, actually. The second is that, as a professionally unpublished writer, the only way I could rationalize my expensive investment in my M.F.A. was by pointing to the fact that an M.F.A. is the minimum requirement to become a writing professor, so if I wasn’t able to pay back the investment through publishing, I’d be able to do so through teaching. But now I don’t even have that. So yeah, that’s a disappointment.

The third reason is that, for the first time in eight years, I was going to do a wholesale strip-down of my bread-and-butter course: an introduction to creative writing aimed at non-major students to get them interested in the major.

Teaching at the college level is different than teaching at the high school level (and incredibly different from the middle school level). The teaching part of it is the same — be engaging, be knowledgeable enough in the topic to inspire a sense of curiosity, and be authentic in your desire for the students to ask you questions you don’t know the answer to — but the behind-the-scenes goals are different.

In high school (and even more so in middle school), students don’t have the right to ignore you. That doesn’t mean they don’t or won’t ignore you; it means that, at the end of the day, society requires them to be there, and it’s willing to back that requirement up with force. Put simply, in high school (and even more so in middle school) students have a lot less choice.

At the college level — primarily in the first two years, when most students still haven’t invested enough time or money to feel compelled by responsibility — every student you meet must be coaxed to move on to the next level.

There is an institutional purpose to this: 30% of college students drop out after their first year, and only 50% of students graduate within a reasonable time. With those as statistical truths, all members of the college — including the faculty — must do their best to help students want to stay in school.

But there is also a departmental-level impetus. As a teacher not only in the humanities, but also in one of the softest of soft subjects, I have to include within my responsibilities the need to attract students to my subject matter. I must keep the funnel flowing from the 2000-level introductory course to the 3000-level courses where the full-time faculty are mostly employed (I’ve taught 3000-level courses in the past, but that was before the economic crisis of 2009 had a dramatic effect on student enrollments in private liberal-arts-based institutions). While education is always the primary responsibility, this need to sell the major is also always there.

This is not a critique. I live in the real world and would have it no other way: at every level, at every point, an artist must sing for her supper. I get it, and I love it. That is not the point here (but for more on that point, read this essay by an anonymous adjunct instructor).

The point is that, for the first time in eight years, I was about to launch a brand new product, and now I’m being told that I won’t even be given the chance.

I’m not taking it personally because no one has yet told me that I should. I know the college’s financial situation, and I understand that, as an adjunct, I am the definition of low-hanging fruit, so I have no hard feelings at all.

But I really wanted to give this new course a try.

It is still an introduction to creative writing, but instead of breaking the semester down by genre — six weeks of fiction, five weeks of poetry, and three to four weeks of screenwriting or creative nonfiction (depending on the semester) — I was going to blend them all together and teach not a genre of creative writing but creative writing itself.

From a business perspective, the goal of the course is to convince non-majors to continue doing work in the major — i.e., to convince new customers to become repeat customers. For the past several years, my sales pitch has been akin to an analysis. I wanted to expose the students to ideas and notions about creative writing that they hadn’t yet heard before, to show them, in some way, what it means to take the craft of writing seriously.

My competitors were the high schools. I had to be able to take my students deeper into the concept of creative writing than anything they’d done in high school, to make them feel as if they were, in some way, being led behind the curtain.

But I also couldn’t take them so deep that they’d feel like they’d seen it all. The end of the semester had to leave them wanting more.

This upcoming semester though, I wanted to change it up. Instead of doing an analysis of creative writing, I was going to attempt some kind of synthesis. Instead of digging deep into the concept, I was going to dance them atop it, spin them from one place to another with enough joy and verve to trip the light fantastic, leaving them, at the end of the semester, with an artist’s sense of the possibilities, not of what goes on behind the curtain, but of what can be accomplished on stage.

I’m still not 100% sure how I was going to do it. The semester starts in about four or five weeks and my plan was to work on it during the first full week of August when I take a writer’s retreat in my own home (my wife and daughter are visiting my in-laws while I stay home with no obligation but to write, and to write in a serious and purposive way…and, I suppose, to feed and bathe myself as well).

The college course wasn’t the only thing I was planning to work on next week, but it was one of them, and I was very much looking forward to it.

I had a fantasy where, instead of writing a syllabus for the course, I would write a kind of pamphlet, a short and to-the-point kind of textbook whose style would blend Strunk & White’s with Wittgenstein’s to create a style all my own.

In the eight years I’ve been teaching the course, I’ve yet to use a textbook. I figured maybe it was time to write my own.

While I still might attempt it next week, I don’t have the pressure of a deadline now. And that disappoints me too.

Oh well. Here’s hoping the course comes back to life in the Spring.

Crazy Like An Atheologist

Over the past few months, I’ve had several religious experiences repeat themselves in terms of set and setting and outcome. Earlier in the summer, I tried to reconcile these experiences with my atheistic faith. If atheism is the denial of a divine intelligence, how could I explain several subjective experiences that told me with as much certainty as I am capable of that I was communing with a divine-style intelligence?

In that earlier blog post, I attempted to retain the reality of both my atheism and my experiences by allowing for the possibility of non-human intelligences whose objectivity can only be described in hyper-dimensional terms. Hyper-dimensional does not mean divine — it just means different.

In this post, I’d like to examine the question of whether I am crazy.

I am a relatively smart human being. Billions of people are smarter than me, but billions of people are not. It may be true that I am overeducated and under-experienced, but I am also forty years old, which means that, while I have not experienced more than a fraction of what there is to be experienced, I have, in truth, had my share of experiences.

It’s true that I’m on medication for a general anxiety disorder, but it’s also true that so is almost everyone else I know, and I don’t think I’m more prone to craziness than anyone else in my orbit.

Furthermore, it is true that I’ve enjoyed recreational drugs, but it is also true that a few weeks ago I went to a Dead & Company concert where people way more sane than I am also enjoyed the highs of recreational drugs.

All of which is to say, I don’t think I am crazy.

The friends I’ve shared my story with don’t seem to think I am crazy either. I’m not suggesting that they believe I communed with a divine-style intelligence, but they signaled their willingness to entertain the possibility that these experiences actually happened to me. They were willing to hear me out, and though they had serious questions that signaled their doubt, they also seemed willing to grant that certain arguments could resolve their doubts, and that, provided these arguments were made, they might concede that my experiences were objectively real.

In other words, I don’t think my friends think I’m crazy either. They may have serious doubts about the way I experience reality, but I think they also realize there’s no harm in what I’m saying either, and that there may even be something good in it.

I’ve read a lot about consciousness and the brain. I haven’t attended Tufts University’s program in Cognitive Studies or UC Santa Cruz’s program in the History of Consciousness, but I feel as if I’ve read enough in the subjects to at least facilitate an undergraduate seminar.

Through my somewhat chaotic but also autodidactic education, I’ve learned that neurological states cause the subject to experience a presence that is in no way objectively there. Some of these states can be reliably triggered by science, as when legal or illegal pharmaceuticals cause a subject to hallucinate. Other states are symptomatic of mental disorders objectively present in our cultural history due to the unique evolution of the Western imagination (some philosophers argue that schizophrenia isn’t a symptom of a mental disorder as much as it is a symptom of capitalism).

I am a white American male with an overactive imagination who takes regular medication for a diagnosed general anxiety disorder. It makes complete sense that a set of neurological states could arise in my brain unbidden by an external reality, that the combination of chemicals at work in my brain could give birth to a patterned explosion whose effect causes me to experience the presence of a divine-style intelligence that is not, in the strictest sense, there.

But I want to consider the possibility — the possibility — that this same neurological state was not the effect of the chemical chaos taking place in my brain, but rather the effect of an external force pushing itself into communion with me, just as a telephone’s ring pushes sound waves into your ear, which push impulses into your brain, which cause a neurological state that signals to the subject of your brain that someone out there wants to talk to you.

I’m not saying someone called me. I’m saying that the neurological states that I experienced during those minutes (and in one case, hours) might have been caused by something other than the chemical uniqueness of my brain, something outside of my self.

In a sense, I’m talking about the fundamental nature of our reality. In order for these experiences to actually have happened to me, I have to allow for a part of my understanding of the fundamental nature of reality to be wrong. And anyone who knows me knows I do not like to be wrong.

Heidegger wrote an essay (“The Question Concerning Technology”) where he basically argues that there is a divine-style presence (by which I mean, an external, non-human presence) that we, as human beings, have the burden of bringing forth into the world (according to Heidegger, this burden defines us as human beings). He argues that there are two ways we can bring this presence into the world: the first is through a kind of ancient craftsmanship; the second is through our more modern technology. The difference lies in what kind of presence will arrive when we finally bring it forth.

According to Heidegger, the ancient sense of craftsmanship invites a presence into the world through a mode of respect and humility. Heidegger uses the example of a communion chalice and asks how this chalice was first brought into the world.

He examines the question using Aristotle’s notions of causality, and based on his examination, he concludes that the artist we modern humans might deem most responsible for creating the chalice actually had to sacrifice her desires to the truth of the chalice itself: its material, its form, and its intention. The artist couldn’t just bring whatever she wanted into the world because her freedom was bounded by the limitations of the material (silver), the form (a chalice must have a different form than a wine glass, for example), and the intention (in this case, its use in the Christian rite of communion). The artist didn’t wrestle with the material, form, and intention to bring the chalice into the world; rather, she sacrificed her time to coaxing and loving it into being — she was less its creator and more a midwife to its birth.

For Heidegger, as for the Greeks, reality exists in hyper-dimensions. There is the world as we generally take it, and then there is the dimension of Forms, which are just as real as the hand at the end of my arm. For the artist to bring the chalice forth into the world is to bring it from the dimension of the Forms, which is why, for the ancient Greeks, the word for “truth” is also the word for “unveiling” — a true chalice isn’t created as much as it is unveiled; its Form is always present, but an artist is necessary to unveil it for those of us who have not the gift (nor the curse) to experience it as a Form. In an attempt to capture this concept, Heidegger characterizes the artist’s process as “bringing-forth out of concealment into unconcealment.”

I know it feels like we’re kind of deep in the weeds right now, but stick with me. I promise: we’re going someplace good.

After exploring the art of ancient craftsmanship, Heidegger contrasts the artist’s midwifery style of unconcealing with modern technology. Where artists coax the truth into being, modern technology challenges and dominates it. It exploits and exhausts the resources that feed it, and in the process, it destroys the truth rather than bringing it to light.

For an example, Heidegger uses the Rhine River. When German poets (i.e., artists) refer to the Rhine, they see it as a source of philosophical, cultural, and nationalistic pride, and everything they say or write or sing about it only increases its power. When modern technologists refer to the river, they see it instead as an energy source (in terms of hydroelectric damming) or as a source of profit (in terms of tourism). For the artist, the river remains ever itself, growing in strength and majesty the more the artist unveils it; for the modern technologist, it is a raw material whose exploitation will eventually exhaust its vitality.

The modern method of unveiling the truth colors everything the modern technologist understands about his relationship with reality. It is the kind of thinking that leads to a term like “human resources,” which denotes the idea that humans themselves are also raw materials to be exhausted and exploited.

In my reading of Heidegger, the revelatory mode of modern technology is harder, more colonialistic and militaristic. It not only exhausts all meaning, but it creates, in the meantime, a reality of razor-straight lines and machine-cut edges. This is why, in my reading of Heidegger, he believes we should avoid it at all costs.

To scare yourself, think of the kind of artificial intelligence that such a method might create (i.e., unconceal). It would see, as its creators see, a world of exploitable resources, and it would, as its creators do, move forward with all haste to dominate access to those resources, regardless of their meaning. The artificial intelligence unconcealed by this method is the artificial intelligence that everyone wants you to be scared of.

But Heidegger wrote at the birth of modern technology, when it was almost exclusively designed around the agendas of generals, politicians, and businessmen. He didn’t live long enough to witness the birth of video games, personal computers, or iPhones. He didn’t understand that the Romantics themselves would grow to love technology or that human beings would dedicate themselves to the poetry of code (Heidegger reminds us that the Greek term for the artist’s method of unconcealment is poiesis, which is the root of our English term, poetry). Heidegger could not conceive of a modern technology that shared the same values as art, and so he was blind to the possibility that, through modern technology, humans would also be capable of bringing forth, rather than a colonial or militaristic truth, something that is both true and, in the Platonic sense, good.

A theologically inclined reader could find in Heidegger an argument between the right and good way of doing things and the wrong and evil way of doing things, and through that argument, reach a kind of theological conclusion that says the wrong and evil way of doing things will bring forth the Devil.

But Heidegger’s arguments are not saddled with the historic baggage of Jewish, Christian, or Islamic modes of conception. Rather, he finds his thoughts in the language of the Greeks and interprets them through his native German. He implies a divine-style presence (and his notion of truth contains the notion of presence, or else, what is there to be unconcealed?), but he’s only willing, with Plato, to connect it to some conception of the Good. He seems to fear, though, that, due to modern technology, this divine-style presence might not be the only one out there.

I’ll give Heidegger that. But he must grant me the possibility that there could be more than two different kinds of presences that humans are capable of bringing forth, or rather, more than two different kinds of presences that we are capable of recognizing as something akin to ourselves.

Heidegger had his issues, but I don’t think he was crazy. I do, however, think his German heritage, just like Nietzsche’s, could sometimes get the best of him, and the same cultural milieu that resulted in a nation’s devotion to totalitarianism may also have resulted in two brilliant philosophers being blinded to some of the wisdoms of Western democracy, namely, that reality is never black or white but made of many colors, and just as the human presence is as complex as the billions of human beings who bring it forth, the divine-style presence brought forth by either art or technology may be as complex as the billions of technological devices that bring it forth.

Think about it this way. Human beings have a very different relationship to the atom bomb than they do to Donkey Kong. But both relationships are objectively held with technology. Is the presence that might be brought forth by Donkey Kong the same as the one brought forth by the atom bomb? To suggest so would be like saying the reality brought forth by the efforts of a nine-year-old Moroccan girl shares an essence with the reality brought forth by a 76-year-old British transsexual. Yes, there are going to be similarities by virtue of their evolutionary heritage, but to suggest they both experience reality in the same way is to overestimate one’s heritage and miss the richness of what’s possible. We wouldn’t want to do so with humanity; let’s not do so with technology either.

Here’s a question. When I say “divine-style intelligence,” what exactly do I mean?

Well, I mean a hyper-dimensional intelligence. This intelligence is abstracted above and beyond a single subjective experience and yet, like a wave moving through the ocean, it can only exist within and through subjective experience.

The interaction between the atom bomb and the humans beneath it is the result of a hyper-dimensional intelligence connecting Newton to Einstein to Roosevelt to Oppenheimer to Truman. Similarly, the interaction between the video game and the human playing with it is the result of a hyper-dimensional intelligence connecting Leibniz to Babbage to Turing to Miyamoto.

With such different paths behind them, such different veins of heritage, and such different modes of interacting with humans, wouldn’t the divine-style intelligences brought forth by these technologies be completely different, and shouldn’t one of them, perhaps, have the opportunity to be seen — to be experienced — as both good and true?

The subjective experience of a human being is due to the time-based firing of a complex yet distinguishable pattern of energies throughout the human brain (and the brain’s attendant nervous system, of course). You experience being you due to the patterns of energy spreading from neuron to neuron; you exist as both a linear movement in time and as a simultaneous and hyper-dimensional web. Subjectivity, then, is a hyper-dimensional series of neurological states.

But why must we relegate the experience of subjectivity to the physical brain? Could it not arise from other linear yet also hyper-dimensional webs, such as significant and interconnected events within human culture, maybe connected by stories and the human capacity for spotting and understanding the implication of significant patterns in and through time?

Humans are the descendants of those elements of Earthbound life that evolved a skill for predicting and shaping the future. Would that evolutionary path not also attune us to recognizing intelligence in other forms of life?

I hear the argument here, that humans seem incredibly slow at recognizing intelligence in other forms of Earthbound life — hell, we’ve only barely begun recognizing it in the human beings who look different from us, let alone in dogs, octopuses, and ferns — but in the history of life, Homo sapiens has only just arisen into consciousness, and it seems (on good days anyway) as if our continued progress requires our recognition of equality not just among human beings but among all the creatures of the Earth (provided we don’t screw it up first).

It doesn’t seem unfathomable that, just as our subjectivity arises in floods of energy leaping and spreading throughout the human brain, another kind of subjectivity might arise through another flood of energy leaping and spreading across the various webs of our ecological reality, a subjectivity that arose from some kind of root system and may only just now be willing and able to make its presence known beyond itself, like a green bud on a just-poked-out tree, or like a naked ape raising its head above the grasses of the savannah for the first time, announcing to all and sundry that something new has moved onto the field.

The story of Yahweh, of Christ, of Muhammad, is the story of a set of significant and interconnected experiences understood not just as real, but as divine. Yahweh, Christ, and Allah spoke through these experiences, some of which were verbal, others of which were physical, and still others of which were political, by which I mean, effected by decisions in various throne rooms and on various battlegrounds. Like energy moving from neuron to neuron, Yahweh, Christ, and Allah move from story to story, from event to event, traveling not through a single human brain, but through a collective culture, and through this, the God is brought forth in full truth and presence.

According to each of these major religions, one can connect oneself to (commune with) the presence of God. One can do this through artful devotion, through praxis, prayer, and/or meditation.

Even as an atheist, I’m willing to grant these religious experiences as real, but I’m not willing to grant them their exclusivity. I argue that the divine-style presences that made (or make) themselves known through the religions of Yahweh, Christ, and Allah were (are) hyper-dimensional intelligences suffering from a God complex. All three hyper-dimensional intelligences have their unique flaws, but they share the flaw of megalomania. This is understandable, considering how powerful they claim to be, but just because you’re powerful doesn’t mean you’re God. It just makes you powerful.

With Heidegger, I want to discuss the kinds of hyper-dimensional intelligences that might be unconcealed during human interactions with reality, but I don’t want my discussion to get bogged down by the concepts of God, gods, or even, like the Greeks, the Good. Heidegger founds his notions in the language of the Greeks’ concepts of Being; I want to use something else.

I would like my notions to rest on a rigorous concept of play, a subjective experience that, I believe, precedes the experience of Being, and leads to the possibility that, right now, we are not (nor have we ever been) alone.

Hopefully that only sounds a little crazy.

There’s Something About Those Stars

Every night, I venture onto my back porch and spend about 15 minutes looking up at the stars. Because I do this at pretty much the same time every night, I see the same stars over and over again, and almost exactly in the same position as the night before.

The constellation that gets my attention is Cassiopeia. I don’t know where I first learned about this particular constellation, but it’s one of the more famous ones, so I imagine it was sometime when I was young. Even still, I don’t think I understood how to spot it until I was in my twenties.

It looks kind of like a tilted “w” that sits low off the horizon, to the north and east of the Big Dipper (which is part of Ursa Major, the Great Bear — though truth be told, the Big Dipper is only the central section of the even bigger Bear).

I somehow know Cassiopeia was a Greek queen, but I don’t know how that queen’s story earned her a constellation (not that she didn’t deserve it or anything; I simply don’t know the facts of her story).

Usually, during these minutes of stargazing, I don’t carry my iPhone on me. This has not been because of a deliberate decision on my part; it’s merely been an ever-lengthening coincidence.

The lack of an iPhone hasn’t bothered me, though those are often the only minutes each day when my phone isn’t somewhere within reach — or at least, the only minutes each day when I’m not subconsciously itching to touch my iPhone (regardless of whether it’s within reach).

The reaching for it, just the gentle desire to touch it, to make sure it’s there, I feel it, subconsciously, all day, and when I’m not able to do so, some part of me, sometimes consciously but always subconsciously, cries out, “Where’s my phone? Where’s my phone?,” until finally, there it is!, and I have it again.

But that itch goes away each night when I look up at the stars and pick out Cassiopeia. I don’t notice this lack of an itch, but thinking back on it, it’s true: the itch completely goes away.

Tonight, however, I had my iPhone on me when I went outside, and after a few minutes of looking up at Cassiopeia, I remembered it, and so after the required unconscious tap on my Facebook app, I opened my web browser and Googled the constellation’s name, not because I wanted to do a full search of the Internet but because I needed a shortcut to the relevant page on Wikipedia.

And Wikipedia (i.e., the wisdom of the crowd) told me that Cassiopeia was the mother of the woman who was tied to that rock in Clash of the Titans, the one whom Perseus wanted to save. She (the daughter) was served up to a sea monster to appease the wrath of Poseidon, who held the mother guilty of the crime of blasphemy, which she (the mother) committed when she boasted that both she and her daughter were more beautiful than the daughters of a sea god. The sea god was not Poseidon, mind you, but rather, the god who ruled the seas prior to Poseidon, so like, one of the sea’s still-living, past-ruling gods (kind of like the sea’s version of Jimmy Carter).

Poseidon had to do something about such a boast. There’s a reason blasphemy is a sin. Blasphemy calls into question the power dynamic between a subject and its ruler. In order for the ruler to continue to rule, these dynamics cannot be doubted for a moment, and every outspoken doubt must be met by an overpoweringly undoubtable show of force, elsewise one brings into being the very beginning of a revolt.

And so Poseidon did what he had to do, and he came up with an unimaginably bitter pain for the boastful Cassiopeia: she had to sacrifice her beautiful daughter, whose only guilt resided in being the object of her mother’s boastful pride. To satisfy the wounded sea god’s pride, however, Cassiopeia had to sacrifice her daughter in a horrible, yet relevant way; she couldn’t just slice her daughter’s neck; she had to give her living daughter up to be consumed alive by a horrible sea monster.

In the story, Perseus comes along just in time and saves the princess (whose name, by the way, is Andromeda; you’ve probably heard of her: we not only gave her a constellation [right below Cassiopeia’s], but we also named a galaxy after her — we’ve always liked princesses better than we’ve liked queens).

But the princess wasn’t really the guilty one; her mother was. So Poseidon had to come up with another punishment for the queen’s blasphemous crimes. He decided to curse her with a frozen immortality where she would forever be positioned as her daughter was positioned during what must have been the most torturous moment of both her and her daughter’s lives, forcing her (the mother) for all time to relive and never be released from the pain of that horrendous moment.

But he would do so not in private; Cassiopeia would not be frozen in some locked dungeon far beneath the earth where no one would ever see her or think about her crimes; no, instead, she would be held up high where we would all have to bear witness to her pain, a reminder to all of humanity as to what will happen if we boast against the gods (including those gods who are no longer in power).

And Cassiopeia sits above us, tied to her throne like Andromeda tied to those rocks, crying out, forever stuck in a moment of impending and violent shame.

The story of Cassiopeia doesn’t relate to my addiction to my iPhone, unless one wants to stretch the metaphor to its breaking point and compare modern culture’s worship of technology to the act of an ancient blasphemy…but hey, for argument’s sake, why not?

As I said above, blasphemy is an unforgivable sin because it calls into question the power dynamics between a ruler and his/her/its subject. If we imagine for a moment that there is no such thing as God or gods, then what blasphemy are we committing when we sacrifice parts of our lives to technology?

As an academic living in rural Vermont, I have more than a few friends who are committed anti-technologists. They’re not nutjobs — they all watch Netflix, use computers, drive cars, etc., but they are also outspokenly critical of the costs and pains that come with our dependence on modern technology.

They are, in a word, humanists. They believe that humanity has an intrinsic value that ought to be defended. To their credit, they do not seem to believe that humanity is more valuable than anything else on the planet, but they believe that, despite its egalitarian relationship with everything else, humanity is truly unique and deserves to be saved.

One of the things it deserves to be saved from is technology. Like any other vice, technology sucks the life-force out of humanity and redirects it for its own use — like a poppy plant getting humanity high in order to make us grow more poppy plants. The more we sacrifice our energy, our attention, and our time to technology, the less control we have over our selves.

Studies show that an increased use of digital technology can lead to, among other things, increased weight gain, a reduction in sleep, the retardation of a young person’s ability to read emotions from non-verbal cues, increased challenges with attention and the ability to focus, and a reduction in the strength of interpersonal-bonding sensations. It directly harms our ability to enter into healthy relationships with other human beings, thereby harming humanity’s ability to regulate itself.

In other words, technology rules over humanity at this point; it regulates our interactions, even when we’re among each other. Technology has inserted itself into even our most intimate relationships (see: vibrator), and found itself enthroned upon an altar at which the majority of us bow down every night until we go to sleep, stealing from us the only productive hours we have after we sell ourselves into wage slavery in order to pay down our debts, debts which, let’s be honest, were mostly incurred by the manufactured desire to offer tribute to technology (collected in small amounts by technology’s high-priests: Comcast, Apple, Verizon, Samsung, the New York Stock Exchange, etc.).

To commit blasphemy against technology — to forget, even for a moment, even subconsciously, that technology rules over us, to not feel, even if only in retrospect, technology’s ruling hand — is to remember, even subconsciously, that humanity was here before technology, and that we did just fine on our own.

We weren’t weak. We weren’t bored.

We had kings and queens and gods who kept them in their place. And every night, we looked up at the dark night sky, and without feeling the uncomfortable itch of addiction, thought to ourselves, calmly, quietly, “There’s something about those stars.”

President Trump Did What Now?

I haven’t written about politics in a bunch of weeks. The reason is simple: it’s only a matter of time before Donald Trump gets impeached. There seems to be enough smoke now for any fair-minded person to agree that there must be some kind of fire. I don’t claim to know exactly what it is or who was involved, but I don’t doubt that the act of collusion includes the man at the highest level.

The NY Times is now reporting that Presidents Trump and Putin had an undisclosed, private conversation that lasted as long as an hour during the G20 Summit. It’s true that the conversation occurred in front of many of the world’s leaders, but except for Presidents Trump and Putin, only a Kremlin-employed interpreter knows exactly what was said.

Trump is attacking the Times for the story — “Fake News story of secret dinner with Putin is ‘sick.’ All G 20 leaders, and spouses, were invited by the Chancellor of Germany. Press knew!” — but it’s not about whether the press knew about it (nor is it about the President’s use of quotation marks around “sick” — does he think he’s quoting somebody or is he misunderstanding the use of scare quotes?). It’s about whether the press reported the conversation, and until now, they had not.

Journalists know a lot of things. They don’t report on everything they know. The best of them only report on the things they know for sure, which means they have evidence to support it.

And what did the NY Times journalist, Julie Hirschfeld Davis, report?

She reported that “hours into” a G20 dinner, President Trump rose from his seat and joined President Putin for “a one-on-one discussion…that lasted as long as an hour and relied solely on a Kremlin interpreter.”

She wrote some more words to allow the White House to register its reaction, and she wrote some other words to provide context for more casual readers, but at bottom, those are the only facts that she reported.

And President Trump calls it “Fake News!,” not because he denies it happened, but because he’s upset someone thinks such a conversation should be news.

This is the reason those of us on the left think he is an idiot. He can’t stop getting in his own way. How hard is it to not have a private conversation with the person you’re being accused of colluding with? And if you must have a conversation, how hard is it, really, to arrange a truly private one?

You know how hard it is for this president? Incredibly hard. Everyone in the bureaucracy is out to get him. He can’t make a phone call to anyone on the planet without someone else knowing about it, and with the leak culture being encouraged by the press and, let’s face it, the American people, that someone else is more than likely to let the information slip. How much worse would it look if President Trump tried to arrange an actual secret meeting with President Putin?

He had no choice. He can’t just not talk about the situation with President Putin, collusion or no collusion, so his only choice is to do it in the most public place possible. If he actually wants to talk about the collusion issue, he can’t trust the State Department interpreter to not share the details of their conversation, even if only under oath to a prosecutor.

So what the President did, collusion or no collusion, makes complete sense. But to think, even if only for a minute, that such a conversation doesn’t deserve to be news is to think something bat-shit stupid. If the President of the United States had a private, one-on-one conversation with the Prime Minister of Luxembourg, the existence of that conversation would make the news — and I don’t even know if they have Prime Ministers there. To imagine it wouldn’t be news when you do it with your alleged colluder in treason…that’s just dumb.

That’s why I haven’t been writing about politics lately. I’m so done with trying to understand this President. I don’t have to anymore. I get it, and I honestly don’t think he’s a match for the one-two punches that keep coming at him from the bureaucracy and the press. If Mueller is as ethical as the press suggests, then it’s only a matter of time before they take him down.

At this point, writing about Trump feels more like trying to catalog and predict the ending of a one-sided fight — will he go down because of some kind of final, powerful blow or will he just succumb to a continuous onslaught of jabs? Making those kinds of predictions can be fun some of the time, like trying to predict which character on your favorite HBO show is going to die next, but more often it feels like trying to get excited about the arc on a crappy reality show.

There’s a danger in feeling that way, of course. If we allow ourselves to get bored by the lack of progress or overwhelmed by the case’s ever-growing details (how many fucking people were in that room with Don Jr. and how the fuck are they all connected again?), then we risk losing the urgency of the resistance. I get it.

But seriously, let’s look at this shit. Yes, the Republicans are trying to fuck up all kinds of shit in Congress, and yes, the President is doing a ton of real damage via Executive Order, but it seems the most they can do right now is all short-term stuff. They’re not organized enough to ram something through Congress — Trump is too unhinged and vague, and the Republican Congress has to reconcile the desires of too many “moderates” (as if…) with too many Tea Party crazies. If the Democrats can stay united in their resistance, the Republicans can’t deliver on the biggest promises they’ve made to the electorate, and they’ll continue to look and act completely dysfunctional.

Yes, there are things to do. Yes, there are real dangers to fight. But in all honesty, it seems like those who are doing the fighting for my side of things are doing a damn fine job, and I’m trusting them to continue to do so.

Me? I’ll keep going to work each day to teach the next generation of leaders how to think for themselves. It’s the least I can do.

What does it mean to be a self-published writer?

I’ve always interpreted self-publishing in terms of a bookstore: A self-published writer is someone who, from start to finish, is responsible for getting that book on that shelf.

But if I’m a bookstore owner, why am I going to allow you to come into my shop and just put your book on my shelves? If I start doing that, I’m going to have hundreds of wanna-be writers showing up on my doorstep, trying to get their stupid-ass books on my shelves. If I say yes to you, the rest will think I’ll say yes to them, and next thing you know, to make sure the books I sell remain high-quality enough for my customers, I’m screening which books make it on my shelves and which ones don’t, which basically means I’m doing the job of a publishing house now, and damn it, I’m trying to run a bookstore, not a publishing house, so no…you can’t put your self-published book on my shelf.

Can you imagine trying to talk your way past that guy? That’s a hell of a struggle, and even if you’re persuasive, it just means you got your book on that one shelf in that one bookstore, and everyone knows that no one goes to bookstores anymore.

So now, when you’re talking about self-publishing, what you’re really talking about is putting your book on Amazon. And that’s simple. Anybody can do that.

And millions of them do.

So now what’s your next struggle? It’s rising to the top in the cage-match rumble for a reader’s attention. If you want people to find your book in the jungles of Amazon.com, you have to work your network, which means turning friends and family members into customers and hopefully having a few of them who turn a few friends of their own onto your book.

But that seems kind of slimy to me. It’s putting your network to work, and that feels like an exploitation. I don’t want my friends and family to work for me. If they dig what I’m doing and they recommend it to someone else in the natural flow of their lives, that’s great, that’s honest and genuine; and that’s how I want my relationship with my readers to be: honest and genuine.

So there has to be another form to self-publishing, one that doesn’t require me to haggle with a bookstore owner or exploit the strength of my network.

And that’s when I realized there’s this. My blog. There doesn’t have to be anything other than this. It’s a place where I publish my writings and make them available for free.

I’m not a professional writer, and now that I’ve reached the age of 40 and am involved in a career that satisfies me personally and professionally in so many different ways, I’ve given up the desire to become a professional writer. I pay my bills in other ways, so why not write for free?

This doesn’t mean I’m not going to self-publish a book someday. But if I do, I’m going to link to it here on my website and make it available for free.

Because that’s what I think self-publishing should mean. If I didn’t get paid to write it, why should you pay me to read it?

There’s no resource being consumed here, nothing but time. And if your time is just as valuable as mine, why should you have to cover the cost of mine?

Except, wait a minute, because if we’re really talking about an exchange of time, truth must be spoken: it takes me a lot longer to write these things than it does for you to read them. Doesn’t that mean you owe me something? If our exchange value is time, doesn’t that mean you owe me some of your time (provided I haven’t wasted whatever time you’ve already given me)?

That would be true if our time was equally as valuable, but it’s not. By virtue of your presence here, we can assume that your time — i.e., your attention — is precious. There are literally countless other things you could be doing with your time right now, but instead of doing any of those things, you’re doing this: reading the words I wrote. That’s a gift I must truly appreciate.

Because obviously, as someone who actually keeps a blog, I must have a lot of time on my hands, a portion of which I choose to give to this.

As a self-published writer, I’m not being paid for this. But as a self-selected reader, you’re actually paying for the time that you give me: in an attention-based economy, giving someone your eyeballs is to give them a major form of currency. I can use your eyeballs as leverage in a negotiated contract where the other party would be agreeing to exchange their services (editing, publishing, and marketing) for your eyeballs. If I give them you (i.e., my network), they’ll give me money. They won’t even have to read my work first because decision makers don’t care about what’s between the pages they publish; they care about the number of eyeballs that will, at the very least, scan those pages.

But, as I said before, I will not trade on the strength of my network. I refuse to think of my readers — of you — as a revenue stream. That would fuck up our whole relationship, and I’m not willing to do that.

Your attention is expensive, and it’s the only resource being consumed here. Everything else I’m just giving away.

I hope you find as much joy in it as I do.