Category Archives: religion & atheism

On “The Language of God” (Part I)

In The Language of God: A Scientist Presents Evidence for Belief, Francis Collins, the leader of the international Human Genome Project, recounts his journey from agnostic to atheist to Christian (thanks to the writings of C.S. Lewis), and then argues in favor of (on the one hand) belief in God and (on the other hand) trust in science.

In this post, I’d like to explore Collins’ evidence for belief, and then, in a later post, respond to his caricature of atheism.

The Moral Law

His evidence rests on what he calls, after Lewis, the Moral Law, which stands for “a concept of right and wrong [that] appears to be universal among all members of the human species (though its application may result in wildly different outcomes).” The Moral Law does not signify a list of rules similar to the Ten Commandments; rather, it signifies the phenomenon of morality, the internal awareness of there being, in fact, a right and wrong way to proceed (regardless of our ability to explicitly discern the two).

The Moral Law is the standard, the higher authority, by which we judge our behaviors, “and its existence,” writes Collins, “seems unquestioned.” Even when we disagree as to whether one action or another better corresponds to that standard, we rarely deny the existence of the standard. The Moral Law is what gives us universal concepts of fairness, kindness, honesty, impartiality, etc. Again, we may disagree as to what actions or behaviors are fair or kind or honest, but we all agree that such concepts are real.

In an attempt to pre-empt the “postmodern” criticism that all ethics are relative and that there is no absolute right or wrong, Collins throws postmodernism back in its face: “If there is no absolute truth, can postmodernism itself be true? Indeed, if there is no right or wrong, then there is no reason to argue for the discipline of ethics in the first place.”

A Postmodern Interruption

Let me interrupt my explication of his argument to say that Collins’ understanding of postmodernism seems, at best, juvenile. Since he already admitted to finding “the actual sacred texts” of the world’s religions to be “too difficult” (requiring him to explore the various religions via “the CliffsNotes versions”), I don’t think it’s unfair to assume that he has also not read or understood the “sacred texts” of postmodernist philosophy (which, admittedly, are often more opaque than the sacred texts of the various religions).

If Collins had read them, he might have learned that the recursive argument against postmodernism (if there’s no truth, then how is postmodernism true?) begins with a false premise. Postmodern philosophy does not argue that there is no such thing as truth. What it argues is that your truth differs from my truth and that both of them must differ, by virtue of our subjectivity, from the absolute truth; and thanks to the way our language is constructed (including mathematics), we’ll never be able to access the absolute truth.

Postmodernism is a critique of the unstated assumptions that arose during the Enlightenment; it is not a constructive philosophy in its own right. It does not construct a logic that reveals the absolute truth; instead, it deconstructs your logic to reveal your unstated assumptions that will always already remain in play. It does not argue for its truth; it argues against your truth.

What Collins fails to grasp is the difference between destruction and deconstruction. He believes that postmodernism seeks to destroy the concept of the truth, but the reality is that postmodernism seeks to deconstruct the concept, not destroy it.

The process of deconstruction allows a postmodern critic to reveal the hidden assumptions that you’ve used to construct your argument, and more often than not, those assumptions originate in a subjective (and unargued) standpoint founded on a set of historic personal and/or cultural biases.

In other words, deconstruction (if done well) reveals the unsupported ground that your rational argument is based on, and it often (when done well) leads its listeners and readers into a feeling of intellectual vertigo.

Using the process of deconstruction, postmodernism doesn’t assert that there is no ground truth to our universe; it only demonstrates that your argument, despite your claims, does not rest on it.

With that being said, how might a postmodernist (this postmodernist) critique the concept of the Moral Law (as explained by Collins)?

The obvious answer might start with Collins’ assertion that the Moral Law is universal, but its supporting evidence (examining the diversity of moral codes across time and cultures) would take us in the wrong direction, since the argument in favor of the Moral Law is not about a prescription for behavior X over behavior Y, but rather, humanity’s universal sense of morality, the intuition that there is, irrespective of its cultural formulation, a right and wrong way to behave.

The postmodernist, then, should start the critique not with the cultural relativity of morality, but with the bodily relativity of it; that is, by demonstrating that the Moral Law is the product of evolutionary pressures on the development of the human species.

If the Moral Law depends upon these evolutionary pressures, then morality becomes nothing more (and nothing less) than a useful tool for genetic reproduction in the various environments that have been present during a small planet’s orbit of a minor star in a particular galaxy somewhere in the far reaches of the universe, a fact that would hardly support the Moral Law’s claim to universality.

The Inability of Morality to Evolve

But after discounting the postmodern critique using a (false) argument of recursion, Collins also attempts to cut off the evolutionary tack. He realizes that, “If this argument could be shown to hold up, the interpretation of many of the requirements of the Moral Law as a signpost to God would potentially be in trouble.”

He rests his argument on the existence of altruism, “the voice of conscience calling us to help others even if nothing is received in return…the truly selfless giving of oneself to others with absolutely no secondary motives.” The love that altruism demonstrates is what Christians call “agape,” which differs from the love of affection, friendship, and romance.

Agape, Collins writes, “presents a major challenge to the evolutionist” — and remember, Collins is the dude who led the Human Genome Project, so he is a firm believer in evolution. He continues, “It cannot be accounted for by the drive of individual selfish genes to perpetuate themselves. Quite the contrary: it may lead humans to make sacrifices that lead to great personal suffering, injury, or death, without evidence of benefit.”

He then takes on a few of the evolutionary responses to agape, such as the notion that altruism is recognized as a positive attribute in a potential mate, i.e., we want mates who are nicer, rather than meaner, so if we act nicer, we have a better chance of finding a mate with whom we can reproduce. Against this argument, Collins sets the range of cruel behaviors that non-human primates use to reproduce, “such as the practice of infanticide by a newly dominant male monkey, in order to clear way for his own future off-spring” (there can hardly be a bigger turn-off than murdering your potential mate’s previous children).

He then argues against the idea that agape leads to advantages over time (i.e., if you act nice now, without any clear benefit, chances are that you will be rewarded in the future — we can call this the “karmic” argument), but to this, Collins asks how it explains those “small acts of conscience that no one else knows about.”

Finally, he argues against the idea that altruistic practices by an individual benefit the group, and thus, aid in the continued evolution of the group’s related genes, if not the exact genes residing in the individual. The example here is the sterile worker-ants who “toil incessantly to create an environment where their mothers can have more children.” Collins responds to this argument by saying, first, “evolutionists now agree almost universally that selection operates on the individual, not the population,” and second, that “group-aided altruism” cannot account for those instances when we practice altruism outside of our group: “Shockingly,” Collins writes, “the Moral Law will ask me to save the drowning man even if he is an enemy.”

How does the unbelieving evolutionist respond to these arguments, which, again, are made by an individual whom, by virtue of his role in the Human Genome Project, we have to assume is among the world’s leading thinkers when it comes to evolution?

The Metaphorical Basis of Morality

One response might find its path through the cognitive-science-based philosophy of George Lakoff and Mark Johnson, which holds that, “the mind is inherently embodied; thought is mostly unconscious; [and] abstract thoughts are largely metaphorical.”

The basic argument of their book is that “we understand our experience via conceptual metaphors, we reason according to their metaphorical logic, and we make judgements on the basis of the metaphors.” The metaphors arise from the ways our physical bodies exist in the world, and thus they are dependent not upon any absolute truths, but upon the historical development of humanity.

Lakoff and Johnson see their philosophy as charting a middle path between rationalism and postmodernism. Our understanding of the world cannot be absolute (as extreme rationalists might like it), but neither is it arbitrary and unconstrained (as extreme postmodernists might assert). Lakoff and Johnson argue for a philosophy that is grounded and situated in who we are and where we come from.

In one chapter of their book, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, Lakoff and Johnson argue that the metaphors that govern our morality “are typically based on what people over history and across cultures have seen as contributing to their well-being.”

For example, it is better to be healthy rather than sick. It is better if the food you eat, the water you drink, and the air you breathe are pure rather than contaminated. It is better to be strong rather than weak. It is better to be in control rather than out of control or dominated by others. People seek freedom rather than slavery…People would rather be socially connected, protected, cared about, and nurtured than be isolated, vulnerable, ignored, or neglected. [etc.]

Lakoff and Johnson then go on to show how these notions of our physical well-being become metaphors for our moral well-being:

Morality is fundamentally seen as the enhancing of well-being, especially of others. For this reason, these basic folk theories of what constitutes fundamental well-being form the grounding for systems of moral metaphors around the world. For example…, since it is better to be healthy than to be sick, it is not surprising to find immorality conceptualized as a disease. Immoral behavior is often seen as a contagion that can spread out of control.

They continue:

When we began to analyze the metaphoric structure of these ethical concepts, again and again the source domains were based on this simple list of elementary aspects of human well-being — health, wealth, strength, balance, protection, nurturance, and so on.

So what does this all mean for how agape might have evolved? How does our discovery that the world’s moral systems are fundamentally based on the well-being of our physical bodies discount the notion of a divinely inspired Moral Law?

It has to do with Lakoff and Johnson’s finding that “we all conceptualize well-being as wealth.”

We understand an increase in well-being as a gain and a decrease in well-being as a loss or cost. [This] is the basis for a massive metaphor system by which we understand our moral actions, obligations, and responsibilities…in terms of financial transaction…. Increasing others’ well-being gives you a moral credit; doing them harm creates a moral debt to them; that is, you owe them an increase in their well-being-as-wealth.

In this system, altruism is explained as an action that “builds up moral credit.” Any good action one person takes on behalf of another puts the other person in moral debt to the do-gooder; in altruism, the do-gooder cancels the debt, but they “nonetheless build up moral credit.”

Altruism, then, is how one grows wealthier at the expense of no one and nothing, and since our minds understand “wealth” as contributing to our own well-being, increasing our moral wealth increases our sense of well-being.

According to this argument, the evolutionary pressure that gives rise to altruism is the same evolutionary pressure that gives rise to our universal desire to increase our wealth: the understanding that an increase in wealth equals an increase in our well-being.

How does this explain, Collins might argue, an example of a man sacrificing himself (and his genes) in order to save a drowning enemy, since such an action does irreparable harm to one’s well-being?

Lakoff and Johnson argue that all of morality is ultimately based on some conception of the family and of family morality, and that this in turn is based on another metaphor “in which we understand all of humanity as part of one huge family…This metaphor entails a moral obligation, binding on all people, to treat each other as we ought to treat our family members.” If Lakoff and Johnson are right, then our embodied mind sees the enemy drowning in the river as our brother.

By revealing that morality is ultimately based on the metaphor of “The Family of Man,” Lakoff and Johnson account for instances of altruism that go beyond our group. The reality is that, to our embodied mind, all of humanity belongs to our group.

Of course, we still haven’t explained why we’d leap into the river in the first place: if altruism is understood as an increase in moral wealth that does not necessitate an increase in another’s moral debt, how would we evolve the notion of sacrificing our lives — and thus the totality of our wealth — for another person?

The answer lies in cognitive science’s discovery that “thought is largely unconscious.” The “selfish gene” conception of evolution argues that genes act in their own self-interest. Under the selfish gene model, altruism seems untenable because, obviously, altruism is defined as acting without regard for (and sometimes against) one’s self-interest.

But Lakoff and Johnson argue that, since most of our reasoning is unconscious, “we can now see that the moral problem of the apparent conflict between selfishness and altruism is ill-defined, because…we are not rational self-interest maximizers in the traditional sense.”

As human animals with the kinds of minds we have, we do not always act in our own self-interest, and we rarely have rationally consistent explanations for doing the things that we do. So when we jump into the river to save our enemy (or anyone else), it might be enough to realize that our embodied mind believes that we’re jumping into the river to save our brother.


In The Evolution of God, Robert Wright argues that moral evolution happens because “a people’s culture adapts to salient shifts in game-theoretical dynamics by changing its evaluation of the moral status of the people it is playing the games with.” In other words, the culture expands its understanding of who is in the group to include those who previously stood outside of it. We can see this in the evolution of monotheism from the tribal exclusivity of Judaism’s worship of YHWH to the Pauline inclusion of the Gentiles as also worthy of God’s grace.

To argue that the Moral Law evolved here on Earth rather than being given to us by a divine and absolute God is not to assert that religion has never played a role in the development of morality or that humanity has not benefited from the roles religion has played. But it is to argue that the Moral Law does not serve as convincing evidence of God’s existence.

I believe the phenomenological existence of morality can be better explained through a conceptual model that connects the evolutionary pressure on the gene (to help a family member) with the evolutionary development of our embodied (and metaphorically reasoning) mind (which sees all of humanity as members of our family).

I also believe that we act morally because we unconsciously conceive of moral actions as increasing our wealth, and hence, our well-being, which metaphorically serves the self-interest of our genes.

I also believe, with Lakoff and Johnson, that the universality of the Moral Law originates in the common physical attributes of the human animal, which in turn gives rise to the metaphors that govern our embodied minds.

I don’t know if this argument would convince Collins to give up the divine origin of his Moral Law, but I do think it opens the door to an answer that is more satisfying than his recourse to the absolute.

In my next post, I’ll look at Collins’ unfair caricature of atheism and see if we can’t find a better way to imagine it.

How Religion Works for Me

I do not believe that Muhammad wrestled with the angel Gabriel on the outskirts of Mecca. Nor do I believe that a man named Joshua Ben Joseph arose from the dead after being interred for three days. Nor do I believe that Moses came down from the mountaintop carrying the Ten Commandments of YHWH. Nor do I believe that Zoroaster turned down a deal with Anra Mainyu that would have made the prophet the sovereign over the world. Nor do I believe that Krishna froze time in order to convince Arjuna to fight. Nor do I believe that the Aesir and Vanir actually engaged in a war. Nor do I believe that an Athenian queen, on the evening of her wedding, slept with a sea-god and later gave birth to a hero who would go on to kill a monster who was half man and half bull.

Instead of believing in those things, I believe that some of humanity’s greatest storytellers and philosophers developed conceptual systems that aid in the communication of heart-salving wisdom and/or embody hard-won lessons learned through historical conflict.

To read or listen to the Koran, the Bible, the Torah, the Avesta, the Bhagavad Gita, the Vedas, the Poetic and Prose Eddas, the poetry and plays of ancient Greece, etc. is to experience great literature, and in that, to develop our sense of compassion, love, obligation, beauty, etc.

And for this, we should be grateful.

But we should not make the mistake of seeing such literature as rigorous proofs for the existence of the gods or God.

I applaud Catholics and Muslims and Jews for dedicating hours, years, and lifetimes to interpreting the wisdom they find in their sacred texts, just as I applaud Joyceans for dedicating precious time to interpreting the intentionally coded messages found in their sacred text of Finnegans Wake.

But in the same way that I do not let the words of James Joyce dictate the choices I make, so I do not allow the world’s religions to dictate my path through this life. I have no problem going to these founts of wisdom for assistance and guidance, just as I do not have a problem going to Shakespeare, Homer, or David Foster Wallace for a similar kind of guidance.

Great literature is great for a reason.

But we need not make a religion out of it.

On The Use of You as “a God-Surrogate”

After publishing my previous post, “Why I am an Atheist,” I received several thoughtful responses, but I also received, through snail-mail, a friendly and heartfelt letter from a Catholic priest whom I’ve never met. I do not want to publish that letter here, but I would like to publish my response to it, if only to clarify some things for other readers who might have read my previous post in the same way. Among other things, the priest wrote that “in choosing You as a God-surrogate, you have set yourself up for [disappointment] when we decide to ask of you what you will not do, and so force you into another exit and the creation of other surrogates.” Here, in edited form, is what I wrote in response to this friendly priest.

Your idea that my concept of the You is a “God-surrogate” doesn’t feel like an accurate representation to me. Rather than substituting God’s role in the universe with You, I am saying that I don’t believe in God, but I do believe in You. I am saying that You are the most important influence in my life, and before considering what I want, I should consider what You want.

But (and this is part of the reason why the notion of a God-surrogate is a false one) You are not all-powerful (there is no one and nothing that is all-powerful). While I should consider Your wants and needs before my own, that does not mean I must rest my decisions and actions on Your wants and needs. As an American male who grew up with all of the mythologies such a designation entails, I possess just as strong a sense of individualism as the cowboy riding alone on the range and just as much inclination for telling the bosses to stick it. I might venerate You, but I am also not afraid of You.

(You might think that venerate is the wrong word here, since it goes back to “reverence,” which in turn goes back to the Latin word vereri, which means “to stand in awe of, fear” — but it goes back even further to a Proto-Indo-European root that meant “to become aware of,” and it is in that sense that I use it: once I am aware of You, I should consider Your needs and wants in relation to my own.)

On My Use of Shame

After publishing my previous post, “Why I am an Atheist,” I received several thoughtful responses, but I also received, through snail-mail, a friendly and heartfelt letter from a Catholic priest whom I’ve never met. I do not want to publish that letter here, but I would like to publish my response to it, if only to clarify some things for other readers who might have read my previous post in the same way. In short, the priest wrote that he felt sorry for me, and that he was sorry — “sad and apologetic” — that my “experience with religion was so dominated by shame.” Here, in edited form, was how I responded to this friendly priest.

There is no need to feel sorry for me. My experience of religion was not dominated by shame. As I wrote in my original blog post, the concept of shame I was talking about had less to do with the vernacular use of shame and more to do with the process of discerning right from wrong. I used the word “shame” to connote the visceral sense of this discernment, the feeling that what I was about to do or what I had done was wrong.

My blog post was an attempt to convey the feeling of belief, rather than the intellectual stance of it. As I proceeded through the post, I reformed the notion I introduced as shame into the idea of “recognizing a moral imperative.” Unfortunately, the latter phrase, by virtue of its use of “recognizing” (a term connected to the concept of “knowing,” rather than “feeling”), lacks the bodily sensation I was trying to evoke in my description of belief as a sensory experience.

As a creative writer rather than a philosophical writer, I attempt to use words in an evocative sense, rather than a philosophical one.

Given that I wanted to communicate the sensory experience of the moral imperative, I found that the word “shame” carried more sensory weight than any intellectual phrase that came to mind. I suppose I could have gone with the sensation of love, which would have implied a positive moral force rather than a negative one, but if truth be told, when I believed in God, I was a teenager, and the negative moral force — as in, “don’t do that” — held more sway in my life than the positive one.

But again, this is nothing for you to feel sorry about. As I wrote in the blog post, I don’t (and didn’t) conceive of this sensory experience in a negative light. It played the same role in my life as a stitch in one’s side does during a basketball game, a simple message that says, “Something you’re doing is wrong, so stop it.” Shame is a message; nothing more, nothing less.

Why I Am an Atheist

Let me begin with the notion that, as an atheist, it is not my job to disprove the notion of God. It is the job of those who believe in God (or gods) to prove God’s (or gods’) existence to me.

But that does not mean that I have nothing else to do.

The job of an atheist is to become a philosopher, or failing that, a scientist, or failing that, an artist, or failing that, conscious; to put it another way, the job of an atheist is to discover the wisdom of life, or failing that, the facts of life, or failing that, the beauty of life, or failing that, life.

And while it may not be my job as an atheist, it may be my duty to at least explain, for those who have difficulty with the concept, why it is that I do not believe in God (or gods). The explanation, I repeat, is not an argument attempting to disprove the existence of God (or gods); it is, instead, a courtesy, an attempt to communicate that is not (necessarily) an attempt to convince.

I equate the belief in God to a sensory experience. I say this as an individual who once believed in God. My belief was accompanied by the bodily sensation of certainty; but it was more than that as well, since “certainty” has a cold formality to it that does not give rise to the sensation I am talking about; rather than a coldness, there was a warmth to this sensation, a warmth that accompanies words like “awe” and is analogous to the sensation of drinking red wine in a candle-lit room, while outside, large flakes of falling snow drift lazily down to the street; a beautiful warmth that tells your body that all is right, and good, and true.

But it was also more than that.

The belief in God, while always accompanied by this warmth, also carried with it — not guilt, but shame, shame that, despite knowing what God wanted from me (to be the best person I could be), I still opted not to become that person. I did not experience this shame in a negative way. I understood it as a motivating force that would help me become a better person. If a certain action caused shame in me, such an action was probably not the best action to take. Seeing that sensation in a negative light would be akin to seeing one’s pain in a negative light; yeah, it hurts, but the hurt is itself a message — and there’s no reason to shoot the messenger (as a messenger).

So: a sensation of certainty, an awesome warmth, and — instead of shame, let’s call it “recognizing a moral imperative” — recognizing a moral imperative defines my notion of what it means to believe in God.

But there is a deeper aspect of it too, one that isn’t so easy to define.

It starts with a word like “revelation.” I have experienced a bodily sensation that told me, revealed to me, that I was in the presence of God. I knew it as certainly as I know that my cat is seated next to me right now. It was a sensation that brought tears to my eyes, the words “Thank you” to my lips, and the greatest bodily sense of inspiration that I’ve ever felt, the knowledge that no matter what I wrote next, it would have to, by virtue of God’s inspiration, be right, and good, and true. The words that came from my fingertips that night were, “I believe because…” and all of my writing since that moment (including this moment) has been an attempt to recover that sensation.

I write all of this now, say all of this now, as an atheist. But faced with such a sense of certainty and beauty and truth, not to mention revelation, even I am forced to ask: why on Earth would I call myself an atheist?

I call myself an atheist because in order to experience all of those same sensations again, all I have to do is reveal myself to you.

I call myself an atheist because it is no longer God that creates those feelings in me. It is you. It is the realization that these moments I am having right now, these absolutely private moments, are also shared with you. At bottom, it is the sensation that tells me that I am not alone, but its revelatory power comes not from its negation of my solitude; it comes from the understanding that I share this moment with you, and that you and I share it with so many other people (and places and things).

I sit here in my warmth and beauty and certainty because you — and everyone else — allow me to. There is not someone banging down my door, not someone stealing my food, not someone plotting my personal demise. I give thanks for that, not to God, but to you. I am alive because you — and everyone else — recognize my right to live.

I can hear the objections now: “What about murder?” “What about the Nazis?” “What about Darfur?”

Those who are killed are killed because their killers do not recognize their right to be alive. This extends beyond Dachau and Darfur and goes as deep as our white blood cells (if not deeper).

I hear other objections as well, from the Nietzscheans who would argue that such a perspective puts me in the weak position, and that such a position, by virtue of its weakness, is ill-favored. But even if I conceded the first position (which I do not), I cannot concede the second. Yes, to assume a position of weakness is to open oneself up to (potential) violence, but to live is to open oneself up to (potential) violence; (potential) violence is life’s ground state. To assume a position of weakness, then, is to say to (potential) violence, “I am not afraid of you.” Why would such a brave and valiant stance need to be considered ill-favored? It may be strategically unsound, provided one’s strategy aims for a longer life, but there are other strategies one can use during their lifetime. For those who aim toward charity and love, assuming a position of weakness might make the most strategic sense. In short, assuming a position of weakness need not be ill-favored.

But does “giving thanks to you — and everyone else — for recognizing my right to live” even put me in a position of weakness? I do not concede that it does. Rather than assuming the weak position, it assumes the position of one who is not afraid of you — and everyone else. I give you — and everyone else — the power to do as you will because I am not afraid of you.

Can you hurt me and the ones I love? Yes. Can you embarrass me and the ones I love? Probably. Can you destroy me, kill me, erase me and the ones I love? Yes. Yes you can.

But will I let you?

To thank you — and everyone else — is not to grovel for, nor to renounce, my right to live. To thank you — and everyone else — for allowing me the right to live does not mean you — and everyone else — can take it back without a fight.

So while I can hear the objections of the Nietzscheans, I do not feel the need to respond. To give thanks is not to be weak.

I can also hear the objections of God’s believers, those who say that I do not live by virtue of their good graces (as I claim), but by virtue of God’s. While I may continue to live because they don’t kill me, I only live in the first place because God gave me (or for those who believe in a “God of the gaps,” gave humanity) that spark — call it a soul, call it consciousness, it means the same: that spark of subjectivity, the ability to feel, and think, and act with free will.

To these objections I answer: I equate the belief in God with a bodily sensation, and I experience this same sensation — the same certainty, the same awesome warmth, the same moral imperative, and the same tears, thanks, and inspiration — when I choose, instead, to believe in you. I have shifted the object of my veneration from God to you, and in that shift, erased God from my life.

While I cannot claim to solve the mystery of subjectivity, I do not feel a need to answer such mysteries with the answer of “God.” I look back on the great unraveling of mysteries over the past 250,000 years, and I think to myself, what mysteries will be unraveled tomorrow? Subjectivity will be a big one (if not “the” big one), but I have faith (yes, faith) that its unraveling will come. I may not be the one to unravel it, but I have faith that you — or someone similar to you — will be willing and able to unravel it for me (which will be one more reason to offer you my thanks, and one more reason to venerate your power and majesty).

And that is why I call myself an atheist. Because I do not believe in God. Instead, I believe in you.