The illegitimacy of presuppositions

A linguistic presupposition is ‘an implicit assumption about the world or background belief relating to an utterance whose truth is taken for granted in discourse’. (Wikipedia)

An epistemic presupposition might be defined similarly: an implicit assumption about the world, or background belief, that is taken for granted prior to any attempt to determine its veracity.

PROPOSITION: The inclusion of epistemic presuppositions in an epistemic method is always illegitimate.


At the foundation of epistemology lie two hopes.

  • 1: The hope that our reality will make sense.
  • 2: The hope we can uncover a methodology that will make sense of our reality.

These are hopes. We don’t need to assume these hopes can be actualized.

Some take these two hopes, and for various reasons, presume they are more than hopes. They start with the presupposition that reality does/will make sense, and that we do/will have a methodology that will make sense of that reality.

These presuppositions are clearly unfounded. There is nothing about hope that warrants a presupposition. Those who accept such presuppositions have blundered at the very foundation of their epistemology: they have accepted a conclusion to a degree not warranted by the evidence, thereby abandoning the very essence of epistemic integrity.


Highly reliable regularities do not deserve the status of “presupposition”.

We were not born assuming our realities would appear logically coherent. Yet, for most of us, we discovered that reality invariably mapped to “laws” of logic to such a degree that, by the time we were toddlers, we possessed a justified extremely high degree of confidence in the continued reliability of “laws” of logic.

Some who cannot remember this process of legitimately acquiring confidence in logic through inductive experience during infancy subsequently claim logic is something we are warranted in presuming. This is a presupposition arrived at out of forgetfulness; they have simply forgotten (or are ignoring) the natural cognitive acquisitions of their infancies.


The reliability of our minds need not be presumed. Nor should it be presumed.

Some claim that we must presume the reliability of our minds before we can even attempt to make sense of the world around us. This is not true.

Imagine a system of interdependent modules, all of which must be working for a positive outcome. Examples include a car engine, a computer or clock. If any individual module of the system ceases to function, the entire system will fail.

The same holds for what we might call an epistemic apparatus in which 1) a mind, 2) sensory organs, and 3) scientific methodology all combine to form an interdependent system. If any of these three modules fail, the entire system will fail. If any one of these modules is faulty, the system will be faulty. However, if this epistemic apparatus produces predictive power and explanatory coherence, we can map the degree of confidence in the working of the constituent modules to the degree that the system is demonstrated to be successful.

So the reliability of our minds need not be a presupposition, nor should it be a presupposition. Our minds should be tested prior to our confidence in our minds. And there is no guarantee that the current reliability of our minds will not increase or diminish in the future.

Even those who claim we must presume the reliability of our minds understand that their minds will begin to diminish in their later years. And they admit that even our faltering minds can normally test the degree to which our minds are faltering through the observation of our own forgetfulness, or through more rigorous means such as charting our success over time at crossword puzzles.

So, presuming that our minds are reliable prior to testing our minds transgresses epistemic integrity.


Any method currently successful in making sense of our reality need not be presumed to always be successful.

I am sometimes met with the objection that my current epistemic methodology is a presumption I hold. It is not. Just as I do not (and should not) presume the continued reliability of my mind, I do not presume that my epistemic method will continue to work.

What is that epistemic method? Essentially it is to follow what appears to work (in terms of predictive power) to the degree that it appears to work for as long as it appears to work.

And what if this method stops working? To the degree that it stops working, to that degree I will lower my confidence in the method.

(I currently cannot imagine what it would mean for a method of following what works to cease working, but my lack of imagination does not warrant holding my method as a presupposition.)

Some may argue that, since I admit I am dependent upon the appearance of the success of the method, I am subject to being deceived. Welcome to the limits of subjectivity. Absolute certainty is not possible for those less than omniscient, since we are limited to less-than-certain evidence, the longing to transcend this limitation notwithstanding.

Even my epistemic method need not be held as a presupposition.


It may be that some who insist others hold presuppositions are simply over-projecting: from their own emotional/cognitive inability to abandon their presuppositions, they conclude that no one else can abandon the psychological disposition to hold presuppositions. They will poke and prod at my position, and when they cannot uncover a presupposition, will simply assert I have presuppositions hidden away somewhere deep down in my presupposition-inclined soul. It’s an argument based on the arrogance of assuming others neither know nor can control their own degrees of certainty.

Ultimately, the attempt to demonstrate that everyone holds presuppositions is driven by the hope of making one’s own faulty epistemology more legitimate by claiming an equivalency that does not exist. If they can only demonstrate we all hold presuppositions for which we have no justification, their own unjustified presuppositions are somehow made legitimate.

This is both illogical and indicative of the current dismal state of apologetics.

I hold no presuppositions. If you do, I strongly recommend, for the sake of epistemic integrity, that you abandon your presuppositions. Limit your degree of confidence in any proposition to the degree that is warranted by the perceived efficacy of that proposition. You’ll discover a reality with less of the comfort of dogma, but much more freedom to honestly follow the evidence into a more rigorously constructed ontology.


Phil Stilwell on Quora

I’m starting to post answers on Quora.

Here’s the link.

The following is my latest answer.

The most basic false belief within Christianity today is the notion that salvific faith is rational. 

Salvific belief (belief that leads to salvation) is treated as binary in the Bible: either you believe in Jesus or you don’t. This is consistent with the binary dichotomy of Heaven and Hell, but it is not consistent with rational belief.

Rational belief is a degree of belief that maps to the evidence. For every proposition that requires an inductive assessment (this excludes our immediate perceptions), evidence, both confirming and disconfirming, arrives incrementally.
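To make the incremental picture concrete, here is a minimal sketch of degree-of-belief updating via Bayes’ rule. The 0.6/0.3 likelihoods and the three observations are hypothetical numbers chosen for illustration, not anything from the post:

```python
def update(prior, p_e_if_true, p_e_if_false):
    """One Bayesian update: P(H | E) from P(H) and the two likelihoods."""
    numer = prior * p_e_if_true
    return numer / (numer + (1 - prior) * p_e_if_false)

belief = 0.5  # start undecided
for _ in range(3):  # three pieces of confirming evidence arrive, one at a time
    belief = update(belief, 0.6, 0.3)

print(round(belief, 3))  # belief has risen incrementally, yet never reaches certainty
```

Note that belief moves by degrees with each arriving observation; at no point does the process flip a binary switch from “unbeliever” to “believer”.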

Consider the following scenario.

Rationality is Prior to Burden of Proof

Epistemic rationality is honestly positioning one’s degree of belief in a proposition to the degree that the balance of evidence for that proposition warrants. Epistemic rationality belongs to the individual epistemic agent.

Burden of proof is more of a social concept. If someone wants to convince members of a community of a proposition that is incongruent with the current beliefs of that community, then, to accomplish that goal, he or she needs to take on the onus of presenting the arguments/evidence that will do so.

Note that this assumes that the community values arguments and evidence. What could it mean to have a burden of proof where proof (read “convincing evidence” here) is not valued among epistemic agents? If the epistemic agents within a community hold that beliefs need not map to the balance of evidence, then the notion of burden of proof will be of no value to them.

So, what is logically prior? Epistemic rationality is prior to burden of proof. If a community thinks, to any degree, that they need not map their degree of belief in X to the degree that X is warranted by the confirming/disconfirming evidence, then they only foolishly inform those who would attempt to provide arguments/evidence for a position other than their own irrational one that they must take on the burden of proof.

Burden of proof is like the status of offense or defense in a football game. If you don’t respect the rules of football, or want to play football under the rules of tennis, you don’t have any right to suggest the other team must now play offense. Until you demonstrate a willingness to play by the rules of football, you don’t have any right to suggest who takes possession of the ball.

Rationality comes first. Only after you have committed to mapping your degree of belief to the degree of the confirming/disconfirming evidence can you suggest the other side has the burden of providing evidence/argumentation that might change your mind.

The Absurdity of Epistemic Recursion

The other day I encountered a Christian who suggested that, based on my notion that rational belief is a degree of belief that maps to the corresponding degree of perceived evidence, I would be required to not only assess the reliability of my mental faculties in assessing a given proposition, but that I would need to also assess the reliability of my mental faculties to assess my mental faculties, and then recursively assess this assessment ad infinitum until I had no justification for any significant degree of belief in the initial proposition.

Here are his very words.

So suppose I agree with you that, given my past experience and familiarity with my own fallibility, I make sure to always proportion my degree of belief to the evidence. Some proposition, call it P, presents itself. I evaluate the evidence for P and decide there is a good amount of evidence in its favor, say for the sake of argument that I think it is 75% likely to be true on the evidence. I determine that I have good reason to believe that P. But that determination is, itself, a reasoning process that I have a belief about, namely I have a belief that my reasoning process has properly arrived at a proper assessment of the probability of P given the evidence. We will call this belief about the likelihood of P Q. My experience with my ability to assess probabilities given evidence tells me that I should put 95% confidence in Q. But if that is true, then I need to lower my confidence in P, since Q tells me that I could be wrong about P being 75% likely. My confidence in Q is a belief that I am also not certain of, and so it, R, says that it is 95% likely that Q is right. But that means that I should be 95% sure that I am 95% sure that I am 75% sure of P. And each time I iterate this, and reflect on my certitude, I must lower my confidence that P until it approaches the point where I cease to be confident that P is true at all.
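Before responding, it is worth replaying the commenter’s rule literally: discount the original 75% assessment by a 95%-confidence factor at every meta-level. The arithmetic below simply iterates his own numbers:

```python
p = 0.75     # the commenter's assessed probability of P
meta = 0.95  # his claimed confidence in each meta-assessment (Q, R, ...)

belief = p
for level in range(20):  # apply the discount at 20 successive meta-levels
    belief *= meta       # each level multiplies in another 0.95

print(round(belief, 3))  # already down near 0.27, and heading toward zero
```

Under his rule the product 0.75 × 0.95ⁿ does indeed shrink toward zero as n grows, which is exactly the collapse he describes. The question, addressed below, is whether that rule has anything to do with how assessment actually works.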

This demonstrates a lack of understanding of how science works. Let’s walk through this.

I determine that, based on the evidence I perceive, there is an 80% probability that proposition X is true. We can write this as…

P(X).8
When scientists assess the probability of a proposition, they include an assessment of the resolution, biases and accuracy of their instruments in their degree of certainty in that probability. For example, if a sociologist, based on a survey, assesses the probability that a child born into an Evangelical home will still be Evangelical at age 20 to be 80%, that assessment of 80% has a degree of confidence attached to it.

This degree of confidence is called the margin of error. The margin of error does not change the statistically determined probability. It only changes the error bars. If the sample size is small, the statistical analysis may yield an 80% probability, yet the margin of error will be large. If the sample size is large, the statistical analysis may again yield an 80% probability, yet the margin of error will be smaller. The assessment for both samples may be identical at an 80% probability.
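The sample-size point can be checked with the standard normal-approximation formula for a proportion’s 95% margin of error, z·√(p(1−p)/n). The two sample sizes here are hypothetical, chosen only to show the probability staying fixed while the error bars shrink:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.80  # the statistically determined probability stays the same...
for n in (100, 10_000):
    # ...only the error bars change with the sample size
    print(f"n={n:>6}: {p:.0%} \u00b1 {margin_of_error(p, n):.1%}")
```

With n = 100 the estimate is 80% give or take roughly 8 points; with n = 10,000 the same 80% estimate carries a margin under 1 point.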

There may be, in addition to small sample sizes, other elements that can affect the margin of error. One could be a sampling bias. Perhaps Evangelicals are more/less likely to respond to surveys than non-Evangelicals. Perhaps the survey was conducted Sunday morning when most Evangelicals are not available to respond. There are many potential weaknesses in the measurement apparatus. These should be identified when determining the degree of confidence in the statistical determination of the 80% probability, but they do not change that 80% probability itself. They only change the margin of error, our confidence in our conclusion.

Part of the assessment also includes the scientists’ assessment of their own track record of reliability. Have they made methodological mistakes in the past that resulted in low predictive accuracy? If so, this does not change the probability they assign to the proposition upon assessment, but only their degree of certainty in that assessment, the error bars.

In light of this, this apologist, if he has even a fundamental understanding of science, will have to admit that an assessment of the tools of assessment, including the mind doing the assessment, does not in any way affect the probabilistic conclusion. Only the margin of error can be affected.

But perhaps that is what this apologist is actually saying. Perhaps he is saying we don’t have any respectable margin of error in any assessment we make. Let’s take a closer look.

Let’s say our conclusion of P(X).8 is accompanied by a margin of error (ME) of 10%. We might write this…

P(X).8 & ME(X).1

This apologist, for some reason, believes we need to apply this recursively. This is what we might end up with after 5 recursions.

P(X).8 & ME(ME(ME(ME(ME(X).1).1).1).1).1 = .00001

The error bars would be located at the poles! This would indeed destroy our confidence in our apparatus of assessment!
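For comparison, here is the arithmetic behind the parody formula above: nesting a 10% margin five times just multiplies the five .1 factors together, collapsing the confidence term to .00001:

```python
me = 0.1   # a single 10% margin-of-error term
depth = 5  # five recursions, as in the formula above

compounded = 1.0
for _ in range(depth):
    compounded *= me  # each recursion multiplies in another .1

print(compounded)  # on the order of 1e-05
```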

But is this what scientists do? Obviously not.

Let me list a few reasons, some very obvious.

1. It would have destroyed science long ago. If no one had had legitimate confidence in the apparatus of their methodology (including their own minds), science would have never gotten off the ground. But science works! Are we now to trade what works for something that doesn’t?

2. There is no logical imperative to employ this silly recursive assessment of the assessing apparatus. If there is, I’d like to see it laid out in syllogistic form. It appears that this apologist would like to force this rule of recursion on the scientific method so he can dismiss it as unreliable. This is straw-manning in its most dishonest form.

3. The process of employing this invented rule of infinite recursion of assessments would require an eternity. This apologist seems to believe that we need to assess our assessment of our assessment of our assessment…ad infinitum. This apologist presumably is not currently engaged in this assessment of his own assessments. Why impose it on others?

4. For an epistemic agent to be rational in any given epistemic context, they merely need to position their degree of belief in a proposition X to the degree that the evidence relevant to X warrants. This conclusion is in no way immutable. It may be changed later as more evidence arrives, including evidence relevant to the mental faculties of the scientist.

In conclusion, it appears that this epistemic recursion is not something the apologist himself practices, but only something he imposes on the normal, successful epistemology employed in scientific inquiry in an attempt to make it seem equivalent or inferior to his own.

The epistemology employed by science works. Those holding to religious epistemologies are justifiably envious of its success. And this is the probable cause of their failing attempts to dismantle the epistemology of science.

UPDATE: The following was posted on a Facebook group in “response” to this article. I’ll answer {thus} between the lines.

In addition to violating the spirit of this closed group, Phil Stilwell misrepresents [commenter], and posts an Unbelievable? argument on his own blog. Phil wildly fails to realize that his own epistemology must provide an answer to the problems posed by Humean skepticism and the problem of induction. {I don’t need to address those issues related to “truth” and “knowledge”. I’m not addressing what is “truth” or how we acquire “knowledge”. I’ve made it very clear I’m focused on rationality. This perennial conflation between truth and rationality is the crux of the problem.} Rather than answering those problems, he instead charges [commenter] with being inconsistent in his own epistemology. But [commenter] doesn’t hold a Humean epistemology. Phil does. {Wrong. I don’t. And, yes, [commenter] needs to substantiate his own epistemology. Of course.}
Phil then affirms [commenter]’s very true claim that his epistemology cannot be supported if meta-skepticism is true, because the fundamental principle “apportion one’s evidence…” cannot itself be evidenced. {Simply ask yourself what it would mean to believe something to a degree NOT corresponding to the degree of the evidence. Where has this method ever worked?} Phil doesn’t actually think his guiding epistemic principle is irrational, however, because in his metaepistemology he mistakenly identifies rationality with pragmatism, and thinks that this apologist’s condition for belief-justification is not pragmatic, and therefore not rational. {You are here suggesting that following what works needs to be demonstrated to work. :) See the problem?} For pragmatic reasons, he tells us, it’s impossible to support any beliefs if skepticism is true. But one must get along in the world with some beliefs, therefore [commenter]’s question can be rejected. {[Commenter]’s question can be rejected since he is confusing “truth” with “rationality”.}
It’s quite a spectacle. By identifying the two concepts with each other, he strips rationality of any content, and therefore any of its epistemic weight. {If you want absolute truth/knowledge, you’ll likely never find it. But the lack of it does not diminish rationality in any way. Once again, you’ve confused “truth” with “rationality”.} Because if rationality is just what works, and what works is just a subjective inference, then the deluded are just as warranted in their beliefs as those who are mentally sound (whatever that means if skepticism is true). {If we are deluded by an evil demon, yet honestly follow what we perceive to work, we are rational but wrong. That’s due to no lack of responsibility on our part. Rationality is our only epistemic responsibility.} If Phil replies that “[what] works” is just getting what you want given the furniture of the world, then he abandons the subjective inferentiality of his own meta-epistemology, and loses again. {Simply wrong. Rationality (once again) is simply believing X to a degree that maps to the degree of perceived evidence for X. This is not that difficult. Simply pay attention to what I actually say.}

Note that this is not an actual response to my post above.

Christian apologists are simply attempting to distract from the fact that they have a bankrupt epistemology that they can’t defend.

Warranted Belief And Psychological Demands

If we were simply minds designed to assess truth, life would be easy. We could simply test and adopt the evidential heuristics and algorithms that provide the most predictive successes, apply these tools to the evidence for and against a given proposition, then simply assign a probability to the truthfulness of that proposition. There would be no default position of belief or disbelief. There would be no bivalent conclusion of belief or disbelief. Everything would be comfortably a matter of epistemological probabilities that had no bearing on our survival.

However, we find ourselves active agents in a world in which we are driven to survive and secure happiness for ourselves and those we love. We find ourselves emotional beings that are very much disturbed by uncertainty. We are driven to “know”.

This drive to “know” is what pulls us away from proper probabilistic positions on the truthfulness of claims, and compels us to claim “knowledge” that a proposition is either true or untrue. While this bivalent approach to truth destroys our credibility as effective assessors of epistemological probabilities, it is nonetheless fully human.

The Emotional Substrate Beneath Bloated Ontologies

We are most fundamentally emotional creatures, and the most fundamental realm of meaning is that of emotion. From the time we are infants, our emotional brains are busy sorting through these needy emotions and attempting to carve out a social identity, a set of things we can call “true”, and a code of behavior. But there is nothing as subjectively real as our emotions.

So we are compelled by these emotions to construct an edifice that can comfortably house our emotions by providing psychological, epistemological and moral frameworks over which we can then drape an image, and respectably present ourselves to society.

Because the goal is to cloak our raw and muddled emotions under more presentable walls of definition, this enterprise is inherently illusory, and is most commonly self-delusional. Yet by the time we reach adulthood, we have constructed an elaborate edifice that, if matching the expectations of society, can assure our social well-being.

I’d like to deconstruct the various walls of meaning to expose the raw emotions that we often do not want to admit lie at the foundation of being.

  • Identity. This is the most transparent. Many realize that identity is static only where it is thought static. Personhood can change significantly over a lifetime. We say “this is who I am” at our peril. Constructing rigid walls of identity locks us into a self that forfeits a more colorful and fuller life. But, to avoid the swirling and persistent uncertainty and fear, our adolescent minds forge an identity that we often find hard to later modify. We begin to see the image that we have constructed upon our emotions as a rigid entity, and prior to our emotions. This self-delusion serves to maximize predictability and minimize risks, but it often leads to marginal lives. If we can recognize that it is emotions that are the substrate of our identities, and take measures to directly address those emotions rather than merely repainting the peeling facade the same color from time to time, life can become much more dynamic and enriching.


The Great Pumpkin

Joseph and Albert were both very intelligent 9-year-olds. They both had pumpkin gardens. They both planned to win 1st prize for “Largest Pumpkin” at the county fair, now only 3 days away.

Joseph spoke confidently as suggested by this font.
Albert spoke more pensively as suggested by this font.
The boys spoke as they walked home from school.