Jonathan Weisberg

Associate Professor • Department of Philosophy • University of Toronto
170 St. George St., Room 516
Toronto, ON M5R 2P1
jonathan.weisberg@utoronto.ca

CV

Publications

Could've Thought Otherwise
Evidence is univocal, not equivocal. Its implications don't depend on our beliefs or values; the evidence says what it says. But that doesn't mean there's no room for rational disagreement between people with the same evidence. Evaluating evidence is a lot like polling an electorate: getting an accurate reading requires a bit of luck, and even the best pollsters are bound to get slightly different results. So even though evidence is univocal, rationality's requirements are not "unique". Understanding this resolves several puzzles to do with uniqueness and disagreement.
Philosophers' Imprint, forthcoming
Belief in Psyontology
Which is more fundamental, full belief or partial belief? I argue that neither is, ontologically speaking. A survey of some relevant cognitive psychology supports a dualist ontology instead. Beliefs come in two kinds, categorical and graded, with neither kind more fundamental than the other. In particular, the graded kind is no more fundamental. When we discuss belief in on/off terms, we are not speaking coarsely or informally about states that are ultimately credal.
Philosophers' Imprint, forthcoming
Formal Epistemology
A survey of formal epistemology aimed at undergraduates with no previous exposure. Topics surveyed include: (1) confirmation theory, (2) the problem of induction, (3) the regress problem and foundationalist vs. coherentist theories of knowledge, (4) epistemic logic and the limits of knowledge, and (5) applications outside epistemology, like decision theory, the existence of God, and the semantics of conditionals.
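As a quick point of reference (a standard formulation, not drawn from the survey itself): in Bayesian confirmation theory, evidence \( E \) confirms hypothesis \( H \) just in case \( P(H \mid E) > P(H) \), where Bayes' theorem gives

\[ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}. \]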
You've Come a Long Way, Bayesians
To celebrate the 40th anniversary of the Journal of Philosophical Logic, a retrospective on select topics from the last 40 years of Bayesian epistemology. Topics discussed: (1) scoring rules and accuracy arguments, (2) imprecise credences, (3) regularity and probability-zero events, (4) connections between Bayesianism and "informal" epistemology, and (5) full and partial belief.
Updating, Undermining, & Independence
Sometimes appearances provide epistemic support that gets undercut later. In an earlier paper I argued that standard Bayesian update rules are at odds with this phenomenon because they are "rigid". Here I generalize and bolster that argument. I first show that the update rules of Dempster–Shafer theory and ranking theory are rigid too, hence also at odds with the defeasibility of appearances. I then rebut three Bayesian attempts to solve the problem. I conclude that defeasible appearances pose a more difficult and pervasive challenge for formal epistemology than is currently thought.
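For readers unfamiliar with the term: an update rule is "rigid," in the standard sense at issue, when it leaves conditional credences on the evidential partition \( \{E_i\} \) untouched:

\[ P_{\text{new}}(A \mid E_i) = P_{\text{old}}(A \mid E_i) \quad \text{for every cell } E_i. \]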
Knowledge in Action
Your actions should be guided by what you know, many say. Yet Bayesian decision theory says rational decision-making is rooted in uncertainty: you ought to maximize expected utility with respect to your credences. I argue that these knowledge- and credence-based pictures are not as incompatible as they seem, and I offer three irenic proposals to bridge the divide. First, there are knowledge-based methods of practical reasoning that are capable of making expected-utility-maximizing choices. Second, credences can constitute knowledge by constituting dispositional beliefs about epistemic probabilities. And third, even when credences don't constitute such knowledge, they can still influence action by serving as weights for the reasons one's knowledge does provide.
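The expected-utility rule mentioned here, in its textbook form: choose an act \( a \) that maximizes

\[ EU(a) = \sum_{s} P(s)\, u(a, s), \]

where \( P \) is one's credence function over the states \( s \) and \( u \) one's utility function.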
The Argument from Divine Indifference
The rationale behind the fine-tuning argument for design is self-undermining, refuting the argument's own premiss that fine-tuning is to be expected given design. In Weisberg (2010), I argued on informal grounds that this premiss is unsupported. White (2011) countered that it can be derived from three plausible assumptions. But White's third assumption is based on a fallacious rationale, and is even objectionable by the design theorist's own lights. The argument that shows this, the argument from divine indifference, simultaneously exposes the fine-tuning argument's self-undermining character. The same argument also answers Bradley's (forthcoming) reply to my earlier objection.
The Bootstrapping Problem
Bootstrapping is a suspicious form of reasoning that verifies a source's reliability by checking the source against itself. Theories that endorse such reasoning face the bootstrapping problem. This article considers which theories face the problem and surveys potential solutions. The initial focus is on theories like reliabilism and dogmatism, which allow one to gain knowledge from a source without knowing that it is reliable. But the discussion quickly turns to a more general version of the problem that does not depend on this allowance. Five potential solutions to the general problem are evaluated, and some implications for the literature on peer disagreement are considered.
Embedding If and Only If
Some left-nested indicative conditionals are hard to interpret while others seem fine. Some proponents of the view that indicative conditionals have No Truth Values (NTV) use their view to explain why some left-nestings are hard to interpret: the embedded conditional does not express the truth conditions needed by the embedding conditional. Left-nestings that seem fine are then explained away as cases of ad hoc, pragmatic interpretation. We challenge this explanation. The standard reasons for NTV about indicative conditionals (triviality results, Gibbardian standoffs, etc.) extend naturally to NTV about biconditionals. So NTVers about conditionals should also be NTVers about biconditionals. But biconditionals embed much more freely than conditionals. If NTV explains why some left-nested conditionals are hard to interpret, why do biconditionals embed successfully in the very contexts where conditionals do not embed?
with Adam Sennet
Representation Theorems and the Foundations of Decision Theory
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
with Chris Meacham
Varieties of Bayesianism
A survey of Bayesian epistemology covering (1) the basic mathematical machinery of Bayesianism, (2) interpretations of 'probability', (3) the subjective-objective continuum, (4) justifications for Bayesian principles, (5) decision theory, (6) confirmation theory, and (7) full and partial belief.
Bootstrapping in General
Bootstrapping poses a more general challenge than commonly thought. Versions of the problem afflict even strongly internalist theories of knowledge. Even if one must know a source to be reliable to gain knowledge from it, bootstrapping is still a threat. I consider potential solutions internalists might try, and defend the one I think most plausible: that bootstrapping involves an abuse of inductive reasoning akin to generalizing from a small or biased sample. Finally, I argue that this solution is equally available to the reliabilist. The moral is that the issues raised by bootstrapping are orthogonal to questions about internalism and basic knowledge. They have more to do with the nature of good inductive reasoning.
A Note on Design: What's Fine-Tuning Got to Do With It?
We have known for a long time that there is complex, intelligent life. More recently we have discovered that the physics of our universe is fine-tuned so as to allow for the existence of such life. I argue that this new finding provides no evidence for the design hypothesis. Thus, there is an important sense in which the much-touted fine-tuning of physics is irrelevant to debates about design.
Commutativity or Holism? A Dilemma for Conditionalizers
Conditionalization and Jeffrey Conditionalization cannot simultaneously satisfy two widely held desiderata on rules for empirical learning. The first desideratum is confirmational holism, which says that the evidential import of an experience is always sensitive to our background assumptions. The second desideratum is commutativity, which says that the order in which one acquires evidence shouldn't affect what conclusions one draws, provided the same total evidence is gathered in the end. (Jeffrey) Conditionalization cannot satisfy either of these desiderata without violating the other. This is a surprising problem, and I offer a diagnosis of its source. I argue that (Jeffrey) Conditionalization is inherently anti-holistic in a way that is just exacerbated by the requirement of commutativity. The dilemma is thus a superficial manifestation of (Jeffrey) Conditionalization's fundamentally anti-holistic nature.
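For reference, Jeffrey Conditionalization in its standard form: when experience shifts one's credences over a partition \( \{E_i\} \) to new values \( q_i \), the new credence in any proposition \( A \) is

\[ P_{\text{new}}(A) = \sum_i P_{\text{old}}(A \mid E_i)\, q_i. \]

Strict Conditionalization is the special case where some \( q_i = 1 \); and successive Jeffrey updates specified this way do not in general commute, which is where the commutativity desideratum gets its bite.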
Locating IBE in the Bayesian Framework
Inference to the Best Explanation (IBE) and Bayesianism are our two most prominent theories of scientific inference. Are they compatible? Van Fraassen famously argued that they are not, concluding that IBE must be wrong since Bayesianism is right. Writers since then, from both the Bayesian and explanationist camps, have usually considered van Fraassen's argument to be misguided, and have plumped for the view that Bayesianism and IBE are actually compatible. I argue that van Fraassen's argument is not so misguided, and that it causes more trouble for compatibilists than is typically thought. Bayesianism, in its dominant, subjectivist form, can only be made compatible with IBE if IBE is made subservient to conditionalization in a way that robs IBE of much of its substance and interest. If Bayesianism and IBE are to be fit together, I argue, a strongly objective Bayesianism is the preferred option. I go on to sketch this objectivist, IBE-based Bayesianism, and offer some preliminary suggestions for its development.
Conditionalization, Reflection, and Self-Knowledge
Van Fraassen famously endorses the Principle of Reflection as a constraint on rational credence, and argues that Reflection is entailed by the more traditional principle of Conditionalization. He draws two morals from this alleged entailment. First, that Reflection can be regarded as an alternative to Conditionalization—a more lenient standard of rationality. And second, that commitment to Conditionalization can be turned into support for Reflection. Van Fraassen also argues that Reflection implies Conditionalization, thus offering a new justification for Conditionalization.

I argue that neither principle entails the other, and thus neither can be used to motivate the other in the way van Fraassen says. There are ways to connect Conditionalization to Reflection, but these connections rest on implausible assumptions about our introspective access, and are not tight enough to support the conclusions van Fraassen wants to draw. Upon close examination, the two principles seem to be getting at two quite independent epistemic norms.
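In standard formulations (a gloss for orientation, not the paper's own statement): Conditionalization says that, upon learning total evidence \( E \) between \( t \) and \( t' \), one's new credences should be \( P_{t'}(A) = P_t(A \mid E) \); Reflection says one's current credences should defer to one's anticipated future credences:

\[ P_t(A \mid P_{t'}(A) = x) = x, \quad \text{for } t' > t. \]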
Firing Squads and Fine-Tuning: Sober on the Design Argument
Elliott Sober argues that the cosmological design argument is unsound, since our observation of cosmic fine-tuning is subject to an observation selection effect (OSE). I argue that this view commits Sober to rejecting patently correct design inferences in more mundane scenarios. I show that Sober's view, that there are OSEs in those mundane cases, rests on a confusion about what information an agent ought to treat as background when evaluating likelihoods. Applying this analysis to the design argument shows that our observation of fine-tuning is not rendered uninformative by an OSE.
Clark and Shackel on the Two-Envelope Paradox
Clark and Shackel (2000) argue that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. They also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Their view goes beyond accepted decision theory, even contradicting it in the peeking case. They thus propose a revision of standard decision theory, one that we argue is both implausible and unnecessary.
with Chris Meacham
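For orientation, the naive reasoning that generates the paradox: if your envelope contains \( x \), the other contains \( 2x \) or \( x/2 \) with equal probability, so swapping appears to have expected value

\[ \tfrac{1}{2}(2x) + \tfrac{1}{2}\!\left(\tfrac{x}{2}\right) = \tfrac{5}{4}\,x > x; \]

and by symmetry the same reasoning recommends swapping back, which cannot be right.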

Open Access Projects

The Open Handbook of Formal Epistemology, edited with Richard Pettigrew
in progress...

Unpublished etc.

Papers in Progress

Risk Writ Large
Risk-weighted expected utility (REU) theory is motivated by small-world problems like the Allais paradox, but it is a grand-world theory by nature. And, at the grand-world level, its ability to handle the Allais paradox is dubious. The REU model described in Risk and Rationality turns out to be risk-seeking rather than risk-averse on one natural way of formulating the Allais gambles in the grand-world context. This result illustrates a general problem with the case for REU theory, we argue. There is a tension between the small-world thinking marshaled against standard expected utility theory, and the grand-world thinking inherent to the risk-weighted alternative.
with Johanna Thoma
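For reference, the risk-weighted expected utility of a gamble, as standardly stated following Buchak's Risk and Rationality: with outcomes ordered from worst to best, utilities \( u_1 \le \dots \le u_n \) and probabilities \( p_1, \dots, p_n \),

\[ REU = u_1 + \sum_{i=2}^{n} r\!\Big(\sum_{j=i}^{n} p_j\Big)\,(u_i - u_{i-1}), \]

where \( r \) is the agent's risk function; taking \( r(p) = p \) recovers ordinary expected utility.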

Software

Ergonaut
An editorial management system: receive, track, and review manuscripts submitted to an academic journal. Ergonaut was created for use by Ergo, where it currently manages the review process. Ergonaut is written in Ruby on Rails and is openly available under the MIT license.
AutomaTeX
Access your home LaTeX installation from your iPad, iPhone, or anywhere with an internet connection. AutomaTeX runs on your home computer and automatically compiles .tex files in your Dropbox folder as they're edited. Dropbox then syncs the compiled PDF back to your iPad.
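AutomaTeX's actual source isn't reproduced here, but the core idea can be sketched as a simple polling loop (a hypothetical illustration in Python, not the real implementation; it assumes pdflatex is on your PATH and that Dropbox syncs ~/Dropbox):

    import os
    import subprocess
    import time

    WATCH_DIR = os.path.expanduser("~/Dropbox")  # assumed Dropbox location
    POLL_SECONDS = 2
    last_seen = {}  # path -> modification time at last compile

    def tex_files(root):
        """Yield every .tex file under root."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(".tex"):
                    yield os.path.join(dirpath, name)

    while True:
        for path in tex_files(WATCH_DIR):
            mtime = os.path.getmtime(path)
            if last_seen.get(path) != mtime:
                last_seen[path] = mtime
                # Compile in the file's own directory so the PDF lands
                # next to the source, where Dropbox syncs it back out.
                subprocess.run(
                    ["pdflatex", "-interaction=nonstopmode",
                     os.path.basename(path)],
                    cwd=os.path.dirname(path),
                )
        time.sleep(POLL_SECONDS)

A file-system event library would avoid polling, but a timed scan keeps the sketch dependency-free.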