My main topics are understanding, evidence and reasons, perception, (applied) social epistemology, and AI, though I am interested in most areas of theoretical philosophy. Below are some current and completed projects.
Published in Synthese
Abstract: In this paper, I argue that mental types can overlap. That is, one token mental state can be multiple types. In particular, a perceptual experience can simultaneously be a belief. This does not imply that belief and experience are type-identical; they merely share some of their tokens. When a subject perceives with content p, that content is usually accessible to the subject. By endorsing p, whether automatically or consciously, the subject comes to believe that p. In this instance, the perceptual experience, while retaining its content and phenomenology, becomes a belief. The possibility of this kind of overlap turns out to have epistemic benefits, especially in the face of arguments by Alex Byrne and Kathrin Glüer. I consider several objections to overlap, including the ideas that perception and belief have different kinds of content, that beliefs tend to outlast experiences, and that the two differ in phenomenology.
Published in Philosophical Studies
Abstract: It is commonplace amongst epistemologists to note the importance of grasping or appreciating one’s evidence. The idea seems to be that agents cannot successfully utilize evidence without it. Despite the popularity of this claim, the nature of appreciating or grasping evidence is unclear. This paper develops an account of what it takes to appreciate the epistemic relevance of one’s evidence, such that it can be used for some specific conclusion. I propose a basing account on which appreciating evidence involves being able to base correctly. That is, the agent is disposed to base various conclusions on her evidence that are objectively supported by that evidence. She could also derive correct conclusions if her evidence were slightly different. This account is cognitively undemanding, and it explains why appreciation is crucial for the core functions of using evidence, like excluding hypotheses and probabilistic reasoning. I contrast this view with possible rival accounts and argue that the rivals add nothing over and above the basing account.
Abstract: Here are two popular commitments: perceptual experiences can be used as evidence, and perceptual experiences have nonconceptual content. Historically, it has been argued that the two commitments are incompatible. The problem is meant to be that nonconceptual content is not open to scrutiny by the agent. The traditional solution is that the agent can assess her nonconceptual experiences by forming certain beliefs. For instance, say she has an experience with the content that pen is RED, where RED is nonconceptual. She then forms the (conceptual) belief that the pen looks red. This allows her to assess her experiences and use them as evidence. I claim that the traditional solution fails. The problem is that the interpreting belief takes over the justifying role the experience is meant to play. I show this through a test. Justifying force is the power to justify conclusions. This force can be toggled off. For instance, beliefs lose their justifying force when defeated. I consider cases where an agent has an experience and an interpreting belief. The agent then forms a novel belief. I show that whether the novel belief is justified depends on the status of the interpreting belief, not the experience. This shows that the experience is not being used as evidence. If it were, the status of its justifying force should matter. I remain agnostic about alternative solutions to the historical challenge. However, it is an important development that the traditional solution is off the table.
Abstract: How can lay users rationally rely on AI systems? Presumably, this requires having justified beliefs about whether a given AI system is reliable at fulfilling its assigned tasks. Lay users typically know little about AI systems and testing them is highly complex work that requires expert knowledge. By themselves, lay users thus have no good way to learn whether a given system is reliable. I argue that instead of relying on testing or other individual evidence, lay users can justify their reliance on AI through trusting designers to deliver reliable products. Lay users usually have reasons for this trust. Marketing AI systems comes with certain social norms and expectations that the product is reliable. Betraying these norms and expectations often results in commercial failure, meaning designers face both normative and economic pressure to ensure their systems are reliable. Lay users also have access to social evidence about whether designers are creating reliable systems. They receive testimony from acquaintances as well as people in various media about how well certain AI systems work. This often also involves input from experts, who can impact the public image of AI both directly and indirectly. Common usage of a certain AI system is also a sign that it is working well enough. Realizing that rational lay reliance on AI is based on trusting designers, rather than testing or knowing how AI systems work, yields important lessons about how to scaffold successful lay epistemic and non-epistemic agency when using AI systems.
Published in Erkenntnis
Ultimate Version (Open Access)
Abstract: The Preface and Lottery paradoxes have shown that competently deducing a novel conclusion from multiple justified premises does not necessarily result in a justified belief. However, those paradoxes do not arise if only a single premise is used. Yet, in recent years, justification closure for single premise deduction has also been challenged. Schechter has presented a case that, according to him, thwarts any attempt to formulate a general principle linking competent single premise deduction to justification. This paper develops a justification closure principle for single premise deduction that accommodates Schechter’s case, thus showing that his pessimism regarding the possibility of such principles is mistaken. I also argue that an alternative reply to Schechter’s case fails and that safeguarding justification closure constitutes progress towards establishing knowledge closure for single premise deduction.
With Harmen Ghijsen
Abstract: Debates on the rationality of endorsing conspiracy theories often focus on the primary evidence conspiracy theorists have: errant data, eyewitnesses, revealed documentation, and the kind of facts typical investigators would use. For a significant portion of conspiracy theorists (most Flat-Earthers, for example), their evidence does not support the conspiratorial conclusions they draw. If we focus only on primary evidence, we are thus forced to posit a deep kind of irrationality in order to explain how they arrive at their beliefs. We propose a more palatable alternative on which secondary testimonial evidence plays a crucial role in the formation of conspiratorial beliefs. While there often still is something irrational about accepting poorly supported conspiracy theories, that irrationality is no longer blatant. We compare our approach to alternatives in the literature and argue that it is descriptively more accurate and normatively more satisfying.