My main topics are perception, understanding, mental states, and AI, though I am interested in most things in theoretical philosophy. Below are some current and completed projects.
Published in Synthese
Abstract: In this paper, I argue that mental types can overlap. That is, one token mental state can be of multiple types. In particular, a perceptual experience can simultaneously be a belief. This does not imply that belief and experience are type-identical; they merely share some of their tokens. When a subject perceives with content p, that content is usually accessible to the subject. By endorsing p, whether automatically or consciously, the subject comes to believe that p. In this instance, the perceptual experience, while retaining its content and phenomenology, becomes a belief. The possibility of this kind of overlap turns out to have epistemic benefits, especially in the face of arguments by Alex Byrne and Kathrin Glüer. I consider several objections to overlap, including the ideas that perception and belief have different kinds of content, that beliefs tend to outlast experiences, and that the two differ in phenomenology.
Abstract: It is commonplace amongst epistemologists to note the importance of grasping or appreciating one’s evidence. The idea seems to be that agents cannot successfully utilize evidence without it. Despite the popularity of this claim, the nature of appreciating or grasping evidence is unclear. This paper develops an account of what it takes to appreciate the epistemic relevance of one’s evidence, such that it can be used for some specific conclusion. I propose a basing account on which appreciating evidence involves being able to base correctly. That is, the agent is disposed to base, on her evidence, various conclusions that are objectively supported by that evidence. She would also derive correct conclusions if her evidence were slightly different. This account is cognitively undemanding and explains why appreciation is crucial for the core functions of using evidence, like excluding hypotheses and probabilistic reasoning. I contrast this view with possible rival accounts and argue that the rivals add nothing over and above the basing account.
Abstract: Here are two popular commitments: perceptual experiences can be used as evidence, and perceptual experiences have nonconceptual content. Historically, it has been argued that the two commitments are incompatible. The problem is meant to be that nonconceptual content is not open to scrutiny by the agent. The traditional solution is that the agent can assess her nonconceptual experiences by forming certain beliefs. For instance, say she has an experience with that pen is RED as content, where RED is nonconceptual. She then forms the (conceptual) belief that the pen looks red. This allows her to assess her experiences and use them as evidence. I claim that the traditional solution fails. The problem is that the interpreting belief takes over the justifying role that experience is meant to play. I show this through a test. Justifying force is the power to justify conclusions, and this force can be toggled off. For instance, beliefs lose their justifying force when defeated. I consider cases where an agent has an experience and an interpreting belief, and then forms a novel belief. I show that whether the novel belief is justified depends on the status of the interpreting belief, not the experience. This shows that the experience is not being used as evidence; if it were, the status of its justifying force should matter. I remain agnostic about alternative solutions to the historical challenge. However, it is an important development that the traditional solution is off the table.
With Sietze Kuilman
Abstract: When it comes to AI systems, most researchers focus on the question of whether they are intelligent, rather than in what way they are intelligent. This paper investigates the latter question. Assuming AI systems have something akin to mental states, what are those states like? In particular, we are interested in what the content of AI beliefs would be like. We first describe how AI systems behave and what mistakes they tend to make. Then, we apply prominent theories of mental content determination, like functionalism, interpretationism, dispositionalism, and teleosemantics. We argue that AI beliefs tend to have alien rather than human contents. By that, we mean that AI systems work with radically different contents than our own person-level mental states. For example, we can think of a picture as containing a panda, whereas most AI systems can only think of pictures in terms of pixels. We show that this result has serious practical and ethical consequences, and we suggest how AI systems could be improved such that they would have human contents.
Abstract: Most epistemologists seem to think that mental states, like beliefs and perceptual experiences, can be evidence. Likewise, many epistemologists think that evidence should be appreciated in some way by the subject. Usually, the assumption seems to be that just having a certain mental state suffices for appreciating it. I show that things are more complicated. First, I claim that mental states routinely fail to be evidence due to a lack of appreciation on the agent's part. For instance, I might fail to appreciate the relation between my belief in certain basic axioms of mathematics and the truth of certain highly complex theorems. This precludes me from using that belief as evidence for those theorems. The naïve view of appreciating mental states is partially vindicated, however, in that some mental states, like beliefs, are always evidence for the agent for at least some conclusion. Perceptual experiences, by contrast, if they are nonconceptual, are not automatically evidence for the agent for some conclusion. For those states, it is false that having them entails that they are appreciated sufficiently to be used as evidence in any way by the agent.
Abstract: Once upon a time within the philosophy of perception, vision was the only game in town. In recent decades, the other senses, like hearing, touch, smell, and taste, have started receiving individual attention. Despite this, it is still an almost universal assumption that all senses have the same metaphysical nature. I discuss an alternative family of views, which I call mixed views. On a mixed view, not all senses have the same metaphysical nature. An example mixed view is one on which vision and touch are intentionalist, while smell, taste, and hearing are naïve realist. I show that certain commitments make mixed views attractive. For instance, the senses feel quite different, and a mixed view can reflect these differences. A potential problem for mixed views is multisensory experience, where various senses together create one experience. Such experience might be difficult to account for on a mixed view, given that the senses would have radically different metaphysical natures. However, I argue that there are ways to overcome this obstacle.