Information on the UTRGV talk on 9/27 (ver 5)

Title: Facts-values, apriorism, coherentism, naturalism: reassessing philosophical dichotomies in the age of AI

Abstract

In this talk, I argue that the fact-value (F-V) dichotomy, in a modified version, can be used to develop AI models in “machine ethics” and philosophical approaches to AI. More generally, I review (a) whether philosophy’s history, and some celebrated results within it, are relevant to the current development of AI and machine learning (ML). I also reflect on (b) how AI architectures can be constructed to reflect what philosophers have discussed for centuries. In line with (a) and with recent literature in the philosophy of AI (Buckner, Magnani, Humphreys, Chalmers, etc.), I evaluate how canonical dichotomies from philosophy help us better understand current progress and limitations in AI and ML. For (b), I focus on a modified F-V dichotomy: how it can be reconstructed in the age of AI and used to design machines that may develop “moral cognition.” The roots of the F-V dichotomy, its dismissal, and its connection to the analytic-synthetic distinction are also explored as potential components of ‘philosophical models’ of AI.

Synopsis/introduction

Philosophical models of technology

How does philosophy relate to new technologies? How can philosophy and the humanities influence the advancement of technology ‘in the making’?

It is relevant to present several positions on the above questions. First, there is the hypothesis that some pertinent advancements in science and technology significantly influence philosophical thinking. Second, one can expect a knowledge transfer in the other direction: some pertinent philosophical ideas shape and reshape the technological and scientific imagination, especially with respect to normative and critical analyses of technological and scientific progress. If both positions are plausible, one can expect a richly interwoven connection between technology and philosophy: at the limit, this can create a closed feedback loop between philosophy and novel technology. In Mitcham’s terms, this is the humanities-oriented philosophy of technology rather than the engineering-oriented philosophy of technology. This relationship can be conceptualized as an instance of “co-evolution”: I. van de Poel recently argued that technology and society co-evolve (2020). The related idea of a co-evolution of philosophy and technology is enticing when we tackle the future of philosophy and the humanities in the digital age. A third position on the technology-philosophy relationship questions whether philosophy should be sensitive to technological advancements (as technology is a mere ‘tool’) and whether philosophy is even vaguely relevant to technology (Plato, Rousseau, Heidegger, Ellul, i.a.). We assume the opposite perspective: science and technology ‘co-evolve’ with philosophy (Kitcher, Williamson, Putnam, Quine, i.a.) and are highly relevant to each other.

As a preliminary move, this paper assumes that philosophers build constructs of the world, or more precisely models, analogous to scientific ones. When philosophers contemplate a given world W0 and try to understand, explain, or ‘model’ it, they offer a set of ‘philosophical models’ of W0—mental constructs premised on some philosophical constituents. When an element of novelty X, not previously known as part of W0, occurs, the philosophical model(s) of the new X+W0 are altered: constituents of the previous philosophical models of W0 are eliminated, and new constituents are added to the initial model.

The philosophical questions in these cases are:

How do models of X+W0 differ from models of W0?

What needs to be updated when we discover, unexpectedly, that there is an X in W0?

Equally important, we should adjust how we philosophically ‘model’ the world when X is a new technology, a new scientific discipline, or an emerging social or political context. Some of these technologies are said to disrupt or reshape our philosophical models when the changes they bring are substantial and alter the very nature of the philosophical endeavor. In most cases we expect a new ethics, a new epistemology, or a new philosophy of mind to be indispensable to the model of X+W0.

As the present argument goes, this is not always the case. This paper agrees that models of the world containing a relevant novel technology (e.g., AI, LLMs, gene editing, quantum computation) should be premised on grounds different from those of philosophical models without it. What is proposed here is to include “old and settled philosophical issues” in this process of revising philosophical models. When we reshape philosophical models of ‘AI+W0,’ we may need to revisit arguments that were formerly leveled in philosophical disputes.

Backreaction from philosophy

There is an important difference between cases in which X is a novel technology or artifact and cases in which we discover an existing, mind-independent feature X of reality. This co-evolution or backreaction is probably most palpable in the interplay we now witness between Large Language Models on the one hand and philosophy of language, epistemology, and philosophy of mind on the other. As a quick example, we hear philosophers arguing for or against “understanding a language,” “consciousness,” and “inner language” using the conceptual apparatus of the early 20th century or even earlier (for example, Buckner revisits the rationalist-empiricist debate in light of new results in deep neural networks).

X, as a technology in the making, itself incorporates elements of the world’s previous or current philosophical models. This has been the case with models of language or mind, and with elements of formal logic, entering the early development of Artificial Intelligence. We are interested in how this interwoven connection can create a feedback loop, a backreaction: the philosophical model of X+W0 creates and inspires the development of X itself.

Celebrated and infamous dichotomies in philosophy

The paper focuses on a precise element of philosophy that contributes to such a backreaction: philosophical dichotomies. More precisely, we assume that “W0+AI” models would and should include some canonical dichotomies, even if they are no longer considered viable alternatives per se in contemporary philosophy. Dichotomies are frequently encountered in the history of philosophy and beyond: they mark irreconcilable conceptual, inferential, or categorical differences between two concepts. As a first step, this talk surveys some preeminent dichotomies from modern philosophy and how they were deflated and ‘collapsed’ in the 20th century. In contemporary philosophy, the leading view is to discount the importance of most canonical dichotomies: mind-body, analytic-synthetic, a priori-a posteriori, observer-observed, subjective-objective, public-private, nativism-empiricism, nature-nurture, and, last but not least, the infamous fact-value (“F-V”) dichotomy. This rejection of dichotomies is sometimes associated with various forms of holism, semantic reductionism, pragmatism, or verificationism. We will integrate the rejection and acceptance of dichotomies into the general view of coherentism.

As paradigmatic cases of arguments against philosophical dichotomies, we focus on arguments against the F-V dichotomy, sometimes called “F-V collapse arguments” (Searle 1964; Putnam 2016; 2002; Singer 2015; Cohen 2003; Walden 2025). They are inspired by pragmatism, naturalism, and holism, which are widely used in philosophy of science, epistemology, and ethics.

Reassessing previous philosophical models, even those previously dismissed on various grounds, is not in itself new. Recently, there has been a trend of revisiting classical philosophical issues through the lens of developments in Virtual Reality, Artificial Intelligence, or Machine Learning (see C. Buckner, P. Humphreys, D. Chalmers). Various philosophers are willing to reinstate old debates about the sources of knowledge, behavior, learning, intelligence, and moral agency. In the same vein, this paper suggests a gambit: to better grasp the philosophical underpinnings and limitations of recent advancements in AI (W0+AI, to include the way AI reshapes the world in which we live), one should reconstruct some canonical dichotomies and revisit how they were dismissed in philosophy. More precisely, we argue that a philosophical model of the world with AI fares better if the models of AI include the fact-value dichotomy and a revised version of the analytic-synthetic dichotomy. The focus is on the moral agency of machines as a topic in the more general domain of “machine ethics.”

A set of foundational questions in AI can be addressed by this strategy: Are machines moral agents? Can they learn morality? Can they reason on moral matters? 

One can ease one’s way into these foundational questions by employing some dichotomies, especially the F-V one, in our philosophical models of W0+AI. The talk assumes that any result in philosophy, including canonical dichotomies and their dismissal, is open to revisions triggered by new advancements in science, technology, society, etc. The focus is primarily on the fact-value dichotomy, which has a rich philosophical pedigree (Plato, Hume, Kant, Moore, Sidgwick). Starting from this dichotomy, we evaluate the prospect of building “moral machine learning,” interpreted as a multidimensional endeavor. We discuss how facts and values can in principle be used as separate “data” in training and testing AI algorithms, and as objectives. We discuss cases in which the training data is purely factual, whereas strong conditional weighting (“attention”) is based on values (V).
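One highly simplified, hypothetical reading of this separation can be sketched in code: factual data supplies the training targets, while values enter only as per-example weights on the learning objective. The function name, the squared-error objective, and the weighting scheme below are all illustrative assumptions for exposition, not a proposal made in the talk.

```python
# Toy sketch: facts (F) and values (V) kept as separate inputs to a learner.
# Facts fix the targets; values scale each example's contribution to the loss.

def value_weighted_loss(predictions, facts, value_weights):
    """Mean squared error over factual targets, where each example is
    scaled by a value-derived weight (the 'attention' based on V)."""
    assert len(predictions) == len(facts) == len(value_weights)
    total = 0.0
    for p, f, v in zip(predictions, facts, value_weights):
        total += v * (p - f) ** 2  # factual error, amplified or damped by value
    return total / len(facts)

# Two factually identical errors (0.1 each), weighted differently by V:
loss = value_weighted_loss([0.9, 0.1], [1.0, 0.0], [2.0, 0.5])  # ≈ 0.0125
```

The point of the sketch is only structural: the factual and evaluative components remain distinct inputs, so one can vary the value weights without touching the factual data, mirroring the F-V separation discussed above.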

Philosophical models of AI

We assume philosophers build models analogous to scientific ones to address some of these matters. Philosophers should adjust how they ‘model’ the world when a new technology, scientific discipline, or social or political context emerges. This paper argues that models of the world with a relevant novelty (e.g., AI, LLM, etc.) should be premised on different grounds than philosophical models without it.

We argue that a ‘philosophical model of AI’ (particularly the algorithms that process and deal with value and moral judgments), which includes the fact-value dichotomy as a hypothetical element, fares better than a philosophical model of AI without it.

Second, given that we can model AI based on the F-V dichotomy, we address the following questions: what does an algorithm based on a philosophical dichotomy look like, and why build one? We discuss the structure of a machine based on this dichotomy; modeled in this way, the algorithm becomes a multi-purpose machine.

Moral coherentism, moral cognition, and moral progress are discussed in the framework of the ‘fact-value’ distinction applied to artificial morality.

Conclusion

The paper concludes that some so-called “settled” arguments from contemporary philosophy can be reconstructed and revisited in the age of AI, benefiting both philosophy and computational science. We end the talk by assessing the knowledge transfer between canonical philosophy, computational science, and moral psychology.



Zoom link (UTRGV.edu or Illinois.edu logged accounts): https://bit.ly/UTRGVPhilRes

Flyer:
