The Canny Valley & Complex Conflicts
OPINION
By Dr. Mark Szabo
November 29, 2023
Read time: 10 minutes
We seem to have an instinctive urge to trust GPTs. I call that the Canny Valley. It’s the positive flip side of our negative response to an almost human-like robot, which Masahiro Mori called the Uncanny Valley. This urge to trust GPTs is rooted in society’s current approach to truth, and it has important implications for those who manage or are involved in complex conflicts. As long as human conflict is driven by our own fallible perceptions of the world around us and the sometimes hard-to-fathom actions of others, the Canny Valley is going to make it easier for us to avoid grappling with the messy reality of complex conflicts.
In 1970, roboticist Masahiro Mori observed that when we see a robotic object that looks almost, but not quite, human, we often have an uncomfortable feeling. He called that negative response the Uncanny Valley between what’s human and what’s not. AI-generated content, on the other hand, we seem very eager to embrace. I call this positive reaction the Canny Valley.
For this article we’ll use GPTs as an example of AI content. GPTs (Generative Pre-trained Transformers) are a type of language prediction model trained to produce human-like responses to queries. They are built on underlying models known as Large Language Models (LLMs), which are trained on vast bodies of text. A GPT’s accuracy depends on its predictive algorithm and its underlying source data, and GPTs are, self-admittedly, susceptible to inaccuracy and bias. For example, here’s what appears directly beneath the prompt box in OpenAI’s ChatGPT: “ChatGPT can make mistakes. Consider checking important information.” GPTs are not yet a reliable source of truth, but instead of the unease one might expect from the uncanny distance between human and humanoid, we’re seeing an implicit affinity for AI-sourced data.
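To make the prediction point concrete, here is a minimal sketch of querying a GPT programmatically with OpenAI’s Python client. The model name, prompt, and setup are illustrative assumptions, not a recommendation; the thing to notice is that the reply is generated text, not a verified claim.

```python
# A minimal sketch of querying a GPT, assuming the `openai` Python package
# (v1+) is installed and an OPENAI_API_KEY environment variable is set.
# Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {"role": "user", "content": "What typically drives a municipal zoning dispute?"}
    ],
)

# The reply is a statistical prediction of plausible text, not a verified
# claim -- hence the caveat under ChatGPT's own prompt box.
print(response.choices[0].message.content)
```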
Allow me to go “OK, Boomer” on you for a second here. Or in my case, “OK, Gen X.”
Not so long ago, if you had told people that a future computer would explain very complicated topics, you might have met skepticism. If you had further predicted that those future humans would implicitly trust the output, you’d have gotten disbelieving surprise. And if you had added that the trust would hold despite the output being error- and bias-prone, you would have been laughed out of the room. So, what has changed?
I believe there are a few factors creating the Canny Valley. First, we live in a time of epistemological uncertainty. The very idea that there even is a “truth” about which we can make claims is in doubt. We hear phrases like “I’m just speaking my truth,” as if truth were something personal to each individual, not something outside ourselves that we all aspire to understand. The postmodernists have been very successful on that front.
Second, we have easy access to personal media technologies, which content providers take full advantage of. News media, for example, have always benefitted from our natural human negativity bias (“If it bleeds, it leads.”). Now, however, that bias can be hyper-targeted to every individual’s unique preferences without being moderated for mass consumption. This gives everyone their own unique level of comfy-wumfy confirmation bias: a regular dopamine drip that confirms our priors in a way that’s specifically engineered to be as addictive as possible.
Third, our academic institutions have cheerfully abandoned the idea of searching for objective truth, many with the Rousseauian goal of leading humanity to a better future by severing ties with long-standing civic institutions. That attitude has long since left the halls of academia and gone mainstream in our society.
Lastly, humans are losing the ability to think critically and grapple with complex issues, in part because of the foregoing, and in part because we have atomized our society to the extent that we rely on each other’s specializations. Long gone is the ideal of the Enlightenment polymath with multiple competencies that cross many disciplines of learning. How would that even be possible these days, with all there is to know?
In this environment, the idea of an all-wise, all-knowing computer that scours the wisdom of humanity and gives us answers to complicated issues is very attractive. Far from feeling the uncanny unease of the humanoid object, we race headlong to embrace the “truth” from AI precisely because it is not human. We don’t have to worry about each other’s individual “truths”; we can just all agree that the AI is truthy enough. We can let our digital tools keep serving up dopamine, just the way we like it. And we can relax into our specialized areas of knowledge and expertise, without having to grapple with other approaches or disciplines.
If any of the foregoing is accurate, the Canny Valley has serious implications for the management of complex conflicts. I have some predictions.
First, the Canny Valley is going to make polarization more acute. As I discuss in my book Fight Different, more accomplished minds than mine (see links below) have concluded that complex conflicts are often created and perpetuated when we oversimplify the hard-to-understand matters involved. I call this the Coherence Trap, because it often happens when our natural need for coherence (i.e., to make sense of things) outweighs our ability to understand the many factors involved in the conflict. When there’s too much to process, we go with what’s comfortable and easy to understand, and that’s when we miss important nuances that might give us a fuller, more robust picture of what’s really going on. GPTs may make this worse in a few ways. The output will only be as complete as the underlying LLMs, which are as open to bias and oversimplification as anything else developed by humans. More importantly, however, GPTs are also only as good as the prompts we give them. If we are asking for information about a conflict, for example, our queries need to be as even-handed as possible, or we risk asking only for information we’re already comfortable with, as the sketch below illustrates. The implicit urge to trust the output only compounds that challenge.
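To make that concrete, here is a minimal sketch contrasting a loaded prompt with an even-handed one. The dispute and the wording are hypothetical; the point is that the first query asks the model to confirm a prior, while the second asks for the fuller picture.

```python
# Two ways to ask a GPT about the same (hypothetical) conflict.
# The first invites confirmation bias; the second invites nuance.

loaded_prompt = (
    "Explain why the developers are to blame in the riverside zoning dispute."
)

even_handed_prompt = (
    "Summarize the strongest arguments on each side of the riverside zoning "
    "dispute, note which facts are contested, and flag anything uncertain."
)

# Either string could be sent to a GPT; only the second gives the model
# a chance to surface the nuances the Coherence Trap filters out.
for label, prompt in [("Loaded", loaded_prompt), ("Even-handed", even_handed_prompt)]:
    print(f"{label}: {prompt}")
```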
Second, the Canny Valley is going to let us over-emphasize the subjective aspects of complex conflicts. It’s useful to think of conflicts as the perception of incompatible activities. If that holds true, conflicts arise because of how activities are perceived, not necessarily because they are actually incompatible. This is why I have argued that a critical aspect of managing complex conflicts is understanding the rational and emotional drivers of the key participants. Academics might call that taking a phenomenological (AKA “lived experience”) approach to making sense of a conflict’s drivers. That might sound like we are prioritizing participants’ perception of reality over facts, and if that works for understanding conflict behavior, why do we need GPTs to be truthful? The trap is that a person’s subjective perception of truth is not the same thing as a GPT making stuff up. We can get to an objective, verifiable truth about how people think and feel, even though those thoughts and feelings are subjective to them. A conflict manager needs to know the reality of what’s driving participants’ perceptions, not some potentially fabricated or unreliable content pulled from an LLM.
Third, the Canny Valley will make it much easier to query a GPT than to communicate with humans, particularly those on the other side of an issue. Researching human behavior to find motivations and choice drivers is a challenging endeavor to begin with. Few in the conflict management realm are formally trained in it, and even fewer have the research infrastructure required. It will be too easy to replace actual human understanding with what a GPT tells us. An important way to understand a complex conflict is to focus on the patterns of interaction between the participants, just as you would when addressing any other complex natural system. Understanding those interpersonal interactions and finding ways to change them (and thus change the entire conflict system) is not possible by prompting a GPT. This is even more the case because the most crucial interactions are between a few key players in the conflict, not a theoretical aggregation of an LLM’s data.
Last, it’s worth noting that the Canny Valley is not necessarily destructive if we keep it in its proper place. GPTs and other AI-facilitated research tools make the life of a conflict manager and behavior researcher much easier. I regularly use these technologies to suggest possible theoretical frameworks for a specific situation, to develop testable hypotheses about behavior, and even to frame survey questions (a workflow sketched below). AI can help remove drudgery, but not judgement. You still need to trust but verify whatever the output is. Again, ChatGPT’s caveat says it all: “ChatGPT can make mistakes. Consider checking important information.”
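As one hedged sketch of that “drudgery, not judgement” division of labor: the model drafts candidate survey questions, and a human reviews every one before it enters the instrument. The client setup, model name, topic, and prompt are assumptions for illustration, not a prescribed method.

```python
# A sketch of "AI removes drudgery, not judgement", assuming the `openai`
# package (v1+) and an OPENAI_API_KEY in the environment. Names, topic,
# and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_survey_questions(topic: str, n: int = 5) -> list[str]:
    """Ask the model for candidate questions -- a first draft, never the final word."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Draft {n} neutrally worded survey questions about {topic}, one per line.",
        }],
    )
    return [q.strip() for q in resp.choices[0].message.content.splitlines() if q.strip()]

# Trust but verify: no question ships without a human reading it.
for question in draft_survey_questions("residents' perceptions of a local land-use conflict"):
    if input(f"Keep? [y/N] {question}\n> ").strip().lower() == "y":
        print("ACCEPTED:", question)
```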
It’s still early days in the development of GPTs, LLMs, and all that. They may become more reliable as the technology advances. They may outstrip our ability to make sense of complex issues. But as long as human conflict is driven by our own fallible perceptions of the world around us and the sometimes hard-to-fathom actions of others, the Canny Valley is going to make it easier for us to avoid grappling with the messy reality of complex conflicts.
DEEPER RESOURCES
BOOKS
Hamlet’s Mirror: Conflict and Artificial Intelligence
by Andre Vlok
Fight Different: The Power of Focal Thinking in Systemic Conflicts
by Mark Szabo
The Way Out: How to Overcome Toxic Polarization
by Peter Coleman
Attracted to Conflict: Dynamic Foundations of Destructive Social Relations
by Peter Coleman, et al.
COURSES
Leadership in Values-Based Conflicts
by Mark Szabo
https://www.udemy.com/course/navigating-public-facing-conflicts/?referralCode=4AFFF6AC63B8D337929A
OTHER
AI-powered surveys: Hyped or helpful?
by Mark Szabo
https://martech.org/ai-powered-surveys-hyped-or-helpful/
ChatGPT Tutorial: How to Use Chat GPT For Beginners 2023
by Charlie Chang
NOTE: Some of these contain affiliate links, which help support the work of the Center for Complex Conflict. Other than Mark Szabo, these authors are not directly associated with the CCC; we just appreciate their work. Thanks!