Regenerative sustainability provides a pragmatic foundation to demonstrate caregiving and relationality to people and planet. It shouldn't be a "battle against climate change"; it should be a "relationship we're working to heal."
"Computers can work with hard, hand-coded rules or statistical processing based on historical data, but never in relationship to the full situation at hand and thus never with wisdom. This irrelationality ends up devaluing humanity while also leaving no space for it."
— Emily M. Bender, "Resisting Dehumanization in the Age of 'AI'"
For almost a decade, in business and society, media and leading experts in the field of artificial intelligence (AI) have proclaimed each moment when a certain iteration of automation "beat humans" at a specific task or discipline. Often billed as "natural evolution" for society, with calls to "not hinder innovation," these announcements about the seemingly inexorable replacement of humanity typically leave out a seminal aspect of how most AI systems are designed:
They prioritize the rational while devaluing the relational.
If one believes the brain is essentially a computer, where knowledge is defined as an aggregation of data, this reflects a bias towards rationality as the primary, if not sole, driver of what it means to be human. Emotions, spirituality, music, art, culture and how humans relate to each other get classified as "squishy" or "subjective." These marginalizing terms effectively usher half or more of our humanity out of the room.
No wonder loneliness and isolation are at pandemic levels. We're being trained as a society to seek solace from systems trained to ignore our relationality.
Companies wondering why employees are nervous about the use of large language models or various AI tools may be inclined to assume workers aren't tech savvy, or that they fear they're training their machine successors to take their jobs. But when an invisible bias towards rationality is embedded within an economic paradigm prioritizing productivity, simply training workers how to use the latest version of ChatGPT isn't enough to address the great gloom.
Leaders need to train and equip themselves in care. Being "relational" in this regard doesn't mean being intrusive or invading someone's privacy. It's a recognition that being present for someone as a human is an act of care and relationality in and of itself.
To be clear: while machines and algorithms can and do help humanity in myriad ways, these tools cannot connect with us in pheromonal, physiological ways. This is not a criticism; it's a critical distinction in outputs and a recognition of how people actually communicate with each other.
The genuine opportunity for brands and companies utilizing AI systems (and "systems" is critical to mention, as all AI relies on human or other data to function) is to inform workers and all stakeholders that it is in sharing knowledge and information with each other that we communicate the totality of who we are as humans. Pick 10 employees to create a presentation based on the same two-page report. Based on their interpretation and delivery (how they smile, when they gesture, the descriptive words drawn from their experience, the words from different languages used to better connect with specific audience members), the same information in the report will be communicated in 10 different yet equally valuable ways.
Recognizing a balance between the rational and the relational in how society uses technology also opens the door to enlightening conversations on how to avoid marginalization. When Western values of rationality are mirrored in designs that don't honor relationality in the application of AI or other technologies, harm happens. Framing the results of AI applications as "unintentional harms" is irresponsible when designers knowingly disregard cultural framings that vastly change the perceived and actual outcomes of a specific product or service.
Brands also have an opportunity for education regarding the concept of "relationality" in terms of how humans interact with AI systems, as studied in the research field of Human-Computer Interaction (HCI), a multidisciplinary field encompassing behavioral science, psychology and sociology. As an example of how HCI issues manifest, it's common knowledge that if an AI-enabled robot makes a certain gesture with an "arm" in one country or culture, that gesture may be offensive in a different culture. This example speaks to the nature of human agency, where our capacity to act on our will is affected by the gestures, voices, pheromones (the visceral nature of our physiology in close contact) and actions of other humans. Robots or AI systems designed with anthropomorphic features (a chatbot or other system to which users are prone to attribute human characteristics) are deceptive by design.
Yes, some designers may have a fiscal or ideological agenda. But the deeper deception is trying to convince a person that their unique relationality can be replaced. This includes anyone designing these tools who may never have been given the means to equip themselves or others with care.
This is where regenerative sustainability provides a pragmatic foundation to demonstrate caregiving and relationality to people as well as the planet. We know that regeneration goes beyond sustainability by (1) restoring, renewing and/or healing the systems we depend on, while also (2) improving the inherent ability of those systems to restore, renew and/or heal themselves more effectively. What we may not always consider is that humans are a core part of the "healing systems" that live symbiotically with nature. Relying only on rationality for AI, or for any tool or KPI, may also deceive us into thinking we are more powerful than the planet and can "win the battle against climate change" by applying more knowledge and technology to any issue.
But who said this is a "battle"? Words matter. Metaphors matter.
If Mother Earth is alive, why wouldn’t we have a conversation with Her to see
what Her needs are? Why are we more willing to communicate with a fabricated
form of humanity than to commune with the life force providing all of us water,
air and life?
It shouldn’t be a “battle against climate change.” It needs to be a
“relationship we’re working to heal.”
And the healing starts with the recognition that every human has worth and can be in relation to another person. I call this The Seen Transition, which is based on the fact that relationality, the potential for connection between people, cannot be replaced.
Sabelo Mhlambi, a subject-matter expert in AI and ubuntu ethics, provides further justification for recognizing the totality of who we are as humans in his foundational 2020 paper, "From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance":
"Accepting another human as part of oneself is to be in harmony with ultimate reality, for accepting others is in compliance and reverence for Umvelinqangi — the ultimate reality from which humans and all forces derive and are intricately and inextricably interconnected. This provides the foundation for relational personhood. Relationality is the acceptance of the individuality of others — for all are interconnected — and in general, it is the acceptance of the interconnectedness of humans, nature, and the spiritual."
It is irrational, immoral and irresponsible to deny our full humanity in the age
of the algorithm.
Being relational is hard because we haven’t given enough value to what passes
between two people when sharing information, emotions and ourselves.
So let’s work on this together.
John C. Havens is the author of Heartificial Intelligence: Embracing our Humanity to Maximize Machines (Penguin, 2016).
John C. Havens is the Staff Lead of IEEE’s Planet Positive 2030 Initiative, where he also architected and directed IEEE’s largest body of work on AI Ethics since 2016.