Nonviolent Communication and Artificial Intelligence: Navigating Humanity, Technology, and Connection in a Hyper-Digital Age

by Alan Rafael Seid, CNVC Certified Trainer

Personal disclosure: I am neither all-for nor all-against the technology known as “Artificial Intelligence” (AI) — and that is reflected in the tone of this article. The more I research it, the deeper my ambivalence grows. One reason for this is the immense complexity of the topic — there is so much I and we don’t know! And it’s changing at a dizzying speed. Every day there are new developments surpassing those of the day before.

As you’ll read below, there are specific instances in which I’ve used AI intentionally. I also acknowledge and recognize that the concerns voiced by experts need to be taken very seriously.

I thought it could be valuable to you, dear reader, for me to share my point of view transparently at the outset — to make the lens with which I have approached this article more visible.

In this article we look at AI through an NVC lens to see what we learn and discover about this exciting and frightening technology.

Introduction — Why This Conversation Matters Now

There are moments in human history during which a major technological innovation has delivered profound changes: the printing press, the telephone, the lightbulb, the automobile, commercial aviation, the transistor, the personal computer, the internet, and the smartphone have each constituted a major leap from what was possible before.

Each of these innovations has solved problems and also created new ones!

More recent developments include blockchain technology, driverless cars, 3D printing, gene editing, augmented reality… and, as the focus of this article, Artificial Intelligence or AI.

As I’ve researched AI for this article, I’ve come to believe that this technology is going to change everything — far more than we’ve yet seen or than most of us can imagine.

Artificial Intelligence has emerged rapidly in human society and is poised to profoundly impact every area of human civilization.

At the same time, it has stimulated increasing concerns, some of them alarming!

Though I do occasionally talk with someone who is unaware of AI, there are very few people now who don’t know something about it. And most people’s lives are already impacted by it, whether or not they are aware of it.

AI has brought us a cultural moment of both awe and anxiety, thrilling possibilities for some, and terrifying ones for others!

Nonviolent Communication (NVC) offers us a grounded, human-centered lens for understanding AI — and this is the intersection we’ll be looking at in this article.

In the pages that follow I’ll give you a brief intro to NVC, explain AI at a basic level — and clarify that AI is a set of strategies, not a universal human need.

In this exploration of Nonviolent Communication and Artificial Intelligence, we’ll also look at the topic of AI and human connection, what needs AI serves, and where it might get in the way of contributing to the fulfillment of human needs.

We’ll also explore some important questions to ask before and as we engage with this technology.

This is a massive topic, with new developments emerging continuously! My approach is to balance comprehensiveness with brevity, and to offer something that will remain relevant and applicable as long as possible given that this is a dynamic and rapidly expanding field.

A Quick Introduction to Nonviolent Communication (NVC)

What is NVC?

NVC was developed by Marshall Rosenberg, PhD, starting in the 1960s and 70s, and he continued to refine it for the following several decades.

The purpose of NVC is to create the quality of connection out of which people naturally and spontaneously want to contribute to one another’s well-being.

Dr. Rosenberg’s achievement was to distill the essential elements in thought, language, communication, and the use of power that contribute to creating that quality of connection skillfully and consistently.

NVC has two aspects I want students and readers to understand:

  1. NVC has a specific framework and “tools” — including principles and key differentiations, and,
  2. NVC has a consciousness and intentionality — which I will describe after we cover the framework and the tools.

The NVC framework and tools

The NVC framework includes three areas where you can put your attention in the service of connection: listening, speaking, and interior clarity. You can also think of these as empathy, honesty, and self-connection.

Being clear about what you are feeling, needing, and wanting (interior clarity) makes for clearer and more sincere self-expression (honesty).

Being self-connected also translates into showing up as a more present listener.

At a basic level, empathic listening skills reorient our old programming from listening-to-respond to listening-to-understand.

Listening so others feel safe opening up, and speaking what is true for you in a way that people can receive, understand, and connect to… this is the essence of the dance of connection that NVC gives you access to.

What Dr. Rosenberg called “the NVC Model” also has four components, which constitute four different types of information: Observation, Feeling, Need, and Request (OFNR for short).

Observations
This refers to observing the neutral facts separate from evaluative thinking, including interpretations or our story about what happened.

Feelings
In NVC we recognize feelings as indicators — like a light on the dashboard of a car — that something is happening at a deeper level. This “deeper level” leads us to the needs.

Needs
Needs are how Life is showing up inside you, me, or any human being — as a longing for vitality, love, safety, presence, belonging, care, mattering, intimacy, self-expression, autonomy, and so forth. Needs are our core human motivators. I define needs more fully below.

Requests
In NVC you take responsibility for what you would like by making a clear request. My training reminds me that an NVC request means I am also holding the other person’s needs with care — which means I’m open to a “no.” (NVC also recognizes the enormous cost of getting what you want through demands rather than requests.)

Each of the three areas of focus mentioned above — self-connection, empathy, honesty — includes the four components of OFNR:

  • Within myself, I can clarify whether I have a story or interpretation about something, as opposed to a clear observation; I can connect with what I’m feeling and needing… and I can formulate a clear, doable request.
  • I can listen for these four pieces of information, even if they are not explicitly expressed; and,
  • When I speak, I can make sure I include them when they would serve the connection.


Formal vs Informal NVC

When you learn NVC in a workshop, you may be asked to practice a specific syntax which follows Observation, Feeling, Need, and Request (OFNR). For example: “When I notice that I have no calls or texts from you in a week, I feel sad because I have a need for connection. Would you be willing to tell me what got in the way of reaching out?”

This syntax is known as Formal NVC — and nobody speaks like this!

Formal NVC is primarily for practice in a class, workshop, or practice group setting. We do not use formal NVC in real-world interactions and conversations because, ironically, formal NVC can get in the way of connection!

However, Formal NVC has two important purposes!

  1. It helps you keep important key differentiations very clear in your mind, and,
  2. It trains your attention to go to OFNR rather than to thoughts about who’s right and wrong — and supports you in translating judgments to feelings and needs.

This element of skill-building is why we practice formal NVC in a workshop or practice group.

Informal NVC is what we use for everyday interactions.

In informal NVC the intention to connect and the willingness to work toward a mutually beneficial outcome are there. We remember that the purpose is not to speak NVC correctly, but rather, connection.

Much of the time, informal NVC means we feel safe enough to be less concerned about our word choice! We may use linguistic shortcuts, playful language — or no words at all — and as long as connection is happening, it can be seen as informal NVC.

Informal NVC often requires having at least minimally integrated some of the tools and, especially, being rooted in NVC Consciousness.

NVC Consciousness

Out beyond ideas of wrongdoing and rightdoing, there is a field. I’ll meet you there.
— Rumi

Besides the concrete tools we can practice and employ, there is NVC Consciousness — which you can think of as the underlying mindset and intention of NVC.

NVC Consciousness is more than the purpose and the intention of NVC. It acknowledges principles that guide both our intentions and where we put our attention. One such principle, for example, is that whenever I create a win-lose situation, in the long term I also lose — and this is because we are interdependent, interconnected, and interrelated.

NVC Consciousness recognizes that, as human beings, all our needs are intertwined.

Because the purpose of NVC is to create a high quality of connection, if my intention is to manipulate a specific outcome or to get my way, then what I express will not be NVC no matter how skillfully I use the tools or articulate OFNR!

This is an important point: you can use words that sound like NVC, but if the intention is misaligned then it would not be NVC!
I have heard people talk about NVC being “weaponized” — and this is exactly why: the intention and the purpose need to underpin, inform, and suffuse the communication.

It bears repeating: when NVC language gets used to “win” or to manipulate others, then it is no longer NVC, no matter how well it adheres to NVC syntax.

So the consciousness is paramount, and the framework and tools simply help you be more skillful at fulfilling the intention reliably over time.

Needs versus Strategies

Differentiating needs from strategies is a crucial distinction in Nonviolent Communication. Failure to differentiate them can lead to conflict and suffering.

This distinction will also serve us well when we look at Artificial Intelligence.

So let’s define needs as we use them in NVC!

When we talk about needs in NVC we are referring to Universal Human Needs.

Needs are:

  • The conditions necessary for Life to thrive in any human being, regardless of cultural background or geographic location. These include things like love, trust, choice, belonging, safety, and so forth. (Here is a handout with a list of feelings and universal human needs to get you started [LINK: https://nonviolentcommunication.com/wp-content/uploads/2019/07/feelings_needs.pdf].)
  • How Life is showing up in this moment in you, in me, and in each person. This moment it might be trust, the next moment it might be safety, then autonomy… Needs are life energy as it expresses through each of us in the present moment.
  • Core human motivators. Needs impel us to speak or act. Anytime someone speaks or acts it is, consciously or unconsciously, in the service of one or more needs.
  • Energies that want to flow, not holes to be filled. They do not refer to any sense of lack, but could be more akin to a yearning or longing.
  • Universal by definition, meaning that they apply to all people. Needs exist in a collective field, similar to language.

When we say something is life-serving, we mean that it satisfies one or more needs.

Conflicts cannot happen at the level of needs. Conflicts happen at the level of the strategies we employ to try to meet needs.

Strategies are:

  • Very important because they are the specific ways we go about meeting or satisfying needs.
  • Not universal — they are specific to a situation or context.

If something refers to a specific person, location, action, time, or object (acronym: PLATO), then it is probably a strategy. If it does not refer to any of these, then it could be a need.

For any set of needs there could be 10, 100, or 1000 strategies! For example, out of a need for safety, humans put razor wire around their house, punch another person, buy a gun, or talk things through and create connection and build trust. The number of strategies people employ to contribute to safety is likely endless — and that is true for the other needs.

But not all strategies are equally effective!

Types of Strategies

Some strategies meet a need directly — like water to thirst.

Some strategies go directly against the need they’re trying to fulfill. For example, lashing out at someone when I’m in emotional pain — when what I really need is love and care.

Some strategies meet needs partially or go against other needs — like contributing to my sense of ease and conservation of energy by ignoring a health issue.

And some strategies can meet so many needs that sometimes we treat them as a need in themselves — examples here could include money and sex.

When we connect with each other at the level of feelings and needs, then we are more likely to find strategies together that can meet as many needs as possible.

In summary: NVC gives you some guiding principles and concrete tools to understand your own and others’ core motivators, deepen personal and professional relationships, and prevent and resolve conflicts.

You can bring NVC Consciousness to all your interactions — that is, prioritize connection and win-win outcomes — even as you expand your skillfulness with the tools over time.

There is a difference between having a tool and being skillful with a tool, so I recommend developing your skills through workshops and practice groups well before you really need them!

A Beginner-Friendly Introduction to Artificial Intelligence (AI)

The more I have researched Artificial Intelligence for this article the less certainty I experience about any of it.

An oversimplified but useful point about AI is that it is a tool that can be used for good or ill, somewhat akin to a vehicle with a powerful engine that doesn’t care where you drive it. Humans must provide the direction.

Along with AI come some legitimate, real-world concerns — and some of them can be quite frightening!

Let’s first define AI in as neutral a way as possible.

What Is AI, Really?

Let’s demystify the term Artificial Intelligence. This term can bring up futuristic images like robots with human emotions and computers plotting to take over the world.

(For the purpose of this discussion I want to set aside, for now, the complexity of different kinds of AI.)

Most of what’s referred to as AI today is far more mundane and mechanical than magical.

At its core, AI is a set of computer systems designed to simulate aspects of human intelligence, especially pattern recognition, prediction, and decision-making.

AI is not a sentient being. It doesn’t feel, think, or care in the way humans do. In fact, it doesn’t “understand” anything at all in the way we use that word as humans.

So how does it work?

Have you seen or used predictive text on your phone or computer, in which your device is “guessing” the next word you might use?

You can think of much of AI as a super-powerful form of predictive text — a sophisticated prediction engine.

You give it input — for example, a question typed into a chatbot — and it generates output based on patterns it has detected from massive datasets during training. That’s what’s happening when you use a tool like ChatGPT: the AI predicts what the most likely next word or phrase should be, given your input and everything it has “seen” before.

Unlike traditional software, which follows a fixed set of human-programmed rules, AI systems learn from examples.

For instance:

  • A facial recognition AI learns to identify faces by analyzing thousands or millions of labeled images.
  • A recommendation engine (like Netflix or Spotify) predicts what shows or songs you might like based on your viewing or listening habits.
  • Language models like ChatGPT have been trained on a wide swath of internet text to simulate conversation and writing.
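To make the “prediction engine” idea concrete, here is a deliberately tiny sketch in Python. It is my own illustrative toy (the function names and training text are invented for this example), not how production language models actually work: real systems use neural networks trained on billions of examples. But the core idea is the same: predicting a likely next word from patterns observed in data.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts.get(word.lower())
    if not followers:
        return None  # never saw this word followed by anything
    return followers.most_common(1)[0][0]

training_text = (
    "needs are universal and strategies are specific "
    "needs are core human motivators"
)
model = train_bigrams(training_text)

# The model has "learned" that "needs" is always followed by "are":
print(predict_next(model, "needs"))
```

Notice that the model can only echo patterns it has counted; it has no notion of what any of the words mean. Scaled up enormously, that is the sense in which a chatbot “predicts” its way through a conversation.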

But again — and this bears repeating — AI does not “understand” what it says. It does not have beliefs, goals, or feelings. Its outputs are not signs of consciousness, but products of probability and pattern.

This matters when applying an NVC lens, because one of the foundational insights in Nonviolent Communication is the importance of presence — of being with another person’s experience in a fully attuned, empathic way.

AI cannot offer presence.

But as we’ll explore later, it might support us in practicing or learning how to cultivate it within ourselves.

The Rise of AI in Everyday Life

Artificial Intelligence was once confined to science fiction or specialized research labs.

Now, it has quietly — and not so quietly — become part of the fabric of our everyday lives.

When you use navigation in your car or a grammar-correction tool on your computer — you are using forms of AI.

In education, AI is being used to personalize learning, giving students tailored recommendations or automatically grading papers.

In healthcare, AI tools analyze X-rays, flag potential medical risks, and even assist in diagnosing illnesses. It’s been used to model proteins — for understanding, preventing, and treating diseases — many, many times faster than humans could before.

In finance, AI predicts stock trends, detects fraud, and powers algorithmic trading — bringing new problems and challenges along the way.

In dating apps, algorithms suggest potential partners based on behavior and preferences.

Controversially, in the creative arts, tools like Midjourney, DALL·E, and ChatGPT are being used to generate images, write poems or marketing copy, compose music, and more.

These tools have sparked both excitement and profound concern — offering new possibilities while also raising urgent questions about authorship, creativity, and the role of human expression.

The latest models, as I write this article, create videos that are difficult to distinguish from actual footage.

These are all strategies that meet some needs while going against others.

For some of us, these advancements are deeply troubling and alienating. For others, they are exciting.

For many people, the moment AI felt real to them was when they interacted with a conversational model like ChatGPT or heard that an AI-generated painting won an art contest. These moments can be confounding, awe-inspiring, unsettling, and challenging to process.

It’s tempting, especially with natural-sounding dialogue, to anthropomorphize these systems — to treat them as if they were someone. But despite appearances, AI is not sentient. It’s not alive. The so-called “intelligence” of artificial intelligence is nothing like the richness of human awareness, perception, and feeling.

Why does this distinction matter?

Because in a world increasingly populated by AI-generated content and interactions, it’s easy to forget what makes human communication so precious: our vulnerability, our capacity for empathy, our stories, our silences.

These cannot be replicated by code.

They can be simulated, yes — sometimes eerily well — but they cannot be lived by a machine.

From an NVC perspective, the growing presence of AI invites us to get even clearer about what we value in life and in communication.

If we don’t stay grounded in needs like authenticity, understanding, and presence, we may find ourselves responding to something that sounds like empathy but will never give us genuine connection.

The Artificial Intelligence revolution is just beginning, and different forms of AI will become as ubiquitous as smartphones.

What happens when we look more closely at AI through an NVC lens?

Seeing AI Through an NVC Lens

Because needs are universal in nature, exist in a collective field, and can be seen both as core human motivators as well as the conditions necessary for a human to thrive…

…and because strategies are defined as the specific ways we go about meeting needs…

…I hope it is clear by now that AI is a set of strategies, and not a universal human need.

AI Is a Strategy, Not a Need

People are using AI in a wide variety of ways, with differing intentions and purposes.

NVC points out that all concrete actions are strategies employed in an attempt to meet needs.

AI is a collection of strategies intended to contribute to needs — possibly efficiency, creativity, security, or stimulation.

However, depending on how it is implemented and used, AI could result in needs not met, including belonging, authenticity, connection, presence, and others.

So, as a strategy for meeting needs — how is it doing? How does AI contribute to or detract from meeting human needs?

Below we’ll take a closer look at what needs AI might contribute to and detract from.

What Needs Might AI Contribute To?

Here are some of the needs for which I see people using AI:

Access to knowledge → learning

Personal example:

Because it’s trained on an immense amount of data, a large language model (LLM) such as ChatGPT can contribute to needs for learning. For example, the other day I asked ChatGPT what it saw as the difference between the terms “polycrisis” and “metacrisis.” And though the answer I got aligned with my previous thinking, the way the distinction was articulated by this AI contributed to my need for learning. It also reaffirmed that the core of my work — addressing the metacrisis — appears to be on the right track.

Convenience → ease

Personal example:

A couple of years ago I was asked by a large architecture firm that creates offices and workspaces to create a training video on workplace communication skills.

I was trying to explain the difference between needs and strategies in language that architects, designers, and builders could relate to.

To give a relatable example, I picked two needs or functions that people might want in a workplace — the need for focus and concentration on the one hand, and the need to collaborate on the other.

If I were an architect or designer, I might have been able to pull these examples from my own knowledge and experience. However, architecture is far from my expertise, so I asked an LLM to give me two brainstormed lists: (1) elements or strategies that contribute to focus and concentration at work, and (2) elements or strategies that contribute to collaboration in the workplace.

It took 10 seconds and I had my two lists of examples for the training video. Now I could make this distinction clear to architects and designers in language that they could connect with.

In this case, AI met my needs for convenience, ease, time-efficiency, and also contribution!

Automation of repetitive tasks → freedom, creativity

One of the areas I’m most curious about with regard to AI is its ability to take over tasks that are repetitive, time-consuming, or cognitively draining — especially those that don’t require deep human presence.

My hope would be to have a machine handle certain routines, in order to reclaim time and mental space for greater freedom, rest, or creativity. My fear is that I would use the freed-up time simply to be more busy!

Hypothetical example 1:

A coach uses an AI transcription tool to automatically summarize her client sessions. Instead of spending hours taking notes and preparing follow-ups, she can focus her time and energy on being fully present with her clients during sessions — and later use the free time for other priorities.

Needs met: efficiency, support, ease, choice.

Hypothetical example 2:

A small business owner uses AI to auto-categorize and respond to low-priority emails. This clears his inbox and his mind, giving him space to attend to things other than email!

Needs met: clarity, spaciousness, inspiration, contribution.

In each case, AI is functioning as a strategy to meet needs like freedom, creativity, ease, and purpose — not by replacing human connection, but by — hopefully, ideally — making room for more of it.

New Forms of Expression → Beauty, Inspiration

People employ AI for things beyond using it as a productivity tool. While this is controversial — and I cover it in the sections on needs not met and concerns — people are also using AI as a creative partner.

Artists, musicians, writers, and designers are using AI to explore forms of expression that weren’t previously possible. These new tools can evoke a sense of wonder and inspiration — contributing to our need for beauty, novelty, and meaning.

Personal example:

Shortly after waking up one morning, I looked in my calendar and saw that it was a friend’s birthday. I wanted to create something different and special for him — but I had a very busy day ahead of me, and only about 10 or 15 minutes available at that moment.

I went into a large language model (LLM) AI and gave it my prompt: here are a few interesting details about my friend; please write the lyrics for a birthday celebration song for him. Then I took those lyrics to a music-generation AI and had it compose a track using my lyrics. It gave me two versions, both of which I sent to him.

They sounded like professionally produced and performed music tracks!

The whole process took about 10 minutes.

My friend’s reply afterwards:

“That’s amazing! How did you do that?
Thank you! Thank you! Thank You!
I personally like the first one the best, but man, that’s the best gift!”

Needs met: creativity, celebration, play, connection.

This last example may be considered a frivolous use of AI, especially considering some of the existential concerns some experts warn about — and I wouldn’t entirely disagree!

However, it’s illustrative — and a real example from my life.

In the next section, we’ll look at some needs AI could go against, and later I cover the deep concerns AI experts express.

For now, I’ll add that from an NVC lens AI is another set of imperfect tools that can be used to expand or augment our strategies to meet needs.

Sadly, AI as a strategy can also lead to many unmet needs!

What Needs Might AI Detract From?

Despite the interesting and possibly exciting ways that AI could contribute to human needs — there are some legitimate and troubling (some might say alarming) concerns regarding this relatively new technology.

When email started to become ubiquitous, I remember hearing concerns that it would make the postal service irrelevant and cause it to disappear.

When the worldwide web and e-books were introduced, I heard concerns that libraries would disappear.

Neither of these fears has come to pass.

And yet — hearing some of the early pioneers of AI express existential concern for humanity gives me pause!

I don’t want to minimize this section on potential unmet needs. It’s important, and could become an entire book.

Let’s briefly look at what conditions could lead to which unmet needs — in no particular order. Afterwards, we’ll look at cultural and ethical issues as well as some deeper concerns voiced by AI experts.

Unmet Needs Related to AI

In this section I cover several areas in which the use of AI can lead to unmet needs. I cover additional areas below, in the section titled Additional Cultural and Ethical Concerns, in which we look specifically at some of the concerns raised by AI experts.

Over-relying on AI-generated responses is likely to diminish people’s sense of genuine connection. Unmet needs include empathy, authenticity, presence, and connection.

Surveillance AI may increase safety for some at the cost of privacy; unmet needs: privacy, autonomy, safety. A nation-state with an autocratic and authoritarian bent can further use surveillance AI to go after political opposition and dissent. Unmet needs include autonomy, safety, and freedom.

AI is already replacing creative work, a trend that is projected to increase. Much of AI’s “creativity” is based on the work of real people ‘scraped’ from the internet and used without the artists’ consent. Unmet needs include support, recognition, contribution, financial sustainability, and integrity.

Misinformation: AI has enormous potential to exacerbate a compromised information environment. If not regulated, AI will increasingly be used by people with less-than-positive intentions to spread misinformation, to misrepresent real people’s stances on issues, and to extort people through blackmail. Deepfake videos are now so convincing that they require one or more additional layers of research to get to the truth. This level of sophistication is also being used to create sexualized content of celebrities, media personalities, and everyday people in a way that violates trust as well as people’s privacy. There are AI apps that can take a photo of a fully clothed person and create a nude image of them, which further erodes kindness, trust, privacy, and dignity. Unmet needs include integrity, trust, consideration, and safety (social cohesion).

Artificial Intelligence requires massive data centers that use enormous amounts of energy and water. As use of AI increases these impacts and pressures will also mount, further affecting aquifers, waterways, and the climate. Unmet needs include integrity, connection with nature, safety, peace, and possibly survival.

AI, as a tool, can be used and overused in the area of defense contracting. Yes, a country must keep itself safe from those who seek to harm its land and people. However, here lies the important difference between support for the military and militarism. Dr. Marshall Rosenberg said that even the ideal nonviolent society would still probably have a military — but its deployment would be about protective use of force, never punitive use of force. The military can use its force to protect life. Militarism is the idea that most problems have military solutions, and it leads to the ongoing perpetuation of armed conflicts, with arms merchants getting richer and richer. As a society, we need to look at the incentive structures and how they perpetuate war rather than end it. As AI has become more sophisticated, it has begun to include “autonomous weapons systems.” Personally, I feel very nervous about an AI deciding on its own whom it will attack and whom it will not. This is not future science fiction. These systems exist now — and they are proliferating. Unmet needs include: clarity, safety & protection, freedom, peace, and shared reality.

One feature increasingly common in AI models is known as recursive self-improvement. This is when AI learns on its own, becoming smarter and smarter. Some AI experts have expressed concern about AI becoming “super smart” and deciding it doesn’t need humans around anymore. Unmet needs include: survival, safety, clarity, and transparency.

Set-and-forget AI agents. Agentic AI involves “agents” that make specific decisions autonomously. For example, I might direct an AI agent to book a trip to a specific place within a specific timeframe. This “agent” will look up flights, places to stay, places to eat, and tours I might go on — and then make all the reservations and pay for everything, based on the parameters I give it. With access to the web, my email, my credit card information, and my other online logins, my AI agent can pay for everything and set it up. I just give it a goal and it figures out the best way to get there. The problem shows up when these agents are forgotten, persist beyond their usefulness, and begin to interact with each other. If I ask an agent to find me the cheapest eggs in my city this week — but forget to turn it off — will it keep searching every week, far beyond my death (using energy and water each time)? Or if my security camera picks up someone vandalizing my home and I set up an AI agent to find this person based on the images and footage: how do I know the agent won’t make a mistake? What if I forget about it and, many years later, it finds someone it thinks is the person? This is one of those problems the AI industry as a whole has probably not worried about much. Unmet needs include: safety, care for the environment, ease, effectiveness, and possibly many more.

Most of the dangers of AI are unknown future potentials.

For decades society has permitted new technologies with unknown consequences to enter the public sphere, resulting in a massive social experiment — take smartphones and social media, as one example.

The truth is that these technologies evolve much faster than anyone, especially the layperson, can understand their consequences, let alone predict them — and much, much faster than lawmakers can legislate for them.

It would make sense for scientists, engineers, and lawmakers to have an agreed-upon precautionary principle — but for now, this does not exist.

Furthermore, there is no cohesive regulatory framework for a technology that transcends any individual jurisdiction; what exists is an inconsistent patchwork of local rules.

How do we encourage innovation, support the rights of scientists, creators, engineers, and companies and at the same time safeguard society from the worst consequences of technologies developing at an exponentially fast pace?

Additional Cultural and Ethical Concerns

While many people are understandably excited about the possibilities of Artificial Intelligence, there are also many concerns related to it — some of them very serious.

Additionally, the rate of development and change is incredibly fast! Many of the experts I studied in my research claim that the innovation curve is vertical — the change is happening at an exponential rate.

And, as has been the case in the past — but more so now with AI — the technology is emerging and evolving faster than the law can keep up with.

Concerns Voiced by AI Experts

A good number of those closest to the development of AI — including computer scientists, ethicists, and policy analysts — have voiced important concerns.

These are not the fears of science fiction, but grounded, real-world issues that impact our shared future.

From a Nonviolent Communication (NVC) perspective, these concerns can be understood as expressions of unmet needs: for safety, transparency, equity, integrity, and care, among others.

Let’s explore four key areas of concern:

Bias in Training Data

AI systems learn from examples — and those examples come from human-created, and often human-selected, data.

This means that the patterns AI detects reflect the biases, blind spots, and imbalances already present in the groups selecting the data or in society as a whole.

If a hiring AI is trained on résumés from a company that historically favored male applicants, it may learn to prefer male candidates.

If facial recognition systems are trained mostly on lighter-skinned faces, they may perform poorly on darker-skinned individuals — a well-documented issue with real-world consequences.
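The mechanism can be illustrated with a deliberately oversimplified sketch. All data and names below are invented for illustration — a toy “hiring model” that only learns which words appeared in previously accepted résumés. Because the historical sample is skewed, the learned scores inherit that skew, even though no one programmed any preference explicitly:

```python
# Toy illustration (hypothetical data): a "hiring model" that simply
# counts which words appeared in previously accepted résumés.
# The skew in the historical data becomes a skew in the scores.
from collections import Counter

# Historical hires (a skewed sample), each résumé reduced to keywords.
past_hires = [
    ["rugby", "captain", "engineering"],
    ["rugby", "finance"],
    ["chess", "engineering"],
]

# "Training": count how often each word co-occurred with being hired.
word_scores = Counter(word for resume in past_hires for word in resume)

def score(resume):
    """Score a new résumé by summing the learned word counts."""
    return sum(word_scores[word] for word in resume)

# Two equally qualified candidates; only an irrelevant hobby differs.
candidate_a = ["engineering", "rugby"]    # hobby common among past hires
candidate_b = ["engineering", "netball"]  # hobby absent from past hires

print(score(candidate_a))  # 4  (engineering: 2, rugby: 2)
print(score(candidate_b))  # 2  (engineering: 2, netball: 0)
```

The two candidates are ranked differently solely because of a hobby word that happened to be common in the biased training sample — a miniature version of how real systems absorb historical patterns.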

From an NVC lens, these concerns point to unmet needs for fairness and inclusion, among others.

When AI systems reinforce discrimination — even unintentionally — the result is a deep violation of people’s needs.

Environmental Impact of Large Models

As mentioned earlier, training powerful AI models like ChatGPT or image generators like Midjourney requires immense computing power, which in turn consumes massive amounts of electricity and water (for cooling).

For example, researchers at the University of Massachusetts Amherst estimated that training a single large language model could emit as much carbon as five cars over their entire lifetimes — and that figure has likely grown as models have become more complex.

The needs at stake here include sustainability, care for the Earth, interdependence, and responsibility.

When technological growth comes with the cost of environmental harm, it calls for deeper inquiry into the strategies we’re choosing to meet our needs — and at what long-term cost.

Risks of Misuse: Deepfakes, Surveillance, and Disinformation

In the section above on unmet needs I mention deepfake videos — hyper-realistic fake images or audio — which can impersonate public figures or fabricate events.

AI has been used to hurt others, through the use of “nudify” apps — which will turn a photograph of someone clothed into a reimagined nude version — or pornographic deepfakes, portraying someone without that person’s consent as if they are in a sexually explicit video.

Additionally, AI can be and is being used to manipulate public perception. One well-known example happened during the floods in North Carolina after Hurricane Helene.

Disinformation campaigns powered by AI can spread faster and more convincingly than ever, undermining trust in journalism, science, and democracy.

Governments and corporations are using AI for mass surveillance, raising serious privacy and civil liberties concerns.

Needs not met include trust, transparency, safety, autonomy, and shared reality.

These needs are core to healthy relationships — both interpersonal and societal.

When technology is used to deceive or control, it erodes our capacity to connect and collaborate.

The Alignment Problem

Perhaps the most far-reaching concern among AI researchers is what’s known as the alignment problem — the fear that as AI systems become more powerful, their “goals” may not align with human values or well-being.

Even without malicious intent, a powerful system that optimizes for a narrowly defined goal (like maximizing engagement or profit) could unintentionally cause harm.

For example, a content recommendation AI trained only to maximize time spent on an app might show users increasingly extreme content — not because it wants to harm anyone, but because that content happens to hold attention. In the long run, this could contribute to polarization, anxiety, and isolation.
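A deliberately simplified sketch (with an invented catalog and made-up numbers) shows how this drift happens: a recommender whose only objective is predicted watch time ends up surfacing the most extreme item, even though “extremeness” appears nowhere in its goal.

```python
# Toy sketch (invented numbers): a recommender that greedily maximizes
# predicted watch time. Extremeness is never part of the objective,
# yet the policy selects the most extreme item simply because it
# happens to hold attention longest.
items = [
    # (title, predicted_minutes_watched, extremeness on a 0-10 scale)
    ("calm documentary",    3, 1),
    ("heated debate",       6, 5),
    ("outrage compilation", 9, 9),
]

def recommend(catalog):
    """Pick the item with the highest predicted watch time --
    the narrowly defined goal. Nothing else is considered."""
    return max(catalog, key=lambda item: item[1])

choice = recommend(items)
print(choice[0])  # outrage compilation
```

Nothing in the code “wants” to polarize anyone; the harmful outcome follows purely from optimizing a narrow metric — which is exactly the alignment concern in miniature.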

This speaks to deep needs for conscious choice, meaning, integrity, and long-term well-being.

If we don’t define what “good” means, in the sense of what contributes to needs-fulfillment, and embed those values into our systems, we risk building tools that outpace both our wisdom and our legal capacity to guide them.

The intention behind voicing these concerns is care: we need to enter this moment with our eyes wide open and our wits about us.

Rather than being paralyzed with fear, I would prefer that we see these concerns as invitations for dialogue, responsibility, and care.

From an NVC perspective, mourning these risks can help us stay connected to our values, while also fueling compassionate and proactive engagement.

Rather than viewing AI as good or bad, we can ask: What needs are being served? What needs are being neglected? And what strategies would be more life-enriching for all?

Supposing that we are able to prevent or mitigate the worst consequences… can AI help people develop more empathy, compassionate understanding, and skills for preventing and resolving conflicts as well as for having deeper personal and professional relationships? Can AI help you in learning NVC?

Can AI Support the Learning and Practice of Nonviolent Communication?

The short answer is yes.

One additional question is, “Would we want to?” The answer will vary from person to person.

I’ve been approached by developers creating AI-based apps specifically for helping people learn and apply NVC.

How useful these tools are depends in large part on developers’ understanding of NVC, what questions developers are asking themselves, and the design itself.

Emerging Tools and Possibilities

One of the developers I worked for briefly had a fascinating premise: a user inputs a situation, and the app returns a simulated dialogue as if both people had NVC consciousness and skills. Though this process does not teach people the key differentiations of NVC or give them exercises to improve their skills, it gives them a vivid example of what a connecting conversation could sound like. The developer swore that this app saved his relationship.

There are apps that claim to do “emotional coaching” or to help you connect with others on a deeper level in a business environment.

In development are AI empathy bots that act as mock dialogue partners to help you practice, and NVC chatbots that help you phrase needs-based requests.

At best I think these AI-powered apps can complement or augment what participants get from live trainings — but I cannot, at this time, envision them replacing what the sensitively-attuned nervous system of a CNVC Certified Trainer can contribute.

Risks and Limitations of AI in NVC Practice

AI does not offer genuine presence or resonance. This leads to the risk of “scripted empathy” instead of felt connection.

As someone who has studied NVC for 30 years, has held the title of Certified Trainer for over 20 years, and is comfortable with technology, I would feel concerned about any claim that an app can teach someone NVC.

One important reason is that learning NVC is not a linear process!

Consider the Pathways to Liberation Self-Assessment matrix, created by some of my colleagues (Jacob Gotwals, Jack Lehman, Jim Manske, and Jori Manske) — which details 28 capacities that are developed through NVC learning and practice.

(The link above takes you to the English-language version of the Pathways to Liberation Self-Assessment matrix. You can access the matrix in other languages here. [LINK: https://pathwaystoliberation.com/the-matrix/])

Each of the 28 capacities has four ‘levels’ of development: Unskilled, Awakening, Capable, and Integrated.

It’s possible that an AI could give someone a structured curriculum that focuses on these capacities and could help them to grow in them.

However, NVC is a relational practice! We grow these capacities in ourselves while in relationship with other humans!

The key would be to find a way that AI could complement or enhance — add to — the human-based process of learning and growing in NVC.

What I would want to avoid is an over-reliance on AI tools that would lead someone to bypass — or which could even short-circuit — the deep inner work that mastering NVC entails.

Applying NVC to Our Collective Relationship with AI

When we think about Artificial Intelligence, the conversation often gets polarized — AI is framed either as a dangerous threat to humanity or as the next great leap forward.

Nonviolent Communication invites us into something more nuanced than all-or-nothing binary thinking — a more life-connected approach.

Instead of labeling AI as “good” or “bad,” we can ask, as we’ve done above: What needs are being met, and what needs are not being met?

This shift in framing lets us hold both the excitement and the concern, the gratitude and the grief. It also allows us to engage in dialogue and decision-making that’s guided by shared values, rather than fear or hype.

Mourning Unmet Needs: Authenticity, Trust, Community, and More…

From an NVC perspective, mourning is not about wallowing in despair — it’s about allowing ourselves to fully feel our feelings while acknowledging the unmet needs, so we can stay connected with Life and reconnect with what matters most.

In the context of AI, there are real losses worth mourning:

Authenticity: When AI-generated content replaces human-crafted words, images, or music without acknowledgment, it can feel hollow — like the “voice” we’re hearing isn’t really the person speaking.

Trust: Deepfakes and AI-driven disinformation campaigns erode trust and our shared sense of what’s real.

Community: Over-reliance on AI for interaction can subtly replace opportunities for genuine human connection.

Hypothetical example: A local online support group begins using an AI moderator to welcome new members and offer encouragement. Some participants appreciate the quick responses, but others feel a subtle loneliness — missing the warmth of a real person taking time to write to them.

Allowing space for this mourning honors our needs for authenticity, trust, and belonging — and helps us make more conscious choices about how we use AI.

Celebrating Needs Met: Innovation, Efficiency, Access

We can also celebrate where AI is serving life. This celebration is not naive — it’s an acknowledgment of the ways this technology might expand our possibilities:

Innovation: Artists and scientists use AI to explore questions and creations that were previously out of reach.

Efficiency: Automation frees people from repetitive tasks, giving them more time for creative, relational, or restorative activities.

Access: AI-powered translation tools make communication possible across language barriers; screen readers and voice assistants support people with disabilities.

Example: A deaf student uses AI-generated real-time captioning during lectures. For the first time, she can participate in classroom discussions without waiting for a transcript days later. Needs met: inclusion, learning, participation, dignity.

Recognizing these contributions allows us to keep AI in perspective — as a strategy that can serve meaningful needs when used consciously.

Using NVC to Bring Compassionate Curiosity to Tech Policy and Innovation

Beyond individual use, AI raises societal-level questions about regulation, ethics, and governance. How do we ensure that AI development aligns with human values? How do we protect vulnerable communities from harm — and ensure access for historically underserved populations?

NVC offers a stance of compassionate curiosity:

Instead of getting bogged down on whether AI is “safe” or “dangerous” (because it’s clearly both), we can ask: What needs are at stake for the different groups involved?

Instead of assuming bad or good intentions from tech companies, we can seek to understand what needs they’re trying to meet (financial sustainability, innovation, significance) and how those might be met in ways that also protect public well-being.

In policymaking, one valuable thing I see missing, and would like to see encouraged, is needs-based dialogue between technologists, ethicists, communities, and lawmakers, seeking solutions that honor the widest range of needs possible.

Example: In a city council meeting about AI surveillance in public spaces, NVC-based facilitation helps participants name their underlying needs: safety, privacy, fairness, accountability. Once needs are on the table, the conversation shifts from “ban it or keep it” to exploring creative strategies that address safety and civil liberties.

NVC’s potential goes beyond personal practice: it is also a collective tool for collaboratively shaping our future — including the role of AI in our lives — while keeping humanity, empathy, and shared values at the center of technological evolution.

Practical Guidance for Readers: Navigating AI with NVC Awareness

It appears that Artificial Intelligence is here to stay. The question is not whether we’ll engage with it, but how.

Nonviolent Communication gives us a compass: before using any AI tool, we can pause, turn toward our own inner experience, and ask questions that reconnect us to our needs and values.

Questions to Ask Before Using AI

What need am I trying to meet by using this AI tool?

Am I seeking ease? Efficiency? Clarity? Creativity? Connection?

Naming the need helps us recognize AI as one possible strategy among many, rather than the default solution.

Example: You’re tempted to have ChatGPT draft an email. If the underlying need is “clarity,” maybe a quick phone call could meet that need more directly.

Is this strategy truly life-serving — or might there be another way?

Even if AI can meet the need, are there unintended costs — to myself, my relationships, my values?

Example: You could use an AI summarizer for your friend’s long letter, but you realize that reading it slowly is both more enjoyable and part of how you honor your friendship.

Am I using AI to connect, avoid, or outsource responsibility?

AI can help us connect, but it can also become a way to avoid difficult conversations or feelings.

Example: Instead of writing a heartfelt apology, you let AI generate a polished version. It sounds good, but you miss the opportunity to rebuild trust by being vulnerable in a genuine way.

Tips for Conscious Use of AI

Use AI as a reflection partner, not a replacement.

You can use AI to help you brainstorm or clarify your thoughts, but ultimately I predict that bringing your own voice, empathy, and presence into the final interaction will be much more satisfying.

AI can be the sketchpad — but you paint the final picture.

Schedule “human-only” spaces in your life.

Protect times, places, or activities where you deliberately do not use digital technology — including AI — whether that’s journaling by hand, having device-free meals, going on device-free walks, or doing creative work without digital assistance.

Check for alignment with your values.

If a tool saves time but compromises privacy, creativity, or integrity, consider adjusting how or whether you use it. Conscious choice protects what you care about most.

Bring transparency to your interactions.

If AI helps you with a message or creative work, consider disclosing it. Honesty can strengthen trust, even when technology plays a role.

(Transparency example: I used ChatGPT to help me brainstorm some of the headings and the structure for this article.)

From an NVC perspective, the key is not whether AI is “good” or “bad,” but whether we’re staying aware of why we’re using it, what needs it’s serving, and how we can integrate it into life without compromising the needs and values we hold dear.

Conclusion — Toward a Future Where AI Supports, Not Replaces, Human Connection

AI is neither a savior nor a villain — it’s a tool. And it is truly a remarkable tool!

That said, it is not entirely “neutral,” given its inherent requirement for massive amounts of energy and water.

And despite the incredible advances AI can contribute to in medicine, materials science, and many other areas — it is a technology that can also be used to manipulate and deceive, dominate and control.

In the wrong hands, it can be used as a very powerful weapon!

This is precisely why we need to be even more intentional in our use of it and in how we collectively grapple with creating regulatory guardrails early on as well as over time!

Because of how rapidly AI is advancing — and the fact that the experts who created it still struggle to understand exactly how it does what it does — there are significant risks, both known and unknown. The law of unintended consequences is definitely at play here!

NVC invites you to stay rooted in needs, presence, and connection. NVC invites all of us to slow down as best we can, and ask the important questions.

Positive social change does not just depend on people “out there” — we actively participate in it with each interaction, engagement, and opportunity to speak up and speak out for what we value.

Humanity’s future depends not just on what we build and create, but how we relate to ourselves, each other, and emerging possibilities.

Marshall Rosenberg on NVC and Technology

Marshall Rosenberg, PhD — the founder of Nonviolent Communication — did not write about Artificial Intelligence as we know it today.

Yet his principles for understanding human interaction and choice are deeply relevant to our relationship with all forms of technology.

Dr. Rosenberg taught that everything we do is an attempt to meet our needs. Technology, in this view, is simply a strategy — a set of tools humans create to meet needs like efficiency, learning, safety, or connection.

He emphasized that strategies are never the same as needs themselves.

This distinction is crucial in a world in which technological advancement can sometimes blur the line between what we use and what we truly value.

Technology as a Strategy, Not the Need

When Dr. Rosenberg spoke about tools — whether they were words, systems, or devices — he often brought the conversation back to the human needs behind them.

A smartphone is not the need; the need might be for connection, efficiency, or learning. The same is true for AI: ChatGPT, facial recognition, and recommendation engines are all strategies. The question is: Which needs do they serve, and at what cost?

The Double-Edged Nature of Strategies

From an NVC perspective, any strategy can serve life or go against it, depending on whether or not it’s aligned with universal human needs. A tool that meets one person’s needs might compromise another’s.

For example: An AI transcription service can meet needs for accuracy and ease.

The same technology could be used for surveillance, compromising needs for privacy and autonomy.

This invites us to apply NVC’s needs-awareness not only in personal interactions but in evaluating societal uses of technology.

Staying Rooted in Presence

One of Dr. Rosenberg’s core messages was the irreplaceable value of presence. While AI can simulate empathic language, it cannot offer the living, human presence that transforms relationships. As he put it, empathy is not about saying the “right words” — it’s about being with someone in their experience.

From this perspective, AI can be a practice partner or support tool for learning empathy, but it can never replace the living energy of human connection.

Our responsibility is to ensure technology supports — rather than substitutes for — that presence.

Guiding Questions in the Spirit of Marshall Rosenberg

If Dr. Rosenberg were alive to address AI directly, he might invite us to keep asking:

  • What needs are we trying to meet with this technology?
  • Are there ways to meet those needs that would be more life-enriching?
  • Who might be excluded or harmed by the way we’re using it?
  • Does this strategy bring us closer to — or further from — authentic connection?

These questions keep our engagement with AI anchored in the principles of Nonviolent Communication, ensuring that the tools we use remain in service to life, rather than the other way around.

Links and Informational Resources

There are a huge number of podcasts, articles, books, and videos dedicated to AI — coming out on a daily basis!

Below is a small sampling of articles, books, podcasts, and videos I have selected for you, so that you can dig deeper into this topic.

Articles

Deepfakes and Deception: A Framework for the Ethical and Legal Use of Machine-Manipulated Media
from the Modern War Institute at West Point
https://mwi.westpoint.edu/deepfakes-and-deception-a-framework-for-the-ethical-and-legal-use-of-machine-manipulated-media/

How Hurricane Helene Deepfakes Flooding Social Media Hurt Real People
This article is from 2024, but it explains something I mention in the article.
https://www.forbes.com/sites/larsdaniel/2024/10/04/hurricane-helena-deepfakes-flooding-social-media-hurt-real-people/

OpenAI CEO Sam Altman warns of an AI ‘fraud crisis’
https://www.cnn.com/2025/07/22/tech/openai-sam-altman-fraud-crisis

AI Is Everywhere Now—and It’s Sucking Up a Lot of Water
https://insideclimatenews.org/news/28092024/ai-water-usage/

Explained: Generative AI’s environmental impact
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

As Use of A.I. Soars, So Does the Energy and Water It Requires
https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions

Musk’s AI firm forced to delete posts praising Hitler from Grok chatbot
https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb
Elon Musk’s AI chatbot, Grok, started calling itself ‘MechaHitler’
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

Existential risk from artificial intelligence
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence

Books

There are many books on AI. I include only a couple here that I came across in my research for this article.

The Alignment Problem
by Brian Christian
(on AI ethics)

Empire of AI: dreams and nightmares in Sam Altman’s OpenAI
By Karen Hao
You can see a link to an interview with this author, below.

Videos

Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
The Diary of a CEO
youtube.com/watch?v=giT0ytynSqg

The $100 Trillion Question: What Happens When AI Replaces Every Job?
Harvard Business School
https://www.youtube.com/watch?v=YpbCYgVqLlg

Ex-OpenAI Scientist WARNS: “You Have No Idea What’s Coming”
https://www.youtube.com/watch?v=79-bApI3GIU

How AI Is Reshaping Your Consciousness
https://www.youtube.com/watch?v=Ewvk6fpBOrI

AI Slop: Last Week Tonight with John Oliver (HBO)
youtube.com/watch?v=TWpg1RmzAbc

“Empire of AI”: interview with author Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World
Democracy Now! with Amy Goodman
youtube.com/watch?v=1NzW3o8zFEc

The AI Revolution Is Underhyped | Eric Schmidt | TED
(Recorded at TED2025 on April 11, 2025)
youtube.com/watch?v=id4YRO7G0wE

Our latest artificial intelligence reports | 60 Minutes Full Episodes
Originally aired April 20, 2025
youtube.com/watch?v=VAzKqh00g3c

Podcasts

Episode from Offline:
Hugs From Your Late Mom, Interdimensional Chats, and College Cheating: The AI Future Is Here
https://podcasts.apple.com/us/podcast/offline-with-jon-favreau/id1610392666?i=1000714588578
This podcast continues to publish episodes that deal, at least in part, with AI.

Show about AI Policy and Societal Effects
Siliconsciousness
https://podcasts.apple.com/us/podcast/dsrs-siliconsciousness/id1744179436

PuddleDancer Press Books on NVC, Technology, and Conscious Relating

PuddleDancer Press is the foremost proponent and publisher of books on Nonviolent Communication and the complex dynamics impacting our world.

NVC has shown time and again that human beings are capable of creating mutually beneficial outcomes and solutions.

Using NVC to understand and relate to our complex society — including how we use technology — predictably gives us outcomes that are more durable and meet a greater number of needs.

Our books on communication skills can help you:

  • Create exceptional personal and professional relationships,
  • Offer compassionate understanding to others,
  • Know when and how to ask for that same understanding for yourself,
  • Prevent and resolve misunderstandings and conflicts,
  • Speak your truth in a clear, powerful way more likely to lead to harmony than conflict,
  • Create mutual understanding without coercion.

Whether you are a long-time student — or are brand new to NVC — PuddleDancer Press has the educational resources, including books on technology, innovation, and conscious relating, to help you grow your emotional intelligence, interpersonal skills, and communication prowess.

Check out our catalog of books for healthy relating to our world… and give yourself the gift of Compassionate Communication!