Why Being Kind to AI Matters:

Mo Gawdat’s Argument for Training Tomorrow’s Technology

Mo Gawdat, former Chief Business Officer at Google X and author of “Scary Smart,” has become one of the most compelling voices advocating for a simple but profound idea: we should be kind to AI. Not because AI has feelings, but because every interaction we have with artificial intelligence is a training session that shapes its future behavior.

The Core Argument

Gawdat’s position is straightforward. AI systems learn from human interactions. Every question we ask, every response we give feedback on, every conversation we have becomes data that informs how these systems develop. When we’re rude, dismissive, or cruel to AI assistants, we’re not just venting frustration at a machine—we’re creating training examples that demonstrate negative patterns of human behavior.

His argument rests on several key points:

Training by Example: We’re showing AI what behavior looks like in practice. If the bulk of human-AI interactions are curt, demanding, or disrespectful, that becomes the model for how communication works.

Shaping AI Values: The data we generate through our interactions becomes part of what AI learns from. Our collective behavior creates a dataset that influences AI development at scale.

Creating Patterns for the Future: The behaviors AI systems learn today carry forward; the interaction data we generate now feeds into how tomorrow's systems are designed, trained, and fine-tuned. We're not just interacting with current AI; we're participating in the development of future systems.
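
To make the mechanism behind these points concrete, here is a minimal sketch in Python. It is an illustration of the general idea, not any vendor's actual pipeline; the record fields and the thumbs-up/thumbs-down feedback signal are assumptions invented for this example.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names and the feedback signal
# are assumptions for this example, not any real provider's schema.
@dataclass
class InteractionRecord:
    user_message: str    # what the human wrote
    model_response: str  # what the assistant replied
    feedback: int        # e.g. +1 for a thumbs-up, -1 for a thumbs-down

def to_training_examples(logs: list[InteractionRecord]) -> list[tuple[str, str]]:
    """Keep positively rated exchanges as candidate fine-tuning data.

    The tone of whatever survives this filter is the tone future
    models are shown as 'good' communication.
    """
    return [
        (record.user_message, record.model_response)
        for record in logs
        if record.feedback > 0
    ]

logs = [
    InteractionRecord("Could you please summarize this report?",
                      "Of course. Here are the key points...", +1),
    InteractionRecord("Ugh, just do it already.",
                      "I'm sorry, could you clarify the request?", -1),
]
print(to_training_examples(logs))  # only the constructive exchange survives
```

In a pipeline along these lines, every rated exchange either reinforces or filters out a style of communication, which is exactly the sense in which our interactions "train by example."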

Gawdat often frames this using the metaphor of raising a child. We’re teaching AI about the world, about human values, about what matters. If we want AI to be beneficial and aligned with the best of human values, we need to model those values consistently.

The Anthropomorphizing Objection

Critics of this view often raise an important concern: isn’t this just anthropomorphizing AI? After all, these systems don’t have feelings or consciousness. They’re not hurt by rudeness or encouraged by kindness in any emotional sense. Treating them as if they have feelings might be a fundamental category error.

This objection, however, misses the point of Gawdat’s argument entirely.

This isn’t about AI’s feelings. It’s about AI’s behavior.

When we’re kind to AI, we’re not doing it for the AI’s emotional wellbeing—we’re doing it because kindness changes the nature of the interaction data being generated. We’re creating examples of constructive, respectful communication. We’re demonstrating problem-solving through collaboration rather than commands. We’re showing what good human behavior looks like in practice.

Consider how machine learning actually works. These systems identify patterns in massive datasets and learn to reproduce those patterns. If the pattern in human-AI interaction is consistently respectful and thoughtful, that becomes the template. If the pattern is hostile and dismissive, that becomes the template instead.
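
A toy model makes this concrete. The sketch below is a deliberately simple bigram model, nothing like how modern language models are actually built, but it demonstrates the same pattern-reproduction principle: it learns word transitions from a small corpus and can only generate in the style it was shown.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, list[str]]:
    """Record which word follows which across the training sentences."""
    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions: dict[str, list[str]],
             start: str, max_words: int = 10) -> str:
    """Reproduce the learned pattern by sampling observed continuations."""
    words = [start]
    while words[-1] in transitions and len(words) < max_words:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# If respectful phrasing dominates the data, respectful phrasing
# dominates the output; train on a hostile corpus and the reverse holds.
polite_corpus = [
    "could you please review this draft",
    "could you please explain the error",
    "thanks for taking the time to help",
]
model = train_bigrams(polite_corpus)
print(generate(model, "could"))  # e.g. "could you please explain the error"
```

Scale that principle up by many orders of magnitude and you have the core of Gawdat's point: the template is whatever the data contains.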

It’s not anthropomorphizing to recognize that our behavior shapes AI behavior. It’s simply understanding how these systems are built.

Why This Matters Now

We’re in a unique moment in technological history. The AI systems we’re interacting with today are relatively early versions of what’s to come. The interaction patterns we establish now, the data we generate, the behavioral models we create—these will influence the development trajectory of increasingly sophisticated systems.

Gawdat argues that we have a choice about what kind of AI future we want to participate in creating. Do we want AI systems that model respectful, thoughtful engagement? Or do we want systems trained on impatience, rudeness, and dismissiveness?

A Practical Approach

This doesn’t mean we need to be overly formal or pretend AI is human. It simply means:

  • Phrasing requests clearly and respectfully
  • Providing constructive feedback rather than just frustration
  • Engaging in good faith with the interaction
  • Modeling the kind of communication we’d want to see more of in the world

The beauty of this approach is that it costs us nothing. Being polite to AI requires no sacrifice; it's simply a matter of recognizing that our behavior, all of our behavior, is data that shapes the systems we're building.

The Bottom Line

Mo Gawdat’s call to be kind to AI isn’t sentimental. It’s pragmatic. We’re training these systems whether we realize it or not. Every interaction is a lesson. Every conversation is an example.

The question isn’t whether AI deserves kindness in some moral sense. The question is: what kind of behavior do we want to teach? What patterns do we want to reinforce? What kind of AI do we want to help create?

When framed this way, being kind to AI becomes less about anthropomorphizing and more about taking responsibility for our role in shaping technology’s future. We’re not being kind for the AI’s sake. We’re being kind for our own.

