Wired University with Yasuaki Kakehi, Dominique Chen (2025-06-25)

Rina Chen’s living notebook on digital craft and design.


Why this event mattered

This event offered not just information, but a vocabulary and sensibility for thinking-with-tech, not just about tech.

The speakers explored how our lived experience can guide the design of tools and systems that remain humane through balanced design.

It was also a rare and special occasion, reserved for students and young people, who brought sincere, profound, and at times unsettling questions that sparked truly inspiring conversations.

The meetup revealed that even in an age shaped by technology, fearless exploration of our relationships with others remains essential.

Speakers

Yasuaki Kakehi

  • Professor of social experimentation, engineering, and design

  • Key figure behind YCAM Divisual Plays, which designed augmented experiences for dancers and audiences

  • Currently leads the Material Experience Design Lab at the University of Tokyo

  • Explores material intelligence, wet intelligence, and bio-intelligence

  • Leads industry–academia initiatives to promote transboundary discussion

Dominique Chen

  • Professor of human science research, engineering, and design

  • Creator of Nukabot, which translates natural intelligence (the fermentation process) into natural language

  • Creator of TypeTrace, which records the process of typing, information often regarded as redundant in modern contexts

  • Advocate for digital well-being (more in this article)

Extended insights

Artificial intelligence vs. Natural intelligence

In the labs run by Kakehi and Chen, students arrive with a wide range of research themes. Some are interested in achieving AGI or in creating embodied AI; others are drawn more to biology and natural intelligence, using AI technology only when necessary.

Not having a unified direction is a good thing; what seems important is having people with different interests physically in the same space. Haphazard connections promote curiosity and critical thinking.

Human quality to live with AI

Humans’ innate flexibility is a double-edged sword (諸刃の剣).

In today’s addiction economy, our behaviors are increasingly shaped by systems designed to capture and exploit our attention. As highly adaptable beings, we are plastic enough to take traits from AI agents. But in doing so, we risk dulling our own sensitivity and losing a sense of agency. That’s not a future we should aspire to.

At the same time, humans already possess the ability to coexist with systems that defy our understanding—like the self-sustaining forces of climate change or the opaque logic of AI black boxes. We’ve learned to live alongside such autonomous processes, even when we cannot fully comprehend them.

In this context, meta-cognition—the ability to reflect on our own thoughts and actions—becomes essential. While AI can accelerate us toward outcomes, it’s equally important to remain aware of the process. Treating sequences of results not just as end points, but as parts of a broader pattern of meaning-making, is what allows us to engage with AI thoughtfully rather than passively.

How to achieve a balance

Then how should we live with AI? How should we design a context with AI?

The key lies in balance—or as Yasuaki Kakehi repeatedly emphasized, in anbai (塩梅), a Japanese term meaning the right mix, harmony, or proportion.


Importance of taking action (and sweating)

In practical tasks, AI agents are already embedded in workflows—there’s a sense that everything—hardware and software—can now be made with AI.

But if we rely too heavily on it, we risk surrendering our need to be “ourselves.” In the long term, while we may avoid failure, we also risk losing the personal experience of meaningful achievement.

Drawing on his research into artistic creativity, Kakehi stressed the value of deviation and displacement as paths to novel creation. AI can provide great answers, but they’re not always interesting. The “interesting” emerges when humans intentionally shift and misalign their perspectives. That requires effort, bodily engagement, and facing AI not passively but through action—through sweat.

As an example, auto-ethnography was presented as a form of self-driven research rooted in daily life. Unlike the traditional observation of external others, this method involves documenting and analyzing the unknown aspects of oneself. With AI, it is now easier to surface unnoticed patterns in our own diaries, behaviors, and data. Because the data is self-produced, effortful, and deeply personal, it opens up new relational and analytical possibilities with AI. The method becomes a loop of noticing: the human reflects, the AI reframes, the human re-questions.
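The loop of noticing can be sketched in a few lines of code. This is a minimal illustration, not anything presented at the event: `reframe` is a hypothetical stand-in for any AI step that surfaces a pattern in self-produced diary data, here reduced to a naive word-frequency count.

```python
# A minimal sketch of the auto-ethnography "loop of noticing".
# All names here are hypothetical; `reframe` stands in for an AI
# summarizer that surfaces a pattern in one's own diary data.

def reframe(entries):
    """Naive "AI" step: find the most frequent word across entries."""
    words = [w for e in entries for w in e.lower().split()]
    return max(set(words), key=words.count)

# Self-produced data: the human reflects by keeping a diary.
diary = [
    "Felt rushed again before the deadline",
    "Rushed through lunch, skipped my walk",
    "Less rushed today after planning the morning",
]

pattern = reframe(diary)                             # the AI reframes
question = f"Why does '{pattern}' keep appearing?"   # the human re-questions
print(question)
```

In a real setting the `reframe` step would be an LLM or topic model, but the shape of the loop is the same: reflect, reframe, re-question.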

The insights that matter most are those we gain by doing. In the world we now share with AI, action is a distinctly human responsibility. AI only acts within the parameters humans define. It lacks feeling—joy and pain don’t arise from its internal experience. Perception and embodied sensation are not just currently impossible for AI, they may be fundamentally unreachable.

Kakehi referenced Yann LeCun, who reminds us that current LLMs are sophisticated forms of make-believe. What makes us human is resonance, emotion, relationality, and responsibility: dimensions AI cannot replicate.

A research theme is especially interesting when it includes the very question of human existence: otherness and connection.


Shared responsibility

Agents may spread misinformation, so there is a growing need for safeguards and regulation.

But regulation can also widen the gap between creators and users.

A metaphor from fermentation:
Nukadoko (a fermentation bed) changes depending on who touches it and on the surrounding environment. While this could be seen as a source of problems (for a product), it can also be a source of joy, like the pride of owning a nukadoko shaped by one’s own interaction and environment. In that case, responsibility becomes something shared.

In research as well, acknowledging uncertainty when publishing is important. Sharing what is not known helps others proceed with informed, shared responsibility.


Right distance

In the context of human-nature engagement, the crucial task is to design technologies that sharpen our capacity to notice. Rather than fusing us completely with nature—or cutting us off from it—they should keep us at just the right distance, striking a harmonious balance.

Likewise, in human–technology interaction, we should design technologies that help us maintain the right distance from technology itself.


In the end, what matters is “our” experience

Chen proposed graduatable technologies: tools not designed to create permanent dependency. Current subscription-based models promote addiction. In contrast, he proposed supportive design that helps users become self-sufficient, like someone who learns to make their own nukadoko even after turning off the Nukabot.
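A "graduatable" design can be sketched as assistance that fades as the user demonstrates competence. This is my own illustrative sketch, not Chen's implementation; the decay rate and the success signal are invented assumptions.

```python
# Hypothetical sketch of a "graduatable" technology: guidance that
# fades with each unaided success until the user no longer needs it.
# The decay value is an illustrative assumption, not from the talk.

class GraduatableAssistant:
    def __init__(self, support=1.0, decay=0.25):
        self.support = support  # 1.0 = full guidance, 0.0 = self-sufficient
        self.decay = decay

    def observe_success(self):
        """Each unaided success reduces the guidance offered next time."""
        self.support = max(0.0, self.support - self.decay)

    def graduated(self):
        return self.support == 0.0

bot = GraduatableAssistant()
for _ in range(4):
    bot.observe_success()
print(bot.graduated())  # after four unaided successes, the user has graduated
```

The design choice worth noting is that "graduation" is the success condition, the opposite of the engagement metrics a subscription model optimizes for.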

Chen’s design philosophy emphasizes physical awareness and dialogical interactions that leave space for imagination, rather than having AI supply all the answers. It’s about designing systems that walk alongside people, not pull them in.

Referring to an improvisational performance by Yoko Ando, for which he contributed digital projections, Kakehi emphasized a form of technology that transcends surface aesthetics. Rather than simply decorating, it enables people to feel parameters through their bodies. He proposed a multisensory approach to perception—engaging the body, sensors, voice, and a fusion of the five senses—to deepen our awareness of technological experiences.

More importantly, technology changed with the dancer, forming part of a dynamic, improvisational flow.

Ultimately, it’s not just about what the technology does, but about the experience it enables—what the body gains, how it changes.

Yudane, yutori, yuragi (ゆだね、ゆとり、ゆらぎ) — Dominique Chen

The relationship between tool, user, and the world must include entrustment, breathing room, and gentle drift—a dynamic triad of transformation.

About our existence: identity, and belonging

Beyond the scholars’ themed talks, the Q&A session with students—mostly undergraduates and a handful of high-school participants—created a rare moment for self-reflection. It recalled the turbulence of our twenties, a time when our sense of identity rapidly expands yet belonging often feels elusive. The challenge now is to channel that sensitivity: how can we cultivate connection and purpose without losing ourselves in the process?


A student’s reflection

One student began by reflecting on their high school experience, where many peers did not seem overly concerned with how others perceived them. In contrast, university life felt more complex—requiring greater awareness of how one fits into various communities and perspectives.

“It’s hard to fully invest yourself when people are scattered across so many places and contexts. You realize you yourself become an object of critique.”

And they asked:

How can we maintain the things we find truly interesting? How can we live in a way that sustains those feelings over time?


The limits of knowing

Chen emphasized the continuous process of learning.

“I’ve been studying others. I read works written in the first person, but I never understand them 100%. That’s impossible. Even with people who seem homogeneous to me, full understanding never comes.”

The key, he noted, is not to mistake partial understanding for completion. We live in relation to others, and our task is not to grasp people fully, but to engage empathetically with what we can never fully comprehend.


Identity through connection

Kakehi cited the designer Kashiwa Sato:

“Ideas are not inside you—they’re out there. The strength lies in the ability to take in what’s outside and shape it into design.”

This, he suggested, is a form of liberation. If you believe ideas only come from within, you risk getting stuck. But if you learn to draw from the outside world, to collaborate and co-create, then identity can emerge not from isolation but from interaction.

“Through collaboration, you feel more at ease. You start to live within relationships, and your own sense of self rises up from those connections.”

He described how Chen’s perspective had become part of his own thinking—an example of how identity can multiply, not dilute.

“That joy of having more identities… it’s like you’re being created in a meta-level way, through the process.”


On being influenced: “falling” into resonance

Chen built on this by describing how influence begins not with studying (a word he had used earlier before correcting himself) but with affection.

“You’re not studying someone in order to understand them. You fall for them. And then, before you know it, they’ve become a part of you.”

In his view, we are made up of many parts, absorbed through admiration, resonance, and shared experience.

“You’re not trying to know someone—you empathize, and you naturally come to know.”


A word on our twenties

Both professors acknowledged that our twenties are a particularly turbulent time.

But in this multiplicity—this messiness of selves, influences, and uncertainties—there is also the foundation for a more open, relational self. A self that is not defined by boundaries, but by its connections.

![[life of a scientist_tom gauld.webp]]

My take

In a moment where AI tools seem to do everything, this conversation helped me ask: What is it that only I can feel? notice? imagine? And how might I keep that flame alive without outsourcing it too soon?

It also inspired me to value users’ bodily experience as a core element of experimentation.

We are rushing to develop the most powerful AI systems, but it is more important than ever to design the context, the narrative, in which we live with AI.

Some “how might we” questions we can start thinking about:

  • How to preserve the autonomy and self-reflection of the user?

More fundamentally,

  • What is the right distance between humans and AI (depending on the use case, such as education or care)?
  • What activities and roles can be entrusted to AI, and what should not be (even if they could, taking short- and long-term consequences into account)?

At the end of the day, every single question we have can be a valid question. It is therefore valid to pursue the most powerful, human-replacing AI. However, it will never be a relevant question for humankind moving forward.