National Summit on Artificial Intelligence and Culture

Rina Chen’s living notebook on digital craft and design.


Overview

What: National Summit on Artificial Intelligence and Culture
When: March 15 to 17
Where: Banff Centre for Arts and Creativity
Who: 300 leaders from the cultural and tech sectors

The summit was designed around the three pillars of the Canadian AI strategy: Build, Protect, and Empower. For each pillar, the discussion moved through leadership talks, action panels, and workshops to cross-pollinate ideas, bring different voices to the table, and generate potential action plans for the future.

Key words

#digital-infrastructure #economic-development #cultural-segment

Key takeaways

  • AI is not a monolithic, singular entity; it is a multitude of different things and concepts, embedded at different levels of our systems, built with different intentions and for different beneficiaries.
  • Technology and art have always been developed together and should continue to be; this time is no different.
  • In education in the age of AI, human skills become more important than AI skills.
  • The Minister of Culture and the Minister of AI both affirmed the importance of equal access to AI.
  • The negative impact of AI-generated content flooding the market, licensing, and copyright fraud are high priorities for the next round of legislative drafting.

Context

The summit is distinctive in its emphasis on culture and AI. The Minister of Culture set the tone of the discussion with a historical account of culture:

Culture is a set of instructions from ancestors to new generations. It is important to realize who has the pen to write those instructions, or in our case, who will write the code.

And “it’s more powerful than policy. It eats policy.”

It’s interesting to note that coding is rightfully recognized as having equal or greater power than pens and texts. However, culture in this context aligns with that of a culture war rather than a multitude of civic cultures. That’s why the Minister of AI framed this summit as “the frontline,” which I personally found confusing, given that Meta and other powerful current pen-holders were there as well. With whom are we fighting, then? I find fighting the least useful metaphor in the context of this summit, since it was the wicked, systemic impossibility that resurfaced in the end, at least in my mind.

A more helpful context included:

  1. Be pragmatic, open to opportunities, but candid about concerns. We are looking for solutions (but not so fast).
  2. Trust building is recognized as the key word moving forward. “Citizens”
  3. Other keywords included Compute, Capital, Contract, Trust, and Talent (3Cs + 2Ts).
  4. On the global scene, Sovereignty, Resilience, and Collaboration were the keywords, consistent with the Prime Minister’s Davos speech.
  5. The government is willing to invest in a sovereign AI, including the full stack behind it (really?)
  6. At least in words, the envisioned AI development is to serve people. AI for all (really?)

The 300 participants did include a vast diversity of professionals and people from different backgrounds. The occasion proved helpful in diagnosing what the blind spots are and who is left out. In this sense, culture, diversity, and equity are deeply embedded in, foundational to, or even prerequisite for our discussions.

Below is a detailed summary of the key concepts I picked up at the summit.

Design the Interdisciplinarity

The Connected Minds Program presented by Pina from York University.

The project was also able to use

Art + Technology

Art has always evolved closely with technology. Les 7 Doigts de la Main

Indigenous Knowledge

I had an especially enlightening encounter with people from Indigenous backgrounds, who are leading the change and whose strong messages were heard during the summit.

Elder Harley Crowshoe and Shani Gwin’s opening remarks and later interactions. Panel talk by Dr. Natasha Ita MacDonald. And Jordee Reid from our break-out room discussions.

From Bjorn’s opening remark to the angry remarks made by many participants from the music, publishing, and animation industries, copyright and its violation was one of the most voiced keywords throughout the sessions. However, I had an inexplicable knot in my heart listening to this word repeated over and over, with intense emotion attached to it.

As someone who sits between tech and art, especially through creative coding, I have always identified with copyleft, to the extent that my personal logo is designed as a copyright mark mirrored to the left and cut open. I believe that knowledge (both practical and logical) should not be enclosed, and that anybody should be able to access it if they so choose. Knowledge has never been generated by any single human being. We are all embedded in our environment, and it provides the source of our knowledge through experience, interaction, and inspiration. It’s awfully human-centric and selfish to claim the fruit of one’s work while not acknowledging, in equal terms, what we received from both human and non-human entities along our journey of generating new knowledge. However, I’m not such a fool as to not comprehend why we cannot have all knowledge openly and freely accessible to all, or to not understand that in a free-market system, copyright is the mechanism that supports creators and writers in living off their work and continuing to create new knowledge.

However, I cannot make myself identify with that idea.

What about the independent engineers, from the early days of the Internet to the current open-source ecosystem, who have driven the development of our essential tech stack as a labor of love? I’d argue that coding and building a digital system are just as creative as art making, writing, or composing. Would it be feasible if each one of them claimed a “copyright”-like share of earnings from each of us using that fundamental, yet oftentimes called “banal,” technology? Digital technology is so interlaced that one cannot build anything meaningful without using code that other people built before. How enclosed we would feel if, when trying to develop and improve something, we were already financially barred from building on top of existing knowledge, not to mention acquiring the skills and knowledge necessary to contribute, which would be harder to learn because we would need the means to pay for them in the first place.

Some people might argue that the tech world is different from the art and design world, where humanity belongs, and I’d refute that. Tech, like AI, is not a monolithic entity. I’d argue that a part of it, championed by Internet art, generative art, experimental blockchain, and creative coding, has demonstrated a worldview that is much more inclusive than the Western tradition of individualism, and which, interestingly, resonates more with Indigenous knowledge, with the non-Western narratives of the places where I received my life (Japan and China), and with many other communities around the globe. That ancestral knowledge teaches us the humility not only to shout for our own rights, but to realize the responsibility we have for each other, for the planet, and for any being or non-being whose voice cannot be represented in a room full of people, power dynamics, and hidden extractive practices.

And my heart-knot was untied by a conversation in our break-out room.

However, my ultimate question is: how can I stand for free and equal access to knowledge and tools for all, including AI tools, while also making a living in this capitalist system? Is grant application the only way to fund it? Or must I put my faith in the public domain, which, from my basic political science knowledge, is the last thing anybody should trust to hold its value and never regress over time?

Hypothetical Action Plan

  • Applied agents are locally owned and modified by a community of people sharing the same values.
  • Applied agents should be small, and can be dumb, so long as they serve a specific purpose for the community.
  • Data, knowledge, and archives are managed in different layers, depending on how much they can (or should) be publicly circulated, or whether they should stay within a particular community.

This proposal conflicts with the governmental interest in AI sovereignty and in winning the game of AI adoption and the AI capability race. The agents proposed here are so simple, transparent, and accessible that they might as well be shared openly across the globe, for any individual or community to benefit from using them.

However, they are extremely cost-friendly in our precarious time. Ever since we saw our mother Earth from the moon, we have visually realized that our resources are not unlimited. How helpful is it, really, if every single nation competes in the race to build the most powerful sovereign AI? The US and China are already in that race, dictated by the same game of power as when nuclear weapons became something nations needed to claim sovereignty over. And all that capital is pumped into powerful, even lethal, AI development that doesn’t benefit citizens in any way except to deter a potential war by creating ever more lethal, wasteful stuff.

I’d love to remain trusting, just as the Indigenous peoples remained trusting of the colonizers, who replied with the worst conduct a human can arrive at.

“If you are not at the table, you are on the menu.”

So my ultimate question here is: how do we embrace our Canadian values of a flourishing civil society, inclusivity, diversity, human dignity, and equality, while also having a framework that never undermines, and always prioritizes, what we care about, and that deters any potential unreasonable, awful, and dehumanizing intrusion initiated by another country?

Wicked Problem

My questions are wicked. I’m asking how to make possible something that can never naturally happen in this system, a free-market, sovereignty-based system that we are deeply trapped in. So deeply trapped that we cannot even imagine a world without it.