Ontario System Group

Rina Chen’s living notebook on digital craft and design.


Summary

I had the opportunity to join a session of Ontario System Group, hosted by David Ing. The main discussants included Bianca Wylie (who contributed to the founding of Civic Tech Toronto), Su Lynn Myat, and Zaid Khan.

About the event

Extended insights

“It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.”

― Wendell Berry, Life is a Miracle: An Essay Against Modern Superstition

This division is not abstract. It already shapes how we design, regulate, and talk about technology.

Technology is cultural, not just technical

Much of contemporary conversation about AI ethics and policy focuses on safety, rights, or compliance. Yet the deeper issue is cultural. Technology emerges from, and contributes to, specific modes of thinking and living. Ethics cannot be separated from the cultural conditions that give technology meaning.

Bianca argued that AI cannot be ethical in itself; only its use, and the human choices behind it, can carry moral weight.

Our current systems are not oriented toward that. Data protection policies lack contextual awareness, and AI governance is driven primarily by commercial interests. Even public-sector tools, ideally designed for the common good, are constrained by Canada’s underfunded digital infrastructure. When public services turn to automation out of necessity, it becomes difficult to return to human-centred alternatives.

Rethinking responsibility

Bianca also emphasized that the traditional rights-based model is insufficient. Rights matter, but they often fail to address the distributed, collective, and long-term nature of technological impact. A shift toward responsibility—personal, institutional, and societal—may provide a more realistic foundation for navigating contemporary challenges.

This includes the responsibility to stay engaged, even when it is uncomfortable. Responding effectively to the pressures of automation, political polarization, or authoritarian drift requires a kind of cultural training: the ability to hold uncertainty, to act without full information, and to think beyond immediate timeframes.

A Moral Emergency

Underlying these discussions is a sense of urgency. Private equity shapes the trajectory of AI in ways that resist democratization. Enforcement of policy is unclear. AI increasingly functions as a system of labor extraction, widening gaps in power and capacity.

The combination of cultural disorientation, accelerated technological change, and weak governance creates what some described as a moral emergency. Addressing it requires attention not only to regulation, but to the cultural narratives that guide our understanding of technology itself.


Critique

Below is what I felt was lacking in the OSG discussion, from a tech practitioner’s point of view.

Challenge 1: People cannot let go of the consumer mindset

Some participants complained that digital software products come with no real guarantee that the product will keep working, unlike the good old physical manufacturers who would dash to your door to fix it for you.

It is hard to imagine that the best scenario going forward is taming tech companies into nostalgic, honest manufacturers (they never were to begin with), though the impulse is understandable.

If governance were to follow this narrative of taming, it would be not only ineffective but dangerous. A digital product is, in an important sense, very different from a mechanical one: it has a far greater capacity for obscuration and manipulation, which happens without consumers realizing it. More than ever, WE ARE FORMED BY HOW THESE PRODUCTS ARE DESIGNED.

So the initial discussion surrounding cultural shift is a good starting point. But it is undeniable that people in tech communities (especially those inclined to support FLOSS) have a much more advanced understanding of the problem. That makes sense: they touch and play with software for most of their day. This is why more conversation needs to happen between policy makers, citizens, and tech practitioners, since the practitioners are the experts in this subject matter, both in knowledge and in affection. It is always a good idea to hear from people who think about and work on the subject matter more than you do (we call them experts). But the cultural gap (the “language barrier”) seems to be a bigger challenge than I had imagined, which I discuss later.

As is often said in the creative coding community, increasing the number of people who can use digital tools is important, but insufficient. What matters more is expanding the number of people who can change the tools themselves.

Designers increasingly work to actively shape the systems they use. Policy makers, too, should engage at this practical, hands-on level. If they are to guide technological development, they need the same kind of material literacy that designers and engineers cultivate. The challenge is not only expertise, but scaling that expertise so that others can meaningfully participate.


Challenge 2: Narratives, Beliefs, and the Machine Consciousness Debate

A notable tension emerged around the question of machine consciousness. For some, even entertaining the idea seemed implausible or irrational. Yet the discomfort reveals something deeper: a clash of narratives about intelligence, agency, and the boundaries of the human, even among people from the same school but different departments (Digital Futures, Strategic Foresight).

The issue is less about whether machines “feel” and more about how differing beliefs shape our capacity to collaborate. People can believe anything they want, but the division is causing operational problems. If you are not curious about the other side’s narrative, you will not be able to have a constructive conversation.


Challenge 3: Policy Alone Is Not Enough

Despite the proliferation of governance frameworks, the discussion seemed to be going in circles even after all these sessions. Policy language is often filled with terms like “should,” “may,” and “ought to,” with little clarity about implementation. Without accountability, even well-intentioned policies can fail to produce meaningful change.

It may be that technologists, policymakers, and everyday citizens are living in fundamentally different narratives:

| | Technologists | Citizens | Policymakers |
| --- | --- | --- | --- |
| Financial impact | Less financially affected | More affected by non-financial harms | Unclear |
| Technical literacy | Basic technical literacy | Limited or no foundational understanding | Limited understanding, more opinionated |
| Disposition | Observant, exploratory | Anxious, cautious, often frustrated | Agitating |

With this, the question ties back to Challenges 1 and 2:

  • Policy makers need to be curious enough about the subject matter and try to help as many people as possible access the information.
  • More communication needs to happen between tech practitioners, citizens, and governance: finding a common language and being willing to listen to the other side’s narratives.
In both cases, curiosity is the most important quality moving forward. Interestingly, the same thing was mentioned by [[Wired University Lecture with Yasuaki Kakehi & Dominique Chen|Kakehi and Chen]].

Toward a More Grounded Future

Bianca concluded the event well by proposing that, to live well with technology, we may need to shift our orientation entirely: to see technology as one part of a broader cultural landscape rather than the centre of it. Be outside and around technology instead of inside it; don’t think from technology, think of what you want to do and contextualize technology within it.

Going meta is the key: the point is not having more users who can use the tool, but having more users who can change the tool.

The open question is how to inspire people to care about technology and our living world.


Appendix: Comparing Canada’s Digital Charter with US and EU

| Feature | Canada (Digital Charter / C-27) | Europe (GDPR & AI Act) | United States |
| --- | --- | --- | --- |
| Legal style | Principles-based. Sets broad outcomes and lets companies decide how to meet them. | Rules-based. Extremely prescriptive and rigid “black and white” rules. | Sectoral/patchwork. No federal law; a mix of state laws (CCPA) and sector rules (HIPAA). |
| Privacy philosophy | Individual autonomy. Focuses on the user’s control over their information. | Human rights/dignity. Privacy is a fundamental right that the state must guard. | Consumer protection. Focuses on preventing “unfair or deceptive” business practices. |
| AI regulation | Targets “high-impact” systems specifically. | Risk-based categories (unacceptable, high, limited, minimal). | Mostly voluntary/sectoral. Relies on NIST frameworks and executive orders. |
| Enforcement | Moving toward a tribunal model with heavy fines. | National data protection authorities with massive fining power. | FTC and state attorneys general; varies widely by state. |