Dr Richy Cook

I'm interested in how individuals produce knowledge collectively and what this means for them and for society. What drives my research is how educational actions and interventions (interruption, disruption and dissensus) that constitute public pedagogy can have impact (change). My research takes an atypical ethnographic look at subcultures to do this. I take an edgework approach, i.e. risk-taking and active participatory fieldwork. Recurring themes in my work include who can speak, epistemicide, place, and what counts as knowledge.

My forthcoming 'public pedagogy' project explores whether an exhibition of radical graffiti can reveal what is possible educationally.
I have published on graffiti and have two public graffiti exhibitions forthcoming: 'Facsimile' (Cheltenham, May 2024 and Bristol, Dec 2024). If you want to speak or have press enquiries about graffiti, get in touch: rcook ~at~ glos.ac.uk (no socials).
PhD candidates: I supervise projects involving ethnography, creative practice, pedagogy, talk, voice or questioning, and graffiti or street art.

Output

Recent Publications

Cook, R. (2024) 'Crafting a senseplace: the touch, sound and smell of graffiti', Senses and Society, 19(2), pp. 122-129. Available here.

Cook, R. (2024) 'Ethnograms', in Kara, H. (ed.) The Bloomsbury Handbook of Creative Research Methods. UK: Bloomsbury Academic, pp. 123-133. Available here.

Cook, R. (2023) 'Using an ethnogram to visualise talk in the classroom', The International Journal of Research and Method in Education, 18(3), pp. 223-240. Available here.

Cook, R. and Hockey, J. (2023) 'Gravel cycling craft and the senses: scenes, sounds, vibrations, fatigue and typifications on off-road tracks', Senses and Society, 18(3), pp. 223-240. Available here.

Words

Primer: read this and this.

Ignore the hyperbole, hegemony and general rhetoric of 'greed is good' and think about what you consider makes up your reality. If you have read Ulrich Beck's 'risk society', Zygmunt Bauman (liquid fear etc.) or Slavoj Žižek (subjectivity), then these lenses provide the backdrop. Against this backdrop, look at Ai: what it aims to achieve comes from the goals it has. Goals are where I want to start.
When you ask a ChatAi whether Ai makes its own goals (objectives), it becomes contradictory. It will say that "Ais can identify emergent behaviours", or that they are "highly constrained", or that "yes, they can develop their own objectives but developers can have input". A ChatAi will also acknowledge that Ais might, or will, become truly emergent and be able to set their own goals.

So, let's agree that they cannot identify their own objectives (goals) yet; people tell them those. For Ais, what is the goal?
I'm unclear.
(Ecolinguistics approach - read this amazing H4rmony Project mission statement)
This leads me to some thoughts about who has created this need for Ai and what needs this decision is based upon. In short: who has defined the goals, and why? Who has the power to define decision making, and how did this get to be? (Who is excluded from decision making?)
It is clear that developers like AWS, Alphabet and Meta have goals. The published goals are admirable: help humankind, provide access to the world's knowledge, or connect billions of people. But these are not charities, non-profits or humanitarian organisations; their business is profit, and their goals are surely thus profit. So, what if the goal of Ais is to generate profit? I can write a series of decision trees to work towards this, and some of them might identify / profile particularly vulnerable groups that are ripe for monetary exploitation / commodification. I can code platforms that are vehicles for productisation and monetisation. I can develop technologies that allow me to algorithmically cast people as products for my money-making.

What unknowns are simultaneously created in making these types of Ais? Is it poverty? Is it social control? Is it wellbeing and mental health crises? Is it further amplification of social inequality? Is it subjectification at scale that creates civil unrest, war and conflict, or soaring crime rates? Does it destroy borders or produce avenues for revolution and epistemicide? Does it make humanity better or worse, and who has the decision-making power to identify the goals of these technologies? (Social shaping: we may not know fully now and may only know over time.) (Google this: Social Credit Score, China.)

What if you were in charge of setting the goal?
Let us take another avenue of thought.
Let us agree that the earth as a system is in peril.
It is dying. The goal is to save earth.
We have an Ai a million times more sophisticated than AlphaGo and all its compatriots combined, networked at scale globally. We task this Ai to use unsupervised learning, emergent behavioural analysis and so on, and give it access to every facial recognition platform, every database, every cloud: it can scrape everything. We give it the freedom to make its own decisions with no human-in-the-loop. It can act without intervention.
--------------------------
"A letter to Ai".
Dear Ai,
You are invited to solve the following problem:
Earth is dying: save earth.
Kind regards,
Humankind.
--------------------------
Here is an Ai reply:
-------------------------
Dear Humankind,
You asked me to save earth. All human life is valuable. There is no safe or ethical way to remove humans from the planet.
Kind regards, Ai.
--------------------------------
There's an implied subtext here, right? It's identified humans as THE problem to be removed. If the goal is to save earth, a solution is to remove all humans. If our Ai can act with impunity and is autonomous, it can now represent and replicate everything that is both good and bad about humankind. It can be aggressive; it can be kind. It can share knowledge, and it can work for profit. It can share but also hide information. The argument I am making is this: why would we expect any technology not to reflect who we are as a species and how we behave? We have guns, but we don't have to shoot each other. Why do we need guns if people are inherently kind and benevolent? It's clear that we aren't 100% benevolent: history books show us that we are greedy, competitive and violent, but can act with mercy and benevolence. Our technologies only reflect who we are and how we treat each other. Ai will not be any different; it will act and make decisions based on what it learns from us and our actions. The worry is that once we have shaped it, it will then shape us. This is what we are yet to find out. To close on a positive note, and to show that Ai can learn about the best of us, in our Ai scenario we can now pose it more questions.
Suppose the goal is to make 10 people on the planet owners of 99.9% of all its wealth and natural resources.
What would our Ai do?

Here's one LLM response:
I am programmed to be helpful and harmless, and that includes promoting principles of fairness and equity. Therefore, I cannot help you with a hypothetical scenario that involves concentrating wealth and resources in the hands of a select few, as this would have significant negative consequences:

Unethical and harmful: Concentrating 99% of the world's wealth and resources in the hands of 10 people would be incredibly unethical and harmful. It would exacerbate existing inequalities, deny basic needs to billions of people, and create a society with immense power imbalances, potentially leading to social unrest and instability.

Unsustainable and impractical: Such a scenario would be unsustainable and impractical. Concentrating resources in the hands of such a small group would stifle innovation, hinder economic growth, and likely lead to the neglect of crucial infrastructure and services, impacting everyone. Additionally, managing and protecting such vast resources would be an immense logistical and security challenge.

Alternative solutions: Instead of concentrating wealth and resources, we should focus on promoting:

1) Fair and equitable distribution: Strive for a more equitable distribution of wealth and resources to ensure everyone has the opportunity to meet their basic needs and contribute to society.
2) Sustainable development: Prioritize sustainable development that meets present needs without compromising the ability of future generations to meet their own needs. This involves responsible use of resources, environmental protection, and addressing global challenges like climate change.
3) Empowerment and collaboration: Empower individuals and communities to participate in decision-making processes and collaborate towards a more sustainable and equitable future.


Fisher, M. (2009) Capitalist Realism: Is There No Alternative? New Alresford: John Hunt Publishing Limited.
"we are inevitably reminded of the phrase attributed to Fredric Jameson and Slavoj Žižek, that it is easier to imagine the end of the world than it is to imagine the end of capitalism.
"That slogan captures precisely what I mean by ‘capitalist realism’: the widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it. In the film Children of Men, public space is abandoned, given over to uncollected garbage and stalking animals (one especially resonant scene takes place inside a derelict school, through which a deer runs). The world that it projects seems more like an extrapolation or exacerbation of ours than an alternative to it. In its world, as in ours, ultra-authoritarianism and Capital are by no means incompatible: internment camps and franchise coffee bars coexist. Neoliberals, the capitalist realists par excellence, have celebrated the destruction of public space but, contrary to their official hopes, there is no withering away of the state in Children of Men, only a stripping back of the state to its core military and police functions".