Karon Jones, Tamara George, Thomas Depew, and Charles Woods
Introduction
2024 was a busy year filled with updates for our institution. The monumental update was to our university’s name, as we officially transitioned from Texas A&M University-Commerce to East Texas A&M University (East Texas). Another critical update was the establishment and launch of a Master’s in Artificial Intelligence program, the first of its kind in Texas. The program is housed in the East Texas College of Innovation and Design (CID), and in Fall 2024, Charles was asked to teach AI 510: Ethics of Artificial Intelligence. The course description reads:
“As artificial intelligence (AI) continues to transform various aspects of our lives, it becomes imperative to examine the ethical implications of its development, deployment, and impact on society. This course is a topical seminar designed to engage students in critical discussions surrounding the ethical challenges and dilemmas posed by AI technologies. Topics may vary, but may include: bias and fairness; transparency and accountability; environmental sustainability; AI and social justice, legal implications, emerging technologies, case studies, privacy issues, ethical guidelines and policy development.”
A variety of books and open-access resources available from our university library, Velma K. Waters Library, were integrated into the course. The open-access resources we read included Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies (Stahl, 2021) and Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges (Stahl, Schroeder, & Rodrigues, 2023). We also performed a close reading of Kate Crawford’s (2021) book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, to guide our discussions and to challenge the ways we were thinking about technology, ethics, and cultures.
Charles created discussion prompts based on each chapter of Atlas of AI to which we (Karon, Tamara, and Thomas) and our peers responded. The class worked through each chapter of the book, including those that cover issues related to the DRPC mission, like Chapter Three. Data and Chapter Six. State. There was a Choose Your Own Adventure element to the reading wherein students could opt out of reading one chapter. Activities and discussions included creating maps of the geo-locations where data is most often collected from us in the haystack of information, developing working ethical frameworks for performing labor related to AI, and analyzing terms of service documents for AI technologies using the elements of the digital rhetorical privacy analytic. At the end, we used critical points from each of our discussions to individually create an AI Manifesto. In the rest of this blog post, Charles shares the prompt for the Atlas of AI Conclusion. Power & Coda. Space discussion, followed by AI Manifestos from Karon, Tamara, and Thomas. We offer a brief conclusion at the end.
The Prompt
Due:
Choose Your Own Adventure: There is not a Choose Your Own Adventure element for this prompt. You must complete the Atlas of AI Conclusion. Power & Coda. Space Forum.
Prompt: Throughout the book, Atlas of AI, Kate Crawford presents ethical issues at the intersection of artificial intelligence (AI) and earth, labor, data, classification, affect, the state, power, and space. I want us to spend some time being reflective now that we have concluded the book, but also being forward-thinking as we chart potential futures for policy related to ethical AI.
A manifesto is a public declaration, often related to policy, and manifestos take many forms. Please draft an AI Manifesto based on your positionality, your lived experiences (including with AI), and your career field or field of study that attends to each of the constructs Crawford attends to in her organizational framework for Atlas of AI: 1) earth, 2) labor, 3) data, 4) classification, 5) affect, 6) the state, 7) power, and 8) space.
Here are some approaches I think are helpful to consider for this assignment:
- Reflect on what you learned from Atlas of AI Conclusion. Power & Coda. Space and how what you learned challenged what you know/knew
- Outline what you think ethical AI policy looks like to you, generally, and/or within your career field or field of study
- Chart potential futures for ethical AI usage generally, and/or within your career field or field of study
- Include 2-3 quotes from the book to support your arguments.
- Use scholarly and non-scholarly resources (from class or otherwise)
- Highlight personal stories and anecdotes.
- This could be an essay, but what other forms could a manifesto take?
- Be creative if you like.
There is not a length requirement. You have a great deal of freedom in this assignment. I am excited to see what you all produce.
Response Requirements:
AI Manifestos
Karon Jones
Earth – There are so many concerning aspects of AI that I’m not sure enough people are talking about. In my life – campus, classroom, social life, etc. – I use AI daily; yet, I was not aware of the implications of AI to the Earth. As Crawford demonstrates, AI comes with devastating environmental costs that cannot be ignored:
- Awareness: Regardless of the culprit, it’s imperative that we pay attention to our carbon footprint – reduce, reuse, and recycle – and we must also support political candidates who are mindful of environmental concerns, specifically within the ever-expanding technology sector, which might seem clean but is far from it.
- Learn the Science: The evidence of the urgency is in Crawford’s research: “Data centers are among the world’s largest consumers of electricity,” whether “in the form of coal, gas, nuclear, or renewable energy,” and “There are seventeen rare earth minerals used in technological devices, including batteries…Extracting these minerals from the ground often comes with local and geopolitical violence” (Crawford, xxx, pages).
- Personal connection: As a young child of the 1960s and 1970s, the commercial with the tearful First Nation environmentalist still lives in my brain. There must come a time when what we are doing to the planet comes first, or convenience, capitalism, and constructs will always take precedence. To quote the First Nation mantra, “People start pollution, people can stop it.”
Labor – Crawford clearly states that “large-scale computation is deeply rooted in and running on the exploitation of human bodies” (57). Within and outside of the technology sector, the author reminds us that with AI/technology, workers are surveilled more than at any time in history. This seems true regardless of the field.
- Awareness: The list Crawford provides – observe, assess, and modulate (59); passively surveil; timing devices; sensors (76) – reminds us that humans are being treated like “animals” (74), “robots” (84), and “slaves” (78).
- Personal Experience: No doubt, surveillance and stress occur in public education – lesson plans, observations, mandated state standards, and other expectations – and are known to be why many public education teachers are exiting the field. No one can function under that type of continuous pressure.
- The call to action involves solidarity, protests, walk-outs, political action, and other movements that push back because, as Crawford states, “We all have a collective stake in what the future of work looks like” (87).
Data – Crawford asks us to consider how data is obtained and how it’s used, stating “The AI industry has fostered a kind of ruthless pragmatism, with minimal content, caution or consent-driven…practices while promoting…that the mass harvesting of data is necessary and justified” (95).
- As always, awareness: read contracts, terms of service, etc.
- Choose data privacy settings via privacy tools, change passwords, and enable two-factor authentication.
- Opt out of data collection when possible.
- Avoid clickbait, offers, quizzes, etc.
- Demand transparency in data collection.
Classification – The way that AI data is classified produces discriminatory results in all facets: gender, race, socioeconomic, political, etc. As Crawford states, “Classification is an act of power.” Because classification relies on machine learning and predictive analysis based on data sets scraped from human input, biases exist. This “power to define” people (141) is dangerous.
- Determine which frameworks are creating the problems.
- Change the practices that create “nonconsensual classifications” and “normative assumptions” (148).
- Demand more oversight and more transparency (regulations)!
Affect – Decisions made by any entity based on AI’s facial recognition data are subject to bias, misinformation, and disinformation because, as many researchers cited in this chapter argue, emotions cannot be adequately classified based on facial expressions alone.
- We must question the origins of claims made based on facial recognition.
- We must understand how unreliable facial recognition data sets are and the nefarious ways this information can be used: in political campaigns, in the workplace, and elsewhere.
- We must acknowledge that “Efforts to simply ‘read out’ people’s internal states from an analysis of their facial movements alone, without considering various aspects of context, are at best incomplete and at worst entirely lack validity” (178).
State – AI allows people to be “tracked and understood” without their knowledge (184). The National Security Agency of the US government, for example, has an “empire of information” (184), “large aggregates of data and insights” (185) on private citizens and groups – as do agencies in many other countries. As Crawford explains, this information is collected and used against human bodies, including in spaces like “classrooms, police stations, workplaces, unemployment offices” (209).
- Awareness of the companies and institutions that collect data is a starting point: banks, employers (Walmart), government organizations (police departments, ICE, the US Department of Health and Human Services), and Amazon’s Ring devices, for a short list.
- Choosing not to participate (e.g., Google employees’ protest regarding Project Maven).
- Demand transparency.
- Avoid data-collection software and protocols like social media platforms, crime-reporting apps, etc.
- Hold decision-makers and politicians accountable (via capitalism, voting, etc.)
Power – Crawford reminds us that AI is designed to “discriminate, to amplify hierarchies, and to encode narrow classifications” in ways that “benefit the states, institutions, and corporations” (212). It is imperative that individuals practice the following:
- Awareness – of stories of those “disempowered and discriminated against” (225).
- Resistance – call for “legal and technical restraints” (226).
- Refusal – “broader national and international movements that refuse [inequitable and unjust] technology-first approaches” (226-27).
- Both the power and state chapters discuss the global AI race. The warnings include the idea that whichever leader believes they hold the advantage in AI could assert that perceived advantage to harm adversaries; international agreements are imperative, as so many actors operate on self-interest.
Space – Crawford elaborates on the future of AI and the potential to use AI to colonize space. Of the billionaires focused on space colonization – Bezos, Diamandis, Page, Schmidt, Musk, etc. – Crawford warns, “Extreme wealth and power…now enables a small group of men to pursue their own private space race…subsidized by government funding and tax incentives, as well” (231).
- This is an additional topic I knew nothing about, but I’m learning – so, again, awareness. I have never known how to combat billionaires, except that we have the opportunity to do so collectively if we read, learn, and vote for regulation.
Work Cited
Crawford, Kate. Atlas of AI. Yale UP, 2021.
Tamara George
I am at war within myself. I want to preserve the environment and have been told that eliminating fossil fuels is the way to do just that. However, to do that, I have to rely on lithium batteries, and their production is as detrimental to the environment as the fossil fuels I’m using. Several times this semester, I’ve mentioned my reliance on technology (my Alexa, Google Home, iPhone/iPad/Mac) and how, though I can remember life without such technology, I choose not to revert. So I am at war between my love for technology and my own desire for ethics, privacy, and security.
I have a responsibility to the environment. It would be flippantly easy to say, “I’m just one person. What harm can I do?” But if each person on the planet pleads the same, we collectively inflict damage on Mother Earth that is irreparable. I am not solely responsible for the mining of lithium or other natural resources, yet I play a role. I am not solely responsible for the data mining AI companies and developers are doing, yet I play a role. Sadly, I’m not sure I could willingly relinquish any of the tools I use. I enjoy the ease of life AI presents. But I do know I need to use AI ethically and responsibly. I also realize that those who develop AI are those who hold power. I am among the powerless. If I choose to relinquish AI, I choose to give up mobile banking. Navigation services. Google searches. I choose to give up a valuable tool that makes life more efficient.
I don’t know how to change what I’m doing for the better of the greater good. I also don’t know that any of us can agree on what that “greater good” is. Some feel that we need space exploration, yet I argue that we don’t. I argue that we need to utilize the technology that we have to preserve the society and environment we have. Why should we ruin space or other planets the way we’ve ruined Earth? That would be riotous and unethical.
Kate Crawford states, “Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it. They are designed to discriminate, to amplify hierarchies, and to encode narrow classifications. When applied in social contexts such as policing, the court system, health care, and education, they can reproduce, optimize, and amplify existing structural inequalities” (211). This discrimination is not new. It is simply a rebranding of what our society has encountered for the past 400 years: suppress those who are deemed less valuable. As the billionaires such as Bezos and Musk race to space, they are willing to stand on the bones of the labor forces, the weak, the disabled. They refuse to see how they are strip-mining humanity on earth in order for them to abuse the properties of space.
Crawford also argues that, “AI systems are built to see and intervene in the world in ways that primarily benefit the states, institutions, and corporations that they serve. In this sense, AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them” (211). While scholars and tech developers argue that AI is there to ease the burden of labor, I am hesitant to buy into such jargon. AI, although it might be a beneficial tool (and I do use it!), is also eroding critical thinking and independent thought as it encourages laziness and dependence on algorithms.
Careful use of AI is necessary, and discriminating in the permissions we grant is our responsibility as we allow AI to invade our lives.
Thomas Depew
Various worldwide cataclysmic events have been predicted since the 1967 book Famine 1975! America’s Decision: Who Will Survive? (Lott, 2019). Global warming, global freezing, nuclear winter, and other threats have been predicted, but none have come to pass. My point in citing these items is only to say that we should be careful with our concerns so we can make the right policy prescriptions.
I am less concerned about the infrastructure side of AI, e.g., energy and technology issues. AI does have tremendous power requirements, but what if a new generation of AI chips requires 15% less power? We saw some of that innovation on the PC side with the introduction of the Snapdragon X Elite processors for Microsoft Windows, which meet Microsoft’s performance requirements for running Copilot locally. Those chips are so efficient that you can have a laptop that does not require a fan for cooling (a so-called fanless design).
As other manufacturers meet the 40 trillion operations per second (TOPS) performance standard, some of the existing data center workload can be offloaded to these PCs, lowering data center power usage. Additionally, there is an effort to move chip production from the current 3-nanometer process to a 2-nanometer process. That will usher in even more efficient chips, which in turn will reduce energy consumption.
What I am concerned about is how AI is used and how it is susceptible to misuse. As Crawford noted, “there has been a widespread pillaging of public spaces; the faces of people in the street have been captured to train facial recognition systems; social media feeds have been ingested to build predictive models of language; sites where people keep personal photos or have online debates have been scraped in order to train machine vision and natural language algorithms” (Crawford, 2021).
It is this type of invasion of privacy that is concerning. Concurrent with the development of AI has been the rise of big data. I once learned that it would take 5 exabytes (a 5 followed by 18 zeros, in bytes) to store a transcript of every word ever spoken in the entire history of the world. It is estimated that 147,000 exabytes of data were generated in 2024 (Duarte, 2024). There are likely insufficient controls on the use of this data, and it is thus open to misuse.
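To make the scale of those two figures concrete, here is a quick back-of-the-envelope sketch in Python (the 5-exabyte figure for all human speech and the 147,000-exabyte figure for 2024 are the estimates cited above, not my own measurements):

```python
# Back-of-the-envelope comparison of the two data-volume figures cited above.
EXABYTE = 10**18  # one exabyte, in bytes

all_speech_ever = 5 * EXABYTE        # rough estimate: transcript of every word ever spoken
data_2024 = 147_000 * EXABYTE        # Duarte's (2024) estimate of data generated in 2024

ratio = data_2024 / all_speech_ever  # how many "all-of-human-speech" archives fit in one year
print(f"2024's data output is roughly {ratio:,.0f} times all human speech in history")
# → 2024's data output is roughly 29,400 times all human speech in history
```

In other words, a single year of data generation now dwarfs the entire spoken record of humanity by more than four orders of magnitude, which is what makes the question of controls on that data so pressing.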
The AI world is hoovering up data at an alarming rate and will continue to do so. The technology industry response seems tepid at best. As Crawford notes, “To date, one common industry response has been to sign AI ethics principles. As European Union parliamentarian Marietje Schaake observed, in 2019 there were 128 frameworks for AI ethics in Europe alone.” (Crawford, 2021).
Are there neutral arbiters or protectors for this situation? Even Mark Zuckerberg regretted bowing to government pressure regarding information during the pandemic (Korte, 2024). The point here is the same one Crawford makes: “The intelligence agencies led the way on the mass collection of data, where metadata signatures are sufficient for lethal drone strikes and a cell phone location becomes a proxy for an unknown target” (Crawford, 2021). Simply put, the control of data and its derivative uses is difficult to regulate in a way that is fair to all stakeholders. I suspect there is no end point for that effort but rather a situation that will require constant vigilance and courage.
References
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven and London: Yale University Press.
Duarte, F. (2024, June 13). Amount of Data Created Daily (2024). Retrieved from Exploding Topics: https://explodingtopics.com/blog/data-generated-per-day
Korte, L. (2024, August 27). Zuckerberg says he regrets caving to White House pressure on content. Retrieved from Politico: https://www.politico.com/news/2024/08/26/zuckerberg-meta-white-house-pressure-00176399
Lott, M. (2019, March 19). 10 times ‘experts’ predicted the world would end by now. Retrieved from Fox News: https://www.foxnews.com/science/10-times-experts-predicted-the-world-would-end-by-now
Our Short Conclusion
We believe the implications of AI for data privacy and surveillance are vast, with many being unboxed in real time. We need continued investigation of the ethics of AI in courses like AI 510: Ethics of Artificial Intelligence, wherein students work through their relationships to AI and declare their opinions about it. You’ll note specific themes related to security, data privacy and surveillance, and labor throughout all three of our manifestos. To us, these are effective places to begin your inquiries.
Please feel free to adapt Charles’s prompt and our manifestos as genre examples. Reach out to Drpcollective@gmail.com or Charles.woods@tamuc.edu for further information or other Atlas of AI prompts.
Works Cited & Referenced
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven and London: Yale University Press.
Stahl, B. C. (2021). Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. Springer.
Stahl, B. C., Schroeder, D., & Rodrigues, R. (2023). Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges. Springer.
Author Bios
Karon Jones is a PhD student at East Texas A&M University studying composition and rhetoric. Her research interests lie at the intersection of artificial intelligence (AI), disability studies, and counterstory. She is currently a dual credit ELAR teacher at a local high school.
Tamara George is a PhD student at East Texas A&M University studying composition and rhetoric. She is also a high school teacher in Texas.
Thomas Depew is a computer scientist and graduate student enrolled in the Master’s in Artificial Intelligence program at East Texas A&M University.
Charles Woods is an Assistant Professor and master’s program coordinator for English at East Texas A&M University. He studies digital rhetorics.
