
M3GAN and ChatGPT – A Critique of Contemporary AI?

In an interview about OpenAI’s ChatGPT, Matt Murray of the Wall Street Journal asks Microsoft CEO Satya Nadella, ‘do we need to learn math anymore? Why learn math?’[i]

The New York Times article ‘Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach’ retells the story of a teacher catching a student using ChatGPT to cheat on a paper. The teacher described the AI-generated essay as ‘the best paper in the class […] with clean paragraphs, fitting examples and rigorous arguments’.[ii]

ChatGPT is a large language model chatbot, which means it has been trained on extremely large datasets in order to generate answers to text-based prompts. Large parts of the internet are scraped to gather the text that trains ChatGPT to give effective and well-written answers; put plainly, ‘these algorithms are shown a bunch of text in order to understand how text functions’.[iii]
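To make that idea concrete, here is a minimal sketch in Python of the core principle: a toy model is ‘shown a bunch of text’, counts which words tend to follow which, and then generates new text from those counts. This is an illustrative miniature under loose assumptions, not OpenAI’s actual method; ChatGPT uses neural networks with billions of parameters trained on web-scale data.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the 'large parts of the internet'
# that real language models are trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Training: record, for each word, the words observed to follow it.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break  # no observed continuation; stop generating
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. 'the cat sat on the mat and the cat'
```

Scaled up from simple word counts to billions of learned parameters, this next-word prediction loop is the basic mechanism behind the fluent answers ChatGPT produces.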

ChatGPT has been hailed as the most advanced chatbot currently available[iv] and has been met with existential questions about the future of job security and education.[v] Indeed, even the term AI (artificial intelligence) is contested, as the name suggests genuine computer-simulated cognitive intelligence. At best we currently possess narrow AI, defined as AI that can solve a specific task, like Google Maps planning a route or Amazon’s Alexa playing a song you asked for.[vi] However, proponents of AI tend to speak of their systems as ‘advanced AI that benefits everyone’ to hype up their products.[vii] Of particular note right now is Silicon Valley’s ‘AI arms race’[viii] between Google and Microsoft. Google has long reigned supreme in AI development; however, Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, in hopes of challenging Google in search and cloud computing.[ix] We can expect to see multiple AI products emerging from these companies, each touting its superiority over the other.

But what are the use cases for AI? Microsoft’s Nadella, in response to Murray’s question, praised the educational potential of ChatGPT, which he argued could become a ‘personalised tutor’ to ‘help you learn’.[x] This is a somewhat fair argument, as AI has assisted doctors in detecting cancer cells[xi] and helped scientists detect deadly methane leaks.[xii] However, others have rightfully pointed to the erosion of job security for many professions, including, but not limited to, journalists, coders, artists, graphic designers, and perhaps even teachers and academics.[xiii] Others still have gone further, pointing to the theft of intellectual property used to train technologies like the large language models behind ChatGPT or the text-to-image generative AI systems that create images out of scraped data.[xiv]

To tackle some of the threats to education posed by ChatGPT, American universities such as Harvard University and the University of North Carolina highlight the need to move beyond mere test scores, which are easily cheated (ChatGPT can pass law exams reasonably well), and to place emphasis on displays of human experience and values during the admissions process.[xv] However, Drake Bennett notes that while it may be easier to make such tech-resistant changes within education, it is more difficult to know ‘how much generative AI will change the world students graduate into. If ChatGPT can be an OK law school student, maybe it can be an OK lawyer. And to someone looking for legal advice, it’s the work product—and often, the price—that matters’.[xvi]

At the centre of such discussions are questions about whether AI is a tool to aid humanity or whether AI will conquer us all. Cancer detection and methane leak detection are good examples of tools that help us. Importantly, humans are still “in the loop” when using these tools. However, software like text-to-image generative AI, which scrapes the internet, steals artists’ work, and provides the end user with free images or designs that would normally require paying a skilled human, removes the necessity of human expertise.[xvii] The removal of the human overseer tends to be where the threat lies. Asking questions like ‘Do we need to learn math anymore?’ or ‘Can ChatGPT be an OK lawyer?’ therefore creates an uneasy feeling. Such a world removes humans from the practice of math or law and can lead to situations where humans can no longer check whether equations are being solved correctly or whether AI is making just decisions. A healthy level of mistrust should be maintained to avoid disastrous overreliance on potentially flawed systems.[xviii]

M3GAN and the Removal of the Human

The potential removal of the human is at the heart of the film M3GAN (2023). Directed by Gerard Johnstone, M3GAN is a science fiction horror (with elements of comedy) about an AI children’s doll called M3GAN, which stands for Model 3 Generative Android. After the death of her parents, young Cady is left in the custody of her aunt Gemma, a roboticist at a high-tech toy company called Funki. Gemma builds the M3GAN doll as a companion for Cady who is struggling with the tragedy, and as an aid for Gemma in the unfamiliar territory of parenthood and guardianship.

Gemma impresses her boss with the M3GAN doll and gets caught up in its worldwide release, allowing Cady and M3GAN to become unhealthily attached. M3GAN becomes increasingly advanced and protective of Cady, worrying Cady’s therapist and Gemma’s friends. M3GAN becomes homicidal in her mission to protect Cady and eventually becomes self-aware and murderous in her own self-protection. By the film’s close, Gemma realises that Cady is her number one priority and destroys M3GAN, although a menacing image of Gemma’s Alexa-like smart home device lingers in the final moments of the film.

M3GAN lies somewhere between traditional science fiction anxieties about the human being replaced by the machine, and Black Mirror-esque critiques of today’s tech-heavy capitalism. Though often goofy and far-fetched, which is part of the fun of the film, M3GAN expresses several concerns that can easily be related to problems in real-world AI systems.

M3GAN’s fast evolution speaks to concerns about machine learning in real-world AI, a popular technique whereby algorithms autonomously adjust themselves to improve their outputs. Computer scientists themselves often cannot account for how these models work, as calculations are made deep within the models, meaning that identifying and fixing errors can be challenging. Real-world AI systems are often trained on large datasets that cannot be fully audited, documented or accounted for, meaning that scientists do not fully know what data their machines are “learning” from.[xix]
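As a drastically simplified illustration of that self-correcting loop, the Python sketch below fits a single-parameter model by gradient descent; the data and numbers are invented for the example. Real systems run the same kind of loop over millions or billions of parameters, which is one reason their inner workings are so hard to audit.

```python
# Hypothetical training data for a model that should learn y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the model's single learnable parameter
learning_rate = 0.05

for step in range(100):
    # Measure how wrong the current predictions are (the gradient of
    # the mean squared error with respect to w)...
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # ...and nudge the parameter in the direction that reduces the error.
    w -= learning_rate * grad

print(f"learned w = {w:.3f}")  # converges towards 2.0
```

With one parameter, the self-correction is fully transparent; with billions, no one can point to the line of reasoning behind any particular output.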

There are efforts by researchers in the US, informed by the EU’s GDPR and AI Act[xx], to implement measures demanding that companies provide ‘system audits, documentation and data protocols (for traceability), AI monitoring, and diversity awareness training’ to improve the transparency of their AI systems.[xxi] But, to date, Congress has passed no laws regulating AI systems. Ted Lieu, a Democratic member of the U.S. House of Representatives from California, hopes to change this: he intends to introduce ‘legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply’.[xxii] Such regulatory efforts are welcome, but they often take many years, in contrast to the fast pace of AI development. It remains to be seen whether ChatGPT will be required to reveal its vast training data.

Machine Learning and the Paperclip Thought Experiment

M3GAN’s machine learning capabilities are a major theme of the film, which takes a dystopian turn when her development runs away from Gemma’s original concept. The doll autonomously learns from data scraped from the internet at large, which begins innocently, with learning how to parent Cady, but then fuels her path towards menacing sentience. M3GAN’s predilection for homicide parallels real-world chatbots like Microsoft’s ‘Tay’, which was released on Twitter in 2016 for users to chat with. Within just 24 hours, Tay was taken down after it posted numerous violently racist, anti-Semitic, and sexist tweets.[xxiii] Both the fictional and the real-world events point to the need to carefully document machine learning training data, as researchers such as Timnit Gebru have demanded for a number of years.[xxiv]

M3GAN quickly becomes obsessed with protecting Cady, be it from Gemma’s attempts to parent her or from a school bully. This obsession stems from Gemma’s prompt to protect Cady, a throwaway comment that a real-world computer scientist should know would be disastrous when creating an AI system. The danger is exemplified by the paperclip thought experiment of Nick Bostrom (2014), a philosopher at the University of Oxford. In Joshua Gans’s summation:

‘The thought experiment goes like this: suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips.

It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat (think The Terminator).

This AI is much smarter than us, so it is likely to win that battle. We have a situation in which an engineer has switched on an AI for a simple task but, because the AI expanded its capabilities through its capacity for self-improvement, it has innovated to better produce paperclips, and developed power to appropriate the resources it needs, and ultimately to preserve its own existence’.[xxv]

M3GAN could easily be read as a retelling of this thought experiment. Machines cannot make inferences, read between the lines, or understand nuance and subtext. Morality is a human construct, and the meaning of morality itself is still up for debate, so we cannot expect machines to make moral decisions or even understand such a concept.
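A toy Python sketch can make the logic of the thought experiment explicit. The objective below counts only paperclips, so any resource not written into it, however valuable to humans, is treated as raw material; the resource names and the one-to-one conversion rate are invented for illustration.

```python
# Resources in the toy world; their human value appears nowhere
# in the agent's objective.
resources = {"steel": 100, "farmland": 50, "hospitals": 10}

def objective(paperclips: int) -> int:
    # The only thing the agent is told to care about.
    return paperclips

paperclips = 0
for resource, amount in list(resources.items()):
    # Converting anything increases the objective, so the agent converts
    # everything; 'farmland' and 'hospitals' matter to us, not to it.
    paperclips += amount
    resources[resource] = 0

print(objective(paperclips), resources)  # 160 paperclips, nothing else left
```

Nothing in the code is malicious; the harm comes entirely from what the objective leaves out, which is exactly the gap between ‘protect Cady’ and what Gemma actually meant.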

That M3GAN was instructed simply to ‘protect’ Cady means she will kill if necessary. Though we are far away from the artificial general intelligence presented in M3GAN, if it is even possible at all, even narrow AI trained on data scraped ad hoc from the internet will attempt to solve the task at hand, with similarly imprecise and unintended results. There are several reported instances of ChatGPT hallucinating answers, giving convincingly well-argued responses that are entirely false: for example, its insistence that a kilo of beef weighs more than a kilo of air, when of course both weigh exactly one kilo.

Though often comedic, the consequences of such hallucinations take a darker turn. Fake facts can and will seep into public discourse and further exacerbate problems of disinformation.[xxvi] Though we may be far away from killer dolls (even if we do have killer robots[xxvii]) or super-intelligent paperclip makers, it is still worth investigating what tasks AI systems are being given to solve and what information they are given to solve them.

The central conflict of M3GAN is between Cady and Gemma and their ability to form a relationship. This speaks to the anxiety about the replacement of the human, as Gemma unwittingly outsources the task of parenting to M3GAN. What was meant to be a tool becomes her replacement.

Large language models like ChatGPT and other machine learning AI do not have humans in the loop; indeed, their creation is often premised on the removal of the human. The machine learning process is autonomous by design, so humans are removed at the creation level. Their implementation into real-world situations is likewise premised on the removal of the human: replacing customer service agents with chatbots can drastically cut staff costs for businesses, as can replacing workers in any other profession whose tasks can be outsourced to machines.

Beyond issues such as reskilling the masses of workers displaced by automation, or the environmental and economic costs of running such data-heavy, energy-guzzling applications, machines have been shown to be erroneous at best and dangerous at worst. The film essentially calls for humans to remain in the loop, a lesson to bear in mind as we enter 2023, prophesied by many to be the year of the AI explosion.[xxviii] As technology ethicist Kate Darling nicely summarises, we must be cautious about the tasks we outsource to machines:

‘I’m not concerned about what I saw in the [M3GAN] trailer happening in real life – the AI becoming too intelligent and not listening to commands,’ Darling said. ‘I am concerned about whether AI should be used to replace human ability in relationships, and the answer is no.’[xxix]

[i] Wall Street Journal. “Satya Nadella: Microsoft’s Products Will Soon Access Open AI Tools Like ChatGPT | WSJ,” January 17, 2023. https://www.youtube.com/watch?v=UNbyT7wPwk4.

[ii] Kalley Huang. “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach.” The New York Times, January 16, 2023. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html?te=1.

[iii] Whitney Terrell and V.V. Ganeshananthan, “Chatbot vs. Writer: Vauhini Vara on the Perils and Possibilities of Artificial Intelligence,” interview of Vauhini Vara, Literary Hub, January 26, 2023, https://lithub.com/chatbot-vs-writer-vauhini-vara-on-the-perils-and-possibilities-of-artificial-intelligence/.

[iv] ChatGPT Pro. “ChatGPT: The Most Advanced AI Chatbot in 2022,” January 21, 2023. https://chatgpt.pro/.

[v] A simple Google search will turn up fear-mongering articles such as: Rob Waugh. “‘AI Will Take 20% of All Jobs within Five YEARS,’ Expert Warns.” Mail Online, January 23, 2023. https://www.dailymail.co.uk/sciencetech/article-11655443/amp/AI-20-jobs-five-YEARS-expert-warns.html.

[vi] See: Paris Marx, “Don’t Fall for the AI Hype,” interview of Timnit Gebru, January 19, 2023, https://open.spotify.com/episode/7M0JirnYw6wN6l2lHt09Zx?si=ad34b909a87e459b and Silicon Republic, “AI & U,” interview of Abeba Birhane (Headstuff Podcasts, November 2020), https://open.spotify.com/episode/4KQcwQaYzRcLQn09mEGm54?si=161f0b8f6e28476f.

[vii] Tim Bradshaw and Cristina Criddle, “Microsoft Confirms ‘Multibillion-Dollar Investment’ in ChatGPT Maker OpenAI,” Financial Times, January 23, 2023, https://www.ft.com/content/298db34e-b550-4f80-a27b-a0cf7148f5f6.

[viii] Richard Waters and Madhumita Murgia, “How Will Google Solve Its AI Conundrum?,” Financial Times, January 26, 2023, https://www.ft.com/content/f61d1e9d-caec-4a0e-a9bd-364c13dc2aa8.

[ix] Bradshaw and Criddle, “Microsoft Confirms ‘Multibillion-Dollar Investment’ in ChatGPT Maker OpenAI.”

[x] Wall Street Journal, “Satya Nadella: Microsoft’s Products Will Soon Access Open AI Tools Like ChatGPT | WSJ.”

[xi] Ian Tucker. “AI Cancer Detectors.” The Guardian, June 10, 2018. https://www.theguardian.com/technology/2018/jun/10/artificial-intelligence-cancer-detectors-the-five.

[xii] Sonia Fernandez-Ucsb. “AI Makes Detecting Methane Leaks Less Confusing.” Futurity, March 10, 2020. https://www.futurity.org/methane-emissions-detection-artificial-intelligence-hyperspectral-imaging-2301552/.

[xiii] See: Annie Lowrey. “How ChatGPT Will Destabilize White-Collar Work.” The Atlantic, January 20, 2023. https://www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-economy-automation-jobs/672767/. Read about how Meta tried to create an academic paper generator which failed in two days: Jackson Ryan, “Meta Trained an AI on 48M Science Papers. It Was Shut Down After 2 Days,” CNET, November 20, 2022, https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days/. However, fears about AI may just be continuing a trend of automation fears. See these two reports for early predictions: The Economist. “A Study Finds Nearly Half of Jobs Are Vulnerable to Automation.” The Economist, April 24, 2018. https://www.economist.com/graphic-detail/2018/04/24/a-study-finds-nearly-half-of-jobs-are-vulnerable-to-automation and Office for National Statistics. “Occupations and the Risk of Automation.” GOV.UK, March 25, 2019. https://www.gov.uk/government/statistics/occupations-and-the-risk-of-automation.

[xiv] See: Jathan Sadowski and Edward Ongweso Jr, “How AI Makes Living Labor Undead,” January 19, 2023, https://open.spotify.com/episode/4cORLcMG1jJcudpok8QR6p?si=cbf6b72f80a64927. Also see the canonical work on the inherent biases of large language models: Emily M. Bender et al., “On the Dangers of Stochastic Parrots,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, https://doi.org/10.1145/3442188.3445922.

[xv] Drake Bennett, “ChatGPT Is an OK Law Student. Can It Be an OK Lawyer?,” Bloomberg, January 27, 2023, https://www.bloomberg.com/tosv2.html?vid=&uuid=976437dc-9fdc-11ed-bb6a-646d4e624244&url=L25ld3MvbmV3c2xldHRlcnMvMjAyMy0wMS0yNy9jaGF0Z3B0LWNhbi1oZWxwLXdpdGgtdGVzdC1leGFtcy1pdC1tYXktZXZlbi1vZmZlci1sZWdhbC1hZHZpY2U/Y21waWQ9QkJEMDEyNzIzX1RFQ0g=.

[xvi] Bennett, “ChatGPT Is an OK Law Student. Can It Be an OK Lawyer?”

[xvii] Chloe Xiang, “Artists Are Revolting Against AI Art on ArtStation,” Vice, December 14, 2022, https://www.vice.com/en/article/ake9me/artists-are-revolt-against-ai-art-on-artstation.

[xviii] See: Janet Haven, “ChatGPT and the Future of Trust,” Nieman Lab, 2022, https://www.niemanlab.org/2022/12/chatgpt-and-the-future-of-trust/.

[xix] For an excellent article on the difficulties of machine learning, see: Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3, no. 1 (January 6, 2016), https://doi.org/10.1177/2053951715622512.

[xx] “The Artificial Intelligence Act,” Future of Life Institute (FLI), November 28, 2022, https://artificialintelligenceact.eu/.

[xxi] François Candelon et al., “AI Regulation Is Coming,” Harvard Business Review, August 30, 2021, https://hbr.org/2021/09/ai-regulation-is-coming.

[xxii] Ted Lieu, “Opinion | AI Needs To Be Regulated Now,” The New York Times, January 23, 2023, https://www.nytimes.com/2023/01/23/opinion/ted-lieu-ai-chatgpt-congress.html.

[xxiii] Amy Kraft, “Microsoft Shuts down AI Chatbot, Tay, after It Turned into a Nazi,” CBS News, March 26, 2016, https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ and James Vincent, “Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less than a Day,” The Verge, March 24, 2016, https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.

[xxiv] For more on Timnit Gebru: Dylan Walsh, “Timnit Gebru: Ethical AI Requires Institutional and Structural Change,” Stanford University Human-Centered Artificial Intelligence, May 26, 2022, https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change.

[xxv] Joshua Gans, “AI and the Paperclip Problem,” CEPR, June 10, 2018, https://cepr.org/voxeu/columns/ai-and-paperclip-problem.

[xxvi] Josh Schwartz, “The AI Spammers Are Coming,” Nieman Lab, 2022, https://www.niemanlab.org/2022/12/the-ai-spammers-are-coming/.

[xxvii] James Clayton and Ben Derico, “San Francisco to Allow Police ‘Killer Robots,’” BBC News, November 30, 2022, https://www.bbc.co.uk/news/technology-63816454.

[xxviii] See articles like: Kelsey Piper, “From Image Generators to Language Models, 2023 Will Be the Year of AI,” Vox, January 4, 2023, https://www.vox.com/future-perfect/2023/1/4/23538647/artificial-intelligence-chatgpt-openai-google-meta-facial-recognition.

[xxix] Alaina Demopoulos, “How Soon Will M3gan Become Reality? Robot Ethicists Weigh In,” The Guardian, January 19, 2023, https://www.theguardian.com/film/2023/jan/16/megan-film-robot-ai-ethics-real-life.

About the Author

I am a graduate of Edinburgh Napier University, where I received a first-class BA (Hons) in English Literature and Film. I am currently studying for an MA in Digital Culture and Society at King's College London. Since graduating with my BA, I have kept a film and television blog and won Highly Commended at the Global Undergraduate Awards in 2018. The paper I submitted for that award has since been published, as has a conference review about New Research on American Literature and Neoliberalism. My research interests involve media, including social media and streaming platforms, and the changing ways this media is consumed, including the implications media has for wider culture. I am also interested in postmodernity, science fiction, neoliberalism, capitalism, existentialism, and the nature of reality and human subjectivity in an age when "Truth" is constantly being questioned.