
Project 2

  • Adam Lyda
  • Dec 5, 2025
  • 6 min read

The decision to focus my Investigative Field Essay on short form content stemmed from the proposed worry that television and film are becoming obsolete. Long form media risks growing unpopularity as platforms like Reels and TikTok capture a larger share of video viewing, thanks to their addictive nature and, importantly, their minimal production demands. Long form media, even when it remains more popular than short form video, can take years, millions of dollars, and thousands of contributors to yield a single product. One of the allures of short form content is the simplicity and ease with which it can be created, remixed, and mass produced. Part of the argument I made was the plain fact that producing short form content requires far less overall work and effort. Beyond film and television requiring large-scale effort and resources, the media discourse community as a whole faces a similarly rooted issue of time and effort. As with any job or service, media takes human ingenuity to create: a news article is thoroughly researched, a book is written, a TV show is a combination of writing, filming, acting, editing, and so on.

            Arguably the greatest technological achievement of the 2020s so far is the arrival of artificial intelligence as a tangible product. While artificially “intelligent” systems rooted in algorithms have existed for decades, extremely complex new systems have emerged within the last few years, most notably the Large Language Model. LLMs have revolutionized how people receive information and communicate with the internet, for both the good and the controversial. An AI language model can answer a person’s questions directly or present information within seconds, yet none of that content is original: AI is trained on work made by humans and sources across the internet with no permission or credit, essentially “stealing,” as some would say, from the actual creatives.


            Leaps in AI have not been exclusive to language, however. AI image generation, still primitive by comparison, is another part of this technological revolution that has only improved in the few years it has been available. Last year OpenAI, the creator of ChatGPT co-founded by Sam Altman, announced its AI video generation software “Sora.” Available only to select users, AI-generated video created ripples within the media discourse as to how it would be used, the jobs it could replace, and the “theft” it perpetuates. Media as a whole, and television and film especially, is considered art by many, from the acting to the sound design, visuals, scripting, and color grading. To outsource that creation to a machine at the whim of a short prompt undermines the creativity involved, and many worry that the ease with which AI video can be produced, on a timescale cut down to a momentary calculation, could be the demise of the industry and of the need for creatives as a whole.


The article “‘Sora Is Incredible and Scary’: Emerging Governance Challenges of Text-to-Video Generative AI Models” by Zhou, Choudhry, Gumusel, and Sanfilippo supports this argument, viewing the platform with a similarly cynical eye. Tracing the public reaction and fallout after the announcement, the authors describe a landscape of anxious creatives responding to the technology. From the first paragraph, the authors define the rhetorical situation clearly: people are torn between intrigue at such a complex technology and worry about what it could mean. They state that “public reactions to Sora oscillate between fascination and fear,” establishing the tone of the paper. Emphasizing that the response to a new technology is split between two camps is important, and it is clearly warranted here as the authors present this cautious response.


The main claim presented, however, is this: “the speed at which Sora was unveiled leaves limited time for policymakers, creators, and the public to understand or prepare for its consequences.” By laying out the dangers that come with the territory, the authors ground their argument in logos, emphasizing that regulation and restriction must be considered urgently and thoroughly for the benefit of all these groups. Maintaining objectivity is key for the authors of a scholarly article, as their goal is to provide an unbiased analysis. The rhetorical effectiveness lies in a balance between fascination and fear. They recognize the artistic potential of Sora and use a clever metaphor to once again insist on the benefits of monitored use: “Once released generative video content spreads virally, complicating containment of harms in ways comparable to biological contagions.” Through this imagery the evidence works by negative association, driven by pathos around a ‘virus’ that hits even closer to home for the average reader after Covid, another unforeseen, sudden shift in our modern world, while also reinforcing the authors’ ethos as responsible, objectively minded observers.


Equally important is the concern over authorship and copyright, as they argue that “Generative AI models like Sora raise existential concerns about authorship, ownership, and the marginalization of human creative labor.” Combining pathos and ethos, the authors worry about the economic and industrial fronts while also raising the question of creativity and personal attachment. By examining these ‘existential concerns,’ the authors open the philosophical dimension as well. The rhetoric is inherently academic but engages questions of morality in a way that interrogates AI from a number of angles, strengthening the cynical case.


Finally, the authors warn that “without transparency in datasets or guardrails against synthetic misinformation, the technology risks eroding public trust in visual evidence.” The tone dives further into the academic and the metaphorical here, as the authors argue that a lack of protective provisions could end up ‘eroding public trust.’ This appeal to trust functions as both a logical and an emotional anchor, pointing to the foundation of media consumption. If media consumption is built on trust to some degree, then Sora’s subversion of that trust could destroy the very foundation media rests upon, and with it media consumption as we know it. By describing this danger of a new technology in a formal tone and from a formal perspective, the authors create a rhetoric of unease and urgency around regulation. The article does not denounce AI video generation entirely; rather, it proposes an objective balance, a warning voice that emphasizes restraint working alongside progress.


On September 30th, 2025, OpenAI livestreamed the reveal of Sora 2, a greatly enhanced version of the original Sora model, this time publicly available through an app built around short form video and laid out in a format strikingly similar to TikTok. The announcement video for Sora 2 works as both advertisement and declaration of a turning point for the technology. Only two minutes long, the reveal is entirely AI generated, both visually and auditorily. Through curated AI visuals of consistent landscapes, people, and surreal places, OpenAI’s rhetorical appeal is rooted primarily in pathos. Rather than describe the technical workings, the video relies heavily on its own spectacle. Unlike Zhou’s academic caution, the Sora 2 video uses feeling rather than fact. The rhetorical message is not what Sora does but what it makes the viewer believe it can do, creating a sphere of amazement rather than addressing the concerns someone like Zhou might raise.


The voiceover, again part of the spectacle, is entirely AI generated. OpenAI’s co-founder Sam Altman serves as the main ‘guide’ to the video, describing Sora 2 as “the most powerful imagination engine ever built.” This lofty statement is classic tech marketing, sure, but it appeals to pathos, inviting the viewer to wonder at all the possibilities they now have the opportunity to create. With standout marketing lines like this carrying the video and the visuals made the main focus, little is actually explained about the engine, which leaves cautious observers like Zhou wary of an argument lacking in logos.


Compared to Zhou’s tone of restraint, the Sora 2 reveal is almost utopian. Where the article describes contagion and consequence, the video shows only potential. Its tone of aesthetic optimism depends on the omission of ethics, data, and authorship, leaving viewers, hypothetically, with only excitement. This absence of discourse is rhetorical in itself, functioning as a persuasive marketing tactic, and the lack of skepticism or regulation-oriented language contrasts sharply with Zhou’s insistence on responsible caution and implemented restriction.

With further elaboration and presentation, Sora 2 has the opportunity to dispel the worry it currently propels. The issue is that its initial rhetoric is almost entirely rooted in pathos, whereas Zhou’s argument considers both sides of the coin logically and reasonably. That would not be a problem for the reveal of something like a new yearly iPhone, but a product with the gravity and potential to uproot foundational societal norms, as Sora 2 has, calls for a reveal event less focused on spectacle and more grounded in reassurance and an objective reason for existing.

Works Cited

Zhou, Kyrie Zhixuan, Abhinav Choudhry, Ece Gumusel, and Madelyn Rose Sanfilippo. “‘Sora Is Incredible and Scary’: Emerging Governance Challenges of Text-to-Video Generative AI Models.” arXiv, June 2024, arxiv.org/abs/2406.11859.

OpenAI. Introducing Sora 2. YouTube, 30 Sept. 2025, www.youtube.com/watch?v=gzneGhpXwjU.

