First draft

Jacqueline Mendez Morales

Professor Kinch

Engl 1012

4/3/24

The role AI already plays in society shows how having AI stitched into the seams of social media is a massive disservice to people. Justin Grandinetti, the author of “Examining Embedded Apparatuses of AI in Facebook and TikTok,” states, “These terms came to public prominence in explanations of the role social media played in the 2016 US Presidential elections, and remain unfortunately relevant while circulating conspiracy theories and disinformation regarding the COVID-19 pandemic” (Grandinetti). In plain terms, Grandinetti explains how unreliable AI sources and corrupt AI news-spreading bots cause unnecessary chaos, especially during times of worldwide panic such as 2019–2020, when disarray was at an all-time high. Later in his text, he writes, “Nevertheless, both scholarly and popular assessments warn of the algorithmically assisted proliferation of disinformation along with the formation of ‘echo chambers’ (where individuals only interact with information that conforms to, not challenges, their beliefs) and their digital counterpart ‘filter bubbles’ (how personalized filters like search engines facilitate a unique universe of information for each of us)” (Grandinetti). Another major problem with AI on social platforms is that the misinformation it produces is targeted at different demographics. Hypothetically, AI could spread misinformation about two rival political parties to each other’s supporters, and before long people would be at war with one another over something that might not even be true.

Another disadvantage of AI is that it can impact people’s personal lives: an individual can come to rely on AI, which can lead to reduced human interaction. The article “Social media: generative AI could harm mental health” summarizes how AI in social media platforms has affected people’s mental health negatively. Greenfield and Bhavnani state, “Generative AI can learn users’ behaviors and produce content that mirrors their interests and emotional states. This could enable social media to target vulnerable users through pseudo-personalization and by mimicking real-time behavior” (Greenfield & Bhavnani). The authors point out how AI tracks people’s interests in search, generates more of that content, and leads people to fixate on and overthink whatever they are absorbing online. For example, if someone in a bad situation turns to social media looking for answers, AI feeds them the content it thinks they want, which could be wrong and could push them toward unhealthy lifestyles. Greenfield and Bhavnani add, “That exploits users’ trust in these relationships to increase their engagement and screen time while disregarding potential consequences such as the development of body dysmorphia and eating disorders and poor self-care” (Greenfield & Bhavnani). Based on their research, they believe people can be affected mentally, leading to depression, sleeping disorders, and other unhealthy habits. Having fewer interactions with humans leads to relying on AI, which can be harmful to one’s mental well-being. Therefore, AI could lead people into mentally risky lifestyles.

Additionally, AI has become more advanced and has produced different types of bots. ‘Social bots,’ for example, are AI accounts designed to gain people’s trust and make users believe they are interacting with other humans. In “Arming the Public with Artificial Intelligence to Counter Social Bots,” the authors explain how AI-driven bots can create an illusion that society can fall for. They state, “The honeypot accounts did end up having many followers. Subsequent analysis confirmed that these followers were indeed bots and revealed the common strategy of randomly following accounts to grow the number of social connections” (Yang). Social bots often steal personal information such as identities and profile pictures and pass themselves off as real human connections, which leads people to trust those accounts. In addition, the authors write, “This type of infiltration has been proven to be more effective when the target community is topically centered. Bots can identify and generate appropriate content around specific topics to gain trust and attention from people interested in those topics” (Yang). For that reason, it is difficult for individuals to trust what they see on social media, since they cannot know whether they are connecting with another person or a bot.

Even though the authors of “Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence” have hope for AI, they acknowledge the many dangers it brings to the open internet and social platforms. This is clear when they state, “Twitter has about 23 million social bots, accounting for 8.5% of total users. In addition, over two-thirds of tweets come from social bots… The result showed that 66% of the tweets are through suspected bots. These days, Twitter has become a vector for spreading misinformation” (Hajli). This is a huge problem because people who spread misinformation to gullible audiences might be taken seriously if they have a large number of followers to back them up, even if those followers are bots. Careers could be derailed if one person with millions of bot followers spreads a nasty rumor about someone else. The authors dive deeper into the frightening power of AI when they write, “Information and data manipulation is an example of the dark side of social robots. For example, as Kudugunta and Ferrara mentioned during the 2010 US midterm elections, malicious social bots were employed to support some candidates with fake news” (Hajli). According to this source and many others, AI is a wax that can be molded by anybody, with all sorts of intentions. Corrupt politicians could take over cities and countries by making themselves look good and their competitors look quite the opposite. This could create political wars, making AI that much more dangerous.

Digitally generated images are another disadvantage of AI: combined with social media platforms, AI can produce what are called ‘deepfakes.’ In his article “Artificial Intelligence in Digital Media: The Era of Deepfakes,” Stamatis Karnouskos explains how these artificial images and videos can spread false information and steal the identity of a person or group. He states, “AI has demonstrated recently, the capability of creating realistic fake videos, as it can, e.g., take an existing video and superimpose a person’s photograph on the face of the main character, or alter the voice of someone to say or do things that do not adhere to reality and never were performed” (Karnouskos). He explains that fabricating portraits that are not real is risky because people cannot tell whether what they are seeing is true, now that AI has developed this technique. He adds, “Many times new technologies that are not well understood have been misused, as there is a lack of appropriate regulatory frameworks in place, e.g., for robotics [33], [34]” (Karnouskos). In other words, AI leaves society confused about whether an image is real or fabricated. This gives another reason why AI has affected society negatively: deepfakes spread misleading images that leave people unable to tell what is false and what is real.

Although many believe AI has harmed society, some think otherwise and see real benefits. The authors of “Emotional Talk About Robotic Technologies on Reddit” believe AI technology can help society become more motivated by providing more assistance through these bots. They state, “In a way, the new era of computational linguistics leans on a social psychological research tradition that stresses the significance of collective conceptions carried through social representations, while a more cognitive approach to representations describes how mental representations are activated from individual’s memory” (Savela). In this way, AI could help examine an individual’s perspective and develop the knowledge to guide them. Furthermore, they write, “Discussions about robotic technologies and whether they represent an advancement or a threat for the future of humanity have interested societies across time and around various advanced technology inventions since the beginning of industrial automation” (Savela). The authors point out that AI has some positive impacts on society, but those impacts could escalate in the future. AI can be very helpful, yet it can also grow much stronger and more advanced, which could let it take control over society and lead to disadvantages and misunderstandings for people.
