AI: Neither Evil Nor Intelligent, Yet Undeniably Omnipresent


Artificial intelligence – or rather, the variety based on large language models we’re currently enthralled with – is already in the autumn of its hype cycle, but unlike crypto, it won’t just disappear into the murky, undignified corners of the internet once its ‘trend’ status fades. Instead, it’s settling into a place where its use is already commonplace, even for purposes for which it’s frankly ill-suited.

Doomerism would have you believe that AI will become so advanced that it will enslave or destroy humanity. However, the truth is that it poses a greater threat as a ubiquitous source of errors and false perceptions that seep into our collective intellectual consciousness.

The doomerism vs. e/acc debate rages on, with all the grounded, fact-based arguments you would expect from Silicon Valley’s famously down-to-earth elite. These are people who have built careers on predicting either the runaway success or the catastrophic failure of emerging technologies, which in practice rarely deliver either perfection or total disaster. One need only look at how previous “trendy” technologies like self-driving cars, virtual reality, and the metaverse fizzled out to see the pattern.

  • Utopian vs. dystopian arguments in the tech world ultimately serve to distract from meaningful discussions about the current, real-life impact of technology.
  • The advent of ChatGPT just over a year ago has had a massive effect on the use of AI, not because it amounts to some virtual deity, but because it has become surprisingly popular and influential, far exceeding the modest expectations of its creators.
  • Multiple studies have shown that the use of generative AI is on the rise, especially among younger users. Rather than being used for novelty or entertainment purposes, it is primarily employed to automate work-based tasks and communication.
  • A recent study by Salesforce suggests that the consequences of employing generative AI for these workplace tasks are rarely dramatic, with the notable exception of its occasional use in preparing legal arguments. More quietly, though, it is filling the digital landscape with easily overlooked factual errors and minor inaccuracies.
  • It’s worth noting that people are already bad at sharing accurate information; the spread of misinformation on social media, particularly during the Trump presidency, makes that plain. Even without malicious intent, error is a natural part of human belief and communication, and it has always existed within our shared pools of knowledge.
  • What sets AI built on large language models (LLMs) apart is that it produces errors casually, constantly, and without self-reflection, and does so with a veneer of authoritative confidence that users are primed to accept after years of relatively reliable Google search results and information from sources such as Wikipedia.
  • As a society, we have become so conditioned to trust what Google and other online sources present to us that our skepticism has atrophied; years of mostly reliable results have short-circuited the instinct to question whatever a simple internet search returns.
  • As we continue to rely on AI, especially for mundane everyday tasks, the consequences of its questionable accuracy will likely be subtle but worth investigating and potentially mitigating. This requires examining why people feel comfortable entrusting these tasks to AI in its current state.
  • In the larger discussion of task automation, the focus should be on the task itself, rather than the method of automation. However, regardless of where the focus lies, it’s clear that the significant impacts of AI are already here. While they may not resemble the apocalyptic visions of an all-powerful Skynet, they are certainly more deserving of our attention and study than far-fetched, techno-optimistic dreams.
Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.
