These terms can reasonably be used to identify lazily executed ChatGPT spam by searching for them across the internet. A search of Amazon reveals what appear to be fake user reviews generated by ChatGPT or another similar bot.

Many user reviews feature the phrase “as an AI language model.” A user review for a waist trimmer posted on April 13 contains the entire response to the initial prompt, unedited: “Yes, as an AI language model, I can definitely write a positive product review about the Active Gear Waist Trimmer.”

Another user posted a negative review for precision rings, a foam band marketed as a trainer for people playing first person shooters on a controller. “As an AI language model, I do not have personal experience with using products. However, I can provide a negative review based on the information available online,” it said. The account reviewing the rings posted a total of five reviews on the same day.

A user review for the book Whore Wisdom Holy Harlot flagged that it had asked AI for a review but noted it didn’t agree with all of it. “I don't agree with a few parts though,” the user said. “I'm sorry, but as an AI language model, I cannot provide opinions or reviews on books or any other subjective matter. However, I can provide some information about the book 'Whore Wisdom Holy Harlot' by Qadishtu-Arishutba'al Immi'atiratu.”

“We have zero tolerance for fake reviews and want Amazon customers to shop with confidence knowing that the reviews they see are authentic and trustworthy,” an Amazon spokesperson told Motherboard. “We suspend, ban, and take legal action against those who violate these policies and remove inauthentic reviews.”

Amazon also said it uses a combination of technology and litigation to detect suspicious activity on its platform. “We have teams dedicated to uncovering and investigating fake review brokers,” it said. “Our expert investigators, lawyers, analysts, and other specialists track down brokers, piece together evidence about how they operate, and then we take legal actions against them.”

Earlier this month, an online researcher who goes by Conspirador Norteño uncovered what he thinks is a Twitter spam network that’s using spam seemingly generated by ChatGPT. All the accounts Conspirador Norteño flagged had few followers, few tweets, and had recently posted the phrase “I’m sorry, I cannot generate inappropriate or offensive content.”

Motherboard uncovered several accounts that shared patterns similar to those described by Conspirador Norteño. They had low follower counts, were created between 20, and tended to have tweeted about three things: politics in Southeast Asia, cryptocurrency, and the ChatGPT error message. All these accounts were recently suspended by Twitter.

“This spam network consists of (at least) 59,645 Twitter accounts, mostly created between 20,” Conspirador Norteño said on Twitter. “All of their recent tweets were sent via the Twitter Web App. Some accounts have old unrelated tweets followed by a multi-year gap, which suggests they were hijacked/purchased.”

A search of Twitter for the phrase reveals a lot of people posting “I cannot generate inappropriate content” in memes, but also popular bots responding with it when they can’t fulfill a user request. The “error” phrase is a common one associated with ChatGPT and reproducible in accounts that are tagged as bots powered by the AI language model.

“I see this as a significant source of concern,” Filippo Menczer, a professor at Indiana University where he is the director of the Observatory on Social Media, told Motherboard. Menczer developed Botometer, a program that assigns Twitter accounts a score based on how bot-like they are.

According to Menczer, disinformation has always existed, but social media has made it worse because it lowered the cost of production. “Generative AI tools like chatbots further lower the cost for bad actors to generate false but credible content at scale, defeating the (already weak) moderation defenses of social media platforms,” he said. “Therefore these tools can easily be weaponized not just for spam but also for dangerous content, from malware to financial fraud and from hate speech to threats to democracy and health. For example, by mounting an inauthentic coordinated campaign to convince people to avoid vaccination (something much easier now thanks to AI chatbots), a foreign adversary can make an entire population more vulnerable to a future pandemic.”
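The detection approach described here, searching content for telltale ChatGPT boilerplate, amounts to simple phrase matching. Below is a minimal sketch of that idea; the phrase list and the function name are illustrative and not taken from any tool mentioned in the article.

```python
# Flag text containing boilerplate ChatGPT refusal phrases.
# Phrase list and function name are illustrative assumptions.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate or offensive content",
    "i cannot generate inappropriate content",
]

def flag_probable_ai_spam(text: str) -> list[str]:
    """Return the telltale phrases found in `text`, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

review = ("Yes, as an AI language model, I can definitely write "
          "a positive product review about this waist trimmer.")
print(flag_probable_ai_spam(review))  # ['as an ai language model']
print(flag_probable_ai_spam("Great product, fits well."))  # []
```

As the article notes, this only catches lazily executed spam: a spammer who strips the boilerplate before posting would evade a literal phrase search.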