AI is Corrupting the Internet as We Know It

A digital artwork of a person wearing headphones, sitting at a desk with a laptop, against a vibrant pixelated cityscape at sunset. The scene conveys a futuristic, tech-inspired ambiance.

The internet is being overrun by bogus AI imagery and text. The question is, what are we going to do about it? The internet has always had a problem with misinformation, but AI is accelerating it with a deluge of fabricated content. Is it not important that truth be determined by how well it matches up with reality?

We have all seen the social media posts, the advertisements, the political rhetoric, and the cute little feel-good memes overwhelming our feeds. The general public is not prepared to deal with this onslaught, and the reality we know online is slowly but surely becoming skewed and corrupted. It is all happening at a pace that is hard to fathom.

My concern is this: what happens when there is more misinformation on the internet than information grounded in fact? An estimated 15.47 billion AI images had been created as of August 2023, and roughly 34 million new AI images are generated every single day. To put that in perspective, Google is estimated to have only about 136 billion images indexed on its servers. The main players in the online sphere (Google, Microsoft, and Adobe) have had this problem on their radar for over a year already. A recent article asked, “What will stop AI from flooding the internet with fake images?”
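To put the pace in perspective, here is a quick back-of-the-envelope calculation using only the figures quoted above. It is a rough sketch that assumes the daily rate stays flat, which it almost certainly will not.

```python
# Rough projection using only the article's figures (not a forecast):
# how long until cumulative AI images rival Google's indexed image count?
ai_images_aug_2023 = 15.47e9   # AI images created as of August 2023
ai_images_per_day = 34e6       # new AI images per day
google_indexed_images = 136e9  # estimated images indexed by Google

days_to_parity = (google_indexed_images - ai_images_aug_2023) / ai_images_per_day
print(f"~{days_to_parity:,.0f} days (~{days_to_parity / 365:.1f} years) at a flat rate")
# -> roughly 3,545 days, or about 9.7 years - and generation rates keep climbing.
```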

Just the other day I saw a social media post about a special “flowering tree” that had thousands of likes and shares. Comments included “Mother Nature can produce the most beautiful plants,” “I wish I had one in my backyard,” and “God did an amazing job designing this tree.” To some it is obvious that this tree was fabricated by AI and does not exist, and never will, but others seem to believe it is a real species. The phenomenon became even more concerning when someone began selling seeds of this majestic tree.

A large, distinct tree with a broad, gnarled trunk supporting a vibrant crown of oversized, pink lotus-like flowers, juxtaposed against a background of traditional eastern architectural structures.

Another common issue is the feel-good meme built on fake AI imagery. The recent pigeon meme is a perfect example. The image was accompanied by four paragraphs of text describing how the modern-day pigeon is a fabrication of man and that “They love us because they were bred by us to feel that way, and yet we hate them.” It drew hundreds of thousands of likes and shares, plus comments like “We need to treat these birds better,” “This makes me so sad,” and “We need to care for the animals on this planet.”

A pigeon tucks itself into the corner of a city building's ledge, surrounded by small pieces of litter like cans and leaves, looking plump and cozy despite the urban environment.

Is it not obvious to viewers that this is not a photograph of a bird, and that the message, regardless of its purpose, is tainted by the inclusion of such imagery? Look at the trash in the foreground: that is a paper cup at the bird’s feet. If the image were to scale, that bird would stand over five feet tall. Does anyone care about such details?

I did an experiment and pushed back against this fake image, and I was astounded at the responses I received. Many people did not care, and many blocked me once I pointed out the obvious problem with the misinformation they were sharing. Some were embarrassed; others argued that this was a real photograph of a pigeon. I was told “I only care about the message” and that “it does not matter that this is a fake bird.” The problem is further compounded when groups such as “Beautiful Birds” flood Facebook with AI depictions of birds. Even the National Audubon Society had to address the concern in a recent article on its site titled “What Does Generative AI Mean for Bird and Nature Photography?” Does anyone care about reality?

An adult bird with speckled feathers sitting on a branch, closely nurturing three chicks whose beaks are open, amidst a backdrop of lush green leaves.

A vibrant spotted pitta bird with a blue face, orange beak, and speckled golden-brown body perched on a branch, against a blurred green background.

Another popular theme in recent memes is children in the developing world making the best of their situation. The below image of a non-existent African child using ingenuity and creativity received comments such as “what a resourceful young man,” “he is playing and helping the environment at the same time,” “how creative!” and “he should make these and sell them.”

A young boy in a costume made from recycled water bottles, designed to look like a space suit and a scorpion, stands in a dusty street lined with bicycles and buildings.

We should all be aware of another way AI is inundating the online world: fake celebrity posts. The below images are not of Lily Gladstone and Jennifer Aniston. There are fan pages dedicated to flooding our feeds with bogus, half-naked imagery of our favorite celebrities. Nothing gets clicks like scandal or bathing-suit images of a famous actress. These pages use the images to advertise products and, in many cases, strip the clothes off the celebrities in a ploy to get more clicks.

Obviously, these celebrities did not approve of these images, but there are so many of them that the celebrities are finding it impossible to keep up. You would need a team of lawyers to stop the onslaught of misleading AI imagery. With each new wave, the images become more revealing and obscene. There are full-frontal nudes of most celebrities available now, courtesy of AI. We are quickly realizing that nobody is immune. If the desire is there, it can be produced. But once again, who cares? What if it were your photo, or your wife or daughter, being replicated in this manner? Would it matter then?

A woman with long dark hair, wearing a black t-shirt with a red and white nature-themed graphic, yellow shorts, and carrying a pink shoulder bag, stands beside a parked car in a parking lot.

A woman in a blue bikini top and jeans walks confidently down a city street, with a man partially visible behind her. She carries a black handbag and wears sunglasses.

Let’s talk about false advertising for a moment. A large share of advertisements on social media now use AI imagery. There are images of so-called 70-year-old men looking like Arnold Schwarzenegger in his prime; all you have to do is take a supplement for 30 days. A company is using the below image to sell its gray-hair product, claiming that yellowing will be reduced and that you can have hair that looks like this with just a couple of applications. The problem is, this is not real hair, and this man does not exist.

Profile view of an older man with distinguished silver hair and beard, showing detailed facial features against a soft orange background.

Is it a concern that a company selling a shirt is using an AI representation of said shirt rather than showing the real product? Does a real photograph not provide information that matters for a purchase decision: how the fabric looks, how it hangs on the body? The below image is from a shirt brand, and guess what? Neither the shirt nor the model is real, so when you check out, what are you actually purchasing? It seems to me we have reverted to the early 1900s, when drawings of products were all the rage in print catalogs. This is the kind of improvement AI brings to the advertising space. Are consumers that gullible, and is there a disconnect between what is real and what is not? I think I know the answer.

A muscular man wearing a fitted light gray polo shirt with a collar and a buttoned pocket, paired with a dark belt. Only his torso is visible in the image.

I am going on a trip to Aruba with my family in June. Once the trip was booked, I started to see advertisements for travel destinations around the globe. The below image really caught my eye. The post was from a travel group advertising fantastic vacations and locales around the world. When I took a closer look, many of the inhabitants of this beautiful tropical paradise had no arms. A couple of people were missing faces, and one person had legs coming out of their chest. Either this location is near Chernobyl or the AI tool was hallucinating, as it often does. “Hallucinating” is what the tech industry calls it when AI makes a mistake, fabricates facts, or simply lies to the viewer. I think we should call it what it is: deceit. Can you imagine contacting your local travel agent and trying to book tickets to a place that does not exist, has not existed, and will never exist?

A vibrant and bustling pool scene at a resort with characteristic white and turquoise architecture, reminiscent of a Mediterranean island, filled with people enjoying sunny weather.

The final and most important issue I am seeing involves political threads. One of the topics on everyone’s mind is the ongoing conflict between Israel and Hamas. The below image was recently posted as a call for sympathy for the children in the Gaza Strip. Regardless of where your opinion lies on the topic, this image is a fake and needs to be called out. It is the furthest thing from photojournalism, and what is at stake is tremendously important. Every day, more fake news sites find footing online. The truth matters, and our democracy may be at risk.

Two children are sleeping on dirt ground inside a dimly-lit tent, covered in mud. They appear exhausted, and the ambiance suggests a challenging living environment.

AI Cannibalizing Itself

In recent months there has been growing concern in the tech sector about AI models training on previously generated AI imagery and text. For AI to improve and make additional leaps, it needs more training data with every new version, and that data is scraped from the web at massive scale. As the internet becomes more inundated with artificial content, more and more of the material harvested in each subsequent scrape is bound to be earlier AI output.

InsideBigData recently reported: “Whenever we are training an AI model on data which is generated by other AI models, it is essentially learning from a distorted reflection of itself. Just like a game of ‘telephone,’ each iteration of the AI-generated data becomes more corrupted and disconnected from reality.” The internet, in the not-too-distant past, was a place where some truth and facts could be located. As AI becomes more prominent, it may become a vast wasteland of bad information. Will it get so bad that nobody will want to use it as a tool, because the effort needed to find the truth is so time-consuming? I guess we will find out.
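As an illustration of that “telephone” effect, here is a minimal toy simulation of my own (a sketch, not the method of any cited study): each generation fits a simple statistical model to samples produced by the previous generation, and the model’s estimate of the data’s spread tends to drift and shrink as errors compound.

```python
# Toy sketch of "model collapse": each generation fits a Gaussian to data
# sampled from the previous generation's fit, then the next generation
# trains only on that synthetic output. Statistical error compounds, and
# the estimated spread tends to drift away from the true value of 1.0.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "real" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()          # "train" on the current data
    data = rng.normal(mu, sigma, size=50)        # next generation sees only model output
    if generation % 40 == 0:
        print(f"generation {generation:3d}: estimated mean={mu:+.2f}, std={sigma:.2f}")
```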

So, What Can Be Done?

Let us face it: the genie is out of the bottle. At this point, it is important for everyone to acknowledge the problem. Raising awareness will help educate the public, and collectively we can put pressure on policymakers and corporate executives to do the right thing. It is extremely important that this fantastic resource, which has helped society in so many ways, be preserved, and that information on the World Wide Web remain, for the most part, grounded in reality. If you have family or friends posting content that is clearly AI, let them know. Many people, especially our elderly, do not have the tools to determine what is fake and what is real. Things were not always like this; not long ago, if someone saw a photograph, they could rely on it. That is no longer the case.

As consumers, we can also let advertisers know that we want to see real photographs of the actual products they are hawking. A fake shirt on a fake AI model does nothing for confidence when purchasing items online. It is especially important that AI images not mislead consumers. Maybe “false advertising” laws need to be revisited and updated to account for this new type of technological fraud. If companies cannot be honest, then avoiding or boycotting their products altogether may be the quickest route to a solution.

Disclosure may be our best way to see these images for what they are. In a recent push, Meta, the company that owns Facebook and Instagram, is demanding that AI generators “watermark” all images. That would allow viewers to quickly identify suspect imagery up front and know the image is in question. Experts were hoping that “self-disclosure” by the person posting would save the day, but that is clearly not the case. When the incentive to gain viewers, views, and likes is there, honesty and integrity usually take a back seat. We now know that we cannot depend on the person sharing the fake information to do the right thing.
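As a sketch of what disclosure could enable, the snippet below scans a file’s raw bytes for marker strings that provenance and labeling standards tend to embed, such as the C2PA manifest label or the IPTC “trainedAlgorithmicMedia” source type. This is only a crude heuristic of my own, not a real verifier: a proper check would parse and cryptographically validate the manifest, and the absence of markers proves nothing, since most AI images carry no disclosure at all.

```python
# Crude heuristic sketch: look for provenance/disclosure marker strings in a
# file's raw bytes. A real verifier would parse and validate the signed
# manifest; finding nothing here does NOT mean an image is authentic.
from pathlib import Path

PROVENANCE_MARKERS = [b"c2pa", b"trainedAlgorithmicMedia"]

def disclosure_markers(image_path: str) -> list[str]:
    """Return any known marker strings found in the file's raw bytes."""
    raw = Path(image_path).read_bytes()
    return [m.decode() for m in PROVENANCE_MARKERS if m in raw]

if __name__ == "__main__":
    found = disclosure_markers("suspect_image.jpg")  # hypothetical file name
    print("disclosure markers:", found or "none found (which proves nothing)")
```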

Many of the concerns addressed in this article could be avoided if the tech companies get serious about watermarking images. Knowing up front that information is questionable is the key. In the meantime, if you suspect an image is AI, there are many newly created apps that help users tell fake imagery from real, with Hugging Face, Is It AI? and Illuminarty among the more popular options. These programs are not perfect, but they can help a person make a quick determination about the source of a questionable online image.
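For readers comfortable with a little code, here is a hedged sketch of how such a check might look with the Hugging Face transformers library. The model id is just one example of a community-trained AI-image detector (an assumption on my part) and may need to be swapped for whichever detector you trust; like the apps named above, none of these classifiers is close to perfect.

```python
# Sketch of classifier-based detection with the Hugging Face transformers
# pipeline. The model id is an example community detector; treat its output
# as one more data point, never a verdict.
from transformers import pipeline

detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

for result in detector("suspect_image.jpg"):  # hypothetical local file or URL
    print(f"{result['label']}: {result['score']:.2%}")
```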

Who knows where this will lead? Only one thing is certain: AI is here to stay. The question is, can humanity use this amazing technology to do good in the world? AI may hold the key to curing cancer and ending world hunger. In the meantime, we will have to do our best to avoid the downside of this technology. We need to ensure that truth always prevails over deception, deceit, and misinformation.

“Beware of the majority when mentally poisoned with misinformation, for collective ignorance does not become wisdom.” ― William J. H. Boetcker


This is Shane’s fifth article in his series on AI. The other four articles are:

  1. AI Imagery Is Not Photography, It Never Will Be
  2. AI Imagery May Destroy History As We Know It
  3. Art and AI: Debating the Definition of Creativity
  4. Does The World Need Images of Fake AI People?

About the author: Shane Balkowitsch is a wet plate collodion photographer. For over a decade he has been practicing the historic process given to the world by Frederick Scott Archer in 1851. He does not own a digital camera; analog is all that he knows. He has original plates in 71 museums around the globe, including the Smithsonian, the Library of Congress, the Pitt Rivers Museum at the University of Oxford, and the Royal Photographic Society in the United Kingdom. He constantly promotes the merits of analog photography to anyone who will listen. His life’s work is “Northern Plains Native Americans: A Modern Wet Plate Perspective,” a journey to capture 1,000 Native Americans of the present day in the historic process.
