Hello,
you are reading Understanding TikTok. My name is Marcus. Last week I criticized a poorly designed TikTok study (#119) and wondered where the researchers got the absolute numbers for TikTok posts by hashtag. They obviously used TikTok’s own Creative Center, which included a search function. TikTok quietly curtailed the data tool right after the publication of the study, as the New York Times has reported. Thanks, Martin, for pointing me to the article.
We are obviously entering the new digital dark age, as Wired has pointed out, borrowing the term from James Bridle, who wrote an entire book on it in 2018. While social media in recent years “represented greater access to data, more democratic involvement in knowledge production, and great transparency about social behavior,” we will face a grim digital dark age “as social media platforms transition away from the logic of Web 2.0 and toward one dictated by AI-generated content.” But is this necessarily a bad thing?
🐲 Stop Worrying About Deepfakes
AI deepfakes of true-crime victims (Rolling Stone), Hamas’ terror victims (Yahoo), celebrity scams (NBC), politicians (New York Times) and fake news anchors (Forbes): manipulated media that experts see as troubling in a historic election year. Yet media scholar Walter Scheirer, who recently wrote a book called A History of Fake Things on the Internet, makes the general point (Stop Worrying About Deepfakes) that we should regard “fake things on the internet not as a radical and terrifying departure from civil norms, but as a wholly natural evolution of our human drive for mythmaking and storytelling.”
Rob Horning meanders around this thought in his newsletter (recommended!) and points to an interesting discussion in the New York Times, where “four artists who work with A.I.-generated images — Alejandro Cartagena, Charlie Engman, Trevor Paglen and Laurie Simmons — talk about how they’re thinking about the technology and where we might go from here.” As Horning writes: “That seems more sensible than insisting that technology can or should only be used to produce facts, or that there is some form of datafication or documentation that unproblematically captures ‘truth.’” An important point in an ongoing debate in which some still insist that it is possible to make unmanipulated images. It is not! Or as Horning writes elsewhere:
Generated content creates a retroactive illusion about the “meaningful interactions” we’ve lost, framing a fantasy of some form of purer communication that we must get back to, a reified thing that we can achieve unilaterally through some supremely earnest act of authentic being.
🐉 Start Worrying About Deepfakes
“Next time someone asks me about deep fakes and elections, I'm going to show them this deSantis/Grimes mash up,” tweets Joan Donovan – ”one of the world’s leading experts on misinformation” (Guardian). In the video you can see the face of US politician Ron DeSantis layered onto the face of Canadian musician Grimes (Everything Grimes Eats), who has been much criticized for associating with far-right and neo-Nazi figures online and liking alt-right memes on Twitter/X (NME). The TikTok video by creator @allhailthealgorithm currently has 1.5 million views and is the most successful video on the account.
Why should we worry about a harmless meme video? Because it obeys the laws of the platform and increases reach. The most successful TikTok video of the Irish Labour Party is a meme-reference heaven. This German TikTok video (3M views) shows Daniel Slump, a Twitch streamer with a migration background, normalizing the reading of Hitler’s “Mein Kampf” in a video praising photos of a young Alice Weidel, now co-chairwoman of the right-wing party Alternative for Germany (AfD). And AI?
In 2024, companies such as OpenAI, Google, Meta and the New York-based Runway are likely to deploy image generators that allow people to create not only images but also videos (New York Times). This makes it much easier to blur “the line between fact and falsehood” (CEPA). TikTok requires that AI-generated content depicting realistic scenes be clearly disclosed (TikTok) and has policies on AI-manipulated content (EU Disinfo Lab). Yet AI watermarking won't curb disinformation (EFF). And according to the ‘illusory truth effect’, people perceive something to be true the more they are exposed to it, regardless of its veracity, as Nature writes. The phenomenon predates the digital age but now manifests itself on platforms.
In September 2023, for example, NewsGuard identified a network of 17 TikTok accounts using AI text-to-speech software to generate videos advancing false and unsubstantiated claims, with hundreds of millions of views (NewsGuard). We will very likely see and hear much more of this, with Russia having spent $1.9 billion on media propaganda last year and China's budget for cyber-propaganda and global disinformation running into the billions, according to a recent U.S. State Department report (Newsweek).
🧁 Now what?!
There is no going back to an age “before” – a fantasy of some form of purer communication that we must get back to. The short phase of “social media” from Myspace to Instagram is over. And I feel fine. While we are entering yet another dark age, there is no simplistic either/or. The same tools that can be deployed to conceive and do harm can be used to detect, debunk, reflect or create. We will soon grow tired of the acronym AI and get rid of it, when everything will be AI-powered anyhow. Your therapist (BBC) and your political campaigner (Reuters) already are. Understanding what is actually happening online might have become harder (#118), but not less exciting, unless you age out of the Internet, as Max Read describes here.
🪟 TikTok’s Sludge Content
Over the last 12 months, sludge content has been sold again and again and again by media outlets as THE new thing on TikTok. “Sludge content” is a type of viral video that features multiple clips playing simultaneously on a screen.
Someone is chopping kinetic sand into neat, even cubes next to a clip from Family Guy. The jumping, coin-collecting gameplay of Subway Surfers plays alongside a segment of a Twitch stream. Slime is being coiled and stretched next to a reupload of someone else’s POV sketch (Polygon).
The phenomenon has been widely discussed, but I saw a brand-new article in Scientific American. In print since 1845, it is the oldest continuously published magazine in the United States. And this is why I point to the article, which answers questions like “Is sludge content helpful for some people psychologically?” and offers this great description: sludge videos are “baby sensory videos for teenagers,“ referring to videos that “use a variety of shapes, colors, and movement and claim to enhance visual stimulation in developing brains.”
This edition mentioned neither the Stanley Cup nor silent reviews nor Alex Consani. Hope you do not mind. Speak soon. Ciao