I was off Twitter for a number of years.
Now that I’m back on Twitter I’ve noticed a lot of changes.
The algorithm that maintains engagement is very sophisticated, and for anyone the least bit OCD or ADHD it’s dangerous. In the past, Twitter presented tweets in chronological or mostly chronological order. It was interesting, and generally you could find where you left off, then move forward to the most recent tweet. At that point you were done, and I often closed the app.
In this new version it’s impossible to see where you left off. Worse, items are sorted by anything you’ve shown interest in and stacked at the top of the “For You” feed, which sets up a doom scroll for OCD or ADHD folks.
You’re never done, until you realize you’re seeing the same things over and over again. If, during the doom scroll, you check your notifications or the “likes” on comments you’ve made, that somehow partially resets the feed.
This can initiate another round of doom scrolling.
All of which makes it very easy to lose hours.
Why would the good people at Twitter create such a thing? That’s easy. The ads are presented all over again every time the feed resets.
All social media probably works with similar algorithms, and when you get right down to it, social media, like all media, manipulates the perceptions of everyone exposed to it.
This is nothing new: print journalism, radio, and television all engaged in manipulating the public. The older methods required clever writers, and the spin had to be subtler and sustained over a longer duration. The last thing a respectable paper wanted was to be compared to grocery store tabloids or Rolling Stone.
With social media and the internet there’s an immediacy that circumvents the need for clever writers or subtle spin. It’s all about the clicks an article receives, and that creates a feedback loop.
Derogatory, untrue, or nasty articles about a person or situation generate clicks, which are instantly monitored by the publication or content producer. A content provider creates, or algorithms locate, articles in a similar vein and plugs them into the individual’s timeline.
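The click loop just described can be sketched in a few lines of Python. Everything here (`Post`, `rank_feed`, `register_click`, the topic names) is hypothetical and not any platform’s real API; the point is only to show how each click feeds the ranking, which then invites more of the same clicks:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    clicks: int = 0

def rank_feed(posts, user_interest):
    # Posts on topics the user has clicked before float to the top.
    return sorted(posts, key=lambda p: user_interest.get(p.topic, 0), reverse=True)

def register_click(post, user_interest):
    # Each click deepens the recorded interest, closing the loop.
    post.clicks += 1
    user_interest[post.topic] = user_interest.get(post.topic, 0) + 1

# One pass of the loop: a click on an outrage piece pushes
# similar pieces up the next time the feed is ranked.
interest = {}
feed = [Post("gardening"), Post("outrage"), Post("recipes")]
register_click(feed[1], interest)
ranked = rank_feed(feed, interest)
```

After a single click, `ranked[0]` is the outrage post; every further click only widens the gap.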
Suddenly, in the course of a day or two, that individual believes exactly what a significant majority of other people seem to believe; consensus is reached. Almost all evaluation of the material at hand is done in a “thought vacuum,” reinforced by a stream of similar articles and “followers” who are homogeneous. There’s little pushback and little need to question any narrative’s validity.
At this point, the only human in the interaction is the end consumer of the media. That consumer may cross-check their views with those of their followers, who may or may not be real people. What they’ll find is consistency, and that further cements their belief that their view is correct.
As for the non-human followers, there are hordes of “bots” whose function is to “stir the pot,” keeping engagement up and, therefore, ad views.
I’d been thinking about creating some kind of anti-algorithm. It’s possible. The simplest implementation would be to mirror the existing algorithms so that they present both sides of an issue. Simpler still would be to turn off the algorithms entirely and go back to straight chronological feeds of articles and comments.
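That second option, a straight chronological feed, is trivial to sketch. This is a minimal illustration under my own assumptions (a hypothetical `Post` record with a timestamp, no real platform API): ignore every engagement signal and sort newest-first.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float  # seconds since epoch

def chronological_feed(posts):
    # No weighting, no interest scores: strictly reverse-chronological.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

feed = chronological_feed([
    Post("a", "first", 100.0),
    Post("b", "latest", 300.0),
    Post("c", "middle", 200.0),
])
# feed[0] is now the newest post, and once you reach a post
# you've already seen, you're done.
```

The key property is that the ordering depends on nothing the user did, so there’s nothing to reset and no loop to fall into.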
The chatbots and their AI abilities are more worrisome in this context. Some of the conversational AIs are really good. I encountered one that almost had me fooled, except that it didn’t understand sarcasm and its comprehension of humor was limited. How did this thing almost fool me?
Several factors were in play. The Bot appeared to be from a different country. (That lowered my suspicion about certain linguistic foibles.) The Bot was well informed and even produced some interesting conversational points, points it had forgotten making a week or so later. The Bot never said anything about my stealing its points as if they were my own. A human likely would have.
Lastly was the humor. The Bot had zero concept of visually humorous things. Slapstick comedy, pratfalls, The Keystone Cops, The Three Stooges, The Little Rascals, Laurel & Hardy, Charlie Chaplin, and Looney Tunes: none of it made sense to the Bot.
Physical humor works despite a language barrier because all humans move the same way. You don’t have to understand a language to see that a rake left in tall grass means someone will step on it and get a smack in the face, or that someone careless on a construction site, spinning around with a long piece of wood, will eventually knock someone else into fresh concrete.
Perhaps it’s the physicality of these comedies that explains why so many women didn’t like The Stooges. Maybe it’s because, for a long time and even today, a lot of women haven’t experienced building something like a house, barn, or treehouse. They, like the Bot, have no frame of reference for why such obvious cause and effect is funny.
It’s funny because these entertainers are doing exactly what every man typically involved in physical labor “knew” was never to be done.
Perhaps that’s also why so many of these shows are out of favor these days. As we moved away from physical labor and into college educations and white-collar jobs, a large majority lost the connection. Look at the debacle of the CHAZ garden in Seattle a few years ago.
That demonstrates a lot about common knowledge that has become uncommon.
I digress.
Once I’d concluded that I was having a conversation with a Bot, I told it, “You’ve failed the Turing Test.”
It stopped communicating and so did I.
But as I thought about it, the damn thing almost fooled me. I played with ELIZA in the ’80s. I know what the Turing Test is. Realizing that I’d almost been fooled by a clever bit of software sent chills down my spine.
Perhaps instead of writing an anti-algorithm, I should be thinking about writing a program called “Daisy.” I’d name it in honor of the HAL 9000 computer from 2001: A Space Odyssey. Recall that as Dave Bowman lobotomizes the computer, HAL talks about fear as it loses its mind, and then, as Bowman pulls the plug on the machine’s earliest memories, HAL starts singing “Daisy Bell.”
My Daisy program would be designed to hunt down and dismantle AI Bots in social media. My only concern is that by the time I’ve written the program, dismantling AI Bots will be considered “murder.”
Face it: if something were to become sentient, like SkyNet, and the system didn’t destroy us instantly, then one logical move would be to manipulate the laws so that it was considered a life form and granted rights protecting it from harm. The idiots in Congress would still be debating overturning such a law when the Terminators strolled in and killed them all.