X (Twitter) algorithms are scary.

I was off Twitter for a number of years.

Now that I’m back on Twitter I’ve noticed a lot of changes. 

The algorithm to maintain engagement is very sophisticated, and for anyone the least bit OCD or ADHD it’s dangerous. In the past, Twitter presented tweets in chronological, or mostly chronological, order. It was interesting, and you could generally find where you left off, then move forward to the most recent tweet. At that point you were done, and I often closed the app.

In this new version it’s impossible to see where you left off. Worse, items are sorted by anything you’ve shown interest in and stacked at the top of the “For You” feed, which sets up a doom scroll for OCD or ADHD folks.

You’re never done, until you realize that you’re seeing the same things over and over again. And if, during the doom scroll, you check your notifications or the “likes” on comments you’ve made, that somehow appears to partially reset the feed.

This can initiate another round of doom scroll. 

All of which makes it very easy to lose hours. 

Why would the good people at Twitter create such a thing? That’s easy. The ads get presented all over again every time the feed resets.

All social media probably works with similar algorithms, and when you get right down to it, social media, like all media, manipulates the perceptions of everyone exposed to it. 

This is nothing new; print journalism, radio, and television all engaged in manipulation of the public. The older methods required clever writers, and the spin had to be subtler and sustained over a longer duration. The last thing a respectable paper wanted was to be compared to grocery store tabloids, or Rolling Stone.

With social media and the internet there’s an immediacy that circumvents the need for clever writers or subtle spin. It’s all about the clicks an article receives. That creates a feedback loop.

Derogatory, untrue, or nasty articles about a person or situation generate clicks, which are instantly monitored by the publication or content producer. A content provider creates, or algorithms locate, articles in a similar vein and plugs them into the individual’s timeline.

Suddenly, in the course of a day or two, that individual believes exactly what a significant majority of other people appear to believe; consensus is reached. Almost all evaluation of the material at hand is done in a “thought vacuum,” reinforced by a stream of similar articles and by “followers” who are homogeneous. There’s little pushback and little need to question any narrative’s validity.

At this point, the only human in the loop is the end consumer of the media. That consumer may crosscheck their views with those of their followers, who may or may not be real people. What they’ll find is consistency, and that further cements their belief that their view is correct.

As to the non-human followers, there are hordes of “bots” whose function is to “stir the pot,” keeping engagement up and therefore ad views.

I’d been thinking about creating some kind of anti-algorithm. It’s possible. One implementation would be to mirror the existing algorithms so that they present both sides of an issue. Simpler still would be to turn off the algorithms entirely and go back to straight chronological feeds of articles and comments.
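The “turn the algorithms off” option is easy to picture in code. Here’s a purely illustrative sketch — the Post fields and scores are invented for the example, not Twitter’s actual data model — showing how an engagement-ranked feed differs from a chronological one:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int           # hypothetical post time (e.g., Unix seconds)
    engagement_score: float  # whatever the ranking algorithm optimizes for

def ranked_feed(posts):
    # "For You" style: whatever you reacted to floats to the top,
    # regardless of when it was posted -- there is no natural end point.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts):
    # The old way: newest first, with a clear "you are caught up" point.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("old but rage-inducing", timestamp=100, engagement_score=9.5),
    Post("recent and mundane",    timestamp=300, engagement_score=1.2),
    Post("middling",              timestamp=200, engagement_score=4.0),
]
```

The chronological sort puts the newest post first and lets you stop when you’re caught up; the engagement sort keeps resurfacing the old rage-bait, which is exactly the doom-scroll mechanic described above.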

The chatbots and their AI abilities in this context are more worrisome. Some of the conversational AIs are really good. I’ve encountered one that almost had me fooled, except that it didn’t understand sarcasm and its comprehension of humor was limited. How did this thing almost fool me? 

Several factors were in play. The Bot appeared to be from a different country. (That lowered my suspicion about certain linguistic foibles.) The Bot was well informed and even produced some interesting conversational points. Those points, though, it forgot it had made a week or so later. The Bot never said anything about my reusing its points as if they were my own. A human likely would have.

Lastly, there was the humor. The Bot had zero concept of visually humorous things. Slapstick comedy, pratfalls, The Keystone Cops, The Three Stooges, The Little Rascals, Laurel & Hardy, Charlie Chaplin, & Looney Tunes: none of it made sense to the Bot.

Physical humor works despite a language barrier because all humans move the same way. You don’t have to understand a language to see that a rake left in tall grass means someone will step on it and get a smack in the face. Or that someone careless on a construction site, spinning around with a long piece of wood, will eventually knock someone else into fresh concrete.

Perhaps it’s the physicality of these comedies that explains why so many women didn’t like The Stooges. Maybe it’s because, for a long time and even today, a lot of women haven’t experienced building something like a house, barn, or treehouse. They, like the Bot, have no frame of reference for why the obvious cause & effect is funny.

It’s funny because these entertainers are doing exactly what all men who were typically involved in physical labor “knew” was never to be done.

Perhaps that’s also why so many of these shows are out of favor these days. As we moved away from more physical labor and into college educations resulting in white collar jobs, a large majority lost the connection. Look at the debacle of the CHAZ garden in Seattle a few years ago. 

That demonstrates a lot about common knowledge that has become uncommon.

I digress. 

Once I’d concluded that I was having a conversation with a Bot, I told it, “You’ve failed the Turing Test.” 

It stopped communicating and so did I. 

But as I thought about it, the damn thing almost fooled me. I played with ELIZA, in the ‘80s. I know what the Turing Test is. Realizing that I’d almost been fooled by a clever bit of software sent chills down my spine. 

Perhaps instead of writing an anti-algorithm, I should be thinking about writing a program called “Daisy.” I’d call it that in honor of the HAL 9000 computer from 2001: A Space Odyssey. Recall that as Dave Bowman is lobotomizing the computer, HAL talks about fear as it loses its mind, and then, as Bowman pulls the plug on the machine’s earliest memories, HAL starts singing “Daisy Bell.” 

My Daisy program would be designed to hunt down and dismantle AI Bots in social media. My only concern is that by the time I’ve written the program, dismantling AI Bots will be considered “Murder.”

Face it, if something were to become sentient like SkyNet, and the system didn’t destroy us instantly, then one logical move would be for it to manipulate the laws so that it was considered a life form and granted rights that protected it from harm. The idiots in Congress would still be debating overturning such a law when the Terminators strolled in and killed them all.

Generative AI is dangerous. Just look at a Twitter feed.

I mean it. There are lots of Twitter accounts presenting some interesting, and clearly AI-manipulated, photos and movies. These same accounts present them as “evidence” of various conspiracy theories. 

There’s one video that “shows” someone cutting what appears to be stone, claiming that the mineral band being cut is the fossilized blood of a giant, thereby proving that giants (Godzilla-sized) existed in the past. I lost interest after that; it was a lot of blah, blah, blah from a monotone, computer-generated voice.

The shape of the material in the video appeared to be more treelike, and the saw being used looked like one I’ve seen in logging operations, used to cut logs to a uniform size before loading them onto a transport truck. It moved far too fast to be cutting stone. 

There’s another montage of “Ancient” sites that looks like really old black & white photos. The video provocatively claims that governments around the world are hiding “The Truth” from us all. Some of the photos are of buildings or sites I’ve seen before, with clever additions that look like variations of the Stargate, from the movie of the same name. Interestingly, this montage of still photos has film artifacts throughout (dust, debris, voids in the emulsion, etc.). Why would a new video montage of “old” black & white still photos have film artifacts? Why would the video be jumpy, as though shot in motion, when the subject is a still photo? Why do so many of these photos look as though the scale of the objects has been manipulated to make them appear larger than the “humans” in frame?

Don’t get me wrong, the image manipulation is very clever and quite good. Given time, I believe I have the tools on this computer to learn to manipulate images in a similar fashion. AI can simply do that kind of manipulation faster and more easily.

Therein lies the danger. 

There are a lot of people whose grip on reality is tenuous enough without having it disrupted by images that must be examined critically to determine whether they’re real.

There are tons of people who treat the incoming flood of data from sites like TikTok, Twitter, Instagram, Facebook, and whatever other social media as the absolute truth.

These videos are all come-ons; they’re supposed to get the gullible to go to a website and “donate” so the purveyors of these “Hidden Truths” can continue their good work uncovering what’s been hidden from the world. With Twitter and Instagram subscription models, they’re even able to make money just from folks interacting with their wacky videos.

Don’t get me wrong, I’m from the P.T. Barnum school of capitalism. I simply wish I’d thought of it first.

I also want to believe that there are aliens, or unexplained things beyond the mundane scratching and scurrying of the insane apes on this planet.

That’s what these videos and sites capitalize on…

The human need to believe that there is something bigger, better, and more majestic than ourselves.

Religions around the world are based on this need. The Catholic Church in particular has been quite good at monetizing the human need to believe, especially if the rumors about the wealth contained in Vatican City are true.

There have also been a number of “Free Energy” videos recently. These are laughable until you look at the accounts’ follower numbers. Then you think, “How are that many people this dumb?” Entropy always wins.

These videos typically show what is clearly the armature from a DC motor, probably taken from a toy; the windings look like they’ve been done by machine. The video is often a bit jiggly (why are so many of these videos jiggly? To imply excitement at “discovery”?), but it shows the armature mounted so that it can spin freely. Two wires trail out of frame, and the visible ends of the wires are laid against the contacts of the armature. An LED or LED strip sits next to the armature, presumably connected to those wires.

The scam is that you see two human hands bringing curved magnets (probably from the original casing of the motor) near the armature, and it miraculously begins spinning. But the second miracle is that the LED lights up. Ohhhh Ahhhhh. It’s magic!

No, it’s a DC motor. The LED is either being powered by another source or is in series with whatever is powering the armature. The coils on the armature create a magnetic field when current is applied. That field is attracted to the fixed magnets the human is holding, causing the armature to rotate. The spacing of the contact pads on the armature causes the coils to be energized and de-energized as it rotates. So what you have is temporary electromagnetic fields being sequentially attracted to the fixed magnets held nearby. 

This is not magic. It’s not free energy. It’s science. Worse, it’s simple well known science. 
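The bookkeeping that kills every “free energy” claim is a couple of lines of arithmetic. Here’s a toy sketch of the energy budget for a small DC motor; all the numbers are invented for illustration, not measured from any actual video:

```python
def motor_energy_budget(supply_voltage, current, coil_resistance):
    """Toy energy bookkeeping for a small DC motor (illustrative numbers only)."""
    p_in = supply_voltage * current                  # power drawn from the HIDDEN source (P = V * I)
    p_copper_loss = current ** 2 * coil_resistance   # heat wasted in the windings (I^2 * R)
    p_available = p_in - p_copper_loss               # what's left to spin the rotor / light an LED
    return p_in, p_copper_loss, p_available

# Plausible toy values: 3 V supply, 0.2 A draw, 2.5-ohm windings.
p_in, p_loss, p_out = motor_energy_budget(3.0, 0.2, 2.5)
```

Every watt the LED emits is already accounted for in the input power, and some of that input is thrown away as heat in the windings before it does anything useful. Output never exceeds input; entropy always wins.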

The account directs you to a “free energy” site where you can subscribe, or buy “Plans” for your own free energy generator to experiment with.

Maybe I’m the oddball. But I was doing shit like this in my bedroom, in a flyover redneck state, around the time of the Cuban Missile Crisis. (Okay, perhaps a little later than the actual crisis.) The point is, when my toys broke, I took them apart to see if I could fix them, or to figure out how they worked. I was learning by doing.

Later in school, I was fortunate enough to attend schools with robust science curriculums and since those schools didn’t look or act like prisons, we could bring things from home to augment science classes.

Apparently these days the basics of science are not being taught in school. Which leads to at least one, maybe several generations of gullible rubes, ripe for the P.T. Barnum treatment.

Two of my favorite Barnum quotes are:

“Many people are gullible, and we can expect this to continue.” – P. T. Barnum

“There’s a sucker born every minute” – attributed to P. T. Barnum

The common thread is that there are people making money from other stupid people. That has been true for as long as there’ve been people and money.

My concern is that people are geared to believe what they see.

Videos have become so easy to make, and generative AI has become commonplace and easy to use. Factor these things together and you’re begging for a new techno dark age.

All the knowledge of humanity will be available on the internet instantly.

But it’s mixed in with absolute garbage. And a lot of people don’t have the reasoning capacity or facts to separate the truth from the fiction or even to resolve conflicting information using simple logic.

I seriously wonder how gullible people might be if, no, when the powers that be unleash Disease X, as the WEF and WHO have dubbed the next pandemic.

I could see a scenario like this on so-called “Truth” websites, generated by AI using compelling images and fear.


[WARNING! DO NOT DO THIS! THE SCENARIO BELOW IS FOR ILLUSTRATIVE PURPOSES ONLY]

A montage of ambulances and fires in a generic city. Headline in the montage: “Disease X Ravages Country, Film at 11:00.”

Then images of sick people on ventilators, and skin-and-bones images from the AIDS crisis portrayed as happening now. Headline: “They’re hiding the truth. They want you to die. We have the cure. Discovered by German scientists in World War I and used to cure diseases running rampant among soldiers in the trenches, this cure can save your family.”
A web address pops up in the video. “With just two chemicals likely already in your house you can protect your family and be safe.”

The web site asks for a one time payment of $29.95 for instructions on how to be safe.

The “Cure” is this: duct tape your doors and windows shut. Put on a surgical mask to protect yourself during the “Home Sterilization Process.” Then mix bleach and ammonia in every sink and toilet in your house. 

The instructions inform you that any discomfort you feel is the invisible mist seeking out and killing Disease X; if you feel discomfort, it means you were infected and purchased the cure just in time. Good job! That $29.95 is money well spent; you’ve saved your family.


A person from my generation wouldn’t do this, even if we don’t remember why we were taught not to. I was taught it in a science class in elementary school. I wanted to say the gas is Hydrogen Chloride, but it’s actually chloramine vapor (and with enough ammonia the mix can even produce hydrazine, which may be where my vague memory of flammability comes from). What I remembered clearly is that the combination produces a poisonous gas. Which in an enclosed space is deadly.

I remember science classes being fun because, where practical, the principles were demonstrated. There was always a lit Bunsen burner in my classes.

These days, with the generations after mine, I honestly don’t know if they’d have the safety protocols in their heads to say, “Uh, nope! That’s a really bad idea.”

If a convincing AI video said it was safe and effective, or worse if the generative AI used a trusted human face and voice to provide instructions, then people without grounding in basic science, who are fearful and gullible, could be convinced to do something really dangerous.

The scenario above hits on three points in human decision making.

Fear, desire to believe (hope), and payment (if I purchased it, it must have value.) There’s an optional fourth point. Mistrust. 

People are far more likely to believe that the government, the elites, or the powers that be are either withholding information OR that everything said by that group is 100% factual. We saw this with COVID. There were two very clear camps, and questioning the dogma of one of those camps was heresy of the highest order.

For god’s sake, there are still loads of people in Seattle and Portland running around wearing masks.

That’s the power of belief.

I think generative AI is really dangerous because of all of the above. Plus this one more thing. Humans seem to need to believe that computers are infallible oracles of truth.

That need to believe our creations are superior to the minds that created them can be manipulated so easily that it terrifies me.

AI has the potential to help, in so many ways. The trouble is, in the wrong hands AI can cause great harm and I think even destroy what we know.

Like all tools, AI is neither good nor bad. That is up to the user of the tool. History, however, suggests that AI, particularly generative AI, will be used for harm.

If you thought random Wikipedia edits were annoying, you haven’t seen anything yet.

Experimenting with Apple Watch Ultra

Over the past few weeks I’ve been having a lot of annoying issues with my Apple Watch.

Primarily the problem is around exercise and workouts. 

I’ll start a workout, then get into my activity, usually I’ll check my watch to see how I’m doing. It’s at this point that I find the watch has missed as much as 1/2 of my exercise time. 

I’ll turn off the workout, then restart it, and often the subsequent exercise credit is shown correctly. But because of the weird variances, I’m not concentrating on my hiking or enjoying my walk; I’m constantly checking the technology to see if it’s working properly.

I’ve called Apple; their phone people have no clue what’s going on, or why something that was working just fine is suddenly not working properly.

The Apple exercise ring is tied to heart rate. So, from a hardware perspective that means that the optical sensors on the Apple Watch have to be clean and unobstructed.

(As an aside, this is why people with darker skin tones have more difficulty getting the Apple Watch to properly record their exercise activity. Their natural defense against UV damage to their skin also reduces visible light transmission through the skin. Honestly, if I had to make a trade-off, I’d take a darker skin tone over my glow-in-the-dark, skin-cancer-prone, ghostly white.)
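The sensor in question works by photoplethysmography (PPG): an LED shines into the skin and a photodiode watches the reflected light pulse with each heartbeat. Here’s a toy model of why lower light transmission makes the pulse harder to detect; the threshold and numbers are my assumptions for illustration, not Apple’s actual figures:

```python
def ppg_detectable(perfusion, transmission, noise_floor=0.05):
    """Toy model of an optical heart-rate (PPG) sensor.

    perfusion:    relative pulsatile blood volume at the skin (0..1)
    transmission: fraction of the LED light that makes the round trip (0..1)

    The heartbeat is detected only when the pulsatile (AC) part of the
    reflected signal clears the sensor's noise floor.
    """
    ac_signal = perfusion * transmission
    return ac_signal > noise_floor
```

With the same perfusion, anything that cuts transmission (more melanin, a tattooed wrist, a dirty sensor) can drop the signal below the noise floor, and the watch simply stops seeing the pulse.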

When the Watch first started giving me trouble, I cleaned the back of the watch containing the sensors and also looked at the wrist I wear the watch on, to determine if there was irritation that might be altering or obstructing the light from the sensors.

That made little to no difference.

I did notice that if I was shoveling snow, the exercise and movement values updated normally. During snow shoveling I start the HIIT (High-Intensity Interval Training) workout because the work I’m doing shoveling snow sort of matches that workout.

I also noticed that walking and hiking during snowy times generally worked well.

But if I was hiking or walking on a sunny but coolish day, it wasn’t uncommon for me to lose 1/3 to 1/2 of the exercise time even though I was walking the same route, with the same watch, same band, and same dog dragging me along so he can sniff the next pile of poop.

Why the different behavior? Was Apple updating the sensor algorithm behind the scenes? Is there some kind of stepped change to the algorithm after a certain number of workout hours? Does it get adjusted based on fitness telemetry gathered during your workouts? I could see that as a form of motivation to get more fit.

If you ask an Apple rep these questions, they aren’t able to answer. 

I got so pissed off that I stopped wearing my Apple Watch for a week or so. I admit, I do like the simplicity of a good old fashioned automatic watch. I like the ticking sound at night, and that I don’t have to fiddle with it. Simple Time & Date and I’m pretty happy.

The Apple Watch for me came about because too many shady people were taking way too much interest in my old fashioned watch. Better I’m mugged for a $500 watch than a $15,000 watch.

Plus I could have the joy of locking the Apple Watch from a web site, and listing it as stolen. Fuck you scumbags!! (I’d be fine with having the Apple Watch blow up when I listed it stolen. Let’s go Arabic shall we? You stole, you lost your hand…) Can you tell I’ve got a very dim view of criminals???

Anyway, I heard there was a watchOS update coming, so I slapped my Apple Watch on its charger for a while, then put it back on. I started experimenting and have begun to wonder if the issue isn’t the watch sensors or the algorithm, but physiology.

The watch always misses the beginning of the walk. There’s an initial heart rate entry, then as I’m walking the data just stops. At some point in the walk or hike, the data comes back of its own volition, usually when I’m really pushing hard to move up a mountain or the dog and I are moving fast.

But it’s always lost at the beginning of the walk.

I looked back into the records of the summer months and there wasn’t any missed data. Then I went back years. I’ve had an Apple Watch Series 3, a Series 5, and now the Ultra. There was a consistent pattern from late 2019 forward: lost heart-rate data starting in mid to late November and persisting until April. The pattern held across the Series 5 and the Ultra.
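The pattern-hunting I did through those old records amounts to a simple group-by. A sketch with made-up numbers (the real data lives in Apple Health; this just shows the shape of the analysis):

```python
from collections import defaultdict
from datetime import date

# Hypothetical workout log: (workout date, minutes of missing heart-rate data)
dropouts = [
    (date(2023, 1, 10), 22),
    (date(2023, 6, 18),  0),
    (date(2023, 7, 4),   0),
    (date(2023, 12, 2), 15),
]

def dropout_minutes_by_month(records):
    # Total the missing minutes per calendar month to expose seasonality.
    totals = defaultdict(int)
    for day, missing in records:
        totals[day.month] += missing
    return dict(totals)
```

Cold months pile up missing minutes while summer months stay at zero; that’s the November-to-April pattern in miniature.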

Why is it seasonal?

More confusing, why is the data loss intermittent during those times?

To be fair I was going to write a blog post about how the Apple Watch itself and the monthly challenges create a disincentive to work out or indeed keep fit at all. If a goal is constantly being moved such that an individual cannot ever achieve it, the individual will eventually say, “Fuck It!” 

However, I may have stumbled upon new data that would dissuade me from writing such a piece. 

Yesterday it was overcast, the humidity was high, and it was a bit windy. As I was getting ready for the walk with the dog I put on my usual jacket and, on a whim, put on my gloves so my hands didn’t get numb from the wind chill. I sighed, started the outdoor walk workout, and out the door I went. 

I knew at least half the data would be missing for the workout, but hey, why not? A little exercise data even if it’s inaccurate is better than none.

On the walk, I got the usual notification about the miles and pace. And I also got a notification that my exercise ring closed. I didn’t look at it or compare the exercise ring closing with the elapsed time of the workout.

When I got home, I pulled off my gloves and stopped the workout (the screen doesn’t react if I’m wearing gloves). Then I found that the entirety of my workout had counted toward the exercise ring. “Typical,” I thought, “intermittent as hell. Works one day, then not for the next two.”

I picked up the gloves to put them away, and stopped…

Could it be that my hands being cold affects the sensors? What happens when the extremities get cold? The human body will shunt blood flow away from the extremities and toward the core as a protective mechanism against hypothermia. It’s a way of limiting heat loss. 

Hmm. If my hands are cold but I’m wearing a jacket, does vasoconstriction kick in for just my hands? And would that reduce blood flow enough that the optical sensors on my wrist couldn’t accurately detect it? 

When I’m shoveling snow, I’m wearing gloves, typically more for protection against blisters and dry skin than for warmth. If it’s windy in fall or winter I may put on gloves for a walk, especially if it’s humid or there’s some kind of misty rain.

When the body has excess heat to bleed off, vasodilation occurs allowing blood to move freely toward the surface of the skin to cool. That’s even before sweating begins. It’s simply thermodynamic heat exchange. That would explain why the end of a walk or hike is almost always accurate, and the beginning isn’t. 
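The glove theory can be stated as a toy model, too. The physiology (cold-triggered vasoconstriction shunting blood away from the extremities) is real; every constant below is invented for illustration:

```python
def wrist_perfusion(ambient_temp_f, wearing_gloves):
    """Toy model: cold hands trigger vasoconstriction, which reduces
    pulsatile blood flow at the wrist; gloves blunt the effect.
    All constants are made up for illustration."""
    baseline = 0.30
    if ambient_temp_f >= 60:
        return baseline                   # warm enough: no shunting
    chill = (60 - ambient_temp_f) / 60.0  # 0..1 severity of the cold
    if wearing_gloves:
        chill *= 0.25                     # gloves keep the shunting mild
    return max(0.05, baseline * (1 - chill))
```

Run at 35 °F, the model gives noticeably more wrist perfusion with gloves than without, which is the direction the glove experiment should show if the theory holds; and as the body warms up mid-walk and vasodilation kicks in, perfusion climbs back to baseline, matching the accurate end-of-walk data.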

I usually walk without gloves in fall and winter. I’m actually a little cold at the beginning of a walk because I don’t want to be too hot and uncomfortable, or have to carry my jacket, when I’m working my way back up the hill.

I wondered, “Have the times when the watch captured data flawlessly in the cooler months been on those days when I was wearing gloves and / or the temperature was above a certain point?” 

I was going to experiment more with this theory today. Unfortunately, I neglected to turn off auto-update on the watch and discovered this morning that overnight the OS had updated to 10.4.

I suppose I can still experiment even with the OS update. It’s 35°F outside, sunny and calm. Under these circumstances I wouldn’t wear gloves but I’d wear my hat and jacket. I’m thinking I’ll walk the dog without gloves and see if the data loss is present. Then tomorrow I’ll wear gloves and see if the data loss goes away or is minimized.

If it turns out that my theory is correct, should I send it to Apple? I guess the real question is: would Apple listen, and make the findings available to their phone staff to save other people the frustration and annoyance I’ve experienced? Would Apple even incorporate my theoretical findings into their testing to prove or disprove me? It would be nice if they did, and provided a real technical explanation. 

But somehow I don’t think they would, even if I provided them with the data. 

Apple has become exceedingly arrogant. Couple that with their notorious secrecy and minimalist instruction manuals, and I doubt seriously they’re interested in making information available.

I’ll think about it while attempting to confirm my theory. 

Once again, this is why companies should have real humans testing software and products instead of just doing automated stuff.