Wednesday, February 4, 2026

I'm just saying -- rethinking it

I loved the TV show Murphy Brown. It gave me lots of laughs. I found episodes on the Internet Archive, and one of them was a typical TV-show letdown.

A high-school graduate moves into Murphy's home and proceeds to be the teen from hell. She wants to be a journalist just like Murphy, but at the end of the episode she announces she doesn't want to go to college. Murphy, who has no experience with kids, freaks out.

The answer would have been to make the teen explain just how she planned to get to Murphy's level without college. "Jane" has no experience with professional writing, and during her brief stay she insists on doing whatever she wants regardless of the effect on others, including smoking around a pregnant woman.

Murphy needed to explain to her that no news organization is going to pay an absolute newcomer for any job without evidence that she can do it. That's what college journalism is about: learning to write; learning to find and use sources; learning to present information effectively; learning which stories are important; learning to dig instead of giving up. Her class assignments and her work on the college newspaper might have gotten Jane an interview. Lacking them, she was dead in the water.

Jane also needed to know that no story is news after its time. You have to beat the news cycle, not trail it. When your editor gives you a deadline, you have to meet it. No excuses. 

And you have to work with people. If you walk into an interview with a non-smoker, or with somebody who gets sick from cigarette smoke, you can't light up. With bans on workplace smoking coming, Jane's employability was about to hit a brick wall.

The same thing faces high school kids now. AI is taking over the scutwork. You have to come into an interview trained to do the job, and you also have to explain why you can do a better job than AI. The most important thing is knowing how to back up your work with information, and AI is lousy at this: it will take any source that suits your keywords. That is why it lies to me on a regular basis and contradicts itself. If you don't understand how bad Wikipedia articles are, and how that badness comes from the sources used in the articles, you will never be better than AI.

Second, you have to deal with complexity. A recent article showed that using AI in customer service caused problems rather than fixing them. It couldn't handle nuance or inflection, and it couldn't customize answers, because it relied on information that didn't fit the situation. Using AI in online chat devolves into long transcripts because the AI can't actually understand the question; it can only deal with keywords.
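To see why keyword matching falls short, here is a deliberately naive sketch. Everything in it is made up for illustration (the canned answers, the keyword lists, the scoring); it is not any vendor's actual code, just the counting-words approach reduced to a few lines:

```python
# A deliberately naive "support bot": pick the canned answer whose
# keyword list overlaps the customer's words the most.
# Hypothetical illustration only, not any real product's code.

CANNED_ANSWERS = {
    "To reset your password, use the 'Forgot password' link.":
        {"reset", "password", "login", "forgot"},
    "For billing problems, open the invoices tab in your account.":
        {"billing", "invoice", "charge", "refund"},
}

def keyword_answer(question: str) -> str:
    # Count overlapping words; return the answer with the biggest overlap.
    words = set(question.lower().split())
    best_answer, _ = max(CANNED_ANSWERS.items(),
                         key=lambda kv: len(words & kv[1]))
    return best_answer

# A plain billing question gets the right canned answer:
print(keyword_answer("I want a refund for this charge"))

# But "My card was charged twice after I reset my password" is still a
# billing complaint. "reset" and "password" outscore "charged" (which
# doesn't even match the keyword "charge"), so the bot serves the
# password answer, and the long back-and-forth transcript begins.
print(keyword_answer("My card was charged twice after I reset my password"))
```

The bot never models what the customer wants; it only counts word overlaps, which is exactly the failure described above.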

It's the underlying problem of machine translation, which I think I've posted about before. Computer translation was promised in the 1980s, and it has never happened, because nobody has been able to program a computer to understand idioms. Idioms are phrases whose meanings go beyond the actual words. They are also used in context, and computers cannot handle context. Actually, damned few humans can handle context, which results in those social media fuck-fests where people call each other names. At some point in the thread, somebody may say "read the thread".
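The idiom problem can be shown the same way. The tiny dictionary below is invented for illustration; the point is that looking up words one at a time cannot recover what an idiom means:

```python
# Toy word-for-word English-to-French "translator". The dictionary is
# invented for illustration; no real translation system is this crude.

EN_TO_FR = {
    "he": "il",
    "kicked": "a donné un coup de pied à",
    "the": "le",
    "bucket": "seau",
}

def word_by_word(sentence: str) -> str:
    # Translate each word in isolation: no phrases, no context.
    return " ".join(EN_TO_FR.get(w, w) for w in sentence.lower().split())

# "He kicked the bucket" is an idiom meaning "he died", but per-word
# lookup produces literal nonsense about striking a pail:
print(word_by_word("He kicked the bucket"))
# -> "il a donné un coup de pied à le seau"
# (A French speaker would also contract "à le" into "au", a context
# rule the per-word lookup cannot apply.)
```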

Which doesn't solve anything either. Any time you walk into the middle of a conversation, you are dead in the water, because you weren't there for the entire context. A counselor can tell you this: they come into the middle of a stressful situation, and the only way to resolve it is to make everybody go back through the entire "conversation". Bear in mind that the parties have already hardened into their positions, or they wouldn't need a counselor in the first place. Don't blame the counselor.

Because the counselor also has to deal with unreliable witnesses. Everybody tailors the story to favor themselves. The distortion ranges from honestly misunderstanding what the other party said (which is part of the problem), to reshaping the narrative over time to suit themselves, to lying outright to look good. A counselor has to separate the signal from the noise.

AI can't do that, and that's why it lies. Separating signal from noise is a matter of experience. High-schoolers tend not to have it; plenty of college graduates don't have it. I know of college professors who don't have it and who pass along urban legends because they can't tell they're false.

And most organizations that want to use AI are just as clueless. The companies that thought it would help them do customer service had no clue what went into customer service, and they have screwed up big time. A media outlet was bragging about shifting more of its work to AI, which will mean publishing false information, because AI can't evaluate sources properly. A professor was bragging about using AI, which meant an idiot child was going to be running his college courses. It gets worse, but I think you've seen enough.

We're at the Peak of Inflated Expectations on Gartner's five-stage hype cycle. We're finding out who is absolutely clueless about how to do their jobs, as much as we're finding out that AI is an idiot child.

I'm just saying....