Wednesday, February 25, 2026

Knitting -- the unfixable mistake?

So I'm replacing a jumper/sweater set that I did in cotton years ago, because the cotton got frowsy.

The jumper was a no-brainer, I knitted it in the round as usual.

The sweater was agony. Yes, I used steeking at the sleeves so that when I was done knitting I was done.

But I made mistake after mistake and after I bound off the neck, I found I had screwed up on the sleeve side of the back. 

What you do in a case like that depends on whether you're on a deadline, how much of a perfectionist you are, whether you're knitting for sale, and other factors.

I had no deadline. The sweater was for personal use. And I had an ace up my sleeve that you probably know about if you have been knitting long.

It turns out I had fluffed only 5 stitches in the entire body of the sweater. So I pulled out a nalbinding needle that functions beautifully as a tapestry needle, being straight and thin instead of curved and chunky. 

And I used duplicate stitch to turn 3 blue stitches white and 2 white stitches blue. Now it looks perfect.

By the way, I am using Norwegian Peer Gynt yarn by Sandnes Garn in this project; I got it from Wool and Company, a US small business. The pattern is from the Dale Garn Tradisjon 267 book, which I downloaded years ago while it was still free on the Dale Garn site, and I wanted to try Peer Gynt with it. It is pretty much a heavy sport weight, comes in 60 colors, and makes a very warm garment in classic Norwegian patterns. 

Other sites carry Peer Gynt. One of them, LindeHobby, would not accept my Discover card, so I abandoned my cart and told the company why. Always check the bottom of the home page to see if they accept your card. A lot of yarns are carried by more than one vendor. If you can avoid getting emotionally invested in a product or company, you won't have to worry about your card.

Tuesday, February 17, 2026

Fact-Checking the Torah -- DH and the KJV

I've been reading Thomas Costain's series about the Plantagenet kings of England on Internet Archive. I'm part of the way into the fourth book and suddenly came across information showing why the KJV is absolutely the worst possible source for the Documentary Hypothesis to rely on. WikiSource, LISTEN UP.

It seems that King James I did exactly what the forged Aristeas letter claims Ptolemy did to produce the Septuagint: divide the Jewish Bible into sections, assign them out to groups of scholars who supposedly knew Hebrew and Greek (which they didn't, as you know if you have read my threads on Biblical Hebrew and Classical Greek), and put them to work. Each "translator" produced his own work, and then they agreed on what to use out of each man's translation. According to the Aristeas forgery, the Septuagint scholars didn't have to do this last step because they all produced the same result. 

We don't know what sources Costain used, one of several problems with his work. We don't know if his source repeats urban legends that took root in the Aristeas letter. We do know that the KJV repeats one major error in the Septuagint, creating a character out of a construct-state phrase in Genesis 26. We do know it copies the Septuagint error in Isaiah 7, turning "young woman" into "virgin". We do know that it copies Stephen Langton's chapter divisions, which split the narratives and create false impressions.

If anybody out there still thinks DH is worthwhile, you're just not paying attention. When I started posting about DH I frequently asked its fans to ante up sources. That was 8 years ago, and I have discovered more ammunition against it in the meantime. Start here and find out what you missed.

http://pajheil.blogspot.com/2017/07/fact-checking-torah-structure-of-torah.html

Wednesday, February 4, 2026

I'm just saying -- rethinking it

I loved the TV show Murphy Brown. It gave me lots of laughs. I found episodes on Internet Archive and one of them was a typical TV show let-down.

A high school graduate joins Murphy in her home and proceeds to be the teen from hell. She wants to be a journalist just like Murphy, but at the end, she says she doesn't want to go to college. Murphy has no experience with kids and freaks out.

The answer is to make the teen explain just how she plans to get to Murphy's level without college. "Jane" has no experience with professional writing. In her brief stay, she insists on doing anything she wants regardless of the effects on others. This includes smoking around a pregnant woman.

Murphy needed to explain to her that no news organization is going to pay an absolute newcomer for any job without evidence that they can do it. That's what college journalism is about: learning to write; learning to find and use sources; learning to present information effectively; learning which stories are important; learning to dig instead of give up. Her college class assignments and work on the college newspaper might get Jane an interview. Lacking them, she was dead in the water. 

Jane also needed to know that no story is news after its time. You have to beat the news cycle, not trail it. When your editor gives you a deadline, you have to meet it. No excuses. 

And you have to work in a people environment. If you walk into an interview with a non-smoker or somebody who gets sick from cigarette smoke, you can't light up. With coming bans on workplace smoking, Jane was about to hit a brick wall of employment.

The same thing faces high school kids now. AI is taking over scutwork. You have to come into an interview trained to do the job, and also explain why you can do a better job than AI. The most important thing is knowing how to back up your work with information, and AI is lousy at this; it will take any source that suits your keywords. That is why it lies to me on a regular basis and contradicts itself. If you don't understand how bad Wikipedia articles are, and how this comes from the sources used in the articles, you will never be better than AI.

Second, you have to deal with complexity. A recent article showed that using AI in customer service caused problems; it didn't fix them. It couldn't handle nuance or inflection, or customize answers, because it relied on information that didn't fit the situation. Using AI in online chat devolves into long transcripts because the AI can't actually understand the question; it can only deal with keywords.

It's the underlying problem of machine translation, which I think I've posted about before. Computer translation was promised in the 1980s, and it has never happened because nobody has been able to program a computer to understand idioms. Idioms are phrases whose meanings go beyond the actual words. They are also used in context, and computers cannot handle context. Actually, damned few humans can handle context, which results in those social media fuck-fests where people call each other names. At some point in the thread, somebody may say "read the thread".

Which doesn't solve anything either. Any time you walk into the middle of a conversation, you are dead in the water because you weren't there for the entire context. A counselor can tell you this; they come into the middle of a stressful situation and the only way to solve it is to make everybody go through the entire "conversation". Bear in mind that the parties have already gelled into their positions or they wouldn't need a counselor in the first place. Don't blame the counselor.

Because the counselor also has to deal with unreliable witnesses. Everybody tailors the story to favor themselves. It goes from being unable to understand language, and so unable to understand what they themselves said as part of the problem, to reshaping the narrative to suit themselves as time goes on, to lying deliberately to make themselves look good. A counselor has to separate the noise from the signal.

AI can't do that and that's why it lies. Separating noise from signal is a matter of experience. High-schoolers tend not to have it; plenty of college graduates don't have it. I know of college professors who don't have it and pass on urban legends because they can't tell they're false.

And most organizations that want to use AI are just as clueless. The companies that thought it would help them do customer service had no clue what went into customer service, and they have screwed up big time. A media outlet was bragging about going more to AI, which will mean publishing false information because of AI's inability to evaluate sources properly. A professor was bragging about using AI, which meant an idiot child was going to be running his college courses. It gets worse, but I think you've seen enough.

We're in the inflated-expectations phase of AI on the Gartner hype cycle. We're finding out who is absolutely clueless about how to do their jobs, as much as we're finding out that AI is an idiot child. 

I'm just saying....