You don’t need to know anything because AI exists?

Oh my, this is the least awesome thing I have seen anyone say about AI, and it is a very common thing lately. It seems to be one of the latest sound bite/clickbait/garbage-post premises to get on my nerves. This LinkedIn post from Scott Hanselman was the inspiration to write this post (it states some of the same things in a shorter form, but when I start shouting at the sky, I need more space than a LinkedIn post affords).

Don’t get me wrong, AI is a useful tool. It is part of my process of editing my articles. I use it to come up with ideas for blogs, presentations, travel plans, etc. But even if it did everything for me, I would still need to know what it is doing and why. Especially for the important things I (and it) will be tasked to do.

What’s the problem?

AI is not deterministic. It makes mistakes. It may not always be this prone to mistakes, but it definitely will be for the foreseeable future. Probably the only thing that makes it truly human-esque is the fact that it makes mistakes.

The wildest thing is that AI will often admit it is wrong if challenged. And it doesn’t even have to be wrong. This probably breaks the human learning analogy more than anything else, because humans aren’t convinced they are wrong so easily. If you want evidence, leave me a comment disagreeing with my point in this blog.

Determinism

Why does determinism matter? Grab any calculator. Add 1+1. There is far less than a 1 in a gooblatrillion chance it will not say 2. And it has that same minuscule chance of getting wrong any of the basic math it has been specifically programmed to do. It is so unlikely that we consider it a deterministic device: ask the same question, get the same answer. (Yes, there have been cases where it has happened, but usually it is user error. Usually the only way a deterministic function fails is faulty hardware, or hardware that has been maltreated. You know, left-in-a-hot-attic-for-20-years-with-the-same-batteries type stuff.)

AI, being non-deterministic, may say 1+1=3. You will likely never see that incorrect answer to a question that simple, but it COULD occur. When you see a funny answer, you can ask “are you sure”, and it may say “sorry, I got that wrong”, or it might double down because that is the only answer it sees. It is the fact that there can be multiple answers that makes it so complicated. How do you trust it if it can answer wrong?
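To make the contrast concrete, here is a minimal Python sketch. The “AI” half is just a toy random sampler I made up to illustrate the idea of non-determinism; it is not how any real model actually decides anything.

```python
import random

def calculator_add(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a + b

def toy_nondeterministic_add(a, b):
    # A made-up stand-in for a non-deterministic answerer: most of the
    # time it returns the right answer, occasionally it samples a wrong one.
    return random.choices([a + b, a + b + 1], weights=[0.95, 0.05])[0]

print(calculator_add(1, 1))            # always 2
print(toy_nondeterministic_add(1, 1))  # usually 2, but not guaranteed
```

Run the first function a billion times and you get 2 a billion times. Run the second one enough times and eventually you get a 3.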

Testing AI

So, being programmers, we try to test it. Which leads to some interesting tests. Some of the silly tests people try are asking questions like “how many Rs are there in the word garlic”, or perhaps strawberry. Which is apparently the best way to decide if a tool can help you write code. And oddly, sometimes it will give wrong answers. For example, I just asked ChatGPT on my phone:

How many Rs in garlic

It replied:

There are zero instances of the letter “r” in the word “garlic.”

So of course I ask:

Sure about that?

And delightfully it replied

Yes. I am certain.
The word “garlic” is spelled g-a-r-l-i-c, which contains one letter “r.”
So the correct answer is: one “r.”

Partly due to the way AI works, but also due to the ambiguity in the question, it will often give the wrong answer, or the right answer while still affirming the original wrong one. Okay.
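For contrast, counting letters the deterministic way is a one-liner, and it gives the same answer every run; a minimal Python sketch:

```python
word = "garlic"

# Count only lowercase "r" characters (case-sensitive).
print(word.count("r"))          # 1, every single run

# Count "R" or "r" (case-insensitive), in case that was the question.
print(word.lower().count("r"))  # still 1, every single run
```

Same inputs, same answer, forever. The only judgment call left is deciding which question the code should be answering in the first place.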

It is an interesting test because I can think of 3 smart aleck answers I might give that are actually true:

  • 0 – there are no instances of “Rs” in garlic (only instances of the letter r)
  • 0 – no capital R characters
  • 1 – “arr, my tummy hurts after eating garlic so one”

And as a reasonable person (assuming I wasn’t in a smart aleck mood), I would say 1, because I actually knew what was meant.

What if the answer given by the AI is anything other than 1? I look at the answer, check to see if it makes sense, and if it isn’t 1, I try to figure out why it gave me the wrong answer. But since we are rarely asking questions where we can spot a wrong answer so easily, it gets more complicated.

This post is not about spelling

The supposition this editorial is about says: what if I don’t know whether an answer is correct or not? Like I don’t actually know how to spell garlic? What then? I just take the answer as true and say, “Hmm, I thought there were 2 Rs in garlic, but it must be 0, so ‘galic’ then.”

Now let’s expand that to letting it write something larger. Like your accounting system. That accounting system seems to work, but it has a few issues:

  • It rounds off in a way that the tax authorities do not approve of (see the sketch after this list)
  • When you log in, it saves a list of people who have logged in. Along with the passwords they used. In a non-secure location. On the internet. On the drive of a known hacker.
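To make the rounding bullet concrete, here is a minimal Python sketch of one way this commonly goes wrong: naive float rounding versus explicit decimal arithmetic with a stated rounding rule. The half-up rule is just an example I picked; which rule a given tax authority actually requires is up to them.

```python
from decimal import Decimal, ROUND_HALF_UP

line_item = 2.675  # a price that needs rounding to cents

# Naive float rounding: 2.675 cannot be represented exactly in binary
# floating point, so this prints 2.67, not the 2.68 you might expect.
print(round(line_item, 2))  # 2.67

# Explicit decimal arithmetic with a stated rounding rule (half-up here).
amount = Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(amount)  # 2.68
```

If you never knew rounding rules even differ, AI-generated code that quietly does the first version would look perfectly fine to you.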

So if I don’t know how to code, what next? Not unlike utilizing junior people in a company, you have to double check the work they complete and fully understand it. And sure, you can use AI to test AI output, but that could get crazy when one is arguing that garlic is spelled “garrlic” and another system says it is “galic”.

Exaggerations? Very much so. But so is the idea that someone is stealing personal information on the internet and selling it, which then allows them to steal more of your information and trick you into giving up your credentials to your bank account…allowing them to steal all of your assets.

In Conclusion

If you are using AI to give you answers that matter, you definitely need to understand and know how things work before using it. Just like in school, where they teach you to understand how and why things work before they teach you the easy way to get answers.

Last analogy. Don’t be like the people who, when GPS devices first came out, drove their cars into lakes because the GPS had wrong information. “Well, that looks like a lake, but the computer is telling me to keep driving 50 MPH. It must know more than me.” Splash.

We didn’t need to know exactly how the GPS works, but we did need to know the basics of how maps are refreshed (and hence aren’t always up to date). Knowing this, we learned quickly to verify the output and why it can be wrong. Because sometimes when I look for directions in Cleveland, Tennessee, I get directions for one of the other Clevelands. And it is only because I know that stuff in Cleveland, Tennessee is less than 15 miles from my house, not 315 miles away, that I know to ask again.

Culver’s for lunch sounds great until you end up having to eat 4 meals driving to get to one (and also passing quite a few on the way).

So when your or someone else’s life, information, and/or livelihood hangs in the balance, better safe than sorry.
