My personal reflections on the words "LLM", "AI" and my perceptions

Something has been on my mind and I thought I’d share it.

Firstly, this is more of an inward problem: I struggle to acknowledge LLMs as “AI”, even though by the current definition I am wrong.

When I picture AI, I often have these grandiose ideas of hyper-intelligent machines that can seamlessly understand and interact with humans. Well, guess what? LLMs aren't quite there yet, and I'm not afraid to say it.

Even as a human I don’t understand what “intelligence” is, so I will let sleeping dogs lie.

But to me the problem is the illusion of understanding. This point goes both ways: I want to understand LLMs a lot more, so it could very well be my own misunderstanding. However, an LLM might give the impression that it's following along just fine, but if you dig a little deeper, it becomes painfully obvious that its understanding is limited. Talk about frustrating. It's hard to see it as a true AI when it keeps stumbling like that. This is the part I described as being a Lingatrix.

And don't even get me started on the futuristic expectations. You know, the sci-fi movie-type AI that flawlessly interacts with humans and has a physical presence? Yeah, that's not an LLM. It's just a disembodied, text-based AI that tries its best, but still has a long way to go.

Now, if I have to admit it, an LLM technically qualifies as AI. It's "taught" using deep learning techniques over countless hours, which, I suppose, is similar to a human learning at an accelerated pace without breaks. And yes, errors occur, leading to deviations in responses, much like with a human. But that doesn't change my overall opinion.
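
For the curious, here is a minimal, purely illustrative sketch of what that "teaching" amounts to: a toy next-token prediction training step in PyTorch. Every size and name here is made up, and random tokens stand in for real text; actual LLMs do essentially this, just at an unimaginably larger scale.

```python
# A toy "next-token prediction" training step -- purely illustrative,
# with hypothetical sizes and random tokens standing in for real text.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# A deliberately tiny "language model": embed a token, predict the next one.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 16))   # fake "text"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # learn to predict each next token

logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()    # the prediction error is what drives the weight updates
optimizer.step()   # repeat over "countless hours" and you get an LLM
```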

Lastly, even if we accept LLMs as AI, I still feel as if this definition misses the mark. Take Google Search, for example. They claim to have added various AI features over the years, but everything's hidden behind the search functionality. Nobody ever calls Google Search an AI, and that's exactly how I feel about LLMs. They might technically meet the definition, but it's just not "correct" in my eyes.

I know AI technology is still evolving, and maybe LLMs will get better over time. But for now, they're just not living up to their full potential. I guess I'll have to wait and see whether they eventually meet my sci-fi-oriented vision of them, or whether I become more accepting of the term over time, such as if there is more progress with LLMs.

So why am I so unsettled by this?
The sci-fi AI realm has apparently evolved into the so-called "artificial general intelligence" (AGI). Shocking, isn't it? A term I never even knew existed before late March/early April 2023 is suddenly redefining what I thought "AI" meant. It feels like the goalposts have moved. I can't help but feel a bit miffed about this revelation, these confusing terms, and the gap between the “technical truth” and the “expectation”.

In conclusion, I'd like you to take a moment and ponder this little inquiry, if you will. How does your human mind process the following comparisons?
Google Search: AI,
ChatGPT: AI,
and "Bing Chat": AI. (This trash.)
Does that resonate with your inner sensibilities? Or do you, perhaps, have other thoughts on the matter?