The Problem with AI


The root problem with AI is actually not a new one. By analogy: if someone conveys their thoughts in an authoritative fashion (maybe they tell you they have really thought about it and researched it), the natural tendency when interacting with that person is to believe them. Why question it, and risk ruining whatever respect they have had for you until this point?

The authoritative claim

Quickly enough, interacting with someone who claims to be an “expert” on a topic will reveal the truth and extent of that claim. Even with little knowledge of the topic, people have a sense for certain kinds of BS.

“Data centers will work really well in space, the low temperatures and access to solar in space go a long way to making it a reality.”
“Wouldn’t the generated heat get trapped near the machinery?”
“Well maybe just add a fan and it should blow away the heat… I don’t know, just Google it.”

In short order, a line of questioning will reveal a “headlines” level of knowledge. But from the outset, the information is presented as though it rests on incontrovertible understanding.

AI presents information similarly. Its prime directive is to answer the question you give it, which gives the tool a bias toward a “final response” over an “interactive response”. It needs to fill the vacuum with something that immediately satisfies the query, and unless you give it gibberish, it will respond with some well-constructed answer.

AI does not have empirical evidence, and it does not understand the boundaries of information, where human experience and subtleties lie. AI is not street smart, only book smart, where the books are whatever content is digitally available or referenced somewhere. Nuances are lost, and anyway it only has to generalize enough to make the answer sound right.

The problem

At least with humans, the questioner can more readily identify platitudes, misinformation, and the like. All it takes is a little curiosity to continue the line of questioning, and the boundary of the answerer’s knowledge will become apparent.

Still, we humans fall into the trap of argument from authority too often, especially within echo chambers where everything sounds authoritative because it’s all agreeable.

AI is the echo chamber taken to a new level: it will typically find some affirming answer to a question, especially one laced with confirmation bias. You will not hear from it how it arrives at certain understandings, or how to learn more about a topic.

And people latch on to that - if AI is the latest technological advancement in summarizing all human knowledge, and it gives authoritative and thorough answers, how can it be wrong? That question is only ever asked rhetorically, and that’s the very problem: people have forgotten how to question, to be curious, and to think things through.

With some self-affirming answer at anyone’s fingertips, answers to questions about the world have turned into a dopamine hit that funds the investors in the technology supplying it. Q&A forums, search engines, and commentaries are taking a hit, because information has become a commodity, both in availability and in character - there’s no more discussion to be had.

The accuracy of responses from AI is a problem, and I think it could be improved a great deal over time. But that is not the primary problem: the main issue is that AI is devaluing human thought.

Software engineering

In software engineering this problem is present as well. I will speak to this specific area from my personal experience (and you should of course feel free to disagree with the implications!).

The question is always something like:

why not use the tools available to get the job done as efficiently as possible?

After all, engineers have long been using IDEs that autocomplete or auto-lint code on the fly. Is AI really so different in concept?

My answer is yes: previous tools operated at the syntactic and semantic levels of the programming language (for example, syntax highlighting, linting, formatting, spell-check, variable or module references, and so on). Autocomplete blurs the line a little, but ultimately it does not try to solve a specific logic problem; rather, it solves the simpler problem of repeating spans of text.

AI tooling started out as “fancy autocomplete” for programming, and even early on it began influencing the thought processes of programmers. Why go through the trouble of replicating a class by hand when the AI has already generated the class definition for you?

Programmers then, and especially now, have given up some of the thinking about their own work. The edge cases you might catch while writing code line by line are now lost to a generalized solution to the problem. The only problem programmers thought they had was the one described in a dev ticket; they forgot about learning and being curious about the boundaries of the problem space, focusing instead on improving code velocity.
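As a small, hypothetical illustration (the function and scenario are invented for this post, not taken from any real ticket or tool), even a trivial averaging helper raises questions the moment you write it yourself:

def average_score(scores):
    # The generalized, ticket-sized answer: sum and divide.
    # Writing it line by line surfaces the edge cases:
    #   - What should an empty list produce: 0.0, None, or an error?
    #   - Can this data source contain non-numeric entries?
    if not scores:
        raise ValueError("no scores to average; the caller must choose a default")
    return sum(scores) / len(scores)

print(average_score([80, 92, 75]))   # 82.33...
# average_score([]) raises ValueError instead of a surprise ZeroDivisionError

A generated snippet may well return sum(scores) / len(scores) and move on; the decision about the empty list exists only if someone stops to ask the question.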

AI has devalued programmer thought.

Incentives

I think the best way to put AI in perspective and get people thinking for themselves again is to align the incentives - the more curious we are as a people, the more likely we are to uncover the rudimentary constructs AI bases its authoritative claims upon. Ask questions, get people engaged in learning. If learning and depth of understanding are made important, the rest will follow.

If we start second-guessing AI and checking its premises, and more importantly exploring the boundaries of a topic to further our understanding, we will realize that AI does not have all the answers, and that real answers often come from informed discussion and experimentation.

As mentioned at the beginning of this article, the true problem here is not new - we will continue seeking convenience wherever it’s available. Maybe the AI-intensive era will become a transformative time for humanity, as we come to terms with a problem that has become more prevalent today than at any point before.

As a cornerstone of the evolutionary advancement our species represents, critical thinking must prevail.