Illustration created by the Author and MidJourney. All rights reserved.
Today I encountered an article on Medium. You can find it at this link: https://medium.com/towards-data-science/the-smarter-way-of-using-ai-in-programming-0492ac610385. The article is behind a paywall, but there is also a link to it on Substack.
Upon encountering the article, I knew I had to respond. I assume the author felt it would be useful for some segment of Medium's audience, and I can see why. But as a developer with substantial programming experience, I cannot support any article that promotes using an AI tool to create code, and I feel a responsibility to present the other side of the coin.
The article makes many assumptions about the usefulness of LLM-based AI code writers. These assumptions are a continuation of the fantasy that LLM-based AI will someday eliminate the programming task for humans. Based on the questions I have seen, there is concern that LLM-based AI code writers will be able to write code at the level of expert coders. The very idea is still nothing more than a dream, and that dream will not be fulfilled by the present state of technologies like ChatGPT.
As with most OpenAI technology, there is no attempt to verify, validate, or otherwise test anything that their LLM machinery does. It is a nice feeling to tell stories about what might happen in the future, but the reality of this idea is still pretty far away. The time frame is concurrent with achieving AGI, and Sam Altman's claim that AGI will be achieved in two to three years must be fentanyl-induced, because I cannot see any rationale underlying this prediction.
Let me begin by putting you on notice that:
If you must have a way to use AI "smarter" to write programs, I am sorry, but you are chasing an imaginary pot of gold at the end of an invisible rainbow. The pursuit itself is a hallucination.
I will reiterate what I said in [a previous article](https://medium.com/@infoac.accsys/yes-i-started-this-story-with-a-direct-attack-at-anyone-that-thinks-it-is-okay-to-take-code-0932a61b8913), written in April 2024:
NO ONE SHOULD USE ANY AI TO PRODUCE THE CODE FOR ANY COMPUTER PROGRAM.
Let me spell out some very straightforward reasons. Fundamentally, I disagree with the writer of that article, so my purpose is to present an opposing view based on my own experience with ChatGPT and Gemini, and on the personal expertise I have gained through many years of software development. In what follows, I use "you" and "your" to refer to the writer of the article and "I" to refer to the writer of this commentary.
1. The generated code can be full of errors. Actually, "riddled with errors" is not strong enough: in the worst case, 100% of the code is incorrect, meaning every line of generated code contains at least one error.
2. The code-writing tool can only produce code that resembles code already in its database. More than this, the representation of the code in its database has ABSOLUTELY NO PROVISION FOR UNDERSTANDING. In other words, the mechanism that is writing code is like a parrot. "Polly want an IF-STATEMENT?"
3. The code you get could contain sophisticated hacks, and you would not know it: if you have to rely on an AI to generate your code, you are probably not a skilled enough programmer to discern them.
4. I have experimented with ChatGPT and Gemini and have asked each of these AI-based tools to create complex systems. The truth is that you cannot get an AI to write a sophisticated program. A basic reason is that such programs are typically not available in the public domain; I doubt a company like Oracle would license its ERP system for FREE. The code for very complex systems cannot be plagiarized by scraping if it is not in the wild. If it has leaked into the wild, all bets are off. There are other reasons why this is true, which I won't elaborate on here.
5. If the only code you can produce with an AI is error-ridden, hack-prone, and simplistic, then how will you create code that solves any worthwhile problem? It can't be done.
6. I don't care if an AI can solve problems in programming tests and programming competitions. These results prove nothing, other than that the very code the AI generates has been found in some repository somewhere; the LLM is regurgitating what is available to it, i.e., what it plagiarizes. The code resides in the public domain, where scrapers can purloin it. In any event, don't you think you would learn more by working through the code yourself? Then again, if you are lazy enough to have an AI write your code, you are probably not interested in learning more about coding or improving your programming ability. At least that is the way I see it.
7. If you came to me for a job, I would not hire you. Do you know why? Maybe you should ask the AI why. I hope no programming manager anywhere in the world would hire, for a programming position, a person who has gone on record promoting LLM-based AI for programming.
8. I resent that you do not acknowledge the ethical problems with using an LLM-based AI code-writing tool. Among other things, you probably don't understand anything about the effort required to create sophisticated code.
9. If you are willing to have an AI write code for you, then how do I know you are not using an AI to write your article? Plagiarizing seems to be in your blood.
10. A more bothersome limitation of LLM-based programs is that they have no capability to answer introspective questions or explain themselves. A fundamental requirement of AGI is that any computational simulation of it be able to do both.
11. These two capabilities (being able to explain their outputs and to answer questions about themselves) were long considered important parts of earlier AI technology. LLM-based AI does not explain or answer such questions because its so-called knowledge representation is nothing more than a sequence of meaningless numbers. Even more bothersome is the fact that LLM-based technology producers apparently have no way to test their software (the LLM software) for proper operation. Instead of doing the most basic steps of software development, they skirt the issue and simply accept outputs as proper tests. That is one of the reasons why the hallucination problem exists. I will leave you with some terminology, which may sound like mumbo jumbo: LLM-based technology is NON-DETERMINISTIC, whereas conventional (non-LLM) software IS DETERMINISTIC. If you are a computer scientist and have no idea what this mumbo jumbo means, look it up.
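The determinism point can be made concrete with a toy sketch (my own illustration, not from the original article, with made-up probability numbers): greedy decoding over a fixed next-token distribution always returns the same token, while temperature-style sampling, which is what chat interfaces typically use, can legitimately return different tokens for the exact same input.

```python
import random

# Toy next-token distribution (illustrative numbers only; a real LLM
# derives these probabilities from billions of learned parameters).
NEXT_TOKEN_PROBS = {"IF-STATEMENT": 0.5, "cracker": 0.4, "loop": 0.1}

def greedy_pick(probs):
    """Deterministic decoding: always return the most probable token."""
    return max(probs, key=probs.get)

def sampled_pick(probs, rng):
    """Non-deterministic decoding: draw a token from the distribution."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding is reproducible: same input, same output, every time.
assert greedy_pick(NEXT_TOKEN_PROBS) == greedy_pick(NEXT_TOKEN_PROBS)

# Sampling is not: repeated runs over the same input produce a *set* of
# different outputs, which is exactly why identical prompts to a chatbot
# can yield different code each time.
rng = random.Random()
samples = {sampled_pick(NEXT_TOKEN_PROBS, rng) for _ in range(1000)}
print(sorted(samples))
```

This is only a caricature of decoding, of course, but it shows why testing such a system is so awkward: there is no single "correct" output to assert against once sampling is involved.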
I think this commentary about your article is complete. I know for a fact that no LLM-based AI technology user has any idea what AI is, nor do they wish to have an informed, accurate conceptualization. They have drunk the Kool-Aid and are too far gone for any hope of redemption regarding my remarks about using these code-writing tools.
This article was prepared by an annoyed Dr. Randy M. Kaplan. Dr. Kaplan has 50+ years of experience in AI-related topics, an M.A. and a Ph.D. in Computer Science/AI, and a B.S. in Mathematics. He has 60 years of programming experience and has been an academic for 30 years. He has also spent considerable time actually IN INDUSTRY, not in the rarefied atmosphere of academic towers, with many years in the trenches of actual development and implementation.
Closing Comments
I hope you enjoyed reading this article. I enjoyed writing it. If you liked it, please leave a comment. If you can clap for it, please do so. Even better, please subscribe to this newsletter. I am now publishing on Substack, Vocal, and Medium and am in the process of moving all of my articles to these platforms. If you have friends, please pass this article on to them. Have a great day, and thanks again for reading.
You can reach me at therenguy@gmail.com. You don't have to join anything or pay anything (right now - maybe eventually).
🥰🥰🥰