Will ChatGPT replace developers?

Updated on June 03, 2024

AI is gaining momentum again with the recent release of ChatGPT, a natural-language chatbot that people are using to write emails, poems, song lyrics, and college essays. Early users have even used it to write Python code, as well as to reverse engineer shellcode and rewrite it in C. ChatGPT has given hope to people waiting for practical AI applications, but it also makes you wonder whether it will displace writers and developers the way robots and computers have displaced cashiers and assembly line workers, and may one day displace cab drivers.

It's hard to say how advanced AI's writing capabilities will become as the technology absorbs ever more examples of our prose online. But I expect its programming capabilities to remain quite limited. In the end, it may become just another tool in the developer's kit, useful for problems that don't demand a software engineer's critical thinking.

ChatGPT has impressed many people because it convincingly mimics human conversation and sounds literate. Developed by OpenAI, the creator of the popular AI image generator DALL-E, it runs on a large language model trained on vast amounts of text scraped from the Internet, including code repositories. It uses algorithms to analyze that text and fine-tune its training, so that it answers users' questions in complete sentences that sound like they were written by a human.

But ChatGPT has drawbacks: the same limitations that prevent it from being trusted for written content also make it unreliable for creating code. Because it is driven by data rather than human intelligence, its sentences may sound coherent yet fail to provide critically informed responses. It can also reproduce offensive content, such as hate speech. Answers may sound reasonable while being highly inaccurate: asked which of two numbers, 1,000 and 1,062, is greater, ChatGPT will confidently answer that 1,000 is greater.

The OpenAI website provides an example of using ChatGPT to debug code. But because its responses are generated from previously seen code and it cannot replicate human quality control, it can produce code riddled with bugs and errors. OpenAI has acknowledged that ChatGPT "sometimes writes plausible but incorrect or meaningless responses," which is why it should not be used directly to create programs.
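
To make that risk concrete, here is a small, hypothetical Python snippet of our own, not taken from OpenAI's examples, illustrating "plausible but incorrect" code: it looks reasonable and runs without complaint, yet hides two classic bugs.

# Hypothetical example of plausible-looking but buggy code; the names are
# ours, not output from ChatGPT or OpenAI's documentation.

def moving_average(values, window=3, history=[]):
    """Return the average of the last `window` readings."""
    history.extend(values)       # Bug 1: the mutable default argument means
    recent = history[-window:]   # `history` silently persists across calls.
    return sum(recent) / window  # Bug 2: divides by `window` even when fewer
                                 # readings exist, skewing early results.

print(moving_average([10]))  # 3.33... (one reading wrongly divided by 3)
print(moving_average([10]))  # 6.66... (state leaked in from the first call)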

The lack of reliability is already causing problems for the development community. Stack Overflow, the Q&A site coders turn to for help writing code and troubleshooting problems, has temporarily banned ChatGPT-generated answers, stating that the sheer volume of plausible-sounding responses overwhelms the quality control its human curators can provide.

"In sum, because the average percentage of correct answers from ChatGPT is too low, posting answers created by ChatGPT does substantial harm to the site and to users who ask or seek correct answers."

Bugs in the code aside, because ChatGPT, like all machine learning tools, is trained to produce output that matches its training data (in this case, text), it lacks the understanding of the human context of computing needed to program well. Software engineers must understand the purpose of the software they create and the people who will use it. Good software cannot be built by stitching together fragments of scraped code.

For example, ChatGPT cannot resolve the ambiguity in even simple requirements. A person immediately sees that if one ball bounces and returns once while another bounces and returns twice, the second ball has traveled farther; ChatGPT struggles with this kind of nuance, and grasping it will be essential if such systems are ever to replace developers.

It also has problems with basic math, such as when asked which of a negative and a positive number is greater. ChatGPT answers confidently, but it can't figure out that -5 is less than 4. Imagine your thermostat going haywire because the heating comes on at 40 degrees Celsius instead of -5, all because an artificial intelligence program coded it that way!
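
As a hypothetical illustration, ours rather than actual ChatGPT output, here is how such an ordering slip could look in a thermostat's Python code, assuming a setpoint of -5 degrees below which the heating should switch on.

SETPOINT_C = -5  # the heating should come on below this temperature

def heating_on_buggy(temp_c: float) -> bool:
    # The ordering mistake described above: the comparison is flipped,
    # so the heater fires at 40 C (40 > -5) and stays off at -10 C.
    return temp_c > SETPOINT_C

def heating_on(temp_c: float) -> bool:
    # Correct signed comparison: -10 < -5 turns the heating on; 40 does not.
    return temp_c < SETPOINT_C

assert heating_on_buggy(40) and not heating_on_buggy(-10)  # faulty behavior
assert heating_on(-10) and not heating_on(40)              # intended behavior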

Code generated by pre-trained AI also raises legal issues around intellectual property rights: the technology currently cannot distinguish code under restrictive licenses from open source code. This could expose people to license compliance risks if the AI borrows pre-written lines of code from a copyrighted repository. The problem has already prompted a class action lawsuit over GitHub Copilot, another product built on OpenAI technology.

We need people to build the software people rely on, but that doesn't mean there can't be a place for AI in software development. Just as security operations centers use automation for scanning, monitoring, and basic incident response, AI can serve as a programming tool for lower-level tasks.

To a certain extent, this is already happening. GitHub Copilot, built on OpenAI's Codex model, helps developers improve code, add tests, and find bugs. Amazon offers CodeWhisperer, a machine learning tool designed to improve developer productivity by generating code recommendations from natural language comments and existing code right in the IDE. And someone has created a Visual Studio Code extension that works with ChatGPT.
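
The comment-to-code workflow these assistants share looks roughly like the sketch below; the prompt comment and the suggested function are illustrative assumptions on our part, not recorded output from Copilot or CodeWhisperer.

# Illustrative only: a natural language comment serves as the prompt, and the
# assistant proposes a completion like the function below. The developer still
# reviews and tests whatever suggestion gets accepted.

# Prompt comment: "return the n largest values in a list, sorted descending"
def top_n(values: list[int], n: int) -> list[int]:
    return sorted(values, reverse=True)[:n]

print(top_n([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [9, 6, 5]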

And one company is already testing AI against human developers. DeepMind, a subsidiary of Google's parent company Alphabet, released its own code generation tool, AlphaCode, earlier this year. Earlier this month, DeepMind published in the journal Science, under the headline "Machine Learning Systems Can Program Too," the results of AlphaCode's simulated participation in competitions on the Codeforces platform. Headline grammar aside, AlphaCode placed in the top 54% of competitors, solving problems "requiring a combination of critical thinking, logic, algorithms, coding, and natural language understanding." The paper's abstract states, "The development of such coding platforms could have a huge impact on programmer productivity. It may even change the culture of programming by shifting human work to problem formulation, and machine learning will ... be responsible for generating and executing codes."

Machine learning systems grow more advanced every day, but they cannot think the way the human brain does, something that has held true through more than 40 years of artificial intelligence research. These systems can recognize patterns and improve performance on simple tasks, but they can't consistently create code as well as humans can. Before we let computers engage in code creation at scale, we should expect systems like AlphaCode to reach at least the 75th percentile of entrants on a platform like Codeforces, though I fear that may be too much to ask of such a system. In the meantime, machine learning can take on simple programming problems, freeing the developers of tomorrow to think about more complex issues.

At this point, ChatGPT will not disrupt any area of technology, least of all software engineering. Fears that robots will displace programmers are greatly exaggerated. There will always be tasks that human developers can do and machines never will.
