ChatGPT: friend or foe for app developers?

Updated on July 16, 2024

Coding was once a time-consuming, complex, and skill-intensive task, but the world of coding has since changed dramatically. Where everything once had to be written from scratch and code libraries did not exist, the modern developer now has a whole world of technology at their disposal to make the process easier. For companies, this means development teams can produce code faster than ever before, helping them meet the growing consumer demand for fast, high-quality applications.

The latest technological advance to further accelerate coding is artificial intelligence, and more specifically ChatGPT. ChatGPT puts even more power in the hands of developers: it is now possible to auto-generate code in almost any desired programming language using simple prompts. While the adoption of ChatGPT and other AI tools in the coding industry is already well underway, it is important to stop and evaluate their cybersecurity implications. Developers must be familiar with cybersecurity best practices when using these tools to ensure that the code they create is secure. However much work ChatGPT takes on, the ultimate responsibility for code security always lies with people. For this reason, it is important to keep an eye on how developers use this technology.

AI: the next step in the evolution of coding

One of the aspects of software development that I find most enjoyable is its constant evolution. As a developer, you are always looking for ways to improve efficiency and avoid duplicate code by following the "don't repeat yourself" principle. Throughout history, people have looked for ways to automate repetitive tasks, and from a developer's perspective, eliminating repetitive code frees us to create better and more complex applications.

AI bots are not the first technology to help us in this endeavor. On the contrary, they represent the next stage in application development, building on previous advances.

How much should developers trust ChatGPT?

Before the advent of artificial intelligence tools, developers searched for code solutions on platforms like Google and Stack Overflow, comparing multiple answers to find the best fit. With ChatGPT, developers specify the programming language and functionality required, getting the answer that the artificial intelligence tool thinks is best. This saves time by reducing the amount of code developers need to write. By automating repetitive tasks, ChatGPT allows developers to focus on higher-level concepts, resulting in cutting-edge applications and faster development cycles.

However, there are challenges to using AI tools. Unlike the collective community of software developers, they provide a single answer without validation from other sources, so developers need to verify any AI-generated solution themselves. In addition, since the tool is still effectively in a testing phase, code provided by ChatGPT should be evaluated and double-checked before it is used in any application.
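One practical way to treat a generated answer as untrusted is to put it behind a few tests before it goes anywhere near production. The sketch below is purely illustrative: is_valid_email stands in for whatever function ChatGPT might return for a prompt like "write a Python function that validates an email address", and the checks are the kind of minimal validation a developer could add before merging it.

```python
import re
import unittest

# Hypothetical example of code as it might come back from a ChatGPT prompt.
# Treat it as untrusted input to your codebase until it is reviewed and tested.
def is_valid_email(address: str) -> bool:
    # Simplistic pattern typical of generated answers; it accepts some
    # invalid addresses and rejects some valid ones.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

class TestGeneratedEmailValidator(unittest.TestCase):
    """Minimal checks to run before adopting the generated code."""

    def test_accepts_common_addresses(self):
        self.assertTrue(is_valid_email("user@example.com"))

    def test_rejects_obviously_broken_input(self):
        self.assertFalse(is_valid_email("not-an-email"))
        self.assertFalse(is_valid_email("user@@example.com"))

if __name__ == "__main__":
    unittest.main()
```

A failing test is a cheap, early signal that the "best" answer the tool produced still needs human work.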

There are many examples of hacks that started because someone copied code without checking it thoroughly. Consider Heartbleed, a security bug in the widely used OpenSSL library that left hundreds of thousands of websites, servers, and other devices that used the code exposed to attack.

Because the library was so widely used, everyone assumed that someone else had already checked it for vulnerabilities. Instead, the flaw persisted for years, quietly giving attackers a way into vulnerable systems.
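To make the class of bug concrete, the sketch below is a heavily simplified, Python-flavored illustration of the pattern behind Heartbleed, not the actual OpenSSL code: a handler trusts a length field supplied by the client and never checks it against the real payload size.

```python
# Simplified illustration only: a heartbeat-style handler that trusts a
# client-supplied length field. The secret value is invented for the example.
SECRET_REGION = b"session-key=TOPSECRET"  # stands in for adjacent process memory

def heartbeat(request: bytes) -> bytes:
    # The request encodes a claimed payload length (2 bytes) followed by the payload.
    claimed_len = int.from_bytes(request[:2], "big")
    payload = request[2:]

    # Stand-in for memory laid out contiguously, as in the C implementation.
    memory = payload + SECRET_REGION

    # BUG: claimed_len is never checked against len(payload), so a client
    # that claims a larger length gets adjacent "memory" echoed back.
    return memory[:claimed_len]

# An honest request echoes its own payload...
print(heartbeat((4).to_bytes(2, "big") + b"ping"))   # b'ping'
# ...while an oversized claim leaks data the client never sent.
print(heartbeat((40).to_bytes(2, "big") + b"ping"))  # b'ping' plus the secret bytes
```

The fix is a single bounds check, which is exactly the kind of detail that gets lost when code is copied without review.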

This is the dark side of ChatGPT: attackers have access to the tool as well. Although OpenAI has built in safeguards to keep it from answering questions on problematic topics such as code injection, the CyberArk Labs team has already discovered several ways in which the tool can be used for malicious purposes, for example to generate polymorphic malware or simply to produce malicious code faster. And as Heartbleed shows, breaches also happen when code is adopted blindly without thorough testing. Even with defenses in place, developers should exercise caution.

The responsibility always lies with people

Given these potential security risks, there are some important rules to follow when using code generated by AI tools such as ChatGPT. First, test the ChatGPT-generated solution against another source, such as a trusted community or experienced colleagues. Then make sure the code follows best practices for granting access to databases and other critical resources: the principle of least privilege, secrets management, and auditing and authenticating access to those resources.
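As a rough illustration of what least privilege and secrets management can look like in generated database code, the sketch below assumes a PostgreSQL database accessed through psycopg2; the role name and environment variables are invented for the example.

```python
import os
import psycopg2  # assumed driver; the same ideas apply to any database client

def get_report_connection():
    """Connect with a dedicated, limited role instead of an admin account.

    Credentials come from the environment (or a secrets manager), never from
    values hard-coded into the source or pasted into a prompt. The names used
    here are illustrative, not prescriptive.
    """
    return psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["REPORTING_ROLE"],          # least privilege: read-only role
        password=os.environ["REPORTING_PASSWORD"],  # injected at deploy time
    )

def fetch_order_total(order_id: int) -> float:
    with get_report_connection() as conn, conn.cursor() as cur:
        # Parameterized query: generated code should never build SQL by
        # concatenating user input into the statement.
        cur.execute("SELECT total FROM orders WHERE id = %s", (order_id,))
        row = cur.fetchone()
        return row[0] if row else 0.0
```

If a generated snippet instead hard-codes a password or connects as a superuser, that is a signal to rework it before it ships.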

Be sure to double-check your code for potential vulnerabilities, and pay attention to exactly what you enter into ChatGPT. It is not always clear how securely the information you submit is handled, so be careful with highly sensitive data and make sure you do not accidentally disclose personal data in a way that could violate regulatory requirements.
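One practical habit is to scrub prompts before they leave your machine. The sketch below shows a very basic redaction pass; the patterns and the scrub_prompt helper are illustrative placeholders, not a complete safeguard.

```python
import re

# Illustrative patterns only: real redaction should be tuned to the data
# your team actually handles (API keys, customer identifiers, and so on).
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),           # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),       # card-like digit runs
    (re.compile(r"(?i)(password|api[_-]?key)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def scrub_prompt(prompt: str) -> str:
    """Remove obvious sensitive values before a prompt is sent to an external tool."""
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

raw = "Fix this config: api_key=sk-12345 for jane.doe@example.com"
print(scrub_prompt(raw))
# Fix this config: api_key=<REDACTED> for <EMAIL>
```

Automated scrubbing catches the obvious cases; judgment about what should never be pasted into an external tool still rests with the developer.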

No matter how much developers use ChatGPT in their work, when it comes to the security of the code they create, the responsibility always lies with humans. They cannot blindly trust a machine that is ultimately as error-prone as they are. To prevent potential problems, developers should work closely with security teams, analyzing how they use ChatGPT and making sure they are applying identity security best practices. Only then will they be able to reap the benefits of artificial intelligence without jeopardizing security.
