By Christian Prokopp on 2023-01-25
ChatGPT and similar language models have recently been gaining attention for their potential to revolutionise code generation and enhance developer productivity. I was curious to see what all the hype was about, so I decided to try it out for some development work.
After experimenting myself, it became clear that ChatGPT and similar language models are powerful tools that can help developers with a range of tasks. Their ability to solve coding problems, optimise their own output and suggest alternatives to a given solution is particularly impressive. Additionally, ChatGPT can introduce developers to libraries and tools they may not have been aware of before.
However, these tools have severe limitations. The model sometimes generates subtle errors, such as comparisons that have no effect or access to non-existent parts of data structures. It may also offer pointless improvements or duplicate steps, like applying compression twice.
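To make these failure modes concrete, here is a minimal Python sketch, not actual ChatGPT output, of the two kinds of subtle bugs described above, alongside a corrected version:

```python
# Illustrative sketch (hypothetical code, not generated by ChatGPT) of two
# subtle bug patterns: a comparison used as a no-op statement, and access
# to parts of a data structure that may not exist.

def get_first_item_buggy(response: dict):
    # Bug 1: this comparison evaluates and discards its result;
    # it guards nothing, although it looks like a check.
    response.get("status") == "ok"
    # Bug 2: assumes "items" exists and is non-empty; raises
    # KeyError or IndexError on other inputs.
    return response["items"][0]


def get_first_item_safe(response: dict):
    # Check the status explicitly instead of a dangling comparison.
    if response.get("status") != "ok":
        return None
    # Access the data defensively in case "items" is missing or empty.
    items = response.get("items") or []
    return items[0] if items else None
```

The buggy version works on well-formed input, which is exactly why such errors slip through a quick review, and fails only when the data deviates from the happy path.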
Will the rapid pace of machine learning solve these issues soon? Maybe, maybe not. Self-driving cars, for example, have been just around the corner for many years. Replacing human capabilities is not a linear problem. Car manufacturers solved most of the problem swiftly, but the last bit has proven intractable and requires further research and development on an unknown timescale.
Similarly, ChatGPT's code generation and related capabilities have matured impressively fast. But the last bit of maturity needed to make them reliable and trustworthy could be many years away. As developers are already exploring the technology, it could make its way into critical systems in finance, infrastructure, defence or healthcare. That is why it is important to be cautious and aware of these tools' limitations.
Developers need to remember to treat output from ChatGPT with caution. While it can be intriguing and helpful, they should always double-check and adapt the code. That is why colleagues, or an interactive community like Stack Overflow where developers validate code, share experiences and discuss its pros and cons, remain superior.
In conclusion, ChatGPT and similar language models are powerful tools that can be valuable for developers. However, they come with limitations that negate many of their benefits. They can be a great way to inspire, explore and learn, but they require experienced developers and good practices to be used safely. They will not replace developers and make them unnecessary; rather, they are a means to help experienced developers do their work more efficiently.
Note: I used ChatGPT with some bullet points to generate the first draft of this post. That was helpful, but the number of edits needed to get to something worth posting was surprisingly high. The experience feels similar to using it as a code copilot: it looks good at first glance, but the devil is in the details.
Christian Prokopp, PhD, is an experienced data and AI advisor and founder who has worked with Cloud Computing, Data and AI for decades, from hands-on engineering in startups to senior executive positions in global corporations. You can contact him at firstname.lastname@example.org for inquiries.