Freelance question via Mastodon:
have you tried using an LLM coding tool to see how good/bad the code it generates is?
I have tried using AI / LLM coding tools in a very limited sense, by asking Copilot to write code for certain scenarios to see what it comes up with. This was just an exercise to test the state of the market, rather than for any code that would be deployed.
For example, I asked Copilot to write me a secure login system using PHP. It correctly used password hashes and bound parameters, but it failed to consider timing attacks, secure session cookies, two-factor authentication, input sanitisation and so on. These are all things which I pick up regularly in code reviews as recommendations for improvements. At best, it was on a par with a junior developer with perhaps a couple of years of experience, but with limited security knowledge.
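To illustrate the sort of gaps I mean, here is a minimal sketch of a login handler that covers some of those review points. It is illustrative only, not production code: the DSN, credentials, the `users` table and its columns are all assumptions I have made for the example.

```php
<?php

declare(strict_types=1);

// Illustrative sketch only: the DSN, credentials, `users` table and its
// columns are assumptions, and error handling is kept to a minimum.

// Secure session cookie settings (one of the gaps in the generated code).
session_set_cookie_params([
    'lifetime' => 0,
    'path'     => '/',
    'secure'   => true,      // only send the cookie over HTTPS
    'httponly' => true,      // not readable from JavaScript
    'samesite' => 'Strict',  // basic CSRF mitigation
]);
session_start();

$pdo = new PDO(
    'mysql:host=localhost;dbname=app;charset=utf8mb4',
    'app_user',
    getenv('DB_PASSWORD') ?: '',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// Validate input rather than trusting $_POST directly.
$email    = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$password = (string) ($_POST['password'] ?? '');

if ($email === false || $email === null || $password === '') {
    http_response_code(400);
    exit('Invalid credentials.');
}

// Bound parameters guard against SQL injection.
$stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE email = :email');
$stmt->execute(['email' => $email]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);

// Verify against a dummy hash when the account does not exist, so the
// response time does not reveal whether an email address is registered.
$dummyHash = password_hash('dummy-password', PASSWORD_DEFAULT);
$hash      = $user['password_hash'] ?? $dummyHash;

if (password_verify($password, $hash) && $user !== false) {
    session_regenerate_id(true);          // prevent session fixation
    $_SESSION['user_id'] = (int) $user['id'];
    // A real system would prompt for a second factor (TOTP etc.) here.
    echo 'Logged in.';
} else {
    http_response_code(401);
    echo 'Invalid credentials.';
}
```

Even this sketch leaves out rate limiting, account lockout and a second factor, which is exactly the kind of thing the generated code never raised.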
However, I don’t use AI / LLM coding tools in my business, either internally or on client work. There are several reasons for this:
- I don’t know what material they have been trained on, and whether they have adhered to all the relevant licences.
- I don’t know what licence the output is under, or where liability attaches if there is anything wrong with it (security issues, intellectual property infringement etc.)
- The quality is often poor and doesn’t conform to best practices – especially around security. This may be because the training data is drawn from websites and open source projects, which often contain outdated information.
- The environmental impact of training models and responding to queries.
- The ethics of using tools that don’t disclose what they’ve been trained on, don’t contribute back to open source projects, and in some cases parrot text directly from a website (I have noticed that some of my own technical blog posts are sometimes repeated verbatim in responses).
I also have the following block on my How I work page, which I send to all potential clients:
I do not use Artificial Intelligence (AI) or Large Language Models (LLMs) to help me write code or documents in any of the projects I work on, as I cannot be sure what material they have been trained on and whether the licence permits me to use it. This includes tools such as ChatGPT, GitHub Copilot etc.
This only applies to code and documents which I write. All software development involves the use of libraries provided by third parties, which will be governed by their own rules on the use of AI and LLMs.