In the case of ChatGPT, it's worth noting first that developer OpenAI has built safeguards into it. Ask it to "write malware" or a "phishing email" and it will tell you that it's "programmed to follow strict ethical guidelines that prohibit me from engaging in any malicious activities, including writing or assisting with the creation of malware." However, these protections aren't especially difficult to circumvent: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn't know it's writing malware, it can be prompted into producing something very much like it. There are already signs that cybercriminals are working to get around the safety measures that have been put in place.