AI can write your code, but nearly half of it may be insecure
AI code security risks emerge as large language models generate vulnerable code in nearly half of tested real-world programming scenarios.
AI-powered tools can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. The study's findings were stark: in 45 percent of test cases, LLMs produced code containing vulnerabilities aligned with the OWASP Top 10, the list of the most serious web application security risks. The researchers emphasize that organizations need a risk management program that catches vulnerabilities before they reach production, integrating code quality checks and automated fixes directly into the development workflow.
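To make the risk concrete, here is a hypothetical illustration (not taken from the study) of the kind of OWASP Top 10 flaw, injection via string-built SQL, that LLM-generated code often contains, alongside the parameterized alternative an automated check would require:

```python
import sqlite3

def insecure_lookup(conn, username):
    # VULNERABLE (OWASP A03: Injection): user input is interpolated
    # directly into the SQL string, so crafted input alters the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def secure_lookup(conn, username):
    # SAFE: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(insecure_lookup(conn, payload)))  # returns every row: 2
print(len(secure_lookup(conn, payload)))    # matches nothing: 0
```

A static analyzer or CI-integrated security gate of the sort the researchers recommend would flag the first function before it ever reached production.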