Cloud cybersecurity firm Barracuda Networks Inc. today released a new report on the evolution of the malicious use of artificial intelligence, detailing how AI is being used both by attackers and to prevent attacks.
The report is based on the analysis of 905 billion events from customers’ integrated network, cloud, email, endpoint and server security tools between January and July 2023. The analysis included everything from logins to application and device processes to changes to configuration and registry and more.
Of the data analyzed, 0.1% of customer events, or 985,000, were classed as “alarms,” activity that could be malicious and required further investigation. Out of these, only 9.7% were flagged to the customer for checking, while a further 2.7% were classed as high-risk and passed to a Security Operations Center analyst for deeper analysis. Six thousand required immediate defensive action to contain and neutralize the threat.
In the report, Barracuda outlines the three most common high-risk detections during the first six months of 2023. Notably, AI played a key role in both detecting and analyzing these threats.
Leading the list were what Barracuda called “impossible travel” login events, which occur when a single account is accessed from two geographically distant locations in rapid succession. Although this could indicate that a user is using a virtual private network for one of those sessions, it is more often than not a sign that an attacker has gained access to a user’s account.
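The report doesn’t describe Barracuda’s implementation, but the core idea behind an impossible-travel check can be sketched simply: compute the great-circle distance between two login locations and flag the pair if the implied travel speed is physically implausible. The speed threshold below is an illustrative assumption, not a value from the report.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(login_a, login_b, max_speed_kmh=1000):
    """Flag two logins for the same account as 'impossible travel' if the
    implied speed between them exceeds max_speed_kmh (a hypothetical cutoff,
    roughly airliner speed). Each login is (unix_timestamp, lat, lon)."""
    t1, lat1, lon1 = login_a
    t2, lat2, lon2 = login_b
    distance = haversine_km(lat1, lon1, lat2, lon2)
    hours = abs(t2 - t1) / 3600
    if hours == 0:
        return distance > 0  # two places at literally the same instant
    return distance / hours > max_speed_kmh
```

A real system would also need to account for VPN exit nodes and coarse IP geolocation before raising an alarm, which is exactly the ambiguity the report mentions.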
“Anomaly” detections — unusual or unexpected activity in a user’s account — were next on the list. Such detections could include things such as rare or one-off login times, unusual file access patterns or excessive account creation for an individual user or organization. Anomaly detections can be a sign of a variety of issues, such as malware infections, phishing attacks and insider threats.
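One of the examples the report gives, rare or one-off login times, lends itself to a minimal frequency-based sketch: treat any hour of the day that a user almost never logs in at as anomalous. The 2% rarity threshold below is an illustrative assumption; production systems use far richer behavioral baselines.

```python
from collections import Counter

def rare_login_hours(history_hours, rarity_threshold=0.02):
    """Return the set of hours (0-23) that appear so rarely in a user's
    login history that a new login at that hour warrants an alarm.
    `history_hours` is a list of hour-of-day values from past logins;
    the threshold is a hypothetical cutoff for illustration."""
    counts = Counter(history_hours)
    total = len(history_hours)
    return {h for h in range(24) if counts[h] / total < rarity_threshold}
```

The same counting idea extends to the report’s other examples, such as unusual file-access patterns or excessive account creation, by baselining per-user or per-organization rates instead of login hours.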
Communication with known malicious artifacts was the third most common detection. These identify communication with red-flagged or known malicious IP addresses, domains or files, and can be a sign of a malware infection or a phishing attack.
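At its simplest, this kind of detection is a lookup of connection metadata against threat-intelligence blocklists. The indicator values below are hypothetical placeholders drawn from reserved documentation ranges, not real indicators from the report.

```python
# Hypothetical indicator sets; real deployments pull these from
# threat-intelligence feeds. IPs are from the reserved TEST-NET ranges.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}
MALICIOUS_DOMAINS = {"bad-example.test"}

def check_connection(dest_ip, dest_domain=None):
    """Return a list of (indicator_type, value) matches for an
    outbound connection, empty if nothing is red-flagged."""
    hits = []
    if dest_ip in MALICIOUS_IPS:
        hits.append(("ip", dest_ip))
    if dest_domain and dest_domain.lower().rstrip(".") in MALICIOUS_DOMAINS:
        hits.append(("domain", dest_domain))
    return hits
```

File indicators work the same way, matching hashes of observed files against known-bad hash sets rather than IPs or domains.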
Although the data analysis demonstrated how AI can be used to detect and protect against attacks, the report also warns that AI can be used for malicious purposes by attackers.
The report notes that generative AI language tools can create highly convincing emails that closely mimic a legitimate company’s style, making it much more difficult for individuals to discern whether an email is legitimate or a phishing, account takeover or business email compromise attempt. Attackers can also use AI tools to automate and dynamically emulate adversarial behaviors, making their attacks more effective and harder to detect.
“As AI continues to advance, organizations need to be aware of the potential risks and take steps to mitigate them,” the report concludes. Recommendations included robust authentication measures, such as multifactor authentication, but ideally, the implementation of zero-trust approaches. Employees should be continuously trained to look for phishing attempts, and information technology security teams are advised to stay up to date with the latest in AI-powered threats.