When you think about all the different types of activity users perform within Microsoft Azure AD and Office 365, how certain can we be that everything happening is the work of an employee and not the motions of an actor working to take over an account? In fact, Vectra recently surveyed more than 1,000 security pros, and an astonishing 71% revealed they had suffered an average of seven account takeovers of authorized users over the last 12 months.
We probably shouldn’t be surprised that attacks are being attempted, especially considering the popularity of Office 365 and its hundreds of millions of users. It’s a powerful productivity tool that continues to provide many connectivity and collaboration benefits to teams near and far. So, while Microsoft has built an incredible platform that many companies can’t live without, cybercriminals view this large pool of users as an opportunity to swoop in and take over accounts.
So, how can we spot the malicious activity and get the right alerts to security teams, so they aren’t spending valuable cycles chasing benign activity? Thankfully, collecting the right data and applying meaningful artificial intelligence (AI) can help organizations build a clear picture of what authorized use looks like in the cloud services they adopt.
In the case of Vectra customers, threat detections are triggered when something out of the ordinary happens in their Azure AD or Office 365 environments. We took an in-depth look at the top 10 threat detections across our customer base in our latest Spotlight Report: Vision and Visibility: Top 10 Threat Detections for Microsoft Azure AD and Office 365.
Many of the Microsoft environment threat detections represent activities that provide ease of use or collaboration with external parties, which is of course convenient for the user but can also provide access to an attacker. Make sure to download the report so you have all the details, but here are a couple of the detection scenarios that registered in the top 10.
As the name indicates, this detection triggers when an external account is added to a team in O365. This type of activity could mean an adversary has added an account under their control, and your security team needs to know about it in the event that’s the case.
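The underlying check can be sketched in a few lines: scan audit events for member additions whose account lives outside the tenant's domain. This is a minimal illustration only; the event field names (`Operation`, `TargetUser`) and the tenant domain are assumptions, not the actual Office 365 audit log schema.

```python
# Hypothetical sketch: flag audit events where a non-tenant account is
# added to a Team. Field names and domain are illustrative assumptions.

INTERNAL_DOMAIN = "contoso.com"  # assumed tenant domain

def is_external_member_add(event: dict) -> bool:
    """True if the event adds a member from outside the tenant."""
    if event.get("Operation") != "MemberAdded":
        return False
    member = event.get("TargetUser", "")
    return not member.lower().endswith("@" + INTERNAL_DOMAIN)

events = [
    {"Operation": "MemberAdded", "TargetUser": "alice@contoso.com"},
    {"Operation": "MemberAdded", "TargetUser": "mallory@evil.example"},
    {"Operation": "FileAccessed", "TargetUser": "bob@contoso.com"},
]

# Only the external MemberAdded event should surface as an alert.
alerts = [e for e in events if is_external_member_add(e)]
```

In practice a detection like this also has to learn which external domains are routine partners, so alerts go only to the security team when the account is genuinely unfamiliar.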
You would get notified by this detection if an account was seen sharing files and/or folders at a higher volume than normal, which could indicate that an attacker is using SharePoint to exfiltrate data or to maintain access after the initial compromise has been remediated.
This detection would notify your team if abnormal Azure AD operations are detected, which could indicate that attackers are escalating privileges and performing admin-level operations after taking over a regular account.
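One simple way to frame "abnormal" here is as admin-level operations an account has never performed before. The sketch below is an assumption-laden illustration: the operation names and the set-difference approach are hypothetical, not the report's detection logic.

```python
# Hypothetical sketch: flag admin-level operations that are new for an
# account relative to its learned baseline. Operation names are
# illustrative placeholders, not a definitive Azure AD catalog.

ADMIN_OPS = {
    "Add member to role",
    "Update application",
    "Set domain authentication",
}

def new_admin_ops(baseline_ops: set, observed_ops: set) -> set:
    """Admin operations observed now that are absent from the baseline."""
    return (observed_ops - baseline_ops) & ADMIN_OPS

baseline = {"UserLoggedIn", "FileAccessed"}
observed = {"UserLoggedIn", "Add member to role"}

# The role-assignment operation is both new and admin-level, so it's flagged.
flagged = new_admin_ops(baseline, observed)
```

The point of baselining per account is exactly what the detection description implies: an admin running these operations daily is normal, while a rank-and-file account suddenly assigning roles is worth an alert.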
Considering these detection examples, you can see that not every detected activity is necessarily malicious, which is why it’s so important to have the right data to leverage. It’s about being able to tell the difference between what’s considered normal activity for your environment and what could be a potential issue that needs to be addressed.
Using a collaboration tool like Microsoft Teams is certainly convenient for legitimate users, but it can be just as convenient a means for attackers to find useful information or obtain documents. Whether the activity is external Teams access, suspicious download activity, or any other risky operation happening in your environment, your security team needs to know.
We cover these activities and more in the latest report. You’ll see how meaningful AI can provide the right vision and visibility for your environment, and even how it can help you avoid a costly cyberattack.
Get your copy of the report today! And, to hear a detailed assessment from Vectra’s Technical Director, Tim Wade, and our CMO, Jennifer Geisler, don't miss our webinar on Tuesday, June 8 at 8:00 am PT | 11:00 am ET | 4:00 pm BST | 5:00 pm CET.