GenAI Part 4: How Attackers Use LLMs

Welcome to the fourth episode of our ongoing series on Large Language Models (LLMs), featuring Oliver Tavakoli, CTO at Vectra AI, and Sohrob Kazerounian, Distinguished AI Researcher. In this episode, we explore the dark side of LLMs: how attackers exploit these advanced tools to enhance their malicious activities.

