Introduction
A few weeks ago I stumbled across something genuinely funny while poking around Vanguard's driver architecture. Riot Games' anti-cheat contains a ridiculous amount of unused executable space that can be exploited for arbitrary code execution, way more than the typical 2 KB you'd find in most kernel drivers. I reported this to Riot through their HackerOne program and they basically said "not our problem" and classified it as not a security issue under their threat model. Per HackerOne policy this means I can publicly disclose it, so here's the full breakdown.
This is honestly just a meme find that I wanted to document. Nothing groundbreaking in terms of technique, but the architectural implications are kind of wild when you think about them. The fact that this primitive exists at all raises some serious questions about how we think about trust boundaries in kernel-mode anti-cheat systems, especially when those systems are running 24/7 on millions of machines regardless of whether anyone's actually playing the game.
Background on Vanguard
For those unfamiliar, Vanguard is Riot Games' kernel-mode anti-cheat system, used primarily for Valorant. It runs as a boot-start driver with extensive virtualization and obfuscation throughout its codebase. I should mention upfront that I haven't reversed Vanguard in any depth; my focus is on EasyAntiCheat for now, and judging by how simple this find was, Vanguard doesn't yet seem worth the time. So feel free to correct me if any of my assumptions about their implementation are wrong, I'm working off pretty surface-level analysis here.
Vanguard is known for being incredibly invasive with deep system integration, running continuously from boot regardless of whether Valorant is actually active. This design choice is exactly what makes the vulnerability I'm discussing particularly concerning from a security perspective, because you're not just affecting people who are actively playing Valorant, you're affecting anyone who has it installed, period.
The Discovery
During some basic reconnaissance, I analyzed the vgk.sys binary to understand its structure. The results were immediately interesting and honestly kind of surprising. The driver contains approximately 221 KB of total slack space across all sections, which is a massive amount compared to what you typically see. The largest contiguous region is 16 KB in the .nt section at RVA 0xb7000; it's marked read-write (not executable) and consists entirely of null bytes. This suggests it's probably a placeholder section that Vanguard uses for runtime storage or dynamic allocation, though without deeper reverse engineering I can't say for certain.
The executable .text section is where things get really interesting though. It contains approximately 127 KB of total slack space scattered across 366 regions, and while individual regions vary in size, there are multiple contiguous blocks that can be targeted for code injection. These regions contain predictable padding patterns consisting of 0x00 or 0xCC bytes, making them pretty straightforward injection targets if you know what you're looking for.
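The padding scan described above can be sketched in a few lines. This is a minimal illustration, not the tooling I actually used: it walks a section's raw bytes looking for contiguous runs of 0x00 (alignment padding) or 0xCC (int3 padding), and the minimum-length threshold is an arbitrary assumption.

```python
def find_slack_regions(data: bytes, min_len: int = 64,
                       padding=(0x00, 0xCC)) -> list:
    """Return (offset, length) pairs for runs of padding bytes.

    Scans a raw section image for contiguous runs of 0x00 or 0xCC
    at least `min_len` bytes long, which are candidate slack regions.
    """
    regions = []
    start = None
    for i, b in enumerate(data):
        if b in padding:
            if start is None:
                start = i          # run begins here
        else:
            if start is not None and i - start >= min_len:
                regions.append((start, i - start))
            start = None
    if start is not None and len(data) - start >= min_len:
        regions.append((start, len(data) - start))  # run reaches end of data
    return regions


# Synthetic demo: 128 bytes of "code", 256 bytes of int3 padding, more code.
section = b"\x90" * 128 + b"\xCC" * 256 + b"\x90" * 64
print(find_slack_regions(section))  # → [(128, 256)]
```

Running something along these lines over each executable section of a real PE (after extracting the section's raw data with a parser like pefile) gives you the per-region breakdown described above.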
How the Exploitation Works
Here's the key insight that makes this whole thing work. Vanguard runs 24/7 from boot, but it only actually performs integrity checks when you launch a Vanguard-protected game like Valorant. If you never launch Valorant, those integrity checks never trigger. This means you can map arbitrary code into vgk.sys slack space and it'll just sit there executing happily as long as you never start a Riot game. Vanguard establishes its kernel-mode communication channel during system boot, so any code you map into vgk.sys can hijack Vanguard's DriverObject by pointing IRPs to your own handlers. This works because the handlers themselves are still inside vgk.sys .text section, so from a memory layout perspective everything looks completely legitimate.
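The blind spot described above can be modeled in a few lines. This is a deliberately toy, user-mode Python sketch (the addresses, sizes, and names are fabricated for illustration, nothing here is Vanguard-specific): a naive integrity check that only verifies "does this dispatch pointer land inside vgk.sys's .text?" cannot distinguish the real handler from attacker code planted in .text slack.

```python
# Toy model: a dispatch-table hijack that survives a naive bounds check.
# Both the legitimate handler and code planted in .text slack fall
# inside the same trusted address range, so a range check passes for both.

VGK_TEXT_BASE = 0xFFFF800000100000          # hypothetical .text start
VGK_TEXT_SIZE = 0x9F000                     # hypothetical .text size

legit_handler   = VGK_TEXT_BASE + 0x1A40    # "real" IRP handler
planted_handler = VGK_TEXT_BASE + 0x8E200   # attacker code in slack space

# The driver object's MajorFunction dispatch table, modeled as a dict.
dispatch = {"IRP_MJ_DEVICE_CONTROL": legit_handler}

def naive_pointer_check(addr: int) -> bool:
    """Passes any handler that points inside vgk.sys .text."""
    return VGK_TEXT_BASE <= addr < VGK_TEXT_BASE + VGK_TEXT_SIZE

assert naive_pointer_check(dispatch["IRP_MJ_DEVICE_CONTROL"])  # legit: passes

# Hijack: repoint the IRP handler at the planted code...
dispatch["IRP_MJ_DEVICE_CONTROL"] = planted_handler
assert naive_pointer_check(dispatch["IRP_MJ_DEVICE_CONTROL"])  # ...still passes
```

The actual hijack would of course be a kernel-mode write into the DriverObject's MajorFunction array; the point of the toy model is only that address-range validation alone gives you nothing here.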
This is perfect for persistence against anti-cheats like EasyAntiCheat that abuse interrupts and focus heavily on detecting your code actively executing on the CPU. You're running from a legitimately signed Riot driver that's expected to be doing invasive kernel operations anyway, so the behavioral profile looks completely normal. EAC can't really tell the difference between your code executing from vgk.sys and Vanguard's own code executing, because from a technical perspective they're indistinguishable at the memory and execution level.
The Trust Amplification Angle
Here's where it gets crazy from a security perspective. Since vgk.sys is a legitimately signed driver from Riot Games running as boot-start, it's inherently trusted by the operating system in ways that matter for exploitation. Security products have extremely limited visibility into its behavior without potentially breaking legitimate functionality, and honestly most security vendors probably don't want to risk breaking Valorant for millions of users by being too aggressive with their scanning.
Vanguard's invasive nature actually works in an attacker's favor here. Unusual kernel-level behavior originating from vgk.sys isn't particularly surprising to security products because Vanguard is already doing all kinds of invasive stuff normally. This means you can get away with things that would immediately trigger alerts if they came from an unsigned driver or even a less well-known signed driver. It's basically a post-compromise trust amplification vector where an attacker with any kernel write primitive can leverage Vanguard's legitimacy to evade detection across the entire security stack.
The kernel write primitive is obviously a prerequisite here; you need something like a vulnerable signed driver or some other method of writing to kernel memory. Note that this doesn't work under HVCI, since you'd need to flip page permissions to write into the executable region, so realistically the only way to pull it off is by modifying a manual mapper like KDMapper. But once you have that primitive, using Vanguard as your execution host is far more effective than trying to load your own driver or allocate new memory regions that will immediately look suspicious.
Why This Actually Matters Beyond Game Cheating
Here's the ironic part that I find genuinely hilarious. This technique works incredibly well for games protected by anti-cheat systems other than Vanguard itself, like EasyAntiCheat or BattlEye. Vanguard's invasive kernel presence and signed status makes it the perfect evasion platform for attacking their competitors' security solutions. You're literally using Riot's anti-cheat to bypass other companies' anti-cheats, which is just chef's kiss in terms of irony.
But the more serious concern from a broader security perspective is that this primitive could easily be leveraged by malware authors to achieve persistence and evasion on any system with Vanguard installed. This affects millions of users who might not even actively play Valorant anymore but still have the anti-cheat running continuously from boot because they forgot to uninstall it or whatever. You're creating this massive attack surface that has nothing to do with Valorant's integrity and everything to do with the fact that Riot decided their anti-cheat should run 24/7 at the kernel level.
This is why I fundamentally disagree with Riot's classification of this as "not a security issue," even though I understand their position from a strict bug bounty perspective. They're looking at it purely through the lens of Valorant integrity, but the real-world security implications extend way beyond that. Your driver is being weaponized to facilitate malware operations that have nothing to do with game cheating, and that should matter from an ethical standpoint if nothing else. In fact, that's the only reason I'm making this post: to raise awareness and hopefully get them to patch this.
The Detection Problem
Now here's where it gets even funnier. Detection from Vanguard's side would be completely trivial to implement: they could simply keep a separate pooled copy of their .text and periodically flag any modifications to the slack space. However, that would break their promise of not running any checks before the game starts. Vanguard absolutely has the capability to detect this; it's their own driver, after all.
But here's the thing. The reason this technique is viable isn't whether Vanguard can detect it, it's whether other anti-cheats can. Competing anti-cheat systems like EasyAntiCheat would need to implement checks that specifically target Vanguard's driver, which is way more complicated than it sounds. They'd need detailed knowledge of vgk.sys's structure, and would have to scan its memory regions, compare them against known-good versions, and flag modifications. If you think you could just compare the in-memory image against the on-disk one and call it a day, you've already built a flawed system. Vanguard virtualizes everything top to bottom, which may involve dynamically writing into these slack regions at runtime. You can't simply check whether memory differs from disk in those regions; you need to know what Vanguard specifically does with each one. That's a massive engineering effort for a single specific driver from a competitor.
Even worse, if Vanguard ever updates and changes the amount or location of slack space in their driver, it could completely break EAC's detection logic depending on how they implemented it. You'd need version-specific checks for every vgk.sys build, which is a maintenance nightmare. And that's assuming Riot doesn't consider this type of scanning to be overstepping professional boundaries, because you're essentially reverse engineering and monitoring a competitor's anti-cheat implementation.
The reality is that implementing detection for this would be time-consuming, fragile, and potentially controversial from a business perspective. It's genuinely a meme technique in the research sense because it works not due to technical sophistication but because of the awkward position it puts competing products in.
What This Says About Anti-Cheat Architecture
The beauty of this approach is the persistence model. You're running from a boot-start driver that's always present, your code sits dormant in slack space until you need it, you have IOCTL communication handed to you, and as long as you never launch a Vanguard-protected game, their integrity checks never examine your modifications. It's perfect for long-term persistence against anti-cheats that focus on runtime detection of actively executing code.
The broader point here is that this kind of primitive shouldn't even exist in a properly designed anti-cheat system, but it does because of fundamental architectural decisions. The decision to run from boot with extensive kernel privileges creates this massive attack surface. The decision to only perform integrity checks when the protected game launches creates the exploitation window. The decision to use a boot-start driver that's always present means attackers can leverage that presence for operations completely unrelated to your game.
None of these decisions are inherently wrong in isolation, but the combination creates this exploitable primitive that's now affecting millions of users who might not even play Valorant anymore. Think about what's actually happening here. Someone who installed Valorant once, played it for a week, and moved on still has vgk.sys running from boot every single day. That's a permanent kernel-level attack surface that has nothing to do with protecting game integrity at that point.
The goal of documenting this isn't to provide a copy-paste exploit for people who don't understand what they're doing. It's to make people think critically about the architectural decisions we make when designing kernel-mode security software and the downstream consequences of those decisions. It's to highlight that "protecting game integrity" and "protecting user security" are not the same goal and sometimes they're actively in tension with each other.
Disclosure Timeline and Vendor Response
I discovered this in late January 2026 and immediately reported it to Riot Games via their HackerOne program with a detailed technical writeup and proof-of-concept demonstration. Their security team assessed the report pretty quickly and classified it as "not a security issue" under their threat model, which as I mentioned focuses primarily on Valorant integrity rather than cross-product security implications or potential malware abuse.
I understand their position from a bug bounty perspective. They're looking at whether this affects Valorant specifically, and the answer is basically no because you need an existing kernel write primitive which means the system is already compromised at the kernel level. From that narrow perspective their classification makes sense. But from a broader security perspective I think they're missing the forest for the trees. The fact that their driver can be weaponized to attack other software and facilitate malware operations should matter regardless of whether it affects Valorant's integrity directly.
Per HackerOne policy their classification enables public disclosure without any embargo period or coordination requirements, so here we are. I'm not providing a working implementation because that would just be irresponsible and also unnecessary given how easy it is to implement if you have the prerequisite knowledge.
Conclusion
This is just another example of how trust assumptions in kernel drivers can be abused when you really think about the threat model holistically. The technique itself isn't revolutionary or particularly clever. What's interesting is the architectural decisions that make it possible and the industry dynamics that make detection unlikely despite being technically feasible.
Hopefully this case study contributes to the broader conversation about how we design and evaluate kernel-mode anti-cheat systems, particularly around trust boundaries and cross-vendor security implications. We need to start thinking more critically about the downstream security consequences of running invasive kernel-mode software 24/7 on millions of machines, especially when that software isn't even actively protecting anything most of the time.
If you're a security researcher or anti-cheat developer interested in discussing these architectural concerns further, feel free to reach out. I'm always happy to talk about defensive techniques and better design patterns for kernel-mode security software, because honestly we can do better than this as an industry.