Thursday, August 16, 2018

The Problems and Promise of WebAssembly

Posted by Natalie Silvanovich, Project Zero

WebAssembly is a format that allows code written in assembly-like instructions to be run from JavaScript. It has recently been implemented in all four major browsers. We reviewed each browser’s WebAssembly implementation and found three vulnerabilities. This blog post gives an overview of the features and attack surface of WebAssembly, as well as the vulnerabilities we found.

Building WebAssembly


A number of tools can be used to write WebAssembly code. An important goal of the designers of the format is to be able to compile C and C++ into WebAssembly, and compilers exist to do so. It is likely that other languages will compile into WebAssembly in the future. It is also possible to write WebAssembly in the WebAssembly text format, which is a direct text representation of the WebAssembly binary format, the final format of all WebAssembly code.

WebAssembly Modules


Code in WebAssembly binary format starts off in an ArrayBuffer or TypedArray in JavaScript. It is then loaded into a WebAssembly Module.

var code = new ArrayBuffer(len);
… // write code into ArrayBuffer
var m = new WebAssembly.Module(code);

A module is an object that contains the code and initialization information specified by the bytes in binary format. When a Module is created, the engine parses the binary, loads the needed information into the Module, and then translates the WebAssembly instructions into an intermediate bytecode. Verification of the WebAssembly instructions is performed during this translation.

WebAssembly binaries consist of a series of sections (binary blobs) with different lengths and types. The sections supported by WebAssembly binary format are as follows.

Section   | Code | Description
Type      | 1    | Contains a list of function signatures used by functions defined and called by the module. Each signature has an index, and can be used by multiple functions by specifying that index.
Imports   | 2    | Contains the names and types of objects to be imported. More on this later.
Functions | 3    | The declarations (including the index of a signature specified in the Type Section) of the functions defined in this module.
Table     | 4    | Contains details about function tables. More on this later.
Memory    | 5    | Contains details about memory. More on this later.
Global    | 6    | Global declarations.
Exports   | 7    | Contains the names and types of objects and functions that will be exported.
Start     | 8    | Specifies a function that will be called on Module start-up.
Elements  | 9    | Table initialization information.
Code      | 10   | The WebAssembly instructions that make up the body of each function.
Data      | 11   | Memory initialization information.

If a section has a code that is not specified in the above table, it is called a custom section. Some browsers use custom sections to implement upcoming or experimental features. Unrecognized custom sections are skipped when loading a Module, and can be accessed as TypedArrays in JavaScript.
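For example (a minimal sketch, reusing the code buffer from the earlier snippet and assuming the binary actually contains a custom section named "name", a common one), custom section contents can be retrieved with WebAssembly.Module.customSections:

var m = new WebAssembly.Module(code);
var sections = WebAssembly.Module.customSections(m, "name");
if (sections.length > 0) {
  // Each entry is an ArrayBuffer copy of the section contents.
  var bytes = new Uint8Array(sections[0]);
  console.log("custom section is " + bytes.length + " bytes");
}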

Module loading starts off by parsing the module. This involves going through each section, verifying its format and then loading the needed information into a native structure inside the WebAssembly engine. Most of the bugs that Project Zero found in WebAssembly occurred in this phase.

To start, CVE-2018-4222 occurs when the WebAssembly binary is read out of the buffer containing it. TypedArray objects in JavaScript can contain offsets at which their underlying ArrayBuffers are accessed. The WebKit implementation of this added the offset to the ArrayBuffer data pointer twice. So the following code:

var b2 = new ArrayBuffer(1000);
var view = new Int8Array(b2, 700); // offset
var mod = new WebAssembly.Module(view);

Will read memory out-of-bounds in an unfixed version of WebKit. Note that this is also a functional error, as it prevents any TypedArray with an offset from being processed correctly by WebAssembly.

CVE-2018-6092 in Chrome is an example of an issue that occurs when parsing a WebAssembly buffer. Similar issues have been fixed in the past. In this vulnerability, there is an integer overflow when parsing the locals of a function specified in the Code Section of the binary. The numbers of locals of each type are added together, and the size_t that contains this total can wrap on a 32-bit platform.
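As an illustration of the arithmetic only (this is not the actual V8 code), two attacker-controlled local counts can sum past what a 32-bit counter can hold:

var count1 = 0xFFFFFFF0;
var count2 = 0x20;
var total = (count1 + count2) >>> 0; // simulate a 32-bit size_t
console.log(total); // 16, far smaller than either input

An engine that allocates space based on the wrapped total but later processes the full declared counts would then write outside the allocation.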

It is also evident from the section table above (and specified in the WebAssembly standard) that sections must be unique and in the correct order. For example, the function section can’t load unless the type section containing the signatures it needs has been loaded already.  
CVE-2018-4121 is an error in section order checking in WebKit. In unfixed versions of WebKit, the order check gets reset after a custom section is processed, basically allowing sections to occur any number of times in any order. This leads to an overflow in several vectors in WebKit, as its parsing implementation allocates memory based on the assumption that there is only one of each section, and then adds elements to the memory without checking. Even without this implementation detail, though, this bug would likely lead to many subtle memory corruption issues in the WebAssembly engine, as the order and non-duplicate nature of WebAssembly binary sections is very fundamental to the functionality of WebAssembly.

This vulnerability was independently discovered by Alex Plaskett, Fabian Beterke and Georgi Geshev of MWR Labs, and they describe their exploit here.

WebAssembly Instances


After a binary is loaded into a Module, an Instance of the module needs to be created to run the code. An Instance binds the code to imported objects it needs to run, and does some final initialization.

var code = new ArrayBuffer(len);
… // write code into ArrayBuffer
var m = new WebAssembly.Module(code);
var i = new WebAssembly.Instance(m, imports);

Each module has an Import Section it loaded from the WebAssembly binary. This section contains the names and types of objects that must be imported from JavaScript for the code in the module to run. There are four types of object that can be imported. Functions (JavaScript or WebAssembly) can be imported and called from WebAssembly. Numeric types can also be imported from JavaScript to populate globals.
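As a rough sketch (the module name env and the field names log and g are hypothetical; they must match whatever the binary's Import Section declares), an import object is just a JavaScript object whose properties line up with the declared imports:

var imports = {
  env: {
    log: function (x) { console.log(x); }, // a JavaScript function callable from WebAssembly
    g: 42                                  // a number used to populate an imported global
  }
};
var i = new WebAssembly.Instance(m, imports);

Memory and Table imports, described next, are supplied the same way.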

Memory and Table objects are the final two types that can be imported. These are new object types added to JavaScript engines for use in WebAssembly. Memory objects contain the memory used by the WebAssembly code. This memory can be accessed in JavaScript via an ArrayBuffer, and in WebAssembly via load and store instructions. When creating a Memory object, the WebAssembly developer specifies the initial and optional maximum size of the memory. The Memory object is then created with the initial memory size allocated, and the allocated memory size can be increased in JavaScript by calling the grow method, and in WebAssembly using the grow instruction. Memory size can never decrease (at least according to the standard).
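For example (a minimal sketch), Memory sizes in the JavaScript API are specified in 64KB WebAssembly pages:

var mem = new WebAssembly.Memory({ initial: 1, maximum: 10 });
console.log(mem.buffer.byteLength); // 65536, readable and writable from JavaScript
mem.grow(2);                        // grow by two pages
console.log(mem.buffer.byteLength); // 196608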

Table objects are function tables for WebAssembly. They contain function objects at specific indexes in the table, and these functions can be called from WebAssembly using the call_indirect instruction. Like memory, tables have an initial and optional maximum size, and their size can be expanded by calling the grow method in JavaScript. Table objects cannot be expanded in WebAssembly.  Table objects can only contain WebAssembly functions, not JavaScript functions, and an exception is thrown if the wrong type of function is added to a Table object. Currently, WebAssembly only supports one Memory object and one Table object per Instance object. This is likely to change in the future though.
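A minimal sketch of the Table API (i.exports.f is assumed to be a WebAssembly function exported by the Instance created above; the export name is hypothetical):

var table = new WebAssembly.Table({ element: "anyfunc", initial: 2, maximum: 4 });
table.grow(1);             // now 3 entries
table.set(0, i.exports.f); // WebAssembly functions are accepted
try {
  table.set(1, function () {}); // a plain JavaScript function...
} catch (e) {
  console.log(e);               // ...is rejected with a TypeError
}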

More than one Instance object can share the same Memory object and Table object. If two or more Instance objects share both of these objects, they are referred to as being in the same compartment. It is possible to create Instance objects that share a Table object, but not a Memory object, or vice versa, but no compiler should ever create Instances with this property. No compiler ever changes the values in a table after it is initialized, and this is likely to remain true in the future, but it is still possible for JavaScript callers to change them at any time.

There are two ways to add Memory and Table objects to an Instance object. The first is through the Import Section as mentioned above. The second way is to include a Memory or Table Section in the binary. Including these sections causes the WebAssembly engine to create the needed Memory or Table object for the module, with parameters provided in the binary. It is not valid to specify these objects in both the Import Section and the Table or Memory Section, as this would mean there is more than one of each object, which is not currently allowed. Memory and Table objects are not mandatory, and it is fairly common for code in WebAssembly not to have a Table object. It is also possible to create WebAssembly code that does not have a Memory object, for example a function that averages the parameters that are passed in, but this is rare in practice.

One feature of these objects that has led to several vulnerabilities is the ability to increase the size of the allocated Memory or Table object. For example, CVE-2018-5093, a series of integer overflow vulnerabilities in Firefox in the code that grows Memory and Table objects, was recently found by OSS-Fuzz. A similar issue was found in Chrome by OSS-Fuzz.

Another question that immediately comes to mind about Memory objects is whether the internal ArrayBuffer can be detached, as many vulnerabilities have occurred in ArrayBuffer detachment. According to the specification, Memory object ArrayBuffers cannot be detached by script, and this is true in all browsers except for Microsoft Edge (Chakra does not allow this, but Edge does). Memory object ArrayBuffers also do not change size when the Memory object is expanded. Instead, the old ArrayBuffer is detached as soon as the grow method is called and a new, larger one is provided. This prevents any bugs that could occur due to ArrayBuffers changing size.
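A small sketch of that behavior:

var mem = new WebAssembly.Memory({ initial: 1 });
var buf = mem.buffer;
mem.grow(1);                        // a new, larger buffer is allocated
console.log(buf.byteLength);        // 0: the old ArrayBuffer has been detached
console.log(mem.buffer.byteLength); // 131072: two 64KB pages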

Out-of-bounds access is always a concern when allowing script to use memory, but these types of issues are fairly uncommon in WebAssembly. One likely reason for this is that a limited number of WebAssembly instructions can access memory, and WebAssembly currently only supports a single Memory object per Instance, so the code that accesses memory in a WebAssembly engine is actually quite small. Also, on 64-bit systems, WebAssembly implements memory as safe buffers (also called signal buffers). To understand how safe buffers work, it is important to understand how loads and stores work in WebAssembly. These instructions have two operands, an address and an offset. When memory is accessed, these two operands are added to the pointer to the start of the internal memory of the Memory object, and the resulting location is where the memory access happens. Since both of these operands are 32-bit integers (note that this is likely to change in future versions of WebAssembly) and cannot be negative, a memory access can be at most 0xfffffffe (4GB) outside of the allocated buffer.

Safe buffers work by mapping 4GB into memory space, and then allocating the portion of memory that is actually needed by WebAssembly code as RW memory at the start of the mapped address space. Memory accesses can be at most 4GB from the start of the memory buffer, so all accesses should be in this range. Then, if memory is accessed outside of the allocated memory, it will cause a signal (or equivalent OS error), which is then handled by the WebAssembly engine, and an appropriate out of bounds exception is then thrown in JavaScript. Safe buffers eliminate the need for bounds checks in code, making vulnerabilities due to out-of-bounds access less likely on 64-bit systems. Explicit bounds checking is still required on 32-bit systems, but these are becoming less common.

After the imported objects are loaded, the WebAssembly engine goes through a few more steps to create the Instance Object. The Elements Section of the WebAssembly binary is used to initialize the Table object, if both of these exist, and then the Data Section of the WebAssembly binary is used to initialize the Memory object, if both exist. Then, the code in the Module is used to create functions, and these functions are exported (attached to a JavaScript object, so they are accessible in JavaScript). Finally, if a start function is specified in the Start Section, it is executed, and then the WebAssembly is ready to run!

var b2 = new ArrayBuffer(1000);
var view = new Int8Array(b2, 700); // offset into the buffer
var mod = new WebAssembly.Module(view);
var i = new WebAssembly.Instance(mod, imports);
i.exports.call_me(); // WebAssembly happens!

The final issue we found involves a number of these components. It was discovered and fixed by the Chrome team before we found it, so it doesn’t have a CVE, but it’s still an interesting bug.

This issue is related to the call_indirect instruction which calls a function in the Table object. When the function in the Table object is called, the function can remove itself from the Table object during the call. Before this issue was fixed, Chrome relied on the reference to the function in the Table object to prevent it from being freed during garbage collection. So removing the function from the Table object during a call has the potential to cause the call to use freed memory when it unwinds.

This bug was originally fixed by preventing a Table object from being changed in JavaScript when a WebAssembly call was in progress. Unfortunately, this fix did not completely resolve the issue. Since it is possible to create a WebAssembly Instance in any function, it was still possible to change the Table object by creating an Instance that imports the Table object and has an underlying module with an Elements Section. When the new Instance is created, the Elements Section is used to initialize the Table, allowing the table to be changed without calling the JavaScript function to change a Table object. The issue was ultimately resolved by holding an extra reference to all needed objects for the duration of the call.
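A heavily simplified sketch of the shape of that second variant (the WebAssembly binaries moduleBytes and elementsModuleBytes are omitted and purely hypothetical): the imported hook runs while a call_indirect frame is still on the stack and instantiates a second module that imports the same Table and re-initializes it through its Elements Section, without ever calling the JavaScript Table methods.

var table = new WebAssembly.Table({ element: "anyfunc", initial: 2 });
var imports = {
  env: {
    table: table,
    hook: function () {
      // elementsModuleBytes is assumed to declare a table import plus an
      // Elements Section that overwrites the entry currently being called.
      new WebAssembly.Instance(new WebAssembly.Module(elementsModuleBytes),
                               { env: { table: table } });
    }
  }
};
var i = new WebAssembly.Instance(new WebAssembly.Module(moduleBytes), imports);
i.exports.go(); // go() uses call_indirect through the shared table and calls hook()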

Execution


WebAssembly is executed by calling an exported function. Depending on the engine, the intermediate bytecode generated when the Module was parsed is either interpreted or used to generate native code via JIT. It’s not uncommon for WebAssembly engines to have bugs where the wrong code is generated for certain sequences of instructions; many such issues have been reported in the bug trackers for the different engines. We didn’t see any such bugs that had a clear security impact, though.

The Future


Overall, the majority of the bugs we found in WebAssembly were related to the parsing of WebAssembly binaries, and this has been mirrored in vulnerabilities reported by other parties. Also, compared to other recent browser features, surprisingly few vulnerabilities have been reported in it. This is likely due to the simplicity of the current design, especially with regards to memory management.

There are two emerging features of WebAssembly that are likely to have a security impact. One is threading. Currently, WebAssembly only supports concurrency via JavaScript workers, but this is likely to change. Since JavaScript engines are designed with the assumption that this is the only concurrency model, WebAssembly threading has the potential to require a lot of code to be thread-safe that did not previously need to be, and this could lead to security problems.

WebAssembly GC is another potential feature of WebAssembly that could lead to security problems. Currently, some uses of WebAssembly have performance problems due to the lack of higher-level memory management in WebAssembly. For example, it is difficult to implement a performant Java Virtual Machine in WebAssembly. If WebAssembly GC is implemented, it will increase the number of applications that WebAssembly can be used for, but it will also make it more likely that vulnerabilities related to memory management will occur in both WebAssembly engines and applications written in WebAssembly.

Tuesday, August 14, 2018

Windows Exploitation Tricks: Exploiting Arbitrary Object Directory Creation for Local Elevation of Privilege

Posted by James Forshaw, Project Zero

And we’re back again for another blog in my series on Windows Exploitation tricks. This time I’ll detail how I was able to exploit Issue 1550, which results in an arbitrary object directory being created, by using a useful behavior of the CSRSS privileged process. Once again, by detailing how I’d exploit a particular vulnerability, I hope to give readers a better understanding of the complexity of the Windows operating system, as well as to give Microsoft information on non-memory-corruption exploitation techniques so that they can mitigate them in some way.

Quick Overview of the Vulnerability

Object Manager directories are unrelated to normal file directories. The directories are created and manipulated using a separate set of system calls such as NtCreateDirectoryObject rather than NtCreateFile. Even though they’re not file directories they’re vulnerable to many of the same classes of issues as you’d find on a file system including privileged creation and symbolic link planting attacks.

Issue 1550 is a vulnerability that allows the creation of a directory inside a user-controllable location while running as SYSTEM. The root of the bug is in the creation of Desktop Bridge applications. The AppInfo service, which is responsible for creating the new application, calls the undocumented API CreateAppContainerToken to do some internal housekeeping. Unfortunately this API creates object directories under the user’s AppContainerNamedObjects object directory to support redirecting BaseNamedObjects and RPC endpoints by the OS.

As the API is called without impersonating the user (it’s normally called in CreateProcess, where it typically isn’t as big an issue), the object directories are created with the identity of the service, which is SYSTEM. As the user can write arbitrary objects to their AppContainerNamedObjects directory, they could drop an object manager symbolic link and redirect the directory creation to almost anywhere in the object manager namespace. As a bonus, the directory is created with an explicit security descriptor which allows the user full access; this will become very important for exploitation.

One difficulty in exploiting this vulnerability is that if the object directory isn’t created under AppContainerNamedObjects, because we’ve redirected its location, then the underlying NtCreateLowBoxToken system call, which performs the token creation and captures a handle to the directory as part of its operation, will fail. The directory will be created but almost immediately deleted again. This behavior is actually due to an earlier issue I reported which changed the system call’s behavior. This is still exploitable by opening a handle to the created directory before it’s deleted, and in practice winning this race seems to be reliable as long as your system has multiple processors (which is basically any modern system). With an open handle, the directory is kept alive as long as needed for exploitation.

This is the point where the original PoC I sent to MSRC stopped; all the PoC did was create an arbitrary object directory. You can find this PoC attached to the initial bug report in the issue tracker. Now let’s get into how we might exploit this vulnerability to go from a normal user account to a privileged SYSTEM account.

Exploitation

The main problem for exploitation is finding a location in which we can create an object directory which can then be leveraged to elevate our privileges. This turns out to be harder than you might think. While almost all Windows applications use object directories under the hood, such as BaseNamedObjects, the applications typically interact with existing directories which the vulnerability can’t be used to modify.

An object directory that would be interesting to abuse is KnownDlls (which I mentioned briefly in the previous blog in this series). This object directory contains a list of named image section objects, of the form NAME.DLL. When an application calls LoadLibrary on a DLL inside the SYSTEM32 directory, the loader first checks if an existing image section is present inside the KnownDlls object directory; if the section exists, it will be loaded instead of creating a new section object.
KnownDlls is restricted to only being writable by administrators (not strictly true, as we’ll see) because if you could drop an arbitrary section object inside this directory you could force a system service to load the named DLL, for example using the Diagnostics Hub service I described in my last blog post, and it would map the section, not the file on disk. However, the vulnerability can’t be used to modify the KnownDlls object directory other than by adding a new child directory, which doesn’t help in exploitation. Maybe we can target KnownDlls indirectly by abusing other functionality which our vulnerability can be used with?

Whenever I do research into particular areas of a product I will always note down interesting or unexpected behavior. One example of interesting behavior came from my research into Windows symbolic links. The Win32 APIs support a function called DefineDosDevice; the purpose of this API is to allow a user to define a new DOS drive letter. The API takes three parameters: a set of flags, the drive prefix (e.g. X:) to create, and the target device to map that drive to. The API’s primary use is in things like the CMD SUBST command.

On modern versions of Windows this API creates an object manager symbolic link inside the user’s own DOS device object directory, a location which can be written to by a normal low privileged user account. However, if you look at the implementation of DefineDosDevice you’ll find that it’s not implemented in the caller’s process. Instead the implementation calls an RPC method inside the current session’s CSRSS service, specifically the method BaseSrvDefineDosDevice inside BASESRV.DLL. The main reason for calling into a privileged service is that it allows a user to create a permanent symbolic link which doesn’t get deleted when all handles to the symbolic link object are closed. Normally, to create a permanent named kernel object you need SeCreatePermanentPrivilege; however, a normal user does not have that privilege. On the other hand CSRSS does, so by calling into that service we can create the permanent symbolic link.

The ability to create a permanent symbolic link is certainly interesting, but if we were limited to only creating drive letters in the user’s DOS devices directory it wouldn’t be especially useful. I also noticed that the implementation never verified that the lpDeviceName parameter is a drive letter. For example, you could specify a name of “GLOBALROOT\RPC Control\ABC” and it would actually create a symbolic link outside of the user’s DosDevices directory, specifically in this case at the path “\RPC Control\ABC”. This is because the implementation prepends the DosDevice prefix “\??” to the device name and passes it to NtCreateSymbolicLink. The kernel would follow the full path, find GLOBALROOT, which is a special symbolic link that returns to the root of the object manager namespace, and then follow the rest of the path, creating the arbitrary object. It was unclear if this was intentional behavior, so I looked in more depth at the implementation in CSRSS, which is shown in abbreviated form below.

NTSTATUS BaseSrvDefineDosDevice(DWORD dwFlags,
                               LPCWSTR lpDeviceName,
                               LPCWSTR lpTargetPath) {
   WCHAR device_name[];
   snwprintf_s(device_name, L"\\??\\%s", lpDeviceName);
   UNICODE_STRING device_name_ustr;
   OBJECT_ATTRIBUTES objattr;
   RtlInitUnicodeString(&device_name_ustr, device_name);
   InitializeObjectAttributes(&objattr, &device_name_ustr,
                              OBJ_CASE_INSENSITIVE);

   BOOLEAN enable_impersonation = TRUE;
   CsrImpersonateClient();
   HANDLE handle;
   NTSTATUS status = NtOpenSymbolicLinkObject(&handle, DELETE, &objattr);①
   CsrRevertToSelf();

   if (NT_SUCCESS(status)) {
       BOOLEAN is_global = FALSE;

       // Check if we opened a global symbolic link.
       IsGlobalSymbolicLink(handle, &is_global); ②
       if (is_global) {
           enable_impersonation = FALSE; ③
           snwprintf_s(device_name, L"\\GLOBAL??\\%s", lpDeviceName);
           RtlInitUnicodeString(&device_name_ustr, device_name);
       }

       // Delete the existing symbolic link.
       NtMakeTemporaryObject(handle);
       NtClose(handle);
   }

   if (enable_impersonation) { ④
       CsrImpersonateClient();
   }

   // Create the symbolic link.
   UNICODE_STRING target_name_ustr;
   RtlInitUnicodeString(&target_name_ustr, lpTargetPath);

   status = NtCreateSymbolicLinkObject(&handle, MAXIMUM_ALLOWED,
                               objattr, target_name_ustr); ⑤

   if (enable_impersonation) { ⑥
       CsrRevertToSelf();
   }
   if (NT_SUCCESS(status)) {
       status = NtMakePermanentObject(handle); ⑦
       NtClose(handle);
   }
   return status;
}

We can see the first thing the code does is build the device name path and then try to open the symbolic link object for DELETE access ①. This is because the API supports redefining an existing symbolic link, so it must first try to delete the old link. If we follow the default path where the link doesn’t exist, we’ll see the code impersonates the caller (the low privileged user in this case) ④, then creates the symbolic link object ⑤, reverts the impersonation ⑥ and makes the object permanent ⑦ before returning the status of the operation. Nothing too surprising; we can understand why we can create arbitrary symbolic links, because all the code does is prefix the passed device name with “\??”. As the code impersonates the caller when doing any significant operation, we can only create the link in a location that the user could already write to.

What’s more interesting is the middle conditional, where the target symbolic link is opened for DELETE access, which is needed to call NtMakeTemporaryObject. The opened handle is passed to another function ②, IsGlobalSymbolicLink, and based on the result of that function a flag disabling impersonation is set and the device name is recreated again with the global DOS device location \GLOBAL?? as the prefix ③. What is IsGlobalSymbolicLink doing? Again we can just RE the function and check.

void IsGlobalSymbolicLink(HANDLE handle, BOOLEAN* is_global) {
   BYTE buffer[0x1000];
   NtQueryObject(handle, ObjectNameInformation, buffer, sizeof(buffer));
   UNICODE_STRING prefix;
   RtlInitUnicodeString(&prefix, L"\\GLOBAL??\\");
   // Check if object name starts with \GLOBAL??
   *is_global = RtlPrefixUnicodeString(&prefix, (PUNICODE_STRING)buffer);
}

The code checks if the opened object’s name starts with \GLOBAL??\. If so, it sets the is_global flag to TRUE. This results in the flag enabling impersonation being cleared and the device name being rewritten. What this means is that if the caller has DELETE access to a symbolic link inside the global DOS device directory, then the symbolic link will be recreated without any impersonation, which means it will be created as the SYSTEM user. This in itself doesn’t sound especially interesting, as by default only an administrator could open one of the global symbolic links for DELETE access. However, what if we could create a child directory underneath the global DOS device directory which could be written to by a low privileged user? Any symbolic link in that directory could be opened for DELETE access, as the low privileged user could grant themselves any access they liked; the code would flag the link as being global (when in fact that’s not really the case), disable impersonation and recreate it as SYSTEM. And guess what, we have a vulnerability which would allow us to create an arbitrary object directory under the global DOS device directory.

Again, this might not be very exploitable if it wasn’t for the rewriting of the path. We can abuse the fact that the path “\??\ABC” isn’t the same as “\GLOBAL??\ABC” to construct a mechanism to create an arbitrary symbolic link anywhere in the object manager namespace as SYSTEM. How does this help us? If you write a symbolic link to KnownDlls then it will be followed by the kernel when opening a section requested by the DLL loader. Therefore, even though we can’t directly create a new section object inside KnownDlls, we can create a symbolic link which points outside that directory to a place where the low-privileged user can create the section object. We can now abuse the hijack to load an arbitrary DLL into memory inside a privileged process, and privilege elevation is achieved.

Pulling this all together we can exploit our vulnerability using the following steps:

  1. Use the vulnerability to create the directory “\GLOBAL??\KnownDlls”
  2. Create a symbolic link inside the new directory with the name of the DLL to hijack, such as TAPI32.DLL. The target of this link doesn’t matter.
  3. Inside the user’s DOS device directory create a new symbolic link called “GLOBALROOT” pointing to “\GLOBAL??”. This will override the real GLOBALROOT symbolic link object when a caller accesses it via the user’s DOS device directory.
  4. Call DefineDosDevice specifying a device name of “GLOBALROOT\KnownDlls\TAPI32.DLL” and a target path of a location that the user can create section objects inside. This will result in the following operations:
    1. CSRSS opens the symbolic link “\??\GLOBALROOT\KnownDlls\TAPI32.DLL” which results in opening “\GLOBAL??\KnownDlls\TAPI32.DLL”. As this is controlled by the user the open succeeds, and the link is considered global which disables impersonation.
    2. CSRSS rewrites the path to “\GLOBAL??\GLOBALROOT\KnownDlls\TAPI32.DLL” then calls NtCreateSymbolicLinkObject without impersonation. This results in following the real GLOBALROOT link, which results in creating the symbolic link “\KnownDlls\TAPI32.DLL” with an arbitrary target path.
  5. Create the image section object at the target location for an arbitrary DLL, then force it to be loaded into a privileged service such as the Diagnostics Hub by getting the service to call LoadLibrary with a path to TAPI32.DLL.
  6. Privilege escalation is achieved.

Abusing the DefineDosDevice API actually has a second use: it’s an Administrator to Protected Process Light (PPL) bypass. PPL processes still use KnownDlls, so if you can add a new entry you can inject code into the protected process. To prevent that attack vector Windows marks the KnownDlls directory with a Process Trust Label which blocks all but the highest level PPL processes from writing to it.
How does our exploit work then? CSRSS actually runs at the highest PPL level, so it is allowed to write to the KnownDlls directory. Once the impersonation is dropped, the identity of the process is used, which allows full access.

If you want to test this exploit I’ve attached the new PoC to the issue tracker here.

Wrapping Up

You might wonder at this point if I reported the behavior of DefineDosDevice to MSRC? I didn’t, mainly because it’s not in itself a vulnerability. Even in the case of Administrator to PPL, MSRC do not consider that a serviceable security boundary (example). Of course the Windows developers might choose to try and change this behavior in the future, assuming it doesn’t cause a major regression in compatibility. This function has been around since the early days of Windows and the current behavior since at least Windows XP so there’s probably something which relies on it. By describing this exploit in detail, I want to give MS as much information as necessary to address the exploitation technique in the future.

I did report the vulnerability to MSRC and it was fixed in the June 2018 patches. How did Microsoft fix the vulnerability? The developers added a new API, CreateAppContainerTokenForUser, which impersonates the user’s token during creation of the new AppContainer token. By impersonating during token creation the code ensures that all objects are created only with the privileges of the user. As it’s a new API, existing code would have to be changed to use it, so there’s a chance you could still find code which uses the old CreateAppContainerToken in a vulnerable pattern.

Exploiting vulnerabilities on any platform sometimes requires pretty in-depth knowledge about how different components interact. In this case while the initial vulnerability was clearly a security issue, it’s not clear how you could proceed to full exploitation. It’s always worth keeping a log of interesting behavior which you encounter during reverse engineering as even if something is not a security bug itself, it might be useful to exploit another vulnerability.