Thursday 27 December 2018

Abusing Mount Points over the SMB Protocol

This blog post is a quick writeup on an interesting feature of SMBv2 which might have uses for lateral movement and red teaming. When I last spent significant time looking at symbolic link attacks on Windows I took a close look at the SMB server. Since version 2 the SMB protocol has supported symbolic links, specifically the NTFS reparse point format. If the SMB server encounters an NTFS symbolic link within a share it'll extract the REPARSE_DATA_BUFFER and return it to the client, as described in §2.2.2.2.1 of the SMBv2 protocol specification.

Screenshot of symbolic link error response from SMB specifications.

The client OS is responsible for parsing the REPARSE_DATA_BUFFER and following it locally. This means that only files the client can already access can be referenced by symbolic links. In fact resolving symbolic links locally isn't even enabled by default, although I did find a bypass which allowed a malicious server to circumvent the client policy and have symbolic links resolved locally. Microsoft declined to fix the bypass at the time; it's issue 138 if you're interested.
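For reference, the buffer the server returns uses the symlink layout from [MS-FSCC] §2.1.2.4. Here's a rough sketch of parsing and building one in Python; the field offsets are my reading of the specification, not code lifted from any SMB client:

```python
import struct

IO_REPARSE_TAG_SYMLINK = 0xA000000C

def parse_symlink_reparse(data):
    # Common header: ReparseTag, ReparseDataLength, Reserved.
    tag, data_len, _ = struct.unpack_from("<IHH", data, 0)
    if tag != IO_REPARSE_TAG_SYMLINK:
        raise ValueError("not a symlink reparse buffer")
    # Symlink-specific fields, then PathBuffer starting at offset 20.
    sub_off, sub_len, _po, _pl, flags = struct.unpack_from("<HHHHI", data, 8)
    path_buffer = data[20:20 + data_len - 12]
    target = path_buffer[sub_off:sub_off + sub_len].decode("utf-16-le")
    return target, bool(flags & 1)  # SYMLINK_FLAG_RELATIVE

def build_symlink_reparse(target, relative=False):
    name = target.encode("utf-16-le")
    path_buffer = name + name  # SubstituteName then PrintName
    return struct.pack("<IHHHHHHI", IO_REPARSE_TAG_SYMLINK,
                       12 + len(path_buffer), 0,
                       0, len(name),          # SubstituteName offset/length
                       len(name), len(name),  # PrintName offset/length
                       1 if relative else 0) + path_buffer

target, relative = parse_symlink_reparse(build_symlink_reparse("dummy.txt", True))
print(target, relative)
```

The round trip is only a model of the wire format; the client additionally applies its own policy before following the target.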

What I found interesting is while IO_REPARSE_TAG_SYMLINK is handled specially on the client, if the server encounters the IO_REPARSE_TAG_MOUNT_POINT reparse point it would follow it on the server. Therefore, if you could introduce a mount point within a share you could access any fixed disk on the server, even if it's not shared directly. That could have many uses for lateral movement, but the question becomes how could we add a mount point without already having local access to the disk?
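Before trying that, it's worth being concrete about what a mount point is on disk: a reparse point with tag IO_REPARSE_TAG_MOUNT_POINT whose data follows the [MS-FSCC] §2.1.2.5 layout, the same header as a symlink but with no Flags field and NUL-terminated names. A sketch of building that buffer (my reading of the spec; actually setting it requires sending FSCTL_SET_REPARSE_POINT on Windows):

```python
import struct

IO_REPARSE_TAG_MOUNT_POINT = 0xA0000003

def build_mount_point_reparse(target, print_name=None):
    # Mount point targets use the NT object namespace form, e.g. "\??\C:\".
    if print_name is None:
        print_name = target
    sub = target.encode("utf-16-le") + b"\x00\x00"      # NUL-terminated
    prn = print_name.encode("utf-16-le") + b"\x00\x00"
    path_buffer = sub + prn
    return struct.pack("<IHHHHHH",
                       IO_REPARSE_TAG_MOUNT_POINT,
                       8 + len(path_buffer), 0,   # ReparseDataLength, Reserved
                       0, len(sub) - 2,           # SubstituteName (length excludes NUL)
                       len(sub), len(prn) - 2) + path_buffer
```

Note the tag value is what matters to srv2.sys; everything after the header is just data the server's file system will interpret when the mount point is followed.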

First thing to try is to just create a mount point via a UNC path and see what happens. Using the MKLINK CMD built-in you get the following:

Using mklink on \\localhost\c$\abc returns the error "Local NTFS volumes are required to complete the operation."

The error would indicate that setting mount points on remote servers isn't supported. That makes some sense: setting a mount point on a remote drive could have unexpected consequences. You'd assume the protocol either doesn't support setting reparse points at all, or at least restricts them to symbolic links only. We can get a rough idea of what the protocol expects by looking up the details in the protocol specification. Setting a reparse point requires sending the FSCTL_SET_REPARSE_POINT IO control code to a file, therefore we can look up the section on the SMB2 IOCTL command to see if there's any information about the control code.
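As an aside, if you want to match the FSCTL value named in the specification against what you see on the wire, control codes are built with the Windows SDK's CTL_CODE macro. A quick Python rendering (constants from winioctl.h):

```python
def ctl_code(device_type, function, method, access):
    # CTL_CODE from winioctl.h: device type in bits 16-31, required access
    # in bits 14-15, function number in bits 2-13, transfer method in 0-1.
    return (device_type << 16) | (access << 14) | (function << 2) | method

FILE_DEVICE_FILE_SYSTEM = 0x9
METHOD_BUFFERED = 0
FILE_ANY_ACCESS = 0

# FSCTL_SET_REPARSE_POINT is function 41 on the file system device type.
FSCTL_SET_REPARSE_POINT = ctl_code(FILE_DEVICE_FILE_SYSTEM, 41,
                                   METHOD_BUFFERED, FILE_ANY_ACCESS)
print(hex(FSCTL_SET_REPARSE_POINT))  # 0x900a4
```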

After a bit of digging you'll find that FSCTL_SET_REPARSE_POINT is indeed supported and there's a note in §3.3.5.15.13 which I've reproduced below.

"When the server receives a request that contains an SMB2 header with a Command value equal to SMB2 IOCTL and a CtlCode of FSCTL_SET_REPARSE_POINT, message handling proceeds as follows:
If the ReparseTag field in FSCTL_SET_REPARSE_POINT, as specified in [MS-FSCC] section 2.3.65, is not IO_REPARSE_TAG_SYMLINK, the server SHOULD verify that the caller has the required permissions to execute this FSCTL.<330> If the caller does not have the required permissions, the server MUST fail the call with an error code of STATUS_ACCESS_DENIED."
The text in the specification seems to imply the server only needs to check explicitly for IO_REPARSE_TAG_SYMLINK; if the tag is something different it should perform some sort of check to see if it's allowed, but nothing says that setting a different tag is explicitly banned. Perhaps it's just the MKLINK built-in which doesn't handle this scenario? Let's try the CreateMountPoint tool from my symboliclink-testing-tools project and see if that helps.

Using CreateMountPoint on \\localhost\c$\abc gives access denied.

CreateMountPoint doesn't show an error about only supporting local NTFS volumes, but it does return an access denied error. This ties in with the description in §3.3.5.15.13: if the implied check fails the code should return access denied. Of course the protocol specification doesn't actually say what check should be performed, so I guess it's time to break out the disassembler and look at the implementation in the SMBv2 driver, srv2.sys.

I used IDA to look for immediate values matching IO_REPARSE_TAG_SYMLINK, which is 0xA000000C. It seems likely that any check would look for that value first, along with any checking for the other tags. In the driver from Windows 10 1809 there was only one hit, in Smb2ValidateIoctl. The code is roughly as follows:

NTSTATUS Smb2ValidateIoctl(SmbIoctlRequest* request) {
  // ...
  switch (request->IoControlCode) {
  case FSCTL_SET_REPARSE_POINT:
    REPARSE_DATA_BUFFER* reparse = (REPARSE_DATA_BUFFER*)request->Buffer;
    // Validate length etc.
    if (reparse->ReparseTag != IO_REPARSE_TAG_SYMLINK &&
        !request->SomeOffset->SomeByteValue) {
      return STATUS_ACCESS_DENIED;
    }
    // Complete FSCTL_SET_REPARSE_POINT request.
  }
}

The code extracts the data from the IOCTL request and fails with STATUS_ACCESS_DENIED if the tag is not IO_REPARSE_TAG_SYMLINK and some byte value referenced from the request data is 0. Tracking down who sets this value can sometimes be tricky, however I usually have good results by just searching for the variable's offset as an immediate value in IDA, in this case 0x200, and going through the results looking for likely MOV instructions. I found an instruction "MOV [RCX+0x200], AL" inside Smb2ExecuteSessionSetupReal which looked to be the one. The variable is set with the result of a call to Smb2IsAdmin, which just checks whether the caller has the BUILTIN\Administrators group in their token. It seems we can set arbitrary reparse points on a remote share as long as we're an administrator on the machine. We should still test that's really the case:

Using CreateMountPoint on \\localhost\c$\abc is successful and listing the directory showing the windows folder.


Testing from an administrator account allows us to create the mount point, and when listing the directory from a UNC path the Windows folder is shown. While I've demonstrated this on local admin shares this will work on any share and the mount point is followed on the remote server.
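Putting the disassembly together, the server-side gate on FSCTL_SET_REPARSE_POINT boils down to one small check, which can be modelled like this (the names are mine, taken from the pseudocode above):

```python
IO_REPARSE_TAG_SYMLINK = 0xA000000C
IO_REPARSE_TAG_MOUNT_POINT = 0xA0000003

def smb2_allows_set_reparse(reparse_tag, session_is_admin):
    # Symlinks always pass, as the client is responsible for resolving them;
    # any other tag requires the session to belong to BUILTIN\Administrators.
    return reparse_tag == IO_REPARSE_TAG_SYMLINK or session_is_admin

print(smb2_allows_set_reparse(IO_REPARSE_TAG_MOUNT_POINT, False))  # False
print(smb2_allows_set_reparse(IO_REPARSE_TAG_MOUNT_POINT, True))   # True
```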

Is this trick useful? Requiring administrator access means it's not something you could abuse for local privilege escalation, and if you have administrator access remotely there are almost certainly nastier things you could do. Still, it could be useful if the target machine has the admin shares disabled, or if there's monitoring in place which would detect the use of ADMIN$ or C$ in lateral movement: if there's any other writable share you could add a mount point which would give full control over any other fixed drive.

I can't find anyone documenting this before, but I could have missed it as search results are heavily biased towards Samba configurations when you search for SMB and mount points (for obvious reasons). This trick is another example of why you should test any assumptions about the security behavior of a system, as the actual behavior is probably not documented. Even though a tool such as MKLINK claims a lack of support for setting remote mount points, by digging into the available specification and looking at the code itself you can find some interesting stuff.




Sunday 18 November 2018

Finding Windows RPC Client Implementations Through Brute Force

Recently, @SandboxEscaper wrote a detailed blog post (link seems she's locked down the blog, here's a link to an archive) about reverse engineering local RPC servers for the purposes of discovering sandbox escapes and privilege escalation vulnerabilities. After reading I thought I should put together a sort-of companion piece on RPC client implementation for PoC writing, specifically not implementing one unless you really need to.

If you go and read the blog post you'll see it goes through finding an RPC service to investigate using RpcView, then using the tool to decompile the RPC interface to an IDL file which can be added to a C++ project. This approach has a few problems when you're dealing with an unknown RPC interface:

  • Even if the decompiler was perfect (and RpcView or my own in my NtObjectManager PowerShell module are definitely not) the original IDL to NDR compilation process is lossy. Reversing this process with a decompiler doesn't always produce a 100% correct IDL file and thus the regenerated NDR might not be 100% compatible.
  • The NDR engine is terrible at giving useful diagnostic information for why the IDL is incorrect, usually just returning error code 1783 "The stub received bad data". This is made even more painful when dealing with complex structures or unions which must be exactly correct otherwise it all goes to hell.
  • It's hard to use the IDL from any language but C/C++, as that's really the only supported output format for RPC interfaces.
While all three of these problems are annoying when trying to produce a working PoC, the last one annoys me especially. I have a thing about writing my PoCs in C#; about the only exception is when I need to interact with an RPC server. There are plenty of ways around this, for example I could build the client into a native DLL and export methods to call from C#, but that feels unsatisfactory. 

At least in some cases, Microsoft have already done most of the work for me. If there's a native RPC server on a default installation of Windows there must be some sort of client component. In some cases this client might be embedded completely inside a binary and not directly callable, COM is a good example. However in other cases the developers also provide a general purpose library to interact with the server. If you can find the client library, it'll bring a number of advantages:
  • If it's truly general purpose, the library will export methods which can be easily interacted with from C# using P/Invoke (or any other language which can invoke native exports).
  • The majority of these libraries will deal with setting up the RPC client connection, dealing with asynchronous calls and custom serialization requirements.
  • The NDR client code is going to be 100% compatible with the server, which should eliminate error code 1783 as well as dealing with changes to parameters, method layout and interface IDs which can happen between major versions of the OS. 
  • You only have to deal with calling a C style method (or sometimes a COM interface, but that's still a C calling convention) which gives a bit more flexibility which it comes to getting structure definitions correct.
  • As it's a library there's a chance that useful type information might be disclosed in the client code, or it will allow you to track down callers of these APIs in other binaries that you can RE to get a better idea of how to call the methods correctly.
There's sadly some disadvantages to this approach:
  • Not all clients will actually be in a general purpose library with easy entry points, or at least the entry points don't cleanly map to the underlying RPC methods. That's not to say it's useless, as you could load the DLL then use a relative pointer to the RPC client structures and manually reconstruct the call, but that removes many of the advantages.
  • The library might be general purpose but the developers added a significant amount of client side parameter verification or don't expose some parameters at all. Some bugs are only going to present themselves by calling the RPC method with parameters the developers didn't expect to receive, perhaps because they verify in the client.
To prevent this blog post getting even longer, let's look at how I could identify the client library for the Data Sharing Service, which SandboxEscaper dropped a bug in that was recently fixed as CVE-2018-8584. The bug SandboxEscaper discovered was in the method PolicyChecker::CheckFilePermission implemented in dssvc.dll. By calling one of the RPC methods, such as RpcDSSMoveFromSharedFile, an arbitrary file can be deleted by the SYSTEM user. Looking at dssvc.dll it doesn't contain any client code, so we have to go hunting for the client. For this we'll use my NtObjectManager PowerShell module as it contains code to do just this. Any lines which start with PS> are to be executed in PowerShell.

Step 1: Install the NtObjectManager module from the PowerShell gallery.

PS> Install-Module NtObjectManager -Scope CurrentUser
PS> Import-Module NtObjectManager

You might need to also disable the script execution policy for this to work successfully.

Step 2: Parse RPC interfaces in all system DLLs using Get-RpcServer cmdlet.

PS> $rpc = ls c:\windows\system32\*.dll | Get-RpcServer -ParseClients

This call passes the list of all DLLs in system32 to the Get-RpcServer command and specifies that it should also parse all clients. This command does a heuristic search in a DLL's data sections for RPC servers and clients and parses the NDR structures. You can use this to generate RPC server definitions similar to RpcView (but in my own weird C# pseudo-code syntax) but for this scenario we only care about the clients. My code does have some advantages, for example the parsed NDR data is stored as a .NET object so you can do better analysis of the interface, but that's something for another day.
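The heuristic works because the RPC interface structures the MIDL compiler emits embed the interface ID as raw GUID bytes in a binary's data sections, so candidates can be found by scanning for them. Here's a much cruder sketch of the idea in Python (this is not the module's actual implementation, which parses the full NDR structures):

```python
import uuid

def file_contains_iid(path, iid):
    # A GUID is stored in memory in its little-endian ("bytes_le") form,
    # which is what the MIDL-generated interface structures contain.
    needle = uuid.UUID(iid).bytes_le
    with open(path, "rb") as f:
        return needle in f.read()

# e.g. file_contains_iid(r"C:\Windows\System32\dsclient.dll",
#                        "bf4dc912-e52f-4904-8ebe-9317c1bdd497")
```

A raw byte scan like this produces false positives (the IID also appears in type libraries and proxy registrations), which is why the real parser validates the surrounding structure.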

Step 3: Filter out the client based on IID and Client status.

PS> $rpc | ? {$_.Client -and $_.InterfaceId -eq 'bf4dc912-e52f-4904-8ebe-9317c1bdd497'} | Select FilePath

The server's IID is bf4dc912-e52f-4904-8ebe-9317c1bdd497 which you can easily get from the IDL server definition in the uuid attribute. We also need to filter only client implementations using the Client property. 

If you've followed these procedures you'll find that the client implementation is in the DLL dsclient.dll. Admittedly we might have been able to guess this based on the similarity of names, but it's not always so simple. 

Step 4: Disassemble/RE the library to find out how to call the methods.


That doesn't mean the DLL contains a general purpose library; we'll still need to open it in a disassembler and take a look. In this case we're lucky: if we look at the exports for the dsclient.dll library we find the names match up with the server. For example there's a DSMoveFromSharedFile which presumably matches up with RpcDSSMoveFromSharedFile.


Decompilation of DSMoveFromSharedFile


If you follow this code you'll find it's just a simple wrapper around a call to the method DSCMoveFromSharedFile which binds to the RPC endpoint and calls the server. There's no parameter verification taking place so we can just determine how we can call this method from C# using the server IDL we generated earlier. 

And that's it, I was able to implement a PoC for CVE-2018-8584 by defining the following C# P/Invoke method:

[DllImport("dsclient.dll", CharSet = CharSet.Unicode)]
public static extern int DSMoveFromSharedFile(string token, string source_file);

Of course your mileage may vary depending on your RPC server. But what I've described here is a quick and easy way to determine if there's a quick and easy way to avoid writing C++ code :-)




Tuesday 9 October 2018

Farewell to the Token Stealing UAC Bypass

With the release of Windows 10 RS5 the generic UAC bypass I documented in "Reading Your Way Around UAC" (parts 1, 2 and 3) has been fixed. This quick blog post will describe the relatively simple change MS made to the kernel to fix the UAC bypass and some musing on how it still might be possible to bypass.

As a quick recap, the UAC bypass I documented allowed any normal user on the same desktop to open a privileged UAC admin process and get a handle to the process' access token. The only requirement was an existing elevated process running on the desktop, but that's very common behavior. That in itself didn't allow you to do much directly. However, by duplicating the token, which made it writable, it was possible to selectively downgrade the token so that it could be impersonated.

Prior to Windows 10 all you needed to do was downgrade the token's integrity level to Medium. This left the token still containing the Administrators group, but it passed the kernel's checks for impersonation, allowing you to directly modify administrator-only resources. For Windows 10 an elevation check was introduced which prevented a process in a non-elevated session from impersonating an elevated token. This was indicated by a flag in the limited token's logon session structure; if the flag was set and you tried to impersonate an elevated token, the check failed. This didn't stop you from impersonating the token once it was considered non-elevated, then abusing WMI to spawn a process in that session, or the Secondary Logon Service, to get back administrator privileges.

Let's look now at how it was fixed. The changed code is in the SeTokenCanImpersonate method, which determines whether a token is allowed to be impersonated or not.

TOKEN* process_token = ...;
TOKEN* imp_token = ...;
#define LIMITED_LOGON_SESSION 0x4

if (SeTokenIsElevated(imp_token)) {
  if (!SeTokenIsElevated(process_token) &&
      (process_token->LogonSession->Flags & LIMITED_LOGON_SESSION)) {
    return STATUS_PRIVILEGE_NOT_HELD;
  }
}
if (process_token->LogonSession->Flags & LIMITED_LOGON_SESSION &&
    !(imp_token->LogonSession->Flags & LIMITED_LOGON_SESSION)) {
  SepLogUnmatchedSessionFlagImpersonationAttempt();
  return STATUS_PRIVILEGE_NOT_HELD;
}

The first part of the code is the same as was introduced in Windows 10: if you try to impersonate an elevated token and your process is running in the limited logon session, the impersonation is rejected. The newly introduced check ensures that if you're in the limited logon session you're also not trying to impersonate a token from a non-limited logon session. And there goes the UAC bypass: any variation of the attack needs to impersonate the token to elevate your privileges.
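The combined logic can be modelled with a small function (the parameter names are mine; True means the impersonation check passes at this point in SeTokenCanImpersonate):

```python
LIMITED_LOGON_SESSION = 0x4

def se_token_can_impersonate(proc_elevated, proc_session_flags,
                             imp_elevated, imp_session_flags):
    proc_limited = bool(proc_session_flags & LIMITED_LOGON_SESSION)
    imp_limited = bool(imp_session_flags & LIMITED_LOGON_SESSION)
    # Original Windows 10 check: the limited UAC logon session can't
    # impersonate an elevated token.
    if imp_elevated and not proc_elevated and proc_limited:
        return False
    # New RS5 check: the limited session can't impersonate *any* token
    # from a non-limited logon session, elevated or not.
    if proc_limited and not imp_limited:
        return False
    return True

# The old bypass: downgrade the admin token until it's "non-elevated", then
# impersonate it. The first check now passes but the new second check doesn't.
print(se_token_can_impersonate(False, LIMITED_LOGON_SESSION, False, 0))  # False
```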

The fix is pretty simple, although I can't help thinking there must be some edge case this would trip up. The only case which comes to mind is tokens returned from the LogonUser APIs, however those are special-cased earlier in the function, so I could imagine this only being a problem when there's a more significant security bug.

It's worth bearing in mind that due to the way Microsoft fixes bugs in UAC this will not be ported to versions prior to RS5. So if you're on anything from Windows Vista through Windows 10 RS4 you can still abuse this to bypass UAC, in most cases silently. And there's hardly a lack of other UAC bypasses; you just have to look at UACME. Though I'll admit none of those bypasses are as interesting to me as a fundamental design flaw in the whole technology. The only thing I can say is that Microsoft seems committed to fixing these bugs eventually, even if they seem to introduce more UAC bypasses in each release.

Can this fix be bypassed? It's predicated on the user not having control over a process running outside of the limited logon session. A potential counterexample would be processes spawned from an elevated process where the token is intentionally restricted, as in sandboxed applications like Adobe Reader or Chrome. However, for that to be exploitable you'd need to convince the user to elevate those applications, which doesn't make for a general technique. There are of course potential impersonation bugs, such as my Constrained Impersonation attack, which could be used to bypass Over-The-Shoulder elevation but also to impersonate SYSTEM tokens. Bugs like that tend to be something Microsoft wants to fix (the Constrained Impersonation one was fixed as CVE-2018-0821), so again not a general technique.

I did have a quick think about other ways of bypassing this, then I realized I don't actually care ;-)

Sunday 9 September 2018

Finding Interactive User COM Objects using PowerShell

Easily one of the most interesting blogs on Windows behaviour is Raymond Chen's The Old New Thing. I noticed he'd recently posted about using "Interactive User" (IU) COM objects to go from an elevated application (in the UAC sense) to the current user for the desktop. What interested me is that registering arbitrary COM objects as IU can have security consequences, and of course this blog entry didn't mention anything about that.

The two potential security issues can be summarised as:

  1. An IU COM object can be a sandbox escape if it has non-default security (for example Project Zero Issue 1079) as you can start a COM server outside the sandbox and call methods on the object.
  2. An IU COM object can be a cross-session elevation of privilege if it has non-default security (for example Project Zero Issue 1021) as you can start a COM server in a different console session and call methods on the object.
I've blogged about this before when I discussed how I exploited a reference cycle bug in NtCreateLowBoxToken (see Project Zero Issue 483) and how to use my OleView.NET tool to find classes to check. Why do I need another blog post about it? I recently uploaded version 1.5 of OleView.NET, which comes with a fairly comprehensive PowerShell module, and this seemed like a good opportunity for a quick tutorial on using the module to find targets for analysis, to see if you can find a new sandbox escape or cross-session exploit.

Note I'm not discussing how you go about reverse engineering the COM implementation for anything we find. I also won't be dropping any unknown bugs, but just giving you the information needed to find interesting COM servers.

Getting Started with PowerShell Module


First things first, you'll need to grab the release of v1.5 from THIS LINK (edit: you can now also get the module from the PowerShell Gallery). Unpack it to a directory on your system, then open PowerShell and navigate to the unpacked directory. Make sure you've allowed arbitrary scripts to run in PS, then run the following command to load the module.

PS C:\> Import-Module .\OleViewDotNet.psd1

As long as you see no errors the PS module will now be loaded. Next we need to capture a database of all COM registration information on the current machine. Normally when you open the GUI of OleView.NET the database is loaded automatically, but not in the module. Instead you'll need to load it manually using the following command:

PS C:\> Get-ComDatabase -SetCurrent

The Get-ComDatabase cmdlet parses the system configuration for all COM information my tool knows about. This can take some time (maybe up to a minute, more if you have Process Monitor running), so it'll show a progress dialog. By specifying the -SetCurrent parameter we store the database as the current global database for the session. Many of the commands in the module take a -Database parameter where you can specify the database you want to extract information from. Ensuring you pass the correct value gets tedious after a while, so by setting the current database you never need to specify the database explicitly (unless you want to use a different one).

Now it's going to suck if every time you want to look at some COM information you need to run the lengthy Get-ComDatabase command. Trust me, I've stared at the progress bar too long. That's why I implemented a simple save and reload feature. Running the following command will write the current database out to the file com.db:

PS C:\> Set-ComDatabase .\com.db

You can then reload using the following command:

PS C:\> Get-ComDatabase .\com.db -SetCurrent

You'll find this is significantly faster. Worth noting: if you open a 64 bit PS command line you'll capture a database of the 64 bit view of COM, whereas in 32 bit PS you'll get a 32 bit view. 

Finding Interactive User COM Servers


With the database loaded we can now query it for COM registration information. You can get a handle to the underlying database object as the variable $comdb using the following command:

PS C:\> $comdb = Get-CurrentComDatabase

However, I wouldn't recommend using the COM database directly as it's not really designed for ease of use. Instead I provide various cmdlets to extract information from the database which I've summarised in the following table:


Command               Description
-------               -----------
Get-ComClass          Get list of registered COM classes
Get-ComInterface      Get list of registered COM interfaces
Get-ComAppId          Get list of registered COM AppIDs
Get-ComCategory       Get list of registered COM categories
Get-ComRuntimeClass   Get list of Windows Runtime classes
Get-ComRuntimeServer  Get list of Windows Runtime servers

Each command defaults to returning all registered objects from the database. They also take a range of parameters to filter the output to a collection or a single entry. I'd recommend passing the name of the command to Get-Help to see descriptions of the parameters and examples of use.

Why didn't I expose it as a relational database, say using SQL? The database is really an object collection, and one thing PS is good at is interacting with objects. You can use the Where-Object command to filter objects, Select-Object to extract certain properties, and so on. Therefore it's probably a lot more work to build a native query syntax than to just let you write PS scripts to filter, sort and group. To make life easier I have spent some time linking objects together, so for example each COM class object has an AppIdEntry property which links to the object for the AppID (if registered). In turn the AppID entry has a ClassEntries property which tells you all classes registered with that AppID.

Okay, let's get a list of classes that are registered with RunAs set to "Interactive User". The class object returned from Get-ComClass has a RunAs property which is set to the name of the user account that the COM server runs as. We also need to look only at COM servers which run out of process, which we can do by filtering for LocalServer32 classes.

Run the following command to do the filtering:

PS C:\> $runas = Get-ComClass -ServerType LocalServer32 | ? RunAs -eq "Interactive User"

You should now find the $runas variable contains a list of classes which will run as IU. If you don't believe me you can double check by just selecting out the RunAs property (the default table view won't show it) using the following:

PS C:\> $runas | Select Name, RunAs

Name                  RunAs
----                  -----
BrowserBroker Class   Interactive User
User Notification     Interactive User
...

On my machine I have around 200 classes installed that will run as IU. But that's not the end of the story; only a subset of these classes will actually be accessible from a sandbox such as Edge, or cross-session. We need a way of filtering them down further. To filter we'll need to look at the associated security of the class registration, specifically the Launch and Access permissions. In order to launch a new object and get an instance of the class we need to be granted Launch permission, then in order to access the object we get back we need to be granted Access permission. The class object exposes these as the LaunchPermission and AccessPermission properties respectively. However, they just contain a Security Descriptor Definition Language (SDDL) string representation of the security descriptor, which isn't easy to understand at the best of times. Fortunately I've made it easier: you can use the Select-ComAccess cmdlet to filter on classes which can be accessed from certain scenarios.

Let's first look at what objects we could access from the Edge content sandbox. First we need the access token of a sandboxed Edge process. The easiest way to get that is just to start Edge and open the token from one of the MicrosoftEdgeCP processes. Start Edge, then run the following to dump a list of the content processes.

PS C:\> Get-Process MicrosoftEdgeCP | Select Id, ProcessName

   Id ProcessName
   -- -----------
 8872 MicrosoftEdgeCP
 9156 MicrosoftEdgeCP
10040 MicrosoftEdgeCP
14856 MicrosoftEdgeCP

Just pick one of the PIDs; for this purpose it doesn't matter too much, as all Edge CPs are more or less equivalent. Then pass the PID to the -ProcessId parameter of Select-ComAccess and pipe in the $runas variable we got from before.

PS C:\> $runas | Select-ComAccess -ProcessId 8872 | Select Name

Name
----
PerAppRuntimeBroker
...

On my system, that reduces the count from 200 classes to 9, which is a pretty significant reduction. If I rerun this command with a normal UWP sandboxed process (such as the calculator) that rises to 45 classes. Still fewer than 200, but a significantly larger attack surface. The reason for the reduction is that Edge content processes use Low Privilege AppContainer (LPAC), which heavily cuts down inadvertent attack surface. 

What about cross-session? The distinction here is that you'll be running as one unsandboxed user account and would like to attack another user account. This is quite important for the security of COM objects: the default access security descriptor uses the special SELF SID, which is replaced by the user account of the process hosting the COM server. Of course if the server is running as a different user in a different session the defaults won't grant access. You can see the default security descriptor using the following command:

PS C:\> Show-ComSecurityDescriptor -Default -ShowAccess

This command results in a GUI being displayed with the default access security descriptor. You can see in the screenshot that the first entry grants access to the SELF SID.

Default COM access security showing NT AUTHORITY\SELF

To test for accessible COM classes we just need to tell the access checking code to replace the SELF SID with another SID we're not granted access to. You can do this by passing a SID to the -Principal parameter. The SID can be anything as long as it's not our user account or one of the groups we have in our access token. Try running the following command:

PS C:\> $runas | Select-ComAccess -Principal S-1-2-3-4 | Select Name

Name
----
BrowserBroker Class
...

On my system that leaves around 54 classes, still a reduction from 200 but better than nothing, and that still gives plenty of attack surface.
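Under the hood the -Principal parameter works because SELF (SID S-1-5-10) is only a placeholder: during the access check it stands in for the principal of the process hosting the COM server. A simplified model of the substitution (my sketch of the behavior, not the tool's code):

```python
SELF_SID = "S-1-5-10"  # NT AUTHORITY\SELF

def resolve_self_sid(sddl, server_principal):
    # Replace SELF with the SID the COM server would actually run as,
    # before performing the access check against the caller's token.
    return sddl.replace(SELF_SID, server_principal)

# The first ACE grants COM access rights to the hosting identity; once SELF
# resolves to a different user's SID, a cross-session caller no longer
# matches that entry.
print(resolve_self_sid("D:(A;;0xB;;;S-1-5-10)(A;;0xB;;;BA)", "S-1-2-3-4"))
```

This is why passing any SID you don't hold to -Principal is enough: it only needs to be something that won't match your token's groups.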

Inspecting COM Objects


I've only shown you how to find potential targets for sandbox escape or cross-session attacks. The class still needs some way of elevating privileges, such as a method on an interface which will execute an arbitrary executable or similar. Let's quickly look at some of the functions in the PS module which can help you find this functionality. We'll use the example of the HxHelpPane class I abused previously (and which is now fixed as a cross-session attack in Project Zero Issue 1224, probably).

The first thing is just to get a reference to the class object for the HxHelpPane server class. We can get the class using the following command:

PS C:\> $cls = Get-ComClass -Name "AP Client HxHelpPaneServer Class"

The $cls variable should now be a reference to the class object. The first thing to do is find out what interfaces the class supports. In order to access a COM object out-of-process you need a registered COM proxy, so we can use the list of registered proxy interfaces to find what the object responds to. Again I have a command to do just that, Get-ComClassInterface. Run the following command to get back a list of interface objects:

PS C:\> Get-ComClassInterface $cls | Select Name, Iid

Name              Iid
----              ---
IMarshal          00000003-0000-0000-c000-000000000046
IUnknown          00000000-0000-0000-c000-000000000046
IMultiQI          00000020-0000-0000-c000-000000000046
IClientSecurity   0000013d-0000-0000-c000-000000000046
IHxHelpPaneServer 8cec592c-07a1-11d9-b15e-000d56bfe6ee

Sometimes there are interesting interfaces on the factory object as well; you can get the list of interfaces for that by specifying the -Factory parameter to Get-ComClassInterface. Of the interfaces shown only IHxHelpPaneServer is unique to this class; the rest are standard COM interfaces. That's not to say they won't have interesting behavior, but they wouldn't be the first place I'd look for interesting methods.

The implementation of these interfaces is likely to be in the COM server binary, but where is that? We can just inspect the DefaultServer property on the class object.

PS C:\> $cls.DefaultServer
C:\Windows\helppane.exe

We can now just break out IDA and go to town? Not so fast, it'd be useful to know exactly what we're dealing with first. At this point I'd recommend at least using my tool's NDR parsing code to extract how the interface is structured. You can do this by passing an interface object from Get-ComClassInterface, or just normal Get-ComInterface, into the Get-ComProxy command. Unfortunately if you do this you'll find a problem:

PS C:\> Get-ComInterface -Name IHxHelpPaneServer | Get-ComProxy
Exception: "Error while parsing NDR structures"
At OleViewDotNet.psm1:1587 char:17
+ [OleViewDotNet.COMProxyInterfaceInstance]::GetFromIID($In
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) []
    + FullyQualifiedErrorId : NdrParserException

This could be a bug in my code, but there's a more likely reason: the proxy could be an auto-created proxy from a type library. We can check that using the following:

PS C:\> Get-ComInterface -Name IHxHelpPaneServer

Name                 IID             HasProxy   HasTypeLib
----                 ---             --------   ----------
IHxHelpPaneServer    8cec592c-07a1... True       True

We can see in the output that the interface has a registered type library; for an interface this likely means its proxy is auto-generated. Where's the type library? Again we can use another database command, Get-ComTypeLib, and pass it the IID of the interface:

PS C:\> Get-ComTypeLib -Iid 8cec592c-07a1-11d9-b15e-000d56bfe6ee

TypelibId  : 8cec5860-07a1-11d9-b15e-000d56bfe6ee
Version    : 1.0
Name       : AP Client 1.0 Type Library
Win32Path  : C:\Windows\HelpPane.exe
Win64Path  : C:\Windows\HelpPane.exe
Locale     : 0
NativePath : C:\Windows\HelpPane.exe

Now you can use your favourite tool to decompile the type library to get back your interface information. You can also use the following command if you capture the type library information to the variable $tlb:

PS C:\> Get-ComTypeLibAssembly $tlb | Format-ComTypeLib
...
[Guid("8cec592c-07a1-11d9-b15e-000d56bfe6ee")]
interface IHxHelpPaneServer
{
   /* Methods */
   void DisplayTask(string bstrUrl);
   void DisplayContents(string bstrUrl);
   void DisplaySearchResults(string bstrSearchQuery);
   void Execute(string pcUrl);
}

You now know the likely names of the functions, which should aid you in looking them up in IDA or similar. That's the end of this quick tutorial; there's plenty more to discover in the PS module, you'll just have to poke around and see. Happy hunting.


Sunday 22 July 2018

UWP Localhost Network Isolation and Edge


This blog post describes an interesting “feature” added to Windows to support Edge accessing the loopback network interface. For reference this was on Windows 10 1803 running Edge 42.17134.1.0 as well as verifying on Windows 10 RS5 17713 running 43.17713.1000.0.

I like the concept of the App Container (AC) sandbox Microsoft introduced in Windows 8. It moved sandboxing on Windows from restricted tokens, which were hard to reason about and required massive kludges to get working, to a reasonably consistent capability based model where you are heavily limited in what you can do unless you’ve been granted an explicit capability when your application is started. On Windows 8 this was limited to a small set of known capabilities. On Windows 10 this has been expanded massively by effectively allowing an application to define its own capabilities and enforce them through the normal Windows access control mechanisms.

I’ve been looking at AC more, and its ability to do network isolation, where access to the network requires being granted capabilities such as “internetClient”, seems very useful. It’s a little known fact that even in the most heavily locked down, restricted token sandbox it’s possible to open network sockets by accessing the raw AFD driver. AC solves this issue quite well: it doesn’t block access to the AFD driver, instead the firewall checks for the capabilities and blocks connecting or accepting sockets.

One issue that does come up when building a generic sandboxing mechanism on this AC network isolation primitive is that, regardless of what capabilities you grant, it’s not possible for an AC application to access localhost. For example you might want your sandboxed application to access a web server on localhost for testing, or use a localhost proxy to MITM the traffic. Neither of these scenarios can be made to work in an AC sandbox with capabilities alone.

The likely rationale for blocking localhost is that allowing sandboxed content access to it can be a big security risk. Windows runs quite a few services accessible locally which could be abused, such as the SMB server. Rather than adding a capability to grant access to localhost, there’s an explicit list of packages exempt from the localhost restriction stored by the firewall service. You can access or modify this list using the firewall APIs, such as the NetworkIsolationSetAppContainerConfig function, or using the CheckNetIsolation tool installed with Windows. This behavior seems to be rationalized as accessing loopback being a developer feature, not something which real applications should rely on. Curious, I wondered whether I had ACs already in the exemption list. You can list all available exemptions by running “CheckNetIsolation LoopbackExempt -s” on the command line.
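To make the shape of those APIs concrete, here’s a toy Python model (illustration only, not the real firewall code): NetworkIsolationGetAppContainerConfig hands back the current list of exempted package SIDs and NetworkIsolationSetAppContainerConfig replaces the whole list, so a tool like CheckNetIsolation has to read the list, modify it and write it back.

```python
# Toy model of the loopback exemption list. This is NOT the Windows
# implementation, just a sketch of the list-replacement semantics the
# real APIs appear to have.

class LoopbackExemptions:
    def __init__(self):
        self._sids = []

    def get_config(self):
        # Analogue of NetworkIsolationGetAppContainerConfig: returns
        # the complete current list of exempted package SIDs.
        return list(self._sids)

    def set_config(self, sids):
        # Analogue of NetworkIsolationSetAppContainerConfig: the new
        # list replaces the old one wholesale.
        self._sids = list(sids)

    def add(self, sid):
        # Roughly what "CheckNetIsolation LoopbackExempt -a -p=SID"
        # boils down to: read the list, append, write it back.
        sids = self.get_config()
        if sid not in sids:
            sids.append(sid)
        self.set_config(sids)
```

Note the Set call takes the full list, which matters later: there’s nothing in the API itself tying the SIDs being written to the identity of the caller.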


On my Windows 10 machine we can see two exemptions already installed, which is odd for a developer feature which no applications should be using. The first entry shows “AppContainer NOT FOUND”, which indicates that the registered SID doesn’t correspond to a registered AC. The second entry shows the very unhelpful name of “001”, which at least means it’s an application on the current system. What’s going on? We can use my NtObjectManager PS module and its Get-NtSid cmdlet on the second SID to see if that can resolve a better name.


Aha, “001” is actually a child AC of the Edge package. We could have guessed this by looking at the length of the SID: a normal AC SID has 8 sub-authorities, whereas a child has 12, with the extra 4 being appended to the end of the base AC SID. Looking back at the unregistered SID we can see it’s also an Edge AC SID, just with a child which isn’t actually registered. The “001” AC seems to be the one used to host Internet content, at least based on the browser security whitepaper from X41Sec (see page 54).
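The SID arithmetic is easy to sketch in Python (the numeric values below are made up for illustration; real package SIDs are derived from a hash of the package family name): an AC package SID is S-1-15-2- followed by 8 sub-authorities in total, and a child AC SID appends 4 more, so stripping the last 4 recovers the parent package SID.

```python
def subauthorities(sid):
    """Split a SID string such as S-1-15-2-x-y-... into its
    sub-authorities. parts[1] is the revision, parts[2] the identifier
    authority; everything after that is a sub-authority."""
    parts = sid.split("-")
    assert parts[0] == "S"
    return [int(p) for p in parts[3:]]

def is_appcontainer(sid):
    # AC SIDs live under authority 15 with first sub-authority 2
    # (the app package base RID), i.e. S-1-15-2-...
    return sid.startswith("S-1-15-2-")

def parent_package_sid(sid):
    """For a child AC SID (12 sub-authorities) drop the trailing 4
    to get back the base package SID (8 sub-authorities)."""
    subs = subauthorities(sid)
    if len(subs) != 12:
        raise ValueError("not a child AC SID")
    return "S-1-15-" + "-".join(str(s) for s in subs[:8])

# Made-up example values, purely for illustration:
base = "S-1-15-2-11-22-33-44-55-66-77"
child = base + "-100-200-300-400"
assert parent_package_sid(child) == base
```

This is the same trick used above: spotting that the 12 sub-authority SID’s first 8 sub-authorities matched the Edge package SID is what identified “001” as an Edge child AC.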

This is not exactly surprising. It seems when Edge was first released it wasn’t possible to access localhost resources at all (as demonstrated by an IBM help article which instructs the user to use CheckNetIsolation to add an exemption). However, at some point in development MS added an about:flags option to enable accessing localhost, and it seems it’s now the default configuration, even though as you can see in the following screenshot it says enabling it can put your device at risk.


What’s interesting though is that if you disable the flags option and restart Edge the exemption entry is deleted, and re-enabling it restores the entry again. Why is that a surprise? Well, based on previous knowledge of this exemption feature, such as this blog post by Eric Lawrence, you need admin privileges to change the exemption list. Perhaps MS have changed that behavior now? Let’s try and add an exemption using the CheckNetIsolation tool as a normal user, passing the “-a -p=SID” parameters.


I guess they haven’t, as adding a new exemption using the CheckNetIsolation tool gives us access denied. Now I’m really interested. With Edge being a built-in application, of course there’s plenty of ways that MS could have fudged the “security” checks to allow Edge to add itself to the list, but where is it?

The simplest location to add the fudge would be in the RPC service which implements the NetworkIsolationSetAppContainerConfig. (How do I know there's an RPC service? I just disassembled the API). I took a guess and assumed the implementation would be hosted in the “Windows Defender Firewall” service, which is implemented in the MPSSVC DLL. The following is a simplified version of the RPC server method for the API.

HRESULT RPC_NetworkIsolationSetAppContainerConfig(handle_t handle,
    DWORD dwNumPublicAppCs,
    PSID_AND_ATTRIBUTES appContainerSids) {

  if (!FwRpcAPIsIsPackageAccessGranted(handle)) {
    HRESULT hr;
    BOOL developer_mode = FALSE;
    IsDeveloperModeEnabled(&developer_mode);
    if (developer_mode) {
      hr = FwRpcAPIsSecModeAccessCheckForClient(1, handle);
      if (FAILED(hr)) {
          return hr;
      }
    }
    else
    {
      hr = FwRpcAPIsSecModeAccessCheckForClient(2, handle);
      if (FAILED(hr)) {
          return hr;
      }
    }
  }
  return FwMoneisAppContainerSetConfig(dwNumPublicAppCs,
                                       appContainerSids);
}

What’s immediately obvious is there’s a method call, FwRpcAPIsIsPackageAccessGranted, which has “Package” in the name, suggesting it’s inspecting some AC package information. If this call succeeds then the following security checks are bypassed and the real function FwMoneisAppContainerSetConfig is called. It’s also worth noting that the security checks differ depending on whether you’re in developer mode or not. It turns out that if you have developer mode enabled then you can also bypass the admin check, which is confirmation the exemption list was designed primarily as a developer feature.

Anyway let's take a look at FwRpcAPIsIsPackageAccessGranted to see what it’s checking.

const WCHAR* allowedPackageFamilies[] = {
  L"Microsoft.MicrosoftEdge_8wekyb3d8bbwe",
  L"Microsoft.MicrosoftEdgeBeta_8wekyb3d8bbwe",
  L"Microsoft.zMicrosoftEdge_8wekyb3d8bbwe"
};

HRESULT FwRpcAPIsIsPackageAccessGranted(handle_t handle) {
  HANDLE token;
  FwRpcAPIsGetAccessTokenFromClientBinding(handle, &token);

  WCHAR* package_id;
  RtlQueryPackageIdentity(token, &package_id);
  WCHAR family_name[0x100];
  PackageFamilyNameFromFullName(package_id, family_name);

  for (int i = 0;
       i < _countof(allowedPackageFamilies);
       ++i) {
      if (wcsicmp(family_name,
           allowedPackageFamilies[i]) == 0) {
        return S_OK;
      }
  }
  return E_FAIL;
}

The FwRpcAPIsIsPackageAccessGranted function gets the caller’s token, queries it for the package family name and then checks that against a hard-coded list. If the caller is in the Edge package (or some beta versions) the function returns success, which results in the admin check being bypassed. The conclusion we can draw is this is how Edge is adding itself to the exemption list, although we also want to check what access is required to the RPC server. For an ALPC server there are two security checks: connecting to the ALPC port and an optional security callback. We could reverse engineer the checks from the service binary, but it’s easier just to dump them from the ALPC server port; again we can use my NtObjectManager module.
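Putting the two decompiled functions together, the effective access decision can be modelled roughly as follows (a Python sketch of the flow described above, not the actual service code; the two FwRpcAPIsSecModeAccessCheckForClient modes are reduced to booleans for illustration):

```python
# Rough model of the gate in RPC_NetworkIsolationSetAppContainerConfig.
# Illustration only; names and semantics as described in the post.

ALLOWED_PACKAGE_FAMILIES = {
    "microsoft.microsoftedge_8wekyb3d8bbwe",
    "microsoft.microsoftedgebeta_8wekyb3d8bbwe",
    "microsoft.zmicrosoftedge_8wekyb3d8bbwe",
}

def can_set_exemptions(package_family, developer_mode, is_admin):
    """package_family is the caller's AC package family name, or None
    for a non-packaged caller."""
    # FwRpcAPIsIsPackageAccessGranted: the hard-coded Edge allow-list
    # (case-insensitive compare, per the wcsicmp call) skips every
    # other check.
    if package_family and package_family.lower() in ALLOWED_PACKAGE_FAMILIES:
        return True
    # Otherwise fall back to the mode checks. Per the post, developer
    # mode (mode 1) bypasses the admin requirement, modelled here as
    # always passing; everyone else (mode 2) must be an administrator.
    if developer_mode:
        return True
    return is_admin
```

The key takeaway the model makes obvious: for a caller in the Edge package, neither developer mode nor admin rights matter at all.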


As the RPC service doesn’t specify a name for its endpoint, the RPC libraries generate a random name of the form “LRPC-XXXXX”. You would usually use the endpoint mapper (EPMAPPER) to find the real name, but I just used a debugger on CheckNetIsolation to break on NtAlpcConnectPort and dumped the connection name. Then we just find the handle to that ALPC port in the service process and dump its security descriptor. The list contains Everyone and all the various network related capabilities, so any AC process with network access can talk to these APIs, including Edge LPAC. Therefore all Edge processes can access this capability and add arbitrary packages. The implementation inside Edge is in the function emodel!SetACLoopbackExemptions.

With this knowledge we can now put together some code which will exploit this “feature” to add arbitrary exemptions. You can find the PowerShell script on my Github gist.


Wrap Up

If I was willing to speculate (and I am) I’d say the reason that MS added localhost access this way is that it didn’t require modifying kernel drivers; it could all be done with changes to user mode components. Of course the cynic in me thinks this could actually just be there to make Edge more equal than others, assuming MS ever allowed another web browser in the App Store. Even a wrapper around the Edge renderer would not be allowed to add the localhost exemption. It’d be nice to see MS add a capability to do this in the future, but considering current RS5 builds use this same approach I’m not hopeful.

Is this a security issue? Well, that depends. On the one hand you could argue the default configuration, which allows Internet facing content to then access localhost, is dangerous in itself; MS point that out explicitly in the about:flags entry. Then again all browsers have this behavior, so I’m not sure it’s really an issue.

The implementation is pretty sloppy and I’m shocked (well not that shocked) that it passed a security review. To list some of the issues with it:
- The package family check isn’t very restrictive; combined with the weak permissions of the RPC service it allows any Edge process to add an arbitrary exemption.
- The exemption isn’t linked to the calling process, so any SID can be added as an exemption.

While it seems the default is only to allow the Internet facing ACs access to localhost, because of these weaknesses if you compromised a Flash process (which is child AC “006”) it could add itself to the exemption list and try to attack services listening on localhost. It would make more sense if only the main MicrosoftEdge process could add the exemptions, not any content process. But what would make the most sense would be to support this functionality through a capability, so that everyone could take advantage of it rather than implementing it as a backdoor.