Monday, 3 June 2024

Working your way Around an ACL

There's been plenty of recent discussion about Windows 11's Recall feature and how much of it is a garbage fire. Especially discussion around how secure the database storing all those juicy details of your banking details, sexual peccadillos etc. is from prying malware. Spoiler: it's only protected by being ACL'ed to SYSTEM, and so any privilege escalation (or non-security boundary *cough*) is sufficient to leak the information.

However, I've not spent the time to set up Recall on any machine I own and the files are probably correctly ACL'ed. Therefore, this blog isn't here to talk about that. Instead I was following a thread about Recall and the security of the database by Albacore on Mastodon, and one toot in particular caught my interest.

"@DrewNaylor File Explorer always runs unelevated, Administrators also have access to C:\Program Files\WindowsApps yet you simply can't open it in File Explorer without breaking ACLs no matter how you try."

I thought this wasn't true based on what I know about the "C:\Program Files\WindowsApps" folder, so I decided to see if I could get it to show in an unelevated explorer. It turns out to be more complex than it should be for various reasons, so let's dig in.

What is the WindowsApps Folder?

The WindowsApps folder is used to store system installations of packaged applications. Think UWP, Desktop Bridge, Calculator etc. And it's true, if you try and view the folder from a non-elevated application it gives you access denied:

PS> ls 'C:\Program Files\WindowsApps\'
ls : Access to the path 'C:\Program Files\WindowsApps' is denied.

Why would Microsoft do this, as it doesn't seem like a security sensitive location? If I had to guess it's to stop a user browsing to the packaged applications, double clicking on the executable files and being confused when that doesn't work. Packaged applications are mostly normal PE files, however, they can't be executed directly. Instead a complex sequence involving COM and/or the container APIs needs to be invoked to set up the runtime environment. This guess seems likely because if you know the name of the packaged application there's nothing stopping you listing its contents, it's only the top-level WindowsApps folder which is blocked.

PS> ls 'C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_11.2403.6.0_x64__8wekyb3d8bbwe'

    Directory: C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_11.2403.6.0_x64__8wekyb3d8bbwe

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2024-05-15     19:42                AppxMetadata
d-----        2024-05-15     19:42                Assets
-a----        2024-05-15     19:42          54073 AppxBlockMap.xml
-a----        2024-05-15     19:42          11431 AppxManifest.xml
-a----        2024-05-15     19:42          12255 AppxSignature.p7x
-a----        2024-05-15     19:42        4179968 CalculatorApp.dll
-a----        2024-05-15     19:42          19456 CalculatorApp.exe
<SNIP>

It seems likely that the reason you can't access the WindowsApps folder is due to ACLs. Therefore, from an admin PowerShell prompt we can inspect what the ACLs are:

PS> Format-Win32SecurityDescriptor 'C:\Program Files\WindowsApps\' -MapGeneric
Path: C:\Program Files\WindowsApps\
Type: File
Control: DaclPresent, DaclAutoInherited, DaclProtected

<SNIP>

 - Type  : AllowedCallback
 - Name  : BUILTIN\Users
 - SID   : S-1-5-32-545
 - Mask  : 0x001200A9
 - Access: GenericExecute|GenericRead
 - Flags : None
 - Condition: Exists WIN://SYSAPPID

I've removed all of the output barring the final ACE as this is the important one. The rest of the ACL doesn't have any other ACE which would refer to a normal user, only administrators. It shows that the BUILTIN\Users group should get read and execute access, however, only if the WIN://SYSAPPID security attribute exists in the user's access token. I'm not going to explain how this works, see my series on AppLocker for how conditional ACEs are used, or buy my book. This is why you can't just list the folder in explorer: the process token doesn't have the WIN://SYSAPPID attribute and so access is denied.

What's the WIN://SYSAPPID attribute for? It's added to access tokens when a packaged application is executed and contains information about the package identity. This can then be referenced by an application, or in this case the kernel, to check for information about the package the process is a member of. As setting this security attribute on a token requires SeTcbPrivilege it's hard to spoof.
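
We can take a quick look at the attribute on the token of a running packaged application. A minimal sketch with NtObjectManager, assuming Calculator is running (the process name comes from the directory listing above):

PS> $proc = Get-NtProcess -Name CalculatorApp.exe | Select-Object -First 1
PS> $token = Get-NtToken -Process $proc
PS> $token.SecurityAttributes | Where-Object Name -eq 'WIN://SYSAPPID'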

In this case we don't need to spoof a specific value of the attribute, it just has to exist. We can't create it, but perhaps there's already an access token we can borrow to give us access instead.

Finding a Suitable Access Token

There's likely to be a process running as the user with the WIN://SYSAPPID attribute set. As the primary token of that process should be for the same user, there's nothing stopping us impersonating it to get access to the WindowsApps folder. First let's find a process with a suitable token:

PS> $ps = Get-NtProcess -FilterScript {
  Use-NtObject($token = Get-NtToken -Process $_ -Access Query, Duplicate) {
     "WIN://SYSAPPID" -in $token.SecurityAttributes.Name -and -not $token.AppContainer
  }
}
PS> $ps.Count
7

This script enumerates all accessible processes, then filters them down to only processes with a token having the WIN://SYSAPPID attribute. We also filter out any App Container tokens as they probably won't be able to access the folder either, plus it's just more hassle to deal with them. We also ensure that we can open the token for Duplicate access, as we'll need to call DuplicateToken to get an impersonation token from the primary token. We can see in this example that there are 7 processes matching our criteria.

Finally, we can test access by getting an impersonation token, impersonating it, then enumerating the WindowsApps folder.

PS> $token = Get-NtToken -Process $ps[0] -Duplicate
PS> Invoke-NtToken $token { 
   ls 'C:\Program Files\WindowsApps\' 
}

    Directory: C:\Program Files\WindowsApps

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2024-05-31     07:50                Deleted
d-----        2020-09-25     07:18                DeletedAllUserPackages
d-----        2019-12-07     01:53                Microsoft.Advertising...
<SNIP>

And it works! However, this isn't actually what the toot originally said. I'm supposed to be able to get the WindowsApps folder to show in a non-elevated explorer window. Let's try and do that then.

Finishing the Job

As the process token is for our own user and in our own logon session, we meet the criteria to duplicate a new primary token from that process and use it with CreateProcessAsUser. Unfortunately there's a big problem: if you run a new copy of explorer it realizes there's an instance already running and calls into the existing instance to show the new window. Therefore there's never a copy of explorer running a UI with the token you specified. There is a "/SEPARATE" command line argument you can pass, which does create a new UI instance. Unfortunately it's not the process you started that sticks around; instead a new instance of explorer is spawned via COM and that's the one which hosts the UI.

Instead, the "simplest" approach is to just terminate all instances of explorer and spawn a new one. A bit harsh, but fair IMO. You should terminate with a non-zero exit code, otherwise the explorer instance will be automatically restarted. There is a second problem however: if you specify a primary token with the WIN://SYSAPPID attribute you'll find it's no longer there once the process starts. This is due to the kernel stripping this attribute when building a new token for the process. There are various ways around this, but the simplest is to start the process suspended, then use NtSetInformationProcess to swap the token for the one with the attribute; setting the token after creation does not strip the attributes. Putting it all together:

PS> $ex = Get-NtProcess -Name 'explorer.exe'
PS> $ex.Terminate(1)
PS> $token = Get-NtToken -Process $ps[0] -Duplicate -TokenType Primary
PS> $p = New-Win32Process "explorer.exe" -CreationFlags Suspended
PS> Set-NtToken -Process $p -Token $token
PS> Resume-NtProcess -Process $p

Now you can navigate to the WindowsApps folder and see the results.

A screenshot of an explorer window showing the WindowsApps folder being listed.

Hopefully this might give you a bit of insight into the thought process behind trying to bypass ACLs.

UPDATE 2024/06/05 - Turns out I was wrong about Recall being secure. It uses the same technique I describe in this blog post, except it needs a specific WIN://SYSAPPID, for example "MicrosoftWindows.Client.AIX_cw5n1h2txyewy". You can get a token with this attribute by opening the instance of AIXHost.exe, getting its token and using that to access the database files. Or, as the files are owned by the user, you can just rewrite the DACLs for the files and gain access that way, no admin required ;-)

Monday, 29 April 2024

Relaying Kerberos Authentication from DCOM OXID Resolving

Recently, there's been some good research into further exploiting DCOM authentication that I initially reported to Microsoft almost 10 years ago. By inducing authentication through DCOM it can be relayed to a network service, such as Active Directory Certificate Services (ADCS), to elevate privileges and in some cases get domain administrator access.

The important difference with this new research is taking the abuse of DCOM authentication from local access (in the case of the many Potatoes) to fully remote by abusing security configuration changes or over-granted group access. For more information I'd recommend reading the slides from Tianze Ding's Black Hat Asia 2024 presentation, or reading about SilverPotato by Andrea Pierini.

This short blog post is directly based on slide 36 of Tianze Ding's presentation, where there's a mention of trying to relay Kerberos authentication from the initial OXID resolver request. I've reproduced the slide below:

Slide 36 from the Blackhat Asia presentation, discussing Kerberos relay from the ResolveOxid2 call.
The slide says that you can't relay Kerberos authentication during OXID resolving because you can't control the SPN used for the authentication. It's always set to RPCSS/MachineNameFromStringBinding. While you can control the string bindings in the standard OBJREF structure, RPCSS ignores the security bindings and so you can't specify the SPN, unlike with an object RPC call which happens later.

This description intrigued me, as I didn't think this was true. You just had to abuse a "feature" I described in my original Kerberos relay blog post. Specifically, the Kerberos SSPI supports a special format for the SPN which includes marshaled target information. This was something I discovered when trying to see if I could get Kerberos relay from the SMB protocol: the SMB client would call the SecMakeSPNEx2 API, which in turn would call CredMarshalTargetInfo to build a marshaled string which it appended to the end of the SPN. If the Kerberos SSPI sees an SPN in this format, it calculates the length of the marshaled data, strips that from the SPN and continues with the new SPN string.

In practice what this means is you can build an SPN of the form CLASS/<SERVER><TARGETINFO> and Kerberos will authenticate using CLASS/<SERVER>. The interesting thing about this behavior is that if the <SERVER><TARGETINFO> component is coming from the hostname of the server we're authenticating to, then you end up decoupling the SPN used for the authentication from the hostname that's used to communicate. And that's exactly what we have here: the MachineNameFromStringBinding is coming from an untrusted source, the OBJREF we specified. We can specify a machine name in this special format, which will allow the OXID resolver to talk to our server on hostname <SERVER><TARGETINFO> but authenticate using RPCSS/<SERVER>, which can be anything we like.

There are some big caveats with this. Firstly, the machine name must not contain any dots, so it must be an intranet address. This is because it's close to impossible to build a valid TARGETINFO string which represents a valid fully qualified domain name. In many situations this would rule out using this trick, however as we're dealing with domain authentication scenarios, and the default for the Windows DNS server is to allow any user to create arbitrary hosts within the domain's DNS zone, this isn't an issue.

This restriction also limits the maximum size of the hostname to 63 characters due to the DNS protocol. If you pass a completely empty CREDENTIAL_TARGET_INFORMATION structure to the CredMarshalTargetInfo API you get the minimum valid target information string, which is 44 characters long. This only leaves 19 characters for the SERVER component, but again this shouldn't be a big issue. Windows component names are typically limited to 15 characters due to the old NetBIOS protocol, and by default SPNs are registered with these short name forms. Finally in our case while there won't be an explicit RPCSS SPN registered, this is one of the service classes which is automatically mapped to the HOST class which will be registered.

To exploit this you'll need to do the following steps:

  1. Build the machine name by appending the minimum target info string 1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA to the hostname from the target SPN. For example for the SPN RPCSS/ADCS build the string ADCS1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA (a quick check of this construction is sketched after this list).
  2. Register the machine name as a host on the domain's DNS server. Point the record at a server you control where you can run your own service listening on TCP port 135.
  3. Build an OBJREF with the machine name and induce OXID resolving through your preferred method, such as abusing IStorage activation.
  4. Do something useful with the induced Kerberos authentication.
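
As a quick sanity check for step 1, you can build the machine name and confirm it fits in a single 63-character DNS label (the target info string is copied from the list above; RPCSS/ADCS is just the example SPN):

PS> $machineName = 'ADCS' + '1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA'
PS> $machineName
ADCS1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA
PS> $machineName.Length -le 63
True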

With this information I did some tests myself, and Andrea also checked with SilverPotato, and it seems to work. There are limits of course: the big one is that the security bindings are ignored, so the OXID resolver uses Negotiate. This means the Kerberos authentication will always be negotiated with at least integrity enabled, which makes the authentication useless for most scenarios, although it can be used against the default configuration of ADCS (I think).

Thursday, 25 April 2024

Issues Resolving Symbols on Windows 11 on ARM64

This is a short blog post about an issue I encountered during some development work on my OleViewDotNet tool and how I resolved it. It might help others if they come across a similar problem, although I'm not sure if I took the best approach.

OleViewDotNet has the ability to parse the internal COM structures in a process and show important information such as the list of current IPIDs exported by the process and the access security descriptor. 

PS C:\> $p = Get-ComProcess -ProcessId $pid
PS C:\> $p.Ipids
IPID                                 Interface Name  PID    Process Name
----                                 --------------  ---    ------------
00008800-4bd8-0000-c3f9-170a9f197e11 IRundown        19416  powershell.exe
00009401-4bd8-ffff-45b0-a43d5764a731 IRundown        19416  powershell.exe
0000a002-4bd8-5264-7f87-e6cbe82784aa IRundown        19416  powershell.exe

To achieve this task we need access to the symbols of the COMBASE DLL so that we can resolve various root pointers to hash tables and other runtime artifacts. The majority of the code to parse the process information is in the COMProcessParser class, which uses the DBGHELP library to resolve symbols to an address. My code also supports a mechanism to cache the resolved pointers into a text file which can be subsequently used on other systems with the same COMBASE DLL rather than needing to pull down a 30+ MiB symbol file.

This works fine on Windows 11 x64, but I noticed that I would get incorrect results on ARM64. In the past I've encountered similar issues that have been down to changes in the internal structures used during parsing. Microsoft provides private symbols for COMBASE so it's pretty easy to check if the structures were different between the x64 and ARM64 versions of Windows 11. There were no differences that I could see. In any case, I noticed this also impacted trivial values, for example the symbol gSecDesc contains a pointer to the COM access security descriptor. However, when reading that pointer it was always NULL even though it should have been initialized.

To add to my confusion, when I checked the symbol in WinDBG it showed the pointer was correctly initialized. However, if I did a search for the expected symbol using the x command in WinDBG I found something interesting:

0:010> x combase!gSecDesc
00007ffa`d0aecb08 combase!gSecDesc = 0x00000000`00000000
00007ffa`d0aed1c8 combase!gSecDesc = 0x00000180`59fdb750

We can see from the output that there are two symbols for gSecDesc, not one. The first one has a NULL value while the second has the initialized value. When I checked what address my symbol resolver was returning it was the first one, whereas WinDBG knew better and would return the second. What on earth is going on?

This is an artifact of a new feature in Windows 11 on ARM64 to simplify the emulation of x64 executables, ARM64X. This is a clever (or terrible) trick to avoid needing separate ARM64 and x64 binaries on the system. Instead both ARM64 and x64 compatible code, referred to as ARM64EC (Emulation Compatible), are merged into a single system binary. Presumably in some cases this means that global data structures need to be duplicated, once for the ARM64 code, and once for the ARM64EC code. In this case it doesn't seem like there should be two separate global data values as a pointer is a pointer, but I suppose there might be edge cases where that isn't true and it's simpler to just duplicate the values to avoid conflicts. The details are pretty interesting and there's a few places where this has been reverse engineered, I'd at least recommend this blog post.

My code is using the SymFromName API to query the symbol address, and this just returns the first symbol it finds, which in this case was the ARM64EC one that isn't initialized in an ARM64 process. I don't know if this is a bug in DBGHELP, perhaps it should try and return the symbol which matches the binary's machine type, or perhaps I'm holding it wrong. Regardless, I needed a way of getting the correct symbol, but after going through the DBGHELP library there was no obvious way of disambiguating the two. However, clearly WinDBG can do it, so there must be a way.

After a bit of hunting around I found that the Debug Interface Access (DIA) library has an IDiaSymbol::get_machineType method which returns the machine type for the symbol, either ARM64 (0xAA64) or ARM64EC (0xA641). Unfortunately I'd intentionally used DBGHELP as it's installed by default on Windows, whereas DIA needs to be installed separately, and there didn't seem to be an equivalent in the DBGHELP library.

Fortunately after poking around the DBGHELP library looking for a solution an opportunity presented itself. Internally in DBGHELP (at least recent versions) it uses a private copy of the DIA library. That in itself wouldn't be that helpful, except the library exports a couple of private APIs that allow a caller to query the current DIA state. For example, there's the SymGetDiaSession API which returns an instance of the IDiaSession interface. From that interface you can query for an instance of the IDiaSymbol interface and then query the machine type. I'm not sure how compatible the version of DIA inside DBGHELP is relative to the publicly released version, but it's compatible enough for my purposes.

Update 2024/04/26: It was pointed out to me that the machine type is present in the SYMBOL_INFO::Reserved[1] field so you don't need to do this whole approach with the DIA interface. The point still stands that you need to enumerate the symbols on ARM64 platforms, as there could be multiple ones and you still need to check the machine type.
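
Based on that update, a rough sketch of the enumeration approach with plain DBGHELP might look like the following. To be clear this is my own illustration, not the OleViewDotNet code: the SYMBOL_INFO field offsets (16 for Reserved[1], 56 for Address) are hand-computed from dbghelp.h and should be double-checked, and you'll need a symbol path configured (e.g. _NT_SYMBOL_PATH) for it to find anything:

Add-Type @'
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

public static class SymEnum {
    delegate bool EnumProc(IntPtr symInfo, uint symbolSize, IntPtr context);

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool SymInitialize(IntPtr hProcess, string searchPath, bool invadeProcess);
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool SymCleanup(IntPtr hProcess);
    [DllImport("dbghelp.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern ulong SymLoadModuleEx(IntPtr hProcess, IntPtr hFile, string imageName,
        string moduleName, ulong baseOfDll, uint dllSize, IntPtr data, uint flags);
    [DllImport("dbghelp.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern bool SymEnumSymbols(IntPtr hProcess, ulong baseOfDll, string mask,
        EnumProc callback, IntPtr context);

    // Returns (module relative address, machine type) for every symbol matching the mask.
    public static List<Tuple<ulong, ushort>> Enumerate(string image, string mask) {
        var results = new List<Tuple<ulong, ushort>>();
        IntPtr hProc = new IntPtr(1); // any unique non-process value works when not invading
        if (!SymInitialize(hProc, null, false))
            throw new InvalidOperationException("SymInitialize failed");
        try {
            ulong dllBase = SymLoadModuleEx(hProc, IntPtr.Zero, image, null,
                0x1000000, 0, IntPtr.Zero, 0);
            SymEnumSymbols(hProc, dllBase, mask, (sym, size, ctx) => {
                // SYMBOL_INFO.Address at offset 56, Reserved[1] at offset 16 (assumed offsets).
                ulong address = (ulong)Marshal.ReadInt64(sym, 56);
                ushort machine = (ushort)Marshal.ReadInt64(sym, 16);
                results.Add(Tuple.Create(address - dllBase, machine));
                return true; // keep enumerating so the ARM64X duplicates show up
            }, IntPtr.Zero);
        } finally {
            SymCleanup(hProc);
        }
        return results;
    }
}
'@
# 0xAA64 is ARM64, 0xA641 is ARM64EC; pick the entry matching the target process.
[SymEnum]::Enumerate("$env:WinDir\System32\combase.dll", "gSecDesc")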

To resolve this issue the code in OleViewDotNet takes the following steps on ARM64 systems:

  1. Instead of calling SymFromName the code enumerates all symbols for a name.
  2. The SymGetDiaSession is called to get an instance of the IDiaSession interface.
  3. The IDiaSession::findSymbolByVA method is called to get an instance of the IDiaSymbol interface for the symbol.
  4. The IDiaSymbol::get_machineType method is called to get the machine type for the symbol.
  5. The symbol is filtered based on the context, e.g. if parsing an ARM64 process it uses the ARM64 symbol.

This is much more complicated than I think it needs to be, but I've yet to find an alternative approach. Ideally the SYMBOL_INFO structure in DBGHELP should contain a machine type field, but I guess it's hard to change the interface now. The relatively simple code to do the machine type query is here. If anyone has found a better way of doing it with just the public interface to DBGHELP I'd appreciate the information :)
Friday, 9 February 2024

Sudo On Windows a Quick Rundown

Background

The Windows Insider Preview build 26052 just shipped with a sudo command, so I thought I'd take a quick peek to see what it does and how it does it. This is only a short write-up of my findings; I think this code is probably still in its early stages so I wouldn't want it to be treated too harshly. You can see the official announcement here.

To run a command using sudo you can just type:

C:\> sudo powershell.exe

The first thing to note, if you know anything about the security model of Windows (maybe buy my book, hint hint), is that there's no equivalent to SUID binaries. The only way to run a process with a higher privilege level is to get an existing higher privileged process to start it for you, or to have sufficient permissions yourself, through say SeImpersonatePrivilege or SeAssignPrimaryTokenPrivilege, and an access token for a more privileged user. Since Vista, the main way of facilitating running more privileged code as a normal user is to use UAC. Therefore this is how sudo is doing it under the hood: it's just spawning a process via UAC using the ShellExecute runas verb.

This is slightly disappointing as I was hoping the developers would have implemented a sudo service running at a higher privilege level to mediate access. Instead this is really just a fancy executable that you can elevate using the existing UAC mechanisms.

The other sad thing is, as is Microsoft tradition, this is a sudo command in name only. It doesn't support any policies which would allow a user to run specific commands elevated, either with a password requirement or without. It'll just run anything you give it, and only if that user can pass a UAC elevation prompt.

There are four modes of operation that can be configured in system settings; why this needs to be a system setting I don't really know.

Initially sudo is disabled: running the sudo command just prints "Sudo is disabled on this machine. To enable it, go to the Developer Settings page in the Settings app". This isn't because of some fundamental limit on the behavior of the sudo implementation, it's just an Enabled value in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Sudo which is set to 0.
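
You can check the configured mode directly from the registry, e.g. with this one-liner (the value name is from above; I'm assuming the key is readable as a normal user):

PS> Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Sudo' | Select-Object Enabled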

The next option (value 1) is to run the command in a new window. All this does is pass the command line you gave to sudo to ShellExecute with the runas verb, so you just get the normal UAC dialog showing for that command. Considering the general move to using PowerShell for everything you can already do this easily enough with the command:

PS> Start-Process -Verb runas powershell.exe

The third and fourth options (values 2 and 3) are "With input disabled" and "Inline". They're more or less the same: they run the command and attach it to the current console window by sharing the standard handles across to the new process. They use the same implementation behind the scenes to do this: a copy of the sudo binary is elevated, passing the command line and the calling PID of the non-elevated sudo. E.g. it might try running the following command via UAC:

C:\> sudo elevate -p 1234 powershell.exe

Oddly, as we'll see, passing the PID and the command seems to be mostly unnecessary. At best it's useful if you want to show more information about the command in the UAC dialog, but again as we'll see this isn't that useful.

The only difference between the two is that with "With input disabled" you can only get output from the elevated application, you can't interact with it, whereas the Inline mode allows you to run the command elevated in the same console session. This final mode has the obvious risk that the command is running elevated but attached to a low privileged window. Malicious code could inject keystrokes into that console window to control the privileged process. This was pointed out in the Microsoft blog post linked earlier. However, while the blog does say that running it with input disabled mitigates this issue somewhat, as we'll see it does not.

How It Really Works

For the “New Window” mode all sudo is doing is acting as a wrapper to call ShellExecute. For the inline modes it requires a bit more work. Again go back and read the Microsoft blog post, tbh it gives a reasonable overview of how it works. In the blog it has the following diagram, which I’ll reproduce here in case the link dies.

A diagram showing how sudo on windows works. Importantly it shows that there's an RPC channel between a normal sudo process and an elevated one.

What always gets me interested is where there's an RPC channel involved. The reason a communications channel exists is due to the limitations of UAC: it very intentionally doesn't allow you to attach elevated console processes to an existing low privileged console (grumble, UAC is not a security boundary, but then why would it do this if it wasn't, grumble). It also doesn't pass along a few important settings, such as the current directory or the environment, which would be useful features to have in a sudo-like command. Therefore to do all that it makes sense for the non-elevated sudo to pass that information to the elevated version.

Let's check out the RPC server using NtObjectManager:

PS> $rpc = Get-RpcServer C:\windows\system32\sudo.exe
PS> Format-RpcServer $rpc
[
  uuid(F691B703-F681-47DC-AFCD-034B2FAAB911),
  version(1.0)
]
interface intf_f691b703_f681_47dc_afcd_034b2faab911 {
    int server_PrepareFileHandle([in] handle_t _hProcHandle, [in] int p0, [in, system_handle(sh_file)] HANDLE p1);
    int server_PreparePipeHandle([in] handle_t _hProcHandle, [in] int p0, [in, system_handle(sh_pipe)] HANDLE p1);
    int server_DoElevationRequest([in] handle_t _hProcHandle, [in, system_handle(sh_process)] HANDLE p0, [in] int p1, [in, string] char* p2, [in, size_is(p4)] byte* p3[], [in] int p4, [in, string] char* p5, [in] int p6, [in] int p7, [in, size_is(p9)] byte* p8[], [in] int p9);
    void server_Shutdown([in] handle_t _hProcHandle);
}

Of the four functions, the key one is server_DoElevationRequest. This is what actually does the elevation. Doing a quick bit of analysis it seems the parameters correspond to the following:

  • HANDLE p0 - Handle to the calling process.
  • int p1 - The type of the new process, 2 being input disabled, 3 being inline.
  • char* p2 - The command line to execute (oddly, in ANSI characters).
  • byte* p3[] - Not sure.
  • int p4 - Size of p3.
  • char* p5 - The current directory.
  • int p6 - Not sure, seems to be set to 1 when called.
  • int p7 - Not sure, seems to be set to 0 when called.
  • byte* p8 - Pointer to the environment block to use.
  • int p9 - Length of the environment block.

The RPC server is registered to use ncalrpc with the port name being sudo_elevate_PID, where PID is just the value passed on the elevation command line for the -p argument. The PID isn't used for determining the console to attach to; that's instead passed through the HANDLE parameter, which is only used to query its PID to pass to the AttachConsole API.

Also, as said before, as far as I can tell the command line passed to the elevated sudo is unused; it's in fact this RPC call which is responsible for executing the command properly. This results in something interesting: the elevated copy of sudo doesn't exit once the new process has started, it in fact keeps the RPC server open and will accept other requests for new processes to attach to. For example you can do the following to get a running elevated sudo instance to attach an elevated command prompt to the current PowerShell console:

PS> $c = Get-RpcClient $rpc
PS> Connect-RpcClient $c -EndpointPath sudo_elevate_4652
PS> $c.server_DoElevationRequest((Get-NtProcess -ProcessId $pid), 3, "cmd.exe", @(), 0, "C:\", 1, 0, @(), 0)

There are no checks on the caller's PID to make sure it's really the non-elevated sudo making the request. As long as the RPC server is running you can make the call. Finding the ALPC port is easy enough, you can just enumerate all the ALPC ports in \RPC Control to find them.
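
For example, a quick way to spot candidate ports using NtObjectManager's object manager drive (a sketch; the sudo_elevate_ prefix is from above):

PS> ls 'NtObject:\RPC Control' | Where-Object Name -Match '^sudo_elevate_'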

A further interesting thing to note is that the type parameter (p1) doesn't have to match the configured sudo mode in settings. Passing 2 to the parameter runs the command with input disabled, but passing any other value runs in the inline mode. Therefore even if sudo is configured in new window mode, there's nothing stopping you running the elevated sudo manually, accepting a UAC prompt for a trusted Microsoft signed binary, and then attaching in inline mode via the RPC service. E.g. you can run sudo using the following PowerShell:

PS> Start-Process -Verb runas -FilePath sudo -ArgumentList "elevate", "-p", 1111, "cmd.exe"

Fortunately sudo will exit immediately if it's configured in disabled mode, so as long as you don't change the defaults it's fine I guess.

I find it odd that Microsoft would rely on UAC when UAC is supposed to be going away. Even more so as this command could have just been a PowerToy, since other than the settings UI changes it really doesn't need any integration with the OS to function. And in fact I'd argue that it doesn't need those settings either. At any rate, this is no more a security risk than UAC already is, or is it…

Looking back at how the RPC server is registered can be enlightening:

RPC_STATUS StartRpcServer(RPC_CSTR Endpoint) {
  RPC_STATUS result;

  result = RpcServerUseProtseqEpA("ncalrpc",
      RPC_C_PROTSEQ_MAX_REQS_DEFAULT, Endpoint, NULL);
  if ( !result )
  {
    result = RpcServerRegisterIf(server_sudo_rpc_ServerIfHandle, NULL, NULL);
    if ( !result )
      return RpcServerListen(1, RPC_C_PROTSEQ_MAX_REQS_DEFAULT, 0);
  }
  return result;
}

Oh no, that's not good. The code doesn't provide a security descriptor for the ALPC port and it calls RpcServerRegisterIf to register the server, which should basically never be used. This old function doesn't allow you to specify a security descriptor or a security callback. What this means is that any user on the same system can connect to this service and execute sudo commands. We can double check using some PowerShell:

PS> $as = Get-NtAlpcServer
PS> $sudo = $as | ? Name -Match sudo
PS> $sudo.Name
sudo_elevate_4652
PS> Format-NtSecurityDescriptor $sudo -Summary
<Owner> : BUILTIN\Administrators
<Group> : DESKTOP-9CF6144\None
<DACL>
Everyone: (Allowed)(None)(Connect|Delete|ReadControl)
NT AUTHORITY\RESTRICTED: (Allowed)(None)(Connect|Delete|ReadControl)
BUILTIN\Administrators: (Allowed)(None)(Full Access)
BUILTIN\Administrators: (Allowed)(None)(Full Access)

Yup, the DACL for the ALPC port grants the Everyone group access. It would even allow restricted tokens with the RESTRICTED SID set, such as the Chromium GPU processes, to access the server. This is pretty poor security engineering and you wonder how this got approved to ship in such a prominent form.

The worst case scenario is if an admin uses this command on a shared server, such as a terminal server: any other user on the system could then get administrator access. Oh well, such is life…

I will give Microsoft props though for writing the code in Rust, at least most of it. Of course it turns out the likelihood of it having any useful memory corruption flaws would have been low even if they'd written it in ANSI C. This is a good lesson on why just writing in Rust isn't going to save you if you end up introducing logical bugs instead.

Saturday, 16 July 2022

Access Checking Active Directory

Like many Windows related technologies Active Directory uses a security descriptor and the access check process to determine what access a user has to parts of the directory. Each object in the directory contains an nTSecurityDescriptor attribute which stores the binary representation of the security descriptor. When a user accesses the object through LDAP the remote user's token is used with the security descriptor to determine if they have the rights to perform the operation they're requesting.

Weak security descriptors are a common misconfiguration that could result in the entire domain being compromised. Therefore it's important for an administrator to be able to find and remediate security weaknesses. Unfortunately Microsoft doesn't provide a means for an administrator to audit the security of AD, at least not in any default tool I know of. There is third-party tooling, such as Bloodhound, which will perform this analysis offline, but from reading the implementation of the checking they don't tend to use the real access check APIs and so likely miss some misconfigurations.

I wrote my own access checker for AD which is included in my NtObjectManager PowerShell module. I've used it to find a few vulnerabilities, such as CVE-2021-34470 which was an issue with Exchange's changes to AD. This works "online", as in you need to have an active account in the domain to run it, however AFAIK it should provide the most accurate results if what you're interested in is what access a specific user has to AD objects. While the command is available in the module it's perhaps not immediately obvious how to use it and interpret the results, therefore I decided I should write a quick blog post about it.

A Complex Process

The access check process is mostly documented by Microsoft in [MS-ADTS]: Active Directory Technical Specification. Specifically in section 5.1.3. However, this leaves many questions unanswered. I'm not going to go through how it works in full either, but let me give a quick overview.  I'm going to assume you have a basic knowledge of the structure of the AD and its objects.

An AD object contains many resources for which access might need to be granted or denied for a particular user. For example you might want to allow the user to create only certain types of child objects, or only modify certain attributes. There are many ways that Microsoft could have implemented security, but they decided on extending the ACL format to introduce the object ACE. For example the ACCESS_ALLOWED_OBJECT_ACE structure adds two GUIDs to the normal ACCESS_ALLOWED_ACE.

The first GUID, ObjectType, indicates the type of object that the ACE applies to. For example this can be set to the schema ID of an attribute and the ACE will grant access to only that attribute and nothing else. The second GUID, InheritedObjectType, is only used during ACL inheritance. It represents the schema ID of the object's class that is allowed to inherit this ACE. For example if it's set to the schema ID of the computer class, then the ACE will only be inherited if such a class is created; it will not be if say a user object is created instead. We only need to care about the first of these GUIDs when doing an access check.

To perform an access check you need to use an API such as AccessCheckByType which supports checking the object ACEs. When calling the API you pass a list of object type GUIDs you want to check for access on. When processing the DACL if an ACE has an ObjectType GUID which isn't in the passed list it'll be ignored. Otherwise it'll be handled according to the normal access check rules. If the ACE isn't an object ACE then it'll also be processed.

If all you want to do is check if a local user has access to a specific object or attribute then it's pretty simple. Just get the access token for that user, add the object's GUID to the list and call the access check API. The resulting granted access can be one of the following specific access rights; note the names in parentheses are the ones I use in the PowerShell module for simplicity:
  • ACTRL_DS_CREATE_CHILD (CreateChild) - Create a new child object
  • ACTRL_DS_DELETE_CHILD (DeleteChild) - Delete a child object
  • ACTRL_DS_LIST (List) - Enumerate child objects
  • ACTRL_DS_SELF (Self) - Grant a write-validated extended right
  • ACTRL_DS_READ_PROP (ReadProp) - Read an attribute
  • ACTRL_DS_WRITE_PROP (WriteProp) - Write an attribute
  • ACTRL_DS_DELETE_TREE (DeleteTree) - Delete a tree of objects
  • ACTRL_DS_LIST_OBJECT (ListObject) - List a tree of objects
  • ACTRL_DS_CONTROL_ACCESS (ControlAccess) - Grant a control extended right
You can also be granted standard rights such as READ_CONTROL, WRITE_DAC or DELETE which do what you'd expect them to do. However, if you want to see what the maximum granted access on the DC would be it's slightly more difficult. We have the following problems:
  • The list of groups granted to a local user is unlikely to match what they're granted on the DC where the real access check takes place.
  • AccessCheckByType only returns a single granted access value; if we have a lot of object types to test it'd be quite expensive to call 100s if not 1000s of times for a single security descriptor.
While you could solve the first problem by having sufficient local privileges to manually create an access token, and the second by using an API which returns a list of granted access such as AccessCheckByTypeResultList, there's a "simpler" solution. You can use the Authz APIs: these allow you to manually build a security context with any groups you like without needing to create an access token, and the AuthzAccessCheck API supports returning a list of granted access for each object in the type list. It just so happens that this API is the one used by the AD LDAP server itself.

Therefore to perform a "correct" maximum access check you need to do the following steps.
  1. Enumerate the user's group list for the DC from the AD. Local group assignments are stored in the directory's CN=Builtin container.
  2. Build an Authz security context with the group list.
  3. Read a directory object's security descriptor.
  4. Read the object's schema class and build a list of specific schema objects to check:
    • All attributes from the class and its super, auxiliary and dynamic auxiliary classes.
    • All allowable child object classes
    • All assignable control, write-validated and property set extended rights.
  5. Convert the gathered schema information into the object type list for the access check.
  6. Run the access check and handle the results.
  7. Repeat from 3 for every object you want to check.
Trust me when I say this process is actually easier said than done. There are many nuances that just produce surprising results; I guess this is why most tooling just doesn't bother. Also my code includes a fair amount of knowledge gathered from reverse engineering the real implementation, but I'm sure I could have missed something.

Using Get-AccessibleDsObject and Interpreting the Results

Let's finally get to using the PowerShell command which is the real purpose of this blog post. For a simple check run the following command. This can take a while on the first run to gather information about the domain and the user.

PS> Get-AccessibleDsObject -NamingContext Default
Name   ObjectClass UserName       Modifiable Controllable
----   ----------- --------       ---------- ------------
domain domainDNS   DOMAIN\alice   False      True

This uses the NamingContext property to specify what object to check. The property allows you to easily specify the three main directories: Default, Configuration and Schema. You can also use the DistinguishedName property to specify an explicit DN. Also the Domain property is used to specify the domain for the LDAP server if you don't want to inspect the current user's domain. You can also specify the Recurse property to recursively enumerate objects; in this case we just access check the root object.

The access check defaults to using the current user's groups, based on what they would be on the DC. This is obviously important, especially if the current user is a local administrator as they wouldn't be guaranteed to have administrator rights on the DC. You can specify different users to check either by SID using the UserSid property, or names using the UserName property. These properties can take multiple values which will run multiple checks against the list of enumerated objects. For example to check using the domain administrator you could do the following:

PS> Get-AccessibleDsObject -NamingContext Default -UserName DOMAIN\Administrator
Name   ObjectClass UserName             Modifiable Controllable
----   ----------- --------             ---------- ------------
domain domainDNS   DOMAIN\Administrator True       True

The basic table format for the access check results shows five columns: the common name of the object, its schema class, the user that was checked, and whether the access check resulted in any modifiable or controllable access being granted. Modifiable covers things like being able to write attributes or create/delete child objects. Controllable indicates one or more control extended rights were granted to the user, such as allowing the user's password to be changed.

As this is PowerShell the access check result is an object with many properties. The following properties are probably the ones of most interest when determining what access is granted to the user.
  • GrantedAccess - The granted access when only specifying the object's schema class during the check. If an access is granted at this level it'd apply to all values of that type, for example if WriteProp is granted then any attribute in the object can be written by the user.
  • WritableAttributes - The list of attributes a user can modify.
  • WritablePropertySets - The list of writable property sets a user can modify. Note that this is more for information purposes, the modifiable attributes will also be in the WritableAttributes property which is going to be easier to inspect.
  • GrantedControl - The list of control extended rights granted to a user.
  • GrantedWriteValidated - The list of write validated extended rights granted to a user.
  • CreateableClasses - The list of child object classes that can be created.
  • DeletableClasses - The list of child object classes that can be deleted.
  • DistinguishedName - The full DN of the object.
  • SecurityDescriptor - The security descriptor used for the check.
  • TokenInfo - The user's information used in the check, such as the list of groups.
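
For example, to pull out just the writable attributes from a check result using the properties above (a quick sketch; the command and property names are from this post):

PS> $result = Get-AccessibleDsObject -NamingContext Default
PS> $result.WritableAttributes
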
The command should be pretty easy to use. That said it does come with a few caveats. First you can only use the command with direct access to the AD using a domain account. Technically there's no reason you couldn't implement a gatherer like Bloodhound and do the access check offline, but I just haven't. I've also not tested it in weirder setups such as complex domain hierarchies or RODCs.

If you're using a low-privileged user there are likely to be AD objects that you can't enumerate or read the security descriptor from. This means the results are going to depend on the user you use to enumerate with. The best results would come from using a domain/enterprise administrator with full access to everything.

Based on my testing, when I've found an access being granted to a user it seems to be real, however it's possible I'm not always 100% correct or that I'm missing accesses. Also it's worth noting that just having access doesn't mean there's not some extra checking done by the LDAP server. For example there's an explicit block on creating Group Managed Service Accounts in Computer objects, even though that will seem to be a creatable child object class.

Sunday, 26 June 2022

Finding Running RPC Server Information with NtObjectManager

When doing security research I regularly use my NtObjectManager PowerShell module to discover and call RPC servers on Windows. Typically I'll use the Get-RpcServer command, passing the name of a DLL or EXE file to extract the embedded RPC servers. I can then use the returned server objects to create a client to access the server and call its methods. A good blog post about how some of this works was written recently by blueclearjar.

Using Get-RpcServer only gives you a list of what RPC servers could possibly be running, not whether they are running and if so in what process. This is where RpcView does better, as it parses a process' in-memory RPC structures to find what is registered and where. Unfortunately this is something I'm yet to implement in NtObjectManager.

However, it turns out there are various ways to get the running RPC server information, provided by the OS and the RPC runtime, which we can use to get a more or less complete list of running servers. I've exposed all the ones I know about with some recent updates to the module. Let's go through the various ways you can piece together this information.

NOTE: some of the examples of PowerShell code will need a recent build of the NtObjectManager module. For various reasons I've not been updating the version on the PS Gallery, so get the source code from github and build it yourself.

RPC Endpoint Mapper

If you're lucky this is the simplest way to find out if a particular RPC server is running. When an RPC server is started the service can register an RPC interface with the RPC endpoint mapper service running in RPCSS using the RpcEpRegister function, specifying the interface UUID and version along with the binding information. This registers all current RPC endpoints the server is listening on, keyed against the RPC interface.

You can query the endpoint table using the RpcMgmtEpEltInqBegin and RpcMgmtEpEltInqNext APIs. I expose this through the Get-RpcEndpoint command. Running Get-RpcEndpoint with no parameters returns all interfaces the local endpoint mapper knows about as shown below.

PS> Get-RpcEndpoint
UUID                                 Version Protocol     Endpoint      Annotation
----                                 ------- --------     --------      ----------
51a227ae-825b-41f2-b4a9-1ac9557a1018 1.0     ncacn_ip_tcp 49669         
0497b57d-2e66-424f-a0c6-157cd5d41700 1.0     ncalrpc      LRPC-5f43...  AppInfo
201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0     ncalrpc      LRPC-5f43...  AppInfo
...

Note that in addition to the interface UUID and version the output shows the binding information for the endpoint, such as the protocol sequence and endpoint. There is also a free form annotation field, but that can be set to anything the server likes when it calls RpcEpRegister.

The APIs also allow you to specify a remote server hosting the endpoint mapper. You can use this to query what RPC servers are running on a remote server, assuming the firewall doesn't block you. To do this you'd need to specify a binding string for the SearchBinding parameter as shown.

PS> Get-RpcEndpoint -SearchBinding 'ncacn_ip_tcp:primarydc'
UUID                                 Version Protocol     Endpoint     Annotation
----                                 ------- --------     --------     ----------
d95afe70-a6d5-4259-822e-2c84da1ddb0d 1.0     ncacn_ip_tcp 49664
5b821720-f63b-11d0-aad2-00c04fc324db 1.0     ncacn_ip_tcp 49688
650a7e26-eab8-5533-ce43-9c1dfce11511 1.0     ncacn_np     \PIPE\ROUTER Vpn APIs
...

The big issue with the RPC endpoint mapper is it only contains RPC interfaces which were explicitly registered against an endpoint. The server could contain many more interfaces which could be accessible, but as they weren't registered they won't be returned from the endpoint mapper. Registration will typically only be used if the server is using an ephemeral name for the endpoint, such as a random TCP port or auto-generated ALPC name.

Pros:

  • Simple command to run to get a good list of running RPC servers.
  • Can be run against remote servers to find out remotely accessible RPC servers.
Cons:
  • Only returns the RPC servers intentionally registered.
  • Doesn't directly give you the hosting process, although the optional annotation might give you a clue.
  • Doesn't give you any information about what the RPC server does, you'll need to find what executable it's hosted in and parse it using Get-RpcServer.

Service Executable

If the RPC servers you extract are in a registered system service executable then the module will try and work out what service that corresponds to by querying the SCM. The default output from the Get-RpcServer command will show this as the Service column shown below.

PS> Get-RpcServer C:\windows\system32\appinfo.dll
Name        UUID                                 Ver Procs EPs Service Running
----        ----                                 --- ----- --- ------- -------
appinfo.dll 0497b57d-2e66-424f-a0c6-157cd5d41700 1.0 7     1   Appinfo True
appinfo.dll 58e604e8-9adb-4d2e-a464-3b0683fb1480 1.0 1     1   Appinfo True
appinfo.dll fd7a0523-dc70-43dd-9b2e-9c5ed48225b1 1.0 1     1   Appinfo True
appinfo.dll 5f54ce7d-5b79-4175-8584-cb65313a0e98 1.0 1     1   Appinfo True
appinfo.dll 201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0 7     1   Appinfo True

The output also shows the appinfo.dll executable is the implementation of the Appinfo service, which is the general name for the UAC service. Note that it also shows whether the service is running, but that's just for convenience. You can use this information to find what process is likely to be hosting the RPC server by querying for the service PID if it's running.

PS> Get-Win32Service -Name Appinfo
Name    Status  ProcessId
----    ------  ---------
Appinfo Running 6020

The output also shows that each of the interfaces has an endpoint which is registered against the interface UUID and version. This is extracted from the endpoint mapper, again just for convenience. However, if you pick an executable which isn't a service implementation the results are less useful:

PS> Get-RpcServer C:\windows\system32\efslsaext.dll
Name          UUID                   Ver Procs EPs Service Running      
----          ----                   --- ----- --- ------- -------      
efslsaext.dll c681d488-d850-11d0-... 1.0 21    0           False

The efslsaext.dll contains one of the EFS implementations, which are all hosted in LSASS. However, it's not a registered service so the output doesn't show any service name, and it's also not registered with the endpoint mapper so doesn't show any endpoints, but it is running.

Pros:

  • If the executable's a service it gives you a good idea of who's hosting the RPC servers and if they're currently running.
  • You can get the RPC server interface information along with that information.
Cons:
  • If the executable isn't a service it doesn't directly help.
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • Even if the service is running it might not have enabled the RPC servers.

Enumerating Process Modules

Extracting the RPC servers from an arbitrary executable is fine offline, but what if you want to know what RPC servers are running right now? This is similar to RpcView's process list GUI: you can look at a process and find all the services running within it.

It turns out there's a really obvious way of getting a list of the potential services running in a process: enumerate the loaded DLLs using an API such as EnumerateLoadedModules, and then run Get-RpcServer on each one to extract the potential services. To use the APIs you'd need to have at least read access to the target process, which means you'd really want to be an administrator, but that's no different to RpcView's limitations.
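
A manual version of this approach might look something like the following sketch (assuming read access to the target process; 6020 is the Appinfo service PID from earlier):

PS> $modules = (Get-Process -Id 6020).Modules | Select-Object -ExpandProperty FileName
PS> $modules | ForEach-Object { Get-RpcServer $_ }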

The big problem is just because a module is loaded it doesn't mean the RPC server is running. For example the WinHTTP DLL has a built-in RPC server which is only loaded when running the WinHTTP proxy service, but the DLL could be loaded in any process which uses the APIs.

To simplify things I expose this approach through the Get-RpcServer function with the ProcessId parameter. You can also use the ServiceName parameter to lookup a service PID if you're interested in a specific service.

PS> Get-RpcServer -ServiceName Appinfo
Name        UUID                        Ver Procs EPs Service Running
----        ----                        --- ----- --- ------- -------
RPCRT4.dll  afa8bd80-7d8a-11c9-bef4-... 1.0 5     0           False
combase.dll e1ac57d7-2eeb-4553-b980-... 0.0 0     0           False
combase.dll 00000143-0000-0000-c000-... 0.0 0     0           False

Pros:

  • You can determine all RPC servers which could be potentially running for an arbitrary process.
Cons:
  • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
  • You can't directly enumerate the module list, except for the main executable, from a protected process (there are various tricks to do so, but they're out of scope here).

Asking an RPC Endpoint Nicely

The final approach is to just ask an RPC endpoint nicely to tell you what RPC servers it supports. We don't need to go digging into the guts of a process to do this; all we need is the binding string for the endpoint we want to query, which we then pass to the RpcMgmtInqIfIds API.

This will only return the UUID and version of each RPC server that's accessible from the endpoint, not the full RPC server information. But it will give you an exact list of all supported RPC servers, in fact it's so detailed it'll give you all the COM interfaces that the process is listening on as well. To query this list you only need access to the endpoint transport, not the process itself.

How do you get the endpoints though? One approach, if you do have access to the process, is to enumerate its server ALPC ports by getting a list of handles for the process, finding the ports with the \RPC Control\ prefix in their name and then using that to form the binding string. This approach is exposed through Get-RpcEndpoint's ProcessId parameter. Again it also supports a ServiceName parameter to simplify querying services.

PS> Get-RpcEndpoint -ServiceName AppInfo
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...

If you don't have access to the process you can do it in reverse by enumerating potential endpoints and querying each one. For example you could enumerate the \RPC Control object directory and query each port. Since Windows 10 19H1, ALPC clients can query the server's PID, so you can not only find out the exposed RPC servers but also what process they're running in. To query from the name of an ALPC port use the AlpcPort parameter with Get-RpcEndpoint.

PS> Get-RpcEndpoint -AlpcPort LRPC-0ee3261d56342eb7ac
UUID              Version Protocol Endpoint     
----              ------- -------- --------  
0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
...

Pros:

  • You can determine exactly what RPC servers are running in a process.
Cons:
  • You can't directly determine what the RPC server does as the list gives you no information about which module is hosting it.

Combining Approaches

Obviously no one approach is perfect. However, you can get most of the way towards RpcView's process list by combining the module enumeration approach with asking the endpoint nicely. For example, you could first get a list of potential interfaces by enumerating the modules and parsing the RPC servers, then filter that list to only the ones which are running by querying the endpoint directly. This will also get you a list of the ALPC server ports that the RPC server is running on, so you can directly connect to it with a manually built client. An example script for doing this is on github.
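
As a very rough sketch of that combination (not the linked github script; I'm assuming the InterfaceId property is exposed on both the parsed server and endpoint objects):

PS> $candidates = Get-RpcServer -ProcessId 6020
PS> $running = Get-RpcEndpoint -ProcessId 6020
PS> $candidates | Where-Object { $_.InterfaceId -in $running.InterfaceId }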

Whichever approach we take, we're still missing some crucial information that RpcView can access, such as the interface registration flags. Still, hopefully this gives you a few ways to approach analyzing the RPC attack surface of the local system and determining what endpoints you can call.