Monday, 29 April 2024

Relaying Kerberos Authentication from DCOM OXID Resolving

Recently, there's been some good research into further exploiting DCOM authentication, something I initially reported to Microsoft almost 10 years ago. By inducing authentication through DCOM, it can be relayed to a network service such as Active Directory Certificate Services (ADCS) to elevate privileges and in some cases gain domain administrator access.

The important difference with this new research is that it takes the abuse of DCOM authentication from local access (in the case of the many Potatoes) to fully remote, by abusing security configuration changes or over-granted group access. For more information I'd recommend reading the slides from Tianze Ding's Black Hat Asia 2024 presentation, or reading about SilverPotato by Andrea Pierini.

This short blog post is directly based on slide 36 of Tianze Ding's presentation, where there's a mention of trying to relay Kerberos authentication from the initial OXID resolver request. I've reproduced the slide below:

Slide 36 from the Black Hat Asia presentation, discussing Kerberos relay from the ResolveOxid2 call.
The slide says that you can't relay Kerberos authentication during OXID resolving because you can't control the SPN used for the authentication: it's always set to RPCSS/MachineNameFromStringBinding. While you can control the string bindings in the standard OBJREF structure, RPCSS ignores the security bindings, so you can't specify the SPN, unlike with the object RPC call which happens later.

This description intrigued me, as I didn't think this was true. You just had to abuse a "feature" I described in my original Kerberos relay blog post: specifically, that the Kerberos SSPI supports a special format for the SPN which includes marshaled target information. This was something I discovered when trying to see if I could get Kerberos relay from the SMB protocol. The SMB client would call the SecMakeSPNEx2 API, which in turn would call CredMarshalTargetInfo to build a marshaled string which is appended to the end of the SPN. If the Kerberos SSPI sees an SPN in this format, it calculates the length of the marshaled data, strips that from the SPN and continues with the new SPN string.

In practice what this means is you can build an SPN of the form CLASS/<SERVER><TARGETINFO> and Kerberos will authenticate using CLASS/<SERVER>. The interesting thing about this behavior is that if the <SERVER><TARGETINFO> component comes from the hostname of the server we're authenticating to, then you end up decoupling the SPN used for the authentication from the hostname that's used to communicate. And that's exactly what we have here: the MachineNameFromStringBinding comes from an untrusted source, the OBJREF we specified. If we specify a machine name in this special format, the OXID resolver will talk to our server on hostname <SERVER><TARGETINFO> but authenticate using RPCSS/<SERVER>, which can be anything we like.
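
To see this behavior in isolation, here's a minimal C++ sketch that just asks the Kerberos SSPI for a token using the extended SPN format. The RPCSS/ADCS SPN and the appended marshaled string are only example values (the string is the minimal one discussed below), and whether the ticket request succeeds depends on the SPN resolving in your domain, so treat it as a sketch rather than working exploit code.

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#include <stdio.h>
#pragma comment(lib, "secur32.lib")

int main() {
    // Example target: SPN class and server plus the minimal marshaled target
    // information string appended directly to the server name.
    LPWSTR target = (LPWSTR)L"RPCSS/ADCS1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA";

    CredHandle cred;
    TimeStamp expiry;
    if (AcquireCredentialsHandleW(nullptr, (LPWSTR)L"Kerberos", SECPKG_CRED_OUTBOUND,
            nullptr, nullptr, nullptr, nullptr, &cred, &expiry) != SEC_E_OK) {
        return 1;
    }

    BYTE token[8192];
    SecBuffer out_buf = { sizeof(token), SECBUFFER_TOKEN, token };
    SecBufferDesc out_desc = { SECBUFFER_VERSION, 1, &out_buf };
    CtxtHandle ctx;
    ULONG attrs;

    // If the target info suffix is recognized it gets stripped, so the AP-REQ
    // should be for RPCSS/ADCS rather than the full string we passed in.
    SECURITY_STATUS status = InitializeSecurityContextW(&cred, nullptr, target,
        ISC_REQ_CONNECTION, 0, SECURITY_NATIVE_DREP, nullptr, 0,
        &ctx, &out_desc, &attrs, &expiry);
    printf("InitializeSecurityContext: 0x%08X (token %lu bytes)\n",
        (unsigned int)status, out_buf.cbBuffer);
    return 0;
}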

There are some big caveats with this. Firstly, the machine name must not contain any dots, so it must be an intranet address. This is because it's close to impossible to build a valid TARGETINFO string which represents a valid fully qualified domain name. In many situations this would rule out using this trick; however, as we're dealing with domain authentication scenarios and the default for the Windows DNS server is to allow any user to create arbitrary hosts within the domain's DNS zone, this isn't an issue.

This restriction also limits the maximum size of the hostname to 63 characters due to the DNS protocol. If you pass a completely empty CREDENTIAL_TARGET_INFORMATION structure to the CredMarshalTargetInfo API you get the minimum valid target information string, which is 44 characters long. This only leaves 19 characters for the SERVER component, but again this shouldn't be a big issue. Windows computer names are typically limited to 15 characters due to the old NetBIOS protocol, and by default SPNs are registered with these short name forms. Finally, in our case, while there won't be an explicit RPCSS SPN registered, RPCSS is one of the service classes which is automatically mapped to the HOST class, which will be registered.

To exploit this you'll need to do the following steps:

  1. Build the machine name by appending the minimum string 1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA to the hostname from the target SPN. For example, for the SPN RPCSS/ADCS build the string ADCS1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA (see the sketch after this list).
  2. Register the machine name as a host on the domain's DNS server. Point the record to a server you control on which you can replace the listening service on TCP port 135.
  3. Build an OBJREF with the machine name and induce OXID resolving through your preferred method, such as abusing IStorage activation.
  4. Do something useful with the induced Kerberos authentication.
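
To make step 1 concrete, here's a minimal C++ sketch that builds the machine name and checks it fits in a single DNS label. The ADCS server name is just the example from the list, and the 44-character string is the minimal marshaled target information described earlier.

#include <cstdio>
#include <stdexcept>
#include <string>

// The 44-character result of marshaling an empty CREDENTIAL_TARGET_INFORMATION,
// as given in step 1 above.
static const std::wstring kMinTargetInfo =
    L"1UWhRCAAAAAAAAAAAAAAAAAAAAAAAAAAAAwbEAYBAAAA";

// Build the dotless machine name to register in DNS. The OXID resolver will
// connect to <server><targetinfo>, but Kerberos authenticates to RPCSS/<server>.
std::wstring BuildMachineName(const std::wstring& server) {
    std::wstring name = server + kMinTargetInfo;
    // A single DNS label is limited to 63 characters, leaving 19 for the server.
    if (name.size() > 63) {
        throw std::runtime_error("server name too long for a single DNS label");
    }
    return name;
}

int main() {
    std::wstring machine_name = BuildMachineName(L"ADCS");
    printf("Register DNS host : %ls\n", machine_name.c_str());
    printf("Kerberos SPN used : RPCSS/ADCS\n");
    return 0;
}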

With this information I did some tests myself, and Andrea also checked with SilverPotato, and it seems to work. There are limits of course; the big one is that because the security bindings are ignored the OXID resolver uses Negotiate. This means the Kerberos authentication will always be negotiated with at least integrity enabled, which makes the authentication useless for most scenarios, although it can be used against the default configuration of ADCS (I think).

Thursday, 25 April 2024

Issues Resolving Symbols on Windows 11 on ARM64

This is a short blog post about an issue I encountered during some development work on my OleViewDotNet tool and how I resolved it. It might help others if they come across a similar problem, although I'm not sure if I took the best approach.

OleViewDotNet has the ability to parse the internal COM structures in a process and show important information such as the list of current IPIDs exported by the process and the access security descriptor. 

PS C:\> $p = Get-ComProcess -ProcessId $pid
PS C:\> $p.Ipids
IPID                                 Interface Name  PID    Process Name
----                                 --------------  ---    ------------
00008800-4bd8-0000-c3f9-170a9f197e11 IRundown        19416  powershell.exe
00009401-4bd8-ffff-45b0-a43d5764a731 IRundown        19416  powershell.exe
0000a002-4bd8-5264-7f87-e6cbe82784aa IRundown        19416  powershell.exe

To achieve this task we need access to the symbols of the COMBASE DLL so that we can resolve various root pointers to hash tables and other runtime artifacts. The majority of the code to parse the process information is in the COMProcessParser class, which uses the DBGHELP library to resolve symbols to an address. My code also supports a mechanism to cache the resolved pointers into a text file which can be subsequently used on other systems with the same COMBASE DLL rather than needing to pull down a 30+ MiB symbol file.
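
For reference, the shape of a DBGHELP name-to-address lookup looks roughly like the following native sketch (OleViewDotNet does this from C# via P/Invoke, and the symbol path here is only a placeholder):

#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>
#pragma comment(lib, "dbghelp.lib")

int main() {
    // Load COMBASE so that SymInitialize picks it up when invading the process.
    LoadLibraryW(L"combase.dll");

    HANDLE process = GetCurrentProcess();
    SymSetOptions(SYMOPT_UNDNAME | SYMOPT_DEFERRED_LOADS);
    if (!SymInitialize(process,
            "srv*C:\\Symbols*https://msdl.microsoft.com/download/symbols", TRUE)) {
        return 1;
    }

    // SYMBOL_INFO is followed by a variable-length name buffer.
    BYTE buffer[sizeof(SYMBOL_INFO) + MAX_SYM_NAME] = {};
    PSYMBOL_INFO symbol = (PSYMBOL_INFO)buffer;
    symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
    symbol->MaxNameLen = MAX_SYM_NAME;

    if (SymFromName(process, "combase!gSecDesc", symbol)) {
        printf("%s at %016llx\n", symbol->Name, symbol->Address);
    }
    SymCleanup(process);
    return 0;
}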

This works fine on Windows 11 x64, but I noticed that I would get incorrect results on ARM64. In the past I've encountered similar issues that have been down to changes in the internal structures used during parsing. Microsoft provides private symbols for COMBASE so it's pretty easy to check if the structures differ between the x64 and ARM64 versions of Windows 11. There were no differences that I could see. In any case, I noticed this also impacted trivial values; for example, the symbol gSecDesc contains a pointer to the COM access security descriptor. However, when reading that pointer it was always NULL even though it should have been initialized.

To add to my confusion, when I checked the symbol in WinDBG it showed the pointer was correctly initialized. However, if I did a search for the expected symbol using the x command in WinDBG I found something interesting:

0:010> x combase!gSecDesc
00007ffa`d0aecb08 combase!gSecDesc = 0x00000000`00000000
00007ffa`d0aed1c8 combase!gSecDesc = 0x00000180`59fdb750

We can see from the output that there are two symbols for gSecDesc, not one. The first one has a NULL value while the second has the initialized value. When I checked what address my symbol resolver was returning it was the first one, whereas WinDBG knew better and would return the second. What on earth is going on?

This is an artifact of a new feature in Windows 11 on ARM64 to simplify the emulation of x64 executables, ARM64X. This is a clever (or terrible) trick to avoid needing separate ARM64 and x64 binaries on the system. Instead both ARM64 and x64 compatible code, referred to as ARM64EC (Emulation Compatible), are merged into a single system binary. Presumably in some cases this means that global data structures need to be duplicated, once for the ARM64 code and once for the ARM64EC code. In this case it doesn't seem like there should be two separate global data values, as a pointer is a pointer, but I suppose there might be edge cases where that isn't true and it's simpler to just duplicate the values to avoid conflicts. The details are pretty interesting and there are a few places where this has been reverse engineered; I'd at least recommend this blog post.

My code is using the SymFromName API to query the symbol address, and this just returns the first symbol it finds, which in this case was the ARM64EC one which wasn't initialized in an ARM64 process. I don't know if this is a bug in DBGHELP, perhaps it should try to return the symbol which matches the binary's machine type, or perhaps I'm holding it wrong. Regardless, I needed a way of getting the correct symbol, but after going through the DBGHELP library there was no obvious way of disambiguating the two. However, clearly WinDBG can do it, so there must be a way.

After a bit of hunting around I found that the Debug Interface Access (DIA) library has an IDiaSymbol::get_machineType method which returns the machine type for the symbol, either ARM64 (0xAA64) or ARM64EC (0xA641). Unfortunately I'd intentionally used DBGHELP as it's installed by default on Windows, whereas DIA needs to be installed separately. There didn't seem to be an equivalent in the DBGHELP library.

Fortunately, after poking around the DBGHELP library looking for a solution, an opportunity presented itself. Internally DBGHELP (at least in recent versions) uses a private copy of the DIA library. That in itself wouldn't be that helpful, except the library exports a couple of private APIs that allow a caller to query the current DIA state. For example, there's the SymGetDiaSession API which returns an instance of the IDiaSession interface. From that interface you can query for an instance of the IDiaSymbol interface and then query the machine type. I'm not sure how compatible the version of DIA inside DBGHELP is relative to the publicly released version, but it's compatible enough for my purposes.

Update 2024/04/26: it was pointed out to me that the machine type is present in the SYMBOL_INFO::Reserved[1] field so you don't need to do this whole approach with the DIA interface. The point still stands that you need to enumerate the symbols on ARM64 platforms as there could be multiple ones and you still need to check the machine type.
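
Based on that update, a sketch of the simpler approach looks something like this. Treating SYMBOL_INFO::Reserved[1] as the machine type is based purely on the update above, and the IMAGE_FILE_MACHINE_ARM64EC constant is defined manually in case your SDK headers don't have it.

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

#ifndef IMAGE_FILE_MACHINE_ARM64EC
#define IMAGE_FILE_MACHINE_ARM64EC 0xA641
#endif

struct FindContext {
    DWORD machine;    // machine type we want, e.g. IMAGE_FILE_MACHINE_ARM64
    ULONG64 address;  // resolved address, 0 if nothing matched
};

// Callback for SymEnumSymbols. On ARM64X binaries the same name can resolve to
// both an ARM64 and an ARM64EC symbol, so filter on the machine type which,
// per the update above, is stored in SYMBOL_INFO::Reserved[1].
static BOOL CALLBACK EnumCallback(PSYMBOL_INFO info, ULONG size, PVOID user) {
    FindContext* ctx = (FindContext*)user;
    // Accept 0 as a fallback for binaries that aren't ARM64X.
    if (info->Reserved[1] == ctx->machine || info->Reserved[1] == 0) {
        ctx->address = info->Address;
        return FALSE;  // stop enumerating, we have a match
    }
    return TRUE;
}

ULONG64 FindSymbolForMachine(HANDLE process, PCSTR name, DWORD machine) {
    FindContext ctx = { machine, 0 };
    // A BaseOfDll of 0 with a "module!name" mask searches across loaded modules.
    SymEnumSymbols(process, 0, name, EnumCallback, &ctx);
    return ctx.address;
}

Calling FindSymbolForMachine(process, "combase!gSecDesc", IMAGE_FILE_MACHINE_ARM64) in an ARM64 process should then pick the second, initialized symbol from the WinDBG output shown earlier.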

To resolve this issue the code in OleViewDotNet takes the following steps on ARM64 systems (a rough sketch of the DIA lookup follows the list):

  1. Instead of calling SymFromName, the code enumerates all symbols for a name.
  2. The SymGetDiaSession API is called to get an instance of the IDiaSession interface.
  3. The IDiaSession::findSymbolByVA method is called to get an instance of the IDiaSymbol interface for the symbol.
  4. The IDiaSymbol::get_machineType method is called to get the machine type for the symbol.
  5. The symbol is filtered based on the context, e.g. if parsing an ARM64 process it uses the ARM64 symbol.
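
As a rough sketch of steps 3 and 4, assuming you've already obtained the IDiaSession from the private SymGetDiaSession export (I'll leave its exact prototype to the linked code) and that the public DIA SDK's dia2.h definitions are compatible enough with the private copy inside DBGHELP:

#include <windows.h>
#include <dia2.h>  // IDiaSession, IDiaSymbol from the DIA SDK headers

#ifndef IMAGE_FILE_MACHINE_ARM64EC
#define IMAGE_FILE_MACHINE_ARM64EC 0xA641  // not present in older SDK headers
#endif

// Given a DIA session and a candidate symbol address, return the symbol's
// machine type: IMAGE_FILE_MACHINE_ARM64 (0xAA64), IMAGE_FILE_MACHINE_ARM64EC
// (0xA641), or 0 on failure.
DWORD GetSymbolMachineType(IDiaSession* session, ULONGLONG va) {
    IDiaSymbol* symbol = nullptr;
    if (FAILED(session->findSymbolByVA(va, SymTagNull, &symbol)) || !symbol) {
        return 0;
    }
    DWORD machine = 0;
    if (symbol->get_machineType(&machine) != S_OK) {
        machine = 0;
    }
    symbol->Release();
    return machine;
}
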
This is much more complicated than I think it needs to be, but I've yet to find an alternative approach. Ideally the SYMBOL_INFO structure in DBGHELP should contain a machine type field, but I guess it's hard to change the interface now. The relatively simple code to do the machine type query is here. If anyone has found a better way of doing it with just the public interface to DBGHELP I'd appreciate the information :)